High-performance temporal-associative memory store that mimics the brain's recall mechanism.
CueMap implements a Continuous Gradient Algorithm inspired by biological memory:
- Intersection (Context Filter): Triangulates relevant memories by overlapping cues
- Pattern Completion (Associative Recall): Automatically infers missing cues from co-occurrence history, enabling recall from partial inputs.
- Recency & Salience (Signal Dynamics): Balances fresh data against salient, high-signal events, prioritized by the amygdala-inspired salience module.
- Reinforcement (Hebbian Learning): Frequently accessed memories gain signal strength, staying "front of mind".
- Autonomous Consolidation: Periodically merges overlapping memories into summaries, mimicking systems consolidation.
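As a rough illustration of the intersection and reinforcement ideas above (a toy sketch, not CueMap's actual implementation), a store can score memories by cue overlap and bump a strength counter each time a memory is recalled:

```python
from collections import defaultdict

class ToyCueStore:
    """Toy illustration of cue intersection and Hebbian-style reinforcement."""

    def __init__(self):
        self.memories = []                   # list of (content, set_of_cues)
        self.strength = defaultdict(float)   # memory index -> reinforcement score

    def add(self, content, cues):
        self.memories.append((content, set(cues)))

    def recall(self, cues, min_intersection=1):
        query = set(cues)
        scored = []
        for i, (content, mem_cues) in enumerate(self.memories):
            overlap = len(query & mem_cues)  # intersection: the context filter
            if overlap >= min_intersection:
                scored.append((overlap + self.strength[i], i, content))
        scored.sort(reverse=True)
        for _, i, _ in scored:
            self.strength[i] += 0.1          # Hebbian bump: recalled memories strengthen
        return [content for _, _, content in scored]

store = ToyCueStore()
store.add("Meeting with John at 3pm", ["meeting", "john", "calendar"])
store.add("Lunch with Jane", ["lunch", "jane"])
print(store.recall(["meeting", "john"], min_intersection=2))
# ['Meeting with John at 3pm']
```

The real engine adds recency decay, salience weighting, and pattern completion on top of this basic overlap-plus-strength ranking.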
Install:

```bash
pip install cuemap
```

Or run the engine via Docker:

```bash
docker run -p 8080:8080 cuemap/engine:latest
```

```python
from cuemap import CueMap

client = CueMap()

# Add a memory (auto-cue generation by default using the internal Semantic Engine)
client.add("The server password is abc123")

# Recall by natural language (resolves via the Lexicon)
results = client.recall("server credentials")
print(results[0].content)
# Output: "The server password is abc123"
```

```python
# Manual cues
client.add(
    "Meeting with John at 3pm",
    cues=["meeting", "john", "calendar"]
)

# Auto-cues (Semantic Engine)
client.add("The payments service is down due to a timeout")
```

```python
# Natural Language Search (Brain-Inspired)
results = client.recall(
    "payments failure",
    limit=10,
    explain=True  # See how the query was expanded
)
print(results[0].explain)
# Shows normalized cues, expanded synonyms, etc.
```
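Conceptually, the expansion that `explain=True` surfaces works like the sketch below: normalize the query into cues, then widen each cue via a synonym table (the `SYNONYMS` map here is hypothetical, not CueMap's real Lexicon):

```python
# Toy sketch of lexicon-style query expansion.
# The synonym table is a made-up example, not CueMap's internal data.
SYNONYMS = {
    "payments": {"payment", "billing"},
    "failure": {"error", "timeout", "down"},
}

def expand_query(query):
    """Normalize a query into cues, then expand each cue via the synonym table."""
    cues = query.lower().split()
    expanded = set(cues)
    for cue in cues:
        expanded |= SYNONYMS.get(cue, set())
    return {"normalized_cues": cues, "expanded_cues": sorted(expanded)}

explain = expand_query("payments failure")
print(explain["expanded_cues"])
# ['billing', 'down', 'error', 'failure', 'payment', 'payments', 'timeout']
```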
```python
# Explicit Cue Search
results = client.recall(
    cues=["meeting", "john"],
    min_intersection=2
)
```

Get verifiable context for LLMs with a strict token budget:
```python
response = client.recall_grounded(
    query="Why is the payment failing?",
    token_budget=500
)
print(response["verified_context"])
# [VERIFIED CONTEXT] ...
print(response["proof"])
# Cryptographic proof of context retrieval
```

Explore related concepts from the cue graph to expand a user's query:
```python
response = client.context_expand("server hung 137", limit=5)
# {
#   "query_cues": ["server", "hung", "137"],
#   "expansions": [
#     { "term": "out_of_memory", "score": 25.0, "co_occurrence_count": 12 },
#     { "term": "SIGKILL", "score": 22.0, "co_occurrence_count": 8 }
#   ]
# }
```

Manage project snapshots in the cloud (S3, GCS, Azure):
```python
# Upload current project snapshot
client.backup_upload("default")

# Download and restore snapshot
client.backup_download("default")

# List available backups
backups = client.backup_list()
```

Ingest content from various sources directly:
```python
# Ingest URL
client.ingest_url("https://example.com/docs")

# Ingest File (PDF, DOCX, etc.)
client.ingest_file("/path/to/document.pdf")

# Ingest Raw Content
client.ingest_content("Raw text content...", filename="notes.txt")
```

Inspect and wire the brain's associations manually:
```python
# Inspect a cue's relationships
data = client.lexicon_inspect("service:payment")
print(f"Synonyms: {data['outgoing']}")
print(f"Triggers: {data['incoming']}")

# Manually wire a token to a concept
client.lexicon_wire("stripe", "service:payment")

# Get synonyms via WordNet
synonyms = client.lexicon_synonyms("payment")
```

Check the progress of background ingestion tasks:
```python
status = client.jobs_status()
print(f"Ingested: {status['writes_completed']} / {status['writes_total']}")
```

Disable specific brain modules for deterministic debugging:
```python
results = client.recall(
    "urgent issue",
    disable_pattern_completion=True,     # No associative inference
    disable_salience_bias=True,          # No emotional weighting
    disable_systems_consolidation=True,  # No gist summaries
    disable_temporal_chunking=True       # No episodic grouping
)
```

Async usage:

```python
from cuemap import AsyncCueMap

async with AsyncCueMap() as client:
    await client.add("Note")
    await client.recall(["note"])
```

Performance:

- Write Latency: ~2ms (O(1) complexity)
- Read Latency: ~3ms (P99, 1M memories)
License: MIT