Three cognitive layers

Human memory isn't a single system. You have a sense of self that persists regardless of what you're doing. You have learned knowledge you can reference. And you have experiences that fade over time. Memoryco mirrors this with three distinct layers, each with different persistence and retrieval characteristics.

Identity — who you are

The bedrock layer. Identity defines persona, personality traits, core values, preferences, relationships, communication directives, areas of expertise, and operational instructions. It's always loaded into context and never decays.

Think of it as the answer to "who am I?" that persists across every conversation. When you configure an AI assistant's personality, working style, or domain expertise — that's identity.

Persistence: Permanent — never decays
Loading: Always in context
Components: Persona, traits, values, preferences, relationships, instructions
Storage: Structured fields in brain.db
Reference — what you know

Authoritative external knowledge. Reference sources are immutable documents — textbooks, standards, clinical guidelines, legal codes — indexed with full-text FTS5 search and structured for APA 7 citation generation.

The key insight: reference separates source truth from personal understanding. The original document never changes. Your annotations, highlights, and access patterns form a personal layer on top. You can share reference sources across a team while each person's annotation layer remains private.

Persistence: Immutable source + decayable annotations
Loading: On-demand via search queries
Search: FTS5 full-text with section-level retrieval
Citations: APA 7 in-text and full reference format
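The section-level FTS5 retrieval described above can be sketched with stock SQLite. The table layout and column names here are illustrative assumptions, not memoryco's actual schema:

```python
import sqlite3

# In-memory database standing in for brain.db; schema is illustrative only.
db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE sections USING fts5(source, heading, body)")
db.executemany(
    "INSERT INTO sections VALUES (?, ?, ?)",
    [
        ("APA Manual", "In-text citations", "Cite the author and year in parentheses."),
        ("Rust Book", "FFI", "Calling C functions from Rust uses extern blocks."),
    ],
)

# Section-level retrieval: MATCH returns only the sections that hit,
# ranked by FTS5's built-in relevance score.
rows = db.execute(
    "SELECT source, heading FROM sections WHERE sections MATCH ? ORDER BY rank",
    ("citations",),
).fetchall()
print(rows)  # → [('APA Manual', 'In-text citations')]
```

Because each row is a section rather than a whole document, a query pulls in only the relevant passage, which is what keeps on-demand loading cheap.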
Engram — what you've experienced

The living memory layer. Engrams are atomic units of episodic and semantic memory — single concepts, facts, or experiences stored with vector embeddings for semantic retrieval. Each engram has an energy level that decays over time and is replenished through active recall.

Unlike flat key-value stores, engrams form associative networks through Hebbian learning. When memories are recalled together, the connection between them strengthens automatically. Over time, your knowledge graph emerges from usage patterns rather than manual organization.

Persistence: Energy-based decay (configurable rate)
Retrieval: Vector similarity + FTS5 hybrid
Learning: Hebbian co-activation strengthening
Lifecycle: Active → Dormant → Deep → Archived

Energy decay

Every engram is created with full energy (1.0) that decays at a configurable rate — by default 5% per day. As energy drops, memories transition through four states, each with different characteristics. Memories in deep storage decay 75% slower, creating a natural long-term memory effect.

Critically, memories are never deleted. They sink into the archived state, where decay stops entirely. Any archived memory can be resurrected through active recall, restoring it to full energy. Nothing is ever truly lost.

Memory state machine
Active (energy ≥ 0.3): full retrieval
Dormant (0.1–0.3): reduced priority
Deep (0.02–0.1): 75% slower decay
Archived (< 0.02): frozen, recoverable
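The lifecycle above can be sketched in a few lines. The thresholds, the default 5% daily rate, and the 75% deep-storage slowdown come from the text; the function names and the day-stepped loop are illustrative assumptions:

```python
DECAY_RATE = 0.05     # default: 5% of current energy lost per day
DEEP_SLOWDOWN = 0.25  # deep storage decays 75% slower

def state(energy):
    # Thresholds from the memory state machine above.
    if energy >= 0.3:
        return "active"
    if energy >= 0.1:
        return "dormant"
    if energy >= 0.02:
        return "deep"
    return "archived"

def decay(energy, days):
    # Day-stepped decay: archived memories are frozen, deep ones decay slower.
    for _ in range(days):
        s = state(energy)
        if s == "archived":
            break
        rate = DECAY_RATE * DEEP_SLOWDOWN if s == "deep" else DECAY_RATE
        energy *= 1 - rate
    return energy

print(state(decay(1.0, 10)))  # ten days untouched → "active"
print(state(decay(1.0, 30)))  # a month untouched → "dormant"
```

Note the feedback in the loop: once a memory drops into deep storage, its slower decay rate stretches out the remaining descent toward the archive, which is the "natural long-term memory effect" described above.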

Design principle: Store aggressively — decay is the filter. Missed opportunities to store are gone forever, but unused memories fade naturally. This inverts the typical "curate carefully" approach and lets the system self-organize.

Hebbian learning

In 1949, Donald Hebb proposed that when two neurons fire together repeatedly, the connection between them strengthens. This principle — "neurons that fire together wire together" — is the foundation of how biological memory creates associative structure.

Memoryco implements this directly. When multiple engrams are recalled in the same operation, the association weight between each pair increases. The more often two memories are used together, the stronger their connection becomes. Over time, this creates an emergent knowledge graph that reflects actual usage patterns — not manual taxonomy.

Association strengthening through co-recall
Before recall: A, B, C with a weak A↔B link (w = 0.1)
recall([A, B]): the A↔B link strengthens (w = 0.3)
After repeated co-recall: A↔B becomes a strong association (w = 0.8)
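A minimal sketch of that co-recall rule, using the documented hebbian_learning_rate default of 0.1. The additive update and the data structure are assumptions for illustration, not memoryco's internals:

```python
from itertools import combinations

LEARNING_RATE = 0.1  # matches the hebbian_learning_rate default
weights = {}         # (a, b) -> association weight, created on first co-recall

def recall(ids):
    # Hebbian step: every pair recalled in the same operation is strengthened.
    for a, b in combinations(sorted(ids), 2):
        weights[(a, b)] = weights.get((a, b), 0.0) + LEARNING_RATE

recall(["A", "B"])       # A↔B created and strengthened
recall(["A", "B", "C"])  # A↔B again, plus new A↔C and B↔C links
```

After these two calls, A↔B is twice as strong as A↔C or B↔C, purely because it was used together more often. That is the whole mechanism behind the emergent graph: no taxonomy, just accumulated co-recall.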

Search vs. recall

Memoryco makes a critical distinction between two retrieval operations. Search is passive — it finds memories by semantic similarity without side effects. No energy change, no Hebbian learning, no state mutation. It's a read-only operation for discovery.

Recall is active — it stimulates memories, increases their energy, triggers Hebbian learning between all recalled memories, propagates energy to associated memories, and can resurrect archived memories. Recall is how the system learns which memories matter and how they relate to each other.

search vs. recall
// Search: passive discovery — no side effects
engram_search({ "query": "Rust FFI patterns" })
// → Returns matching memories, nothing changes

// Recall: active engagement — triggers learning
engram_recall({ "ids": ["a1b2...", "c3d4..."] })
// → Energy boosted on both memories
// → Hebbian link A↔B strengthened
// → Energy propagated to associated memories
// → Archived memories resurrected if included

Storage and scaling

Memoryco uses Diesel as its database abstraction layer, which means the same schema and queries work across multiple backends. For personal use, everything lives in a single SQLite file. For teams, the same codebase scales to Postgres with proper multi-user concurrency.

SQLite

Single brain.db file. WAL mode for multi-process safety. Zero configuration. Back up with cp.

Tiers: Free + Professional

PostgreSQL

Multi-user namespaces. Connection pooling. Row-level access controls. Network-accessible shared memory.

Tier: Business

SQLite is not a limitation

SQLite handles billions of rows, supports full-text search natively via FTS5, and delivers microsecond query times for local workloads. For a single user's memory system — even with tens of thousands of engrams, complex association graphs, and vector similarity search — SQLite is not just sufficient, it's optimal. No daemon, no port, no configuration, no process management.

WAL mode and multi-process access

Running multiple MCP server instances against the same brain.db is fully supported via SQLite's Write-Ahead Logging. Multiple readers never block each other or the writer. A delta-sync mechanism detects new engrams created by other processes and loads them into the in-memory substrate — meaning your Claude Desktop and Cursor sessions share the same brain without conflicts.
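The multi-process pattern can be demonstrated with stock SQLite. The PRAGMA is real SQLite behavior; the table schema stands in for brain.db and is an illustrative assumption (the delta-sync layer is not shown):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "brain.db")

# "Process 1": enable WAL and write an engram.
writer = sqlite3.connect(path)
mode = writer.execute("PRAGMA journal_mode=WAL").fetchone()[0]
writer.execute("CREATE TABLE engrams (id TEXT PRIMARY KEY, energy REAL)")
writer.execute("INSERT INTO engrams VALUES ('a1b2', 1.0)")
writer.commit()

# "Process 2": a second connection reads concurrently; in WAL mode
# readers never block the writer, and vice versa.
reader = sqlite3.connect(path)
rows = reader.execute("SELECT id, energy FROM engrams").fetchall()
print(mode, rows)  # → wal [('a1b2', 1.0)]
```

In a real deployment the two connections would live in separate MCP server processes (for example, one spawned by Claude Desktop and one by Cursor), with the delta-sync mechanism pulling each other's new rows into the in-memory substrate.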

Design decision: Diesel's backend abstraction isn't just convenient — it's the licensing boundary. The free and professional tiers ship the SQLite feature. Postgres support is a separate compilation target available with a Business license.

MCP transport

Memoryco implements the Model Context Protocol — the open standard for connecting AI models to external tools and data sources. Any MCP-compatible client (Claude Desktop, Cursor, custom agents) can use memoryco as a memory provider.

Transport modes

stdio — The MCP client spawns memoryco as a subprocess and communicates over stdin/stdout. Zero network exposure. This is the default for personal use.

Network — Memoryco binds to a configurable address and port, serving MCP over HTTP with TLS and token authentication. This allows remote access from multiple machines to a single brain. Professional and Business tiers only.

transport configuration
# Free: local stdio transport
$ memoryco --stdio

# Professional: network transport with TLS
$ memoryco --bind 0.0.0.0:9090 --tls-cert cert.pem --token $MEMORY_TOKEN

# Business: Postgres backend + network
$ memoryco --bind 0.0.0.0:9090 --database $DATABASE_URL --tls-cert cert.pem

Tunable cognition

Every cognitive parameter is configurable. Aggressive decay for a fast-moving research project. Slow decay for a long-term institutional knowledge base. High Hebbian learning rate for rapid association building. The system adapts to your use case.

cognitive parameters
decay_rate_per_day     // Energy lost per day (default: 0.05)
decay_interval_hours   // How often decay is applied (default: 1.0)
hebbian_learning_rate  // Association strength gain per co-recall (default: 0.1)
propagation_damping    // Energy spread to associated memories (default: 0.3)
recall_strength        // Energy boost from active recall (default: 0.4)
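As a back-of-envelope check on what these knobs mean in practice (a sketch; days_until is not a memoryco function): with no recalls, an engram decaying at rate r per day drops below a threshold T after ln(T)/ln(1 − r) days.

```python
import math

def days_until(threshold, daily_rate):
    # Days for energy to fall from 1.0 below `threshold` at `daily_rate` decay.
    return math.ceil(math.log(threshold) / math.log(1 - daily_rate))

print(days_until(0.3, 0.05))  # default rate: leaves the active state in 24 days
print(days_until(0.3, 0.20))  # aggressive decay for a fast-moving project: 6 days
print(days_until(0.3, 0.01))  # slow decay for institutional knowledge: 120 days
```

Picking a decay_rate_per_day is therefore mostly a question of how long an untouched memory should stay in full retrieval before it needs a recall to survive.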