Memoryco doesn't bolt memory onto an LLM. It implements a biologically-inspired cognitive system where memories have energy, form associations through co-activation, and naturally decay without use — just like yours do.
Human memory isn't a single system. You have a sense of self that persists regardless of what you're doing. You have learned knowledge you can reference. And you have experiences that fade over time. Memoryco mirrors this with three distinct layers, each with different persistence and retrieval characteristics.
The bedrock layer. Identity defines persona, personality traits, core values, preferences, relationships, communication directives, areas of expertise, and operational instructions. It's always loaded into context and never decays.
Think of it as the answer to "who am I?" that persists across every conversation. When you configure an AI assistant's personality, working style, or domain expertise — that's identity.
Authoritative external knowledge. Reference sources are immutable documents — textbooks, standards, clinical guidelines, legal codes — indexed with full-text FTS5 search and structured for APA 7 citation generation.
The key insight: reference separates source truth from personal understanding. The original document never changes. Your annotations, highlights, and access patterns form a personal layer on top. You can share reference sources across a team while each person's annotation layer remains private.
The living memory layer. Engrams are atomic units of episodic and semantic memory — single concepts, facts, or experiences stored with vector embeddings for semantic retrieval. Each engram has an energy level that decays over time and is replenished through active recall.
Unlike flat key-value stores, engrams form associative networks through Hebbian learning. When memories are recalled together, the connection between them strengthens automatically. Over time, your knowledge graph emerges from usage patterns rather than manual organization.
Every engram is created with full energy (1.0) that decays at a configurable rate — by default 5% per day. As energy drops, memories transition through four states, each with different characteristics. Memories in deep storage decay 75% slower, creating a natural long-term memory effect.
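As a concrete illustration — assuming multiplicative day-by-day decay (the exact curve and state thresholds are configurable and not specified here) — the energy model can be sketched in a few lines of Rust:

```rust
// Sketch of exponential energy decay. The compounding model and the
// "75% slower in deep storage" factor follow the description above;
// everything else is illustrative.
fn energy_after(days: f64, daily_decay: f64, deep_storage: bool) -> f64 {
    // Deep storage decays 75% slower, i.e. at a quarter of the rate.
    let rate = if deep_storage { daily_decay * 0.25 } else { daily_decay };
    (1.0f64 - rate).powf(days)
}

fn main() {
    let active = energy_after(30.0, 0.05, false);
    let deep = energy_after(30.0, 0.05, true);
    println!("after 30 days — active: {:.3}, deep storage: {:.3}", active, deep);
    assert!(deep > active);
}
```

Under this model, an untouched memory at the default 5% rate retains roughly 21% of its energy after 30 days, while the same memory in deep storage retains about 69% — the long-term memory effect described above.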
Critically, memories are never deleted. They sink into an archived state where decay stops entirely. Any archived memory can be resurrected through active recall, which brings it back to full energy. Nothing is ever truly lost.

Design principle: Store aggressively — decay is the filter. Missed opportunities to store are gone forever, but unused memories fade naturally. This inverts the typical "curate carefully" approach and lets the system self-organize.
In 1949, Donald Hebb proposed that when two neurons fire together repeatedly, the connection between them strengthens. This principle — "neurons that fire together wire together" — is the foundation of how biological memory creates associative structure.
Memoryco implements this directly. When multiple engrams are recalled in the same operation, the association weight between each pair increases. The more often two memories are used together, the stronger their connection becomes. Over time, this creates an emergent knowledge graph that reflects actual usage patterns — not manual taxonomy.
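The pairwise strengthening rule might look something like the following sketch. The update rule, the learning rate, and the saturation toward a maximum weight of 1.0 are illustrative assumptions, not memoryco's actual implementation:

```rust
use std::collections::HashMap;

// Hebbian co-activation sketch: every pair of engrams recalled in the
// same operation gets its association weight increased.
struct Associations {
    weights: HashMap<(u64, u64), f64>,
    learning_rate: f64, // hypothetical parameter
}

impl Associations {
    fn strengthen(&mut self, recalled: &[u64]) {
        for i in 0..recalled.len() {
            for j in (i + 1)..recalled.len() {
                // Order-independent key so (a, b) and (b, a) share one weight.
                let key = (recalled[i].min(recalled[j]), recalled[i].max(recalled[j]));
                let w = self.weights.entry(key).or_insert(0.0);
                // Move the weight toward 1.0; repeated co-recall saturates it.
                *w += self.learning_rate * (1.0 - *w);
            }
        }
    }
}

fn main() {
    let mut assoc = Associations { weights: HashMap::new(), learning_rate: 0.1 };
    assoc.strengthen(&[1, 2, 3]); // three pairs strengthened once
    assoc.strengthen(&[1, 2]);    // the (1, 2) pair strengthened again
    // Memories used together more often end up more strongly connected.
    assert!(assoc.weights[&(1, 2)] > assoc.weights[&(1, 3)]);
}
```

The key property is that no one ever declares "these two memories are related" — the graph structure falls out of which memories get recalled together.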
Memoryco makes a critical distinction between two retrieval operations. Search is passive — it finds memories by semantic similarity without side effects. No energy change, no Hebbian learning, no state mutation. It's a read-only operation for discovery.
Recall is active — it stimulates memories, increases their energy, triggers Hebbian learning between all recalled memories, propagates energy to associated memories, and can resurrect archived memories. Recall is how the system learns which memories matter and how they relate to each other.
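The contrast reduces to a toy model. The read-only nature of search, the energy boost on recall, and resurrection-to-full-energy follow the text above; the struct, the boost size, and the two-state simplification are illustrative:

```rust
#[derive(PartialEq, Debug)]
enum State { Active, Archived } // simplified to two of the states

struct Engram { energy: f64, state: State }

// Search: passive and side-effect free — a read-only borrow.
fn search(e: &Engram) -> f64 {
    e.energy // e.g. surfaced as part of a similarity-ranked result
}

// Recall: active stimulation — mutates energy and can resurrect.
fn recall(e: &mut Engram) {
    match e.state {
        // Resurrection: archived memories come back at full energy.
        State::Archived => { e.energy = 1.0; e.state = State::Active; }
        // Boost size here is a guess; the real increment is internal.
        State::Active => { e.energy = (e.energy + 0.2_f64).min(1.0); }
    }
}

fn main() {
    let mut m = Engram { energy: 0.0, state: State::Archived };
    let _ = search(&m);
    assert_eq!(m.state, State::Archived); // search changed nothing
    recall(&mut m);
    assert_eq!(m.state, State::Active);   // recall resurrected it
    assert_eq!(m.energy, 1.0);
}
```

Rust's borrow checker even mirrors the semantics: search takes `&Engram`, recall takes `&mut Engram`.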
Memoryco uses Diesel as its database abstraction layer, which means the same schema and queries work across multiple backends. For personal use, everything lives in a single SQLite file. For teams, the same codebase scales to Postgres with proper multi-user concurrency.
Single brain.db file. WAL mode for multi-process safety. Zero configuration. Back up with cp.
Multi-user namespaces. Connection pooling. Row-level access controls. Network-accessible shared memory.
SQLite handles billions of rows, supports full-text search natively via FTS5, and delivers microsecond query times for local workloads. For a single user's memory system — even with tens of thousands of engrams, complex association graphs, and vector similarity search — SQLite is not just sufficient, it's optimal. No daemon, no port, no configuration, no process management.
Running multiple MCP server instances against the same brain.db is fully supported via SQLite's Write-Ahead Logging. Multiple readers never block each other or the writer. A delta-sync mechanism detects new engrams created by other processes and loads them into the in-memory substrate — meaning your Claude Desktop and Cursor sessions share the same brain without conflicts.
Design decision: Diesel's backend abstraction isn't just convenient — it's the licensing boundary. The free and professional tiers ship the SQLite feature. Postgres support is a separate compilation target available with a Business license.
Memoryco implements the Model Context Protocol — the open standard for connecting AI models to external tools and data sources. Any MCP-compatible client (Claude Desktop, Cursor, custom agents) can use memoryco as a memory provider.
stdio — The MCP client spawns memoryco as a subprocess and communicates over stdin/stdout. Zero network exposure. This is the default for personal use.
Network — Memoryco binds to a configurable address and port, serving MCP over HTTP with TLS and token authentication. This allows remote access from multiple machines to a single brain. Professional and Business tiers only.
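For the stdio transport, wiring memoryco into a client looks like any other MCP server entry. The `mcpServers` shape below is Claude Desktop's standard configuration format; the binary name and arguments are assumptions about memoryco's CLI:

```json
{
  "mcpServers": {
    "memoryco": {
      "command": "memoryco",
      "args": ["serve", "--stdio"]
    }
  }
}
```

The client spawns the process itself, so nothing listens on a port and nothing is exposed to the network.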
Every cognitive parameter is configurable. Aggressive decay for a fast-moving research project. Slow decay for a long-term institutional knowledge base. High Hebbian learning rate for rapid association building. The system adapts to your use case.
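A hypothetical configuration fragment illustrates the idea — every key name here is invented for the example, and only the default 5% daily decay rate comes from the description above:

```toml
# Illustrative keys only — consult memoryco's documentation for actual names.
[decay]
daily_rate = 0.05          # default: 5% energy loss per day
deep_storage_factor = 0.25 # deep storage decays at a quarter of the rate

[hebbian]
learning_rate = 0.1        # higher = faster association building
```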