Memoryco doesn't bolt a database onto an LLM. It implements a cognitive memory system where memories naturally strengthen through use, fade without it, and form connections that make your AI smarter over time.
Human memory isn't a single system. You have a sense of self that persists regardless of what you're doing. You have learned knowledge you can reference. And you have experiences that fade and evolve over time. Memoryco mirrors this with three distinct layers.
The bedrock layer. Identity defines persona, personality traits, core values, preferences, relationships, and expertise. It's always loaded into context and never fades — because who you are doesn't decay.
Think of it as the answer to "who am I?" that persists across every conversation. When you configure an AI assistant's personality, working style, or domain expertise — that's identity.
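As a concrete sketch, an identity record might carry fields like these. The field names and the preamble helper are illustrative assumptions, not memoryco's actual schema:

```python
from dataclasses import dataclass, field

# Hypothetical identity record; field names are illustrative,
# not memoryco's real schema.
@dataclass
class Identity:
    persona: str
    traits: list[str] = field(default_factory=list)
    values: list[str] = field(default_factory=list)
    preferences: dict[str, str] = field(default_factory=dict)
    expertise: list[str] = field(default_factory=list)

assistant = Identity(
    persona="Senior data-engineering copilot",
    traits=["concise", "skeptical of premature optimization"],
    preferences={"language": "Python", "style": "type-hinted"},
    expertise=["ETL pipelines", "SQL tuning"],
)

# Identity is always injected into context and never decays.
def context_preamble(ident: Identity) -> str:
    return f"You are {ident.persona}. Traits: {', '.join(ident.traits)}."
```

Because identity never fades, it behaves like configuration rather than memory: loaded on every conversation, immune to decay.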
Authoritative external knowledge. Reference sources are immutable documents — textbooks, standards, clinical guidelines, legal codes — indexed with full-text search and structured for automatic citation generation.
The key insight: reference separates source truth from personal understanding. The original document never changes. Your access patterns and context form a personal layer on top. Share reference sources across a team while each person's experience with them remains private.
The living layer. Memories are atomic units of experience — facts, decisions, context, discoveries — stored with semantic embeddings so your AI finds memories by meaning, not just keywords.
Unlike flat storage, memories form associative networks through use. When memories are recalled together, the connection between them strengthens automatically. Over time, your AI's understanding emerges from natural conversation rather than manual organization.
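Searching by meaning comes down to comparing embedding vectors. A toy version with hand-made 3-dimensional embeddings (a real system would use an embedding model) makes the mechanics visible:

```python
import math

# Toy semantic search: fake 3-d embeddings stand in for a real
# embedding model so the similarity mechanics are visible.
def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

memories = {
    "We chose Postgres for the analytics store": [0.9, 0.1, 0.0],
    "Team lunch is on Fridays":                  [0.0, 0.2, 0.9],
}

# Embedding of the query "which database did we pick?"
query_vec = [0.8, 0.2, 0.1]
best = max(memories, key=lambda m: cosine(memories[m], query_vec))
# The Postgres memory wins even though the query never says "Postgres".
```

Keyword search would miss this match entirely; the query and the memory share meaning, not words.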
Every memory starts at full strength. Without use, it gradually fades — moving from actively available through progressively deeper storage. Memories that matter stay strong because you naturally use them. Memories that don't, quietly recede.
This is the opposite of how most systems work. Instead of an ever-growing pile where everything competes equally for attention, memoryco self-organizes. What you use stays relevant. What you don't gets out of the way.
Critically, memories are never deleted. They simply move into cold storage, where fading stops entirely. Any archived memory can be brought back to full strength through active recall. Nothing is ever truly lost.
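The fade-and-archive lifecycle can be sketched as exponential decay with a cold-storage threshold. The half-life and threshold values below are assumed for illustration, not memoryco's actual parameters:

```python
# Sketch of strength decay with cold storage; the half-life and
# archive threshold are assumed values, not memoryco's numbers.
HALF_LIFE_DAYS = 30.0
ARCHIVE_BELOW = 0.1     # below this strength, move to cold storage

def strength(days_since_last_recall: float) -> float:
    # Exponential fade: strength halves every HALF_LIFE_DAYS.
    return 0.5 ** (days_since_last_recall / HALF_LIFE_DAYS)

def state(s: float) -> str:
    return "archived" if s < ARCHIVE_BELOW else "active"

# Four half-lives without recall: 0.5**4 = 0.0625, into cold storage.
faded = strength(120)
# Active recall restores full strength; fading then restarts from there.
revived = 1.0
```

Note that archival is a threshold crossing, not a deletion: the memory's content is intact, and one recall resets its strength to 1.0.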
Design principle: Store aggressively — decay is the filter. Missed opportunities to store are gone forever, but unused memories fade naturally. This inverts the typical "curate carefully" approach and lets the system self-organize.
In biological memory, experiences that occur together become linked. Recall one, and related memories surface naturally. This is the foundation of how humans build understanding — not through categorization, but through association.
Memoryco implements this directly. When multiple memories are recalled together, the connection between them strengthens. The more often two memories are used in the same context, the stronger their link becomes. Over time, your AI develops an emergent understanding of how your knowledge fits together — without any manual organization.
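A minimal sketch of this Hebbian-style strengthening, assuming a fixed per-event increment (the bump size and data structure are illustrative, not memoryco's internals):

```python
from collections import defaultdict
from itertools import combinations

# Hebbian-style link strengthening: each co-recall bumps the edge
# weight between every pair of recalled memories. The bump size is
# an assumed parameter, not memoryco's internal algorithm.
links: dict[frozenset, float] = defaultdict(float)

def co_recall(memory_ids: list[str], bump: float = 0.1) -> None:
    for a, b in combinations(memory_ids, 2):
        links[frozenset((a, b))] += bump

co_recall(["postgres-choice", "analytics-schema"])
co_recall(["postgres-choice", "analytics-schema", "retention-policy"])
# The pair recalled together twice now has a stronger link (0.2)
# than pairs recalled together once (0.1).
```

The associative graph is never edited by hand; it accumulates as a side effect of ordinary use.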
Memoryco makes an important distinction between two ways to retrieve memories. Search is passive — it finds memories by meaning without changing anything. It's how your AI discovers relevant context.
Recall is active — it signals "I'm actually using this." Recalled memories get stronger, connections between co-recalled memories tighten, and archived memories can be brought back to life. Recall is how the system learns what matters and how things relate.
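The distinction can be shown in a few lines. Method names and the starting strength are assumptions about the interface, not the documented API:

```python
# Contrast between the two retrieval paths. Method names and the
# starting strength are illustrative assumptions.
class MemoryStore:
    def __init__(self):
        self.strength = {"m1": 0.3}   # a partially faded memory

    def search(self, query: str) -> list[str]:
        # Passive: read-only semantic lookup, nothing changes.
        return ["m1"]

    def recall(self, memory_id: str) -> str:
        # Active: using the memory restores it to full strength.
        self.strength[memory_id] = 1.0
        return memory_id

store = MemoryStore()
hits = store.search("database decision")
after_search = store.strength["m1"]   # still 0.3: search is passive
store.recall("m1")
after_recall = store.strength["m1"]   # 1.0: recall strengthens
```

A typical flow is search first to discover candidates, then recall only the memories the AI actually uses in its answer.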
For personal use, your entire memory system lives in a single portable file. No servers, no configuration, no process management. For teams that need shared memory with proper multi-user access, the same cognitive model scales to a networked database backend.
Single portable file. Zero configuration. Back it up, move it between machines, own it completely.
Shared memory with multi-user access controls. Network-accessible from any client. Same cognitive model.
Running Claude Desktop and Cursor at the same time? Both can share the same memory safely. Memoryco handles concurrent access so your sessions stay in sync — what you learn in one conversation is immediately available in another.
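One common way a single shared file stays consistent under concurrent clients is SQLite's write-ahead-log mode; whether memoryco uses SQLite is an assumption here, the point is the pattern of one writer committing while other clients read the same file:

```python
import os
import sqlite3
import tempfile

# Concurrency sketch: two connections (standing in for two MCP
# clients) share one database file safely via WAL journaling.
path = os.path.join(tempfile.mkdtemp(), "memory.db")

writer = sqlite3.connect(path)
writer.execute("PRAGMA journal_mode=WAL")
writer.execute("CREATE TABLE memories (id TEXT PRIMARY KEY, body TEXT)")
writer.execute("INSERT INTO memories VALUES ('m1', 'learned in session A')")
writer.commit()

# A second client sees the committed write immediately.
reader = sqlite3.connect(path)
row = reader.execute("SELECT body FROM memories WHERE id = 'm1'").fetchone()
```

WAL mode lets readers proceed without blocking the writer, which is why two live sessions can stay in sync against one file.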
Memoryco implements the Model Context Protocol — the open standard for connecting AI models to external tools and data sources. Any MCP-compatible client gets persistent memory with zero code changes.
For personal use, memoryco runs as a local process — no network exposure, no ports, no configuration beyond a single line in your client config. For teams, it can serve over the network with encryption and authentication.
Every cognitive behavior is configurable. Want aggressive decay for a fast-moving research project? Slow decay for a long-term institutional knowledge base? Rapid association building for dense learning sessions? Each parameter is a knob you can turn.
The defaults work well out of the box, but the system adapts to your use case — not the other way around.
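A sketch of what that knob set might look like; the parameter names and defaults below are hypothetical, not memoryco's real configuration surface:

```python
from dataclasses import dataclass

# Hypothetical knob set; names and defaults are illustrative,
# not memoryco's actual configuration.
@dataclass
class CognitiveConfig:
    decay_half_life_days: float = 30.0   # lower = more aggressive fading
    archive_threshold: float = 0.1       # strength below this -> cold storage
    association_bump: float = 0.1        # link strengthening per co-recall

# Fast-moving research project: stale findings fade within a week.
fast_research = CognitiveConfig(decay_half_life_days=7.0)

# Institutional knowledge base: memories persist for roughly a year
# and archive only when almost entirely unused.
institutional = CognitiveConfig(decay_half_life_days=365.0,
                                archive_threshold=0.02)
```

The defaults cover the common case; overriding a field is all it takes to tilt the system toward a specific workload.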