Memory that strengthens with use, fades when unused, and pushes to you when it matters — accessible over MCP, REST, gRPC, or SDK.
Provisional patent filed Feb 26, 2026 on the core cognitive primitives (engine-native Ebbinghaus decay, Hebbian learning, Bayesian confidence, semantic triggers). This helps protect the project so we can keep it open and innovative for everyone.
Prerequisites: None. Single binary, zero dependencies, zero configuration required.

To uninstall: run `rm $(which muninn)` and delete `~/.muninn`.
macOS / Linux:

```sh
# 1. Install
curl -sSL https://muninndb.com/install.sh | sh

# 2. Start (first-run setup is automatic)
muninn start
```

Windows (PowerShell):

```powershell
# 1. Install
irm https://muninndb.com/install.ps1 | iex

# 2. Start (first-run setup is automatic)
muninn start
```

```sh
# 3. Store a memory
curl -sX POST http://127.0.0.1:8475/api/engrams \
  -H 'Content-Type: application/json' \
  -d '{"concept":"payment incident","content":"We switched to idempotency keys after the double-charge incident in Q3"}'

# 4. Ask what is relevant RIGHT NOW
curl -sX POST http://127.0.0.1:8475/api/activate \
  -H 'Content-Type: application/json' \
  -d '{"context":["debugging the payment retry logic"]}'
```

That Q3 incident surfaces. You never mentioned it. MuninnDB connected the concepts.
Web UI: http://127.0.0.1:8476 · Admin: root / password (change after first login)
MuninnDB auto-detects and configures Claude Desktop, Cursor, OpenClaw, Windsurf, OpenCode, VS Code, and others:
```sh
muninn init
```

Follow the prompts. Done. Your AI tools now have persistent, cognitive memory.
Manual MCP configuration — if you prefer to configure by hand:
Claude Desktop
Add to ~/Library/Application Support/Claude/claude_desktop_config.json (macOS) or %APPDATA%\Claude\claude_desktop_config.json (Windows):
```json
{
  "mcpServers": {
    "muninn": {
      "url": "http://127.0.0.1:8750/mcp"
    }
  }
}
```

Note: `"type"` is intentionally omitted. Claude Desktop v1.1.4010+ crashes on startup if `"type": "http"` is present in any `mcpServers` entry — the transport is inferred from the URL.
Claude Code / CLI
Add to ~/.claude.json:
```json
{
  "mcpServers": {
    "muninn": {
      "type": "http",
      "url": "http://127.0.0.1:8750/mcp"
    }
  }
}
```

Cursor
Add to ~/.cursor/mcp.json:
```json
{
  "mcpServers": {
    "muninn": {
      "type": "http",
      "url": "http://127.0.0.1:8750/mcp"
    }
  }
}
```

OpenClaw
Add to ~/.openclaw/openclaw.json:
```json
{
  "mcpServers": {
    "muninn": {
      "command": "muninn",
      "args": ["mcp"],
      "transport": "stdio"
    }
  }
}
```

OpenClaw uses stdio transport. The `muninn mcp` proxy (included in the binary) handles bearer token auth automatically — no credential needed in the config file.
Windsurf
Add to ~/.codeium/windsurf/mcp_config.json:
```json
{
  "mcpServers": {
    "muninn": {
      "type": "http",
      "url": "http://127.0.0.1:8750/mcp"
    }
  }
}
```

VS Code
Add to .vscode/mcp.json in your workspace:
```json
{
  "servers": {
    "muninn": {
      "type": "http",
      "url": "http://127.0.0.1:8750/mcp"
    }
  }
}
```

OpenCode
Add to ~/.config/opencode/opencode.json (macOS/Linux) or %APPDATA%\opencode\opencode.json (Windows):
```json
{
  "mcp": {
    "muninn": {
      "type": "remote",
      "url": "http://127.0.0.1:8750/mcp",
      "oauth": false,
      "headers": {
        "Authorization": "Bearer {file:~/.muninn/mcp.token}"
      }
    }
  }
}
```

Omit the headers block if you are running MuninnDB without token authentication.

Note: OpenCode tools are exposed as `muninn_muninn_remember`, `muninn_muninn_recall`, etc. (server key + tool name prefix). Users preferring shorter names can register the server under the key `memory` instead, which yields `memory_muninn_remember`, `memory_muninn_recall`, etc.
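The naming scheme can be sketched as a simple composition. This is illustrative only — the helper name is hypothetical; the server keys and tool names come from the example above:

```python
def opencode_tool_name(server_key: str, tool_name: str) -> str:
    """OpenCode exposes each MCP tool as '<server key>_<tool name>'.
    MuninnDB's own tools are already prefixed 'muninn_', hence the
    doubled prefix when the server is registered under the key 'muninn'."""
    return f"{server_key}_{tool_name}"

print(opencode_tool_name("muninn", "muninn_remember"))  # muninn_muninn_remember
print(opencode_tool_name("memory", "muninn_recall"))    # memory_muninn_recall
```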
GitHub Copilot
Add to .vscode/mcp.json in your workspace:
```json
{
  "servers": {
    "muninn": {
      "type": "http",
      "url": "http://127.0.0.1:8750/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_ADMIN_TOKEN"
      }
    }
  }
}
```

Replace `YOUR_ADMIN_TOKEN` with the token from your muninn.env file. Omit the headers block if running without token auth. If Copilot shows an OAuth error, the headers block is missing — adding it resolves it. Full Copilot setup guide →
Codebuff
Add to your Codebuff MCP config:
```json
{
  "mcpServers": {
    "muninn": {
      "type": "http",
      "url": "http://127.0.0.1:8750/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_ADMIN_TOKEN"
      }
    }
  }
}
```

For proactive memory behavior — Codebuff storing useful discoveries without being asked — add the memory instructions to your AGENT.md. Full Codebuff setup guide →
MuninnDB exposes 35 MCP tools — store, activate, search, batch insert, get usage guidance, manage vaults, and more. On first connect, call muninn_guide for vault-aware instructions. No token required against the default vault. Full MCP reference →
Most databases store data and wait. MuninnDB stores memory traces — called engrams — and continuously works on them in the background. When you called activate, it ran a 6-phase pipeline: parallel full-text + vector search, fused the results, applied Hebbian co-activation boosts from past queries, injected predictive candidates from sequential patterns, traversed the association graph, and scored everything with ACT-R temporal weighting — in under 20ms.
The Q3 incident surfaced because MuninnDB understood that "payment retry logic" and "idempotency keys after a double-charge" are part of the same conversation. You never wrote that relationship. It emerged from semantic proximity and how these concepts travel together. That is the difference between a database and memory.
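The temporal weighting in the final phase is named after ACT-R's base-level activation, which can be sketched in a few lines. This is a textbook illustration of the idea, not MuninnDB's engine code; the function name and sample timestamps are invented for the example:

```python
import math

def base_level_activation(access_ages_s, d=0.5):
    """ACT-R base-level activation: B = ln(sum over past accesses of t^-d),
    where t is the age of each access in seconds and d is the decay rate.
    Frequent, recent accesses raise the score; an idle memory fades as a
    power law rather than dropping to zero."""
    return math.log(sum(t ** -d for t in access_ages_s))

# A memory touched an hour ago and yesterday outranks one idle for 90 days.
recent = base_level_activation([60, 3600, 86400])
dormant = base_level_activation([86400 * 90])
assert recent > dormant
```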
- Temporal priority — the database continuously recalculates what matters based on how recently and how often you've accessed each memory. Memories you use stay sharp. Memories you ignore fade naturally. The database moves while you sleep.
- Hebbian learning — memories activated together automatically form associations. Edges strengthen with co-activation, fade when the pattern stops. You never define a schema of relationships.
- Predictive activation — the database tracks sequential patterns across activations and learns to surface the next memory before you ask for it. Recall@10 improves 21% in workflow-oriented use cases.
- Semantic triggers — subscribe to a context. The database pushes when something becomes relevant — not because you queried, but because relevance changed. No polling. No cron. The DB initiates.
- Bayesian confidence — every engram tracks how sure MuninnDB is. Reinforcing memories raise confidence; contradictions lower it. Grounded in evidence, not a label you assign.
- Plug-and-play AI onboarding — call `muninn_guide` and the database tells your AI exactly how to use memory, customized to the vault's configuration. No manual prompt engineering.
- Retroactive enrichment — add the embed or enrich plugin and every existing memory upgrades automatically in the background. No migration. No code change. The database improves what it already holds.
- Bulk insert — batch up to 50 memories in a single call across all protocols (REST, gRPC, MCP). Efficient for data seeding, migration, and high-throughput agents.
- Four protocols — MBP (binary, <10ms ACK), REST (JSON), gRPC (protobuf), MCP (AI agents). Pick your stack; they all hit the same cognitive engine.
- Single binary — no Redis, no Kafka, no Postgres dependency. One process. One install command. Runs on a MacBook or a 3-node cluster.
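The first and fifth primitives above can be sketched together. This is an illustrative model of Hebbian edge updates and Bayesian (Beta-Bernoulli) confidence, not MuninnDB's actual implementation; function names and rate constants are invented for the example:

```python
def hebbian_update(weight: float, co_activated: bool,
                   learn: float = 0.1, decay: float = 0.02) -> float:
    """Strengthen an association edge when two memories fire together;
    otherwise let it fade slightly. Weights stay in [0, 1]."""
    if co_activated:
        return weight + learn * (1.0 - weight)
    return weight * (1.0 - decay)

def confidence(supporting: int, contradicting: int) -> float:
    """Beta-Bernoulli posterior mean with a uniform prior: reinforcing
    observations raise confidence, contradictions lower it."""
    return (supporting + 1) / (supporting + contradicting + 2)

w = 0.0
for _ in range(5):           # five co-activations strengthen the edge
    w = hebbian_update(w, co_activated=True)
strong = w
for _ in range(50):          # a long quiet period lets it fade
    w = hebbian_update(w, co_activated=False)
assert strong > w > 0.0      # faded, but never silently deleted

print(confidence(0, 0))      # no evidence yet: 0.5
print(confidence(4, 0) > confidence(4, 3))  # contradictions lower confidence
```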
REST — the full cycle:
```sh
# Write
curl -sX POST http://127.0.0.1:8475/api/engrams \
  -H 'Content-Type: application/json' \
  -d '{
    "concept": "auth architecture",
    "content": "Short-lived JWTs (15min), refresh tokens in HttpOnly cookies, sessions server-side in Redis",
    "tags": ["auth", "security"]
  }'

# Activate by context (returns ranked, time-weighted, associated memories)
curl -sX POST http://127.0.0.1:8475/api/activate \
  -H 'Content-Type: application/json' \
  -d '{"context": ["reviewing the login flow for the mobile app"], "max_results": 5}'

# Search by text
curl 'http://127.0.0.1:8475/api/engrams?q=JWT&vault=default'
```

Python SDK:
```python
import asyncio
from muninn import MuninnClient

async def main():
    async with MuninnClient("http://127.0.0.1:8475") as m:
        # Store
        await m.write(vault="default", concept="auth architecture",
                      content="Short-lived JWTs, refresh in HttpOnly cookies")

        # Activate — context-aware, ranked, cognitively weighted
        result = await m.activate(vault="default",
                                  context=["reviewing the login flow"],
                                  max_results=5)
        for item in result.activations:
            print(f"{item.concept} score={item.score:.3f}")

asyncio.run(main())
```

Install: `pip install muninn-python`

LangChain integration:
```python
from muninn.langchain import MuninnDBMemory
from langchain.chains import ConversationChain

memory = MuninnDBMemory(vault="my-agent")
chain = ConversationChain(llm=your_llm, memory=memory)
# Every turn is stored. Every response draws on relevant past context.
```

Go SDK:
```go
import "github.com/scrypster/muninndb/sdk/go/muninn"

client := muninn.NewClient("http://127.0.0.1:8475", "your-api-key")

id, _ := client.Write(ctx, "default", "auth architecture",
    "Short-lived JWTs, refresh in HttpOnly cookies", []string{"auth"})
resp, _ := client.Activate(ctx, "default", []string{"login flow"}, 5)
```

Install: `go get github.com/scrypster/muninndb/sdk/go/muninn`

Node.js / TypeScript SDK:
```typescript
import { MuninnClient } from '@muninndb/client';

const client = new MuninnClient({ token: 'your-api-key' });

const { id } = await client.write({ concept: 'auth architecture',
  content: 'Short-lived JWTs, refresh in HttpOnly cookies' });
const result = await client.activate({ context: ['login flow'] });
```

Install: `npm install @muninndb/client`

PHP SDK:
```php
use MuninnDB\MuninnClient;

$client = new MuninnClient(token: 'your-api-key');

$result = $client->write(concept: 'auth architecture',
    content: 'Short-lived JWTs, refresh in HttpOnly cookies');
$memories = $client->activate(context: ['login flow']);
```

Install: `composer require muninndb/client`

More examples → · Full API reference →
MuninnDB uses the Business Source License 1.1 (BSL 1.1).
- Free for individuals, hobbyists, researchers, and open-source projects.
- Free for small organizations (<50 employees and <$5M revenue).
- Free for all internal use.
- Commercial hosted/SaaS/DBaaS/managed services require a license from the author.
- Automatically becomes Apache 2.0 on February 26, 2030.
- Provisional patent filed Feb 26, 2026 on the core cognitive engine.
Full terms: LICENSE. See also CLA for contributors.
MuninnDB works out of the box with no configuration. The bundled local embedder is included — offline, no API key, no setup.
When you're ready to customize:
| What | How |
|---|---|
| Embedder: bundled (default) | On automatically — set MUNINN_LOCAL_EMBED=0 to disable |
| Embedder: Ollama | MUNINN_OLLAMA_URL=ollama://localhost:11434/nomic-embed-text |
| Embedder: OpenAI | MUNINN_OPENAI_KEY=sk-... (+ optional MUNINN_OPENAI_URL=http://localhost:8080/v1; invalid override disables OpenAI init) |
| Embedder: Voyage | MUNINN_VOYAGE_KEY=pa-... |
| Embedder: Cohere | MUNINN_COHERE_KEY=... |
| Embedder: Google (Gemini) | MUNINN_GOOGLE_KEY=... |
| Embedder: Jina | MUNINN_JINA_KEY=... |
| Embedder: Mistral | MUNINN_MISTRAL_KEY=... |
| LLM enrichment | MUNINN_ENRICH_URL=anthropic://claude-haiku-4-5-20251001 + MUNINN_ANTHROPIC_KEY=sk-ant-... |
| Data directory | MUNINNDB_DATA=/path/to/data (default: ~/.muninn/data) |
| Memory limit | MUNINN_MEM_LIMIT_GB=4 |
Docker:
```sh
docker run -d \
  --name muninndb \
  -p 8474:8474 -p 8475:8475 -p 8476:8476 -p 8750:8750 \
  -v muninndb-data:/data \
  ghcr.io/scrypster/muninndb:latest
```

For an intent-organized reading guide, see docs/index.md.
| Quickstart | Detailed install, Docker, embedder setup, first vault |
| How Memory Works | The neuroscience behind why this works |
| Architecture | ERF format, 6-phase engine, wire protocols, cognitive workers |
| Capabilities | Technical capability statement with code references for every feature |
| Cluster Operations | Multi-node clustering, replication, and leader election |
| Cognitive Primitives | Temporal scoring, Hebbian learning, Bayesian confidence, PAS |
| Engram | Core data structure: fields, lifecycle states, and key-space layout |
| Entity Graph | Named entity extraction, relationships, and cross-vault entity index |
| Semantic Triggers | Push-based memory — how and why |
| Auth & Vaults | Two-layer model, API keys, full vs. observe mode |
| Hierarchical Memory | Tree-structured memory for outlines, plans, and task hierarchies |
| Index | Intent-organized reading guide |
| Plugins | Embed + enrich — retroactive enrichment without code changes |
| Retrieval Design | The 6-phase ACTIVATE pipeline: how recall queries are processed |
| Self-Hosting | Deployment options, environment variables, and data directory setup |
| Feature Reference | Complete list of every feature, operation, and config option |
| vs. Other Databases | Full comparison with vector, graph, relational, document |
PRs welcome. For large changes, open an issue first.
By submitting a pull request, you agree to the Contributor License Agreement.
Ports at a glance: 8474 MBP · 8475 REST · 8476 Web UI · 8477 gRPC · 8750 MCP
MCP connection fails with schema validation error
MuninnDB speaks both `http` and `sse` MCP transports — the server handles either. The `"type"` field in your config must match what your client expects:
- Claude Desktop v1.1.4010+: omit `"type"` entirely — the transport is inferred from the URL. Adding `"type": "http"` crashes Claude Desktop on startup.
- Most other tools (Cursor, Windsurf, VS Code, etc.): `"type": "http"` is correct.
- Older clients (Claude Code before v2.1.53): use `"type": "sse"`.
If your tool reports a schema validation or config parsing error, try removing `"type"` or switching between `"http"` and `"sse"`.
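The per-client rules above can be captured in a small helper. This is an illustrative sketch — the function and the client labels (`"claude-desktop"`, `"claude-code-legacy"`) are invented names; the quirks and version cutoffs are those described above:

```python
def mcp_server_entry(client: str,
                     url: str = "http://127.0.0.1:8750/mcp") -> dict:
    """Build an mcpServers entry matching each client's expectations:
    Claude Desktop v1.1.4010+ must omit 'type' entirely, Claude Code
    before v2.1.53 wants 'sse', and most other tools take 'http'."""
    if client == "claude-desktop":
        return {"url": url}                    # 'type' omitted on purpose
    if client == "claude-code-legacy":         # Claude Code before v2.1.53
        return {"type": "sse", "url": url}
    return {"type": "http", "url": url}        # Cursor, Windsurf, VS Code, ...

print(mcp_server_entry("claude-desktop"))   # no 'type' key at all
print(mcp_server_entry("cursor"))           # 'type': 'http'
```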
muninn_remember or muninn_recall hangs
Check that the server is running: muninn status. If the server is running but tools hang, check muninn logs for errors. The most common cause is a stale MCP config pointing to the wrong port — verify the URL in your config matches http://127.0.0.1:8750/mcp.
muninn: command not found
The binary must be in your PATH. On macOS/Linux, the default install location is ~/.local/bin/muninn — run echo $PATH to verify it includes ~/.local/bin. If not, add export PATH="$HOME/.local/bin:$PATH" to your ~/.zshrc or ~/.bashrc and restart your shell. On Windows, the default is %LOCALAPPDATA%\muninn — run $env:PATH in PowerShell to verify.
Windows: DLL or ORT initialization error on first start
The bundled local embedder uses ONNX Runtime, which requires the Visual C++ Redistributable on Windows. Most machines already have it. If you see an error about onnxruntime.dll or "ORT environment init", install the Visual C++ 2019+ Redistributable (x64) and restart muninn.
Named after Muninn — one of Odin's two ravens, whose name means "memory" in Old Norse. Muninn flies across the nine worlds and returns what has been forgotten.
Built by MJ Bonanno · muninndb.com · BSL 1.1
MuninnDB is patent pending (U.S. Provisional Patent Application No. 63/991,402) and licensed under BSL 1.1.