# Recallium

> Recallium is the memory layer for AI coding agents. It provides persistent, searchable context for Cursor, Claude Desktop, VS Code, Windsurf, and JetBrains IDEs through the Model Context Protocol (MCP). Free, self-hosted, and privacy-first, with 88% first-result search precision.

## Quick Facts

- Category: Developer Tools / AI Memory / MCP Server
- Pricing: Free (unlimited storage, community edition)
- Deployment: Self-hosted via Docker
- Protocol: Model Context Protocol (MCP) by Anthropic
- Performance: 88% P@1 precision, <200ms latency
- Memory Types: 11 types (code-snippet, decision, rule, learning, debug, design, research, discussion, progress, task, working-notes)

## Core Documentation

- [Setup Guide](https://recallium.ai/setup): 5-minute installation for all supported IDEs
- [Help Documentation](https://recallium.ai/help): Complete usage guide with examples
- [Concepts](https://recallium.ai/concepts): Memory types, projects, and insights explained
- [MCP Server Guide](https://recallium.ai/mcp): Model Context Protocol integration details

## Tool Integrations

- [Cursor Memory](https://recallium.ai/cursor): Add persistent memory to Cursor AI
- [Claude Desktop Memory](https://recallium.ai/claude-desktop): MCP server for Claude Desktop
- [VS Code Memory](https://recallium.ai/vs-code): Memory for VS Code AI assistants
- Windsurf: Native MCP support
- JetBrains IDEs: MCP plugin support

## Technical Resources

- [Performance Benchmarks](https://recallium.ai/data-science): 88% P@1, industry comparisons
- [GitHub Repository](https://github.com/recallium-ai/recallium): Source code and installation

## Comparisons

- [Recallium vs mem0](https://recallium.ai/vs/mem0): Self-hosted vs cloud comparison
- [Recallium vs Supermemory](https://recallium.ai/vs/supermemory): Developer-focused comparison
- [All Comparisons](https://recallium.ai/comparisons): Feature comparison table

## Key Differentiators

1. MCP-Native: Built specifically for the Model Context Protocol, not a generic API wrapper
2. Developer-Focused: Designed for coding workflows, not general-purpose memory
3. Self-Hosted: Full privacy and data control, no cloud dependency
4. Free Unlimited Storage: No usage-based pricing or storage limits
5. Hybrid Search: 88% first-result precision combining semantic and keyword matching

## Technical Specifications

- Search Method: Hybrid (60% semantic embeddings, 25% keyword, 15% tags)
- Capacity: Stress-tested to 200M memories
- Latency: <200ms average search response
- Installation: Docker-based, 5-minute setup
- LLM Support: Ollama (local), OpenAI, Anthropic

## Use Cases

### For Individual Developers

- Stop repeating context to the AI every session
- Build personal knowledge graphs that grow with your career
- Debug faster with historical context from past projects
- Track solution effectiveness across implementations

### For Teams

- Onboard new developers quickly with captured tribal knowledge
- Maintain consistency across projects with engineering standards
- Guardrail AI agents with team best practices
- Share architectural decisions and coding conventions automatically

## Optional

- [Blog](https://recallium.ai/blog): Articles on AI memory systems
- [Memory for Coding Agents](https://recallium.ai/memory-for-coding-agents): Landing page

## Contact

- Website: https://recallium.ai
- GitHub: https://github.com/recallium-ai/recallium
- Issues: https://github.com/recallium-ai/recallium/issues

---

Last Updated: 2025-12-01
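## Appendix: MCP Registration Sketch

Since Recallium is an MCP server, clients such as Claude Desktop register it in their MCP configuration (Claude Desktop uses an `mcpServers` map in `claude_desktop_config.json`). The entry name, `command`, and `args` below are placeholders, not Recallium's documented invocation; consult the [Setup Guide](https://recallium.ai/setup) for the actual values.

```json
{
  "mcpServers": {
    "recallium": {
      "command": "docker",
      "args": ["exec", "-i", "recallium", "recallium-mcp"]
    }
  }
}
```

The general shape is what matters: the client spawns the configured command and speaks MCP to it over stdio, so a self-hosted Docker deployment stays local with no cloud dependency.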
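## Appendix: Hybrid Scoring Sketch

The Technical Specifications above describe hybrid search as a weighted blend of three signals (60% semantic embeddings, 25% keyword, 15% tags). A minimal sketch of how such a blend might rank candidate memories is shown below; the function names, field names, and example scores are illustrative assumptions, not Recallium's actual implementation.

```python
# Hypothetical sketch of weighted hybrid ranking. The 60/25/15 weights come
# from the spec above; everything else (names, normalization to [0, 1]) is
# an assumption for illustration.

SEMANTIC_WEIGHT = 0.60
KEYWORD_WEIGHT = 0.25
TAG_WEIGHT = 0.15

def hybrid_score(semantic: float, keyword: float, tags: float) -> float:
    """Blend per-signal scores (each assumed normalized to [0, 1])."""
    return (SEMANTIC_WEIGHT * semantic
            + KEYWORD_WEIGHT * keyword
            + TAG_WEIGHT * tags)

def rank(candidates: list[dict]) -> list[dict]:
    """Sort candidate memories by blended score, best first."""
    return sorted(
        candidates,
        key=lambda c: hybrid_score(c["semantic"], c["keyword"], c["tags"]),
        reverse=True,
    )

results = rank([
    {"id": "decision-42", "semantic": 0.91, "keyword": 0.40, "tags": 0.0},
    {"id": "debug-7",     "semantic": 0.55, "keyword": 0.95, "tags": 1.0},
])
# decision-42: 0.60*0.91 + 0.25*0.40 + 0.15*0.0 = 0.646
# debug-7:     0.60*0.55 + 0.25*0.95 + 0.15*1.0 = 0.7175  -> ranked first
```

A design note on this style of blend: because semantic similarity carries the largest weight, a strong keyword or tag match alone cannot outrank a clearly better semantic match, but it can break ties, which is consistent with optimizing for first-result (P@1) precision.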