claude-mem is a Claude Code plugin that solves the session memory problem: Claude Code forgets everything between sessions. The plugin captures session context during conversations, compresses and indexes it using the Agent SDK, and automatically injects the most relevant prior context into new sessions. With 49K GitHub stars, it's one of the most widely adopted Claude Code extensions, filling a gap that Anthropic has yet to address natively. It works transparently: users don't need to change their workflow, and memory simply accumulates.
claude-mem addresses one of the most common complaints about AI coding tools: context loss between sessions. Every time you start a new Claude Code session, you’re starting fresh — the agent has no memory of your project’s decisions, your coding preferences, or the debugging rabbit holes from last week. claude-mem intercepts session data, compresses it using the Anthropic Agent SDK, and automatically injects the most relevant prior context into each new session.
With 49K GitHub stars and +814 gained in a single day, claude-mem has effectively become the standard solution to this problem in the Claude Code ecosystem. Its adoption rate is itself a signal: Anthropic has not yet shipped native persistent memory, so the community built the fix.
Transparent memory accumulation: claude-mem runs as a Claude Code plugin, capturing context during sessions without requiring any change to how you use Claude Code. Memory builds up passively.
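The capture side of this can be sketched as a small hook handler. This is a hypothetical sketch, not claude-mem's actual implementation: it assumes Claude Code's hook convention of passing session metadata as JSON on stdin (field names like `session_id` and `transcript_path` follow that convention but should be treated as assumptions here), and the queue path `~/.claude-mem/pending.jsonl` is invented.

```python
"""Hypothetical end-of-session hook handler: records the finished
session's transcript path in a local queue for later compression."""
import json
from pathlib import Path

# Invented storage location for pending (not-yet-compressed) sessions.
QUEUE = Path.home() / ".claude-mem" / "pending.jsonl"

def enqueue(hook_input: dict, queue: Path = QUEUE) -> dict:
    """Append one session record to the pending queue and return it."""
    record = {
        "session_id": hook_input.get("session_id"),
        "transcript_path": hook_input.get("transcript_path"),
    }
    queue.parent.mkdir(parents=True, exist_ok=True)
    with queue.open("a") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

In practice a script like this would be registered as an end-of-session hook command and invoked as `enqueue(json.load(sys.stdin))`, which is how passive accumulation works: the user never runs anything by hand.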
Agent SDK compression: Rather than dumping raw session transcripts into future contexts, claude-mem uses the Agent SDK to compress and summarize session data. This keeps injection payloads small and relevant.
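As a rough illustration of what the compression step involves, the sketch below builds a summarization request using the Anthropic Messages API as a stand-in for the Agent SDK. The prompt wording, model alias, and output schema are all invented for illustration; claude-mem's actual prompts and SDK calls are not documented here.

```python
"""Sketch of session compression: turn a raw transcript into a
compact memory record via a summarization call. Illustrative only."""

# Invented instructions; not claude-mem's actual prompt.
COMPRESSION_INSTRUCTIONS = (
    "Summarize this coding session into a compact memory record: "
    "decisions made, files touched, open problems. Return JSON with "
    "keys: summary, decisions, open_items."
)

def build_compression_request(transcript: str,
                              model: str = "claude-3-5-haiku-latest") -> dict:
    """Build a Messages API payload that would compress one session."""
    return {
        "model": model,  # model alias is an assumption
        "max_tokens": 1024,
        "messages": [
            {"role": "user",
             "content": (f"{COMPRESSION_INSTRUCTIONS}\n\n"
                         f"<transcript>\n{transcript}\n</transcript>")},
        ],
    }

def compress(transcript: str) -> str:
    """Send the request. Requires the `anthropic` package and an API
    key in the environment; not executed in this sketch."""
    import anthropic
    client = anthropic.Anthropic()
    resp = client.messages.create(**build_compression_request(transcript))
    return resp.content[0].text
```

The design point is that only the small summarized record is stored and later injected, not the raw transcript, which keeps injection payloads within a sane token budget.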
Relevance-based retrieval: When starting a new session, claude-mem evaluates which prior context is most relevant to the current working directory, task description, or file focus, and injects only what matters.
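A toy version of such relevance scoring might look like the following. The heuristic (keyword overlap plus a bonus for a matching working directory) is invented purely for illustration; claude-mem's actual ranking method is not documented here.

```python
"""Toy relevance-based retrieval: rank stored memory records against
the new session's working directory and task text, keep the top k."""
from dataclasses import dataclass, field

@dataclass
class MemoryRecord:
    summary: str                 # compressed session summary
    cwd: str                     # project directory the session ran in
    keywords: set = field(default_factory=set)

def score(record: MemoryRecord, cwd: str, task: str) -> float:
    """Invented heuristic: keyword overlap + same-directory bonus."""
    task_words = set(task.lower().split())
    s = float(len(record.keywords & task_words))
    if record.cwd == cwd:
        s += 2.0
    return s

def select_context(records, cwd: str, task: str, k: int = 3) -> list:
    """Return the summaries of the k most relevant records, dropping
    anything with zero relevance."""
    ranked = sorted(records, key=lambda r: score(r, cwd, task), reverse=True)
    return [r.summary for r in ranked[:k] if score(r, cwd, task) > 0]
```

Injecting only the top-scoring summaries, rather than everything ever captured, is what keeps accumulated memory useful instead of becoming noise.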
claude-mem is most valuable for developers with long-running projects who use Claude Code regularly. Project-specific context (architecture decisions, known constraints, recent debugging outcomes) that previously evaporated between sessions is now preserved. Power users running Claude Code for hours a day see the most benefit — memory compounds quickly.
claude-mem fills a gap that Anthropic may eventually address natively. If Anthropic ships persistent memory in Claude Code proper, claude-mem would be superseded. Until then, it’s the practical solution. The plugin requires Claude Code as its runtime environment — it doesn’t work as a standalone agent.
Any developer who uses Claude Code regularly on complex projects and has felt the frustration of re-explaining context at the start of every session. Particularly valuable for long-running projects with established architectural decisions, team conventions, and accumulated debugging knowledge.