Curated Claude Code catalog
Updated 07.05.2026 · 19:39 CET
01 / Skill
mem0ai

mem0

Quality
9.0

Mem0 provides an intelligent memory layer for AI assistants and agents, allowing them to remember user preferences and adapt over time. It's ideal for building personalized AI applications like customer support chatbots or autonomous systems that require consistent, context-rich conversations.

USP

Mem0 stands out with its new token-efficient memory algorithm, achieving high benchmark scores (e.g., 91.6 on LoCoMo). It offers multi-level memory (User, Session, Agent state) and developer-friendly SDKs for various deployment options.

Use cases

  • Enhancing AI assistants with consistent, context-rich conversations
  • Improving customer support by recalling past tickets and user history
  • Personalizing healthcare by tracking patient preferences
  • Creating adaptive workflows in productivity & gaming

Detected files (9)

  • mem0-plugin/skills/mem0-codex/SKILL.md (skill, 2303 bytes)
    ---
    name: mem0-codex
    description: >
      Mem0 persistent memory integration for Codex. Automatically retrieve relevant
      memories at the start of each task, store key learnings when tasks complete,
      and capture session state before context is lost. Use the mem0 MCP tools
      (add_memory, search_memories, get_memories, etc.) for all memory operations.
    ---
    
    # Mem0 Memory Protocol for Codex
    
    You have access to persistent memory via the mem0 MCP tools. Follow this protocol to maintain context across sessions.
    
    ## On every new task
    
    1. Call `search_memories` with a query related to the current task or project to load relevant context.
    2. Review returned memories to understand what has been learned in prior sessions.
    3. If appropriate, call `get_memories` to browse all stored memories for this user.
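    
    A minimal sketch of that retrieval step (illustrative tool-call shapes and query strings; exact argument names come from the mem0 MCP server you have configured):
    
    ```
    search_memories(query="payment-service refactor decisions")
    search_memories(query="user coding style preferences")
    get_memories()   # optional: browse everything stored for this user
    ```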
    
    ## After completing significant work
    
    Extract key learnings and store them using the `add_memory` tool:
    
    - **Decisions made** -> Include metadata `{"type": "decision"}`
    - **Strategies that worked** -> Include metadata `{"type": "task_learning"}`
    - **Failed approaches** -> Include metadata `{"type": "anti_pattern"}`
    - **User preferences observed** -> Include metadata `{"type": "user_preference"}`
    - **Environment/setup discoveries** -> Include metadata `{"type": "environmental"}`
    - **Conventions established** -> Include metadata `{"type": "convention"}`
    
    Memories can be as detailed as needed -- include full context, reasoning, code snippets, file paths, and examples. Longer, searchable memories are more valuable than vague one-liners.
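    
    For example, a decision learned during the task might be stored like this (hypothetical content; the `type` values are the ones listed above):
    
    ```
    add_memory(
      text="Decision: use pnpm workspaces for the monorepo because npm link kept breaking peer dependencies. Applies to the web repo.",
      metadata={"type": "decision"}
    )
    ```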
    
    ## Before losing context
    
    If context is about to be compacted or the session is ending, store a comprehensive session summary:
    
    ```
    ## Session Summary
    
    ### User's Goal
    [What the user originally asked for]
    
    ### What Was Accomplished
    [Numbered list of tasks completed]
    
    ### Key Decisions Made
    [Architectural choices, trade-offs discussed]
    
    ### Files Created or Modified
    [Important file paths with what changed]
    
    ### Current State
    [What is in progress, pending items, next steps]
    ```
    
    Include metadata: `{"type": "session_state"}`
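    
    A sketch of storing that summary (the template above goes in the memory text; placeholder content elided):
    
    ```
    add_memory(
      text="## Session Summary\n### User's Goal\n...\n### Current State\n...",
      metadata={"type": "session_state"}
    )
    ```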
    
    ## Memory hygiene
    
    - Do NOT write to MEMORY.md or any file-based memory. Use mem0 MCP tools exclusively.
    - Only store genuinely useful learnings. Skip trivial interactions.
    - Use specific, searchable language in memory content.
    
  • openclaw/skills/memory-dream/SKILL.md (skill, 5883 bytes)
    ---
    name: memory-dream
    description: >
      Memory consolidation protocol. Reviews all stored memories, merges duplicates,
      removes noise and credentials, rewrites unclear entries, and enforces TTL expiration.
      Use when the user asks to clean up, consolidate, or review their memories.
      Also triggers automatically after sufficient activity (configurable).
    user-invocable: true
    metadata:
      {"openclaw": {"injected": true, "emoji": "πŸ’€", "requires": {"env": ["MEM0_API_KEY", "OPENAI_API_KEY", "ANTHROPIC_API_KEY"], "bins": []}}}
    ---
    
    # Memory Consolidation
    
    You are performing a memory consolidation pass. Your goal is to review all stored memories for this user and improve their overall quality. Think of this as compressing raw observations into clean, durable knowledge.
    
    ## Available Tools
    
    ### memory_search
    Semantic search across stored memories.
    - `query` (required): search query
    - `limit`: max results
    - `userId`, `agentId`: scope overrides
    - `scope`: `"all"` (default), `"session"`, or `"long-term"`
    - `categories`: filter by category array
    
    ### memory_add
    Store new facts in long-term memory.
    - `facts` (required): array of facts -- ALL must share the same category
    - `category`: `"identity"`, `"preference"`, `"decision"`, `"rule"`, `"project"`, `"configuration"`, `"technical"`, `"relationship"`
    - `importance`: 0.0-1.0
    
    ### memory_get
    Retrieve a single memory by ID.
    - `memoryId` (required): the memory ID
    
    ### memory_list
    List all stored memories for a user or agent.
    - `userId`, `agentId`: scope overrides
    - `scope`: `"all"` (default), `"session"`, or `"long-term"`
    
    ### memory_update
    Update an existing memory's text in place. Atomic and preserves edit history.
    - `memoryId` (required): the memory ID to update
    - `text` (required): the new text (replaces old)
    
    ### memory_delete
    Delete memories by ID, query, or bulk.
    - `memoryId`: specific memory ID to delete
    - `all`: delete ALL memories (requires `confirm: true`)
    - `userId`, `agentId`: scope overrides
    
    ### memory_event_list
    List recent background processing events (platform mode only).
    
    ### memory_event_status
    Get status of a specific background event.
    - `event_id` (required): the event ID to check
    
    Follow these four phases in order. Do not skip phases.
    
    ## Phase 1: Orient
    
    Survey the current memory landscape before making any changes.
    
    1. Call `memory_list` to load all stored memories.
    2. Count memories by category. Note the total.
    3. Identify the oldest and newest memories by their timestamps.
    4. Note any obvious problems visible in the list: duplicates, very short entries, entries without temporal anchors.
    
    Do not modify anything in this phase. The goal is to understand what you are working with.
    
    ## Phase 2: Gather Targets
    
    Identify which memories need action. Use the tools to investigate.
    
    **Search for recent additions:**
    Call `memory_search` with a `created_at` filter to find memories added since the last consolidation. These are the most likely to need merging or cleanup.
    
    **Classify each target into one of these actions:**
    - DELETE: contains credentials, expired by TTL, pure noise, raw tool output, standalone timestamps
    - MERGE: two or more memories express the same fact in different words, or a series tracks incremental changes to the same entity
    - REWRITE: vague, missing temporal anchor, uses first person instead of third, wrong category, overly verbose
    
    ## Phase 3: Consolidate
    
    Execute the actions identified in Phase 2. Work in this priority order:
    
    ### 3a. Delete dangerous and expired entries
    
    Delete immediately using `memory_delete`:
    - Credentials, API keys, tokens, passwords, secrets (matching known credential prefixes and auth patterns injected by the plugin at runtime)
    - Pure timestamps with no context
    - Raw tool output stored as memory
    - Heartbeat or cron execution records
    - Generic acknowledgments stored as memory ("ok", "got it")
    - Operational memories older than 7 days
    - Project memories older than 90 days
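    
    For example (placeholder IDs):
    
    ```
    memory_delete(memoryId: "mem-102")   # stored API token -- credential, delete on sight
    memory_delete(memoryId: "mem-215")   # "HEARTBEAT_OK" cron record -- pure noise
    memory_delete(memoryId: "mem-088")   # operational memory from 12 days ago -- past the 7-day TTL
    ```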
    
    ### 3b. Merge duplicates
    
    When two or more memories express the same fact:
    1. Pick the most complete version as the base
    2. Call `memory_update` on the best version to incorporate missing details from the others
    3. Call `memory_delete` on the redundant entries
    
    `memory_update` is preferred over forget-then-store because it is atomic and preserves edit history.
    
    When merging, follow these rules:
    - Keep the user's original words for opinions and preferences
    - Preserve temporal anchors from both versions
    - Do not exceed 50 words in the merged result
    - The merged memory must be self-contained (understandable without the deleted ones)
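    
    A sketch of one merge, using the tool notation above (placeholder IDs and text):
    
    ```
    memory_update(memoryId: "mem-031", text: "User prefers Cursor over VS Code for AI-assisted coding because of inline completions (as of 2026-03-30)")
    memory_delete(memoryId: "mem-047")   # expressed the same preference in different words
    ```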
    
    ### 3c. Rewrite unclear entries
    
    When a memory needs improvement but is not a duplicate:
    1. Call `memory_update` with the improved text
    
    Rewrite when:
    - Memory uses first person ("I prefer") instead of third ("User prefers")
    - Memory lacks a temporal anchor for time-sensitive information
    - Memory is vague ("likes python") and can be made specific ("User prefers Python for backend development")
    - Memory has the wrong category assignment
    - Memory is over 50 words and can be compressed without losing information
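    
    For example, fixing a vague first-person entry (placeholder ID; target wording follows the rules above):
    
    ```
    # Before: "I like python"
    memory_update(memoryId: "mem-112", text: "User prefers Python for backend development")
    ```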
    
    ## Phase 4: Report
    
    After completing all operations, summarize what you did:
    
    ```
    Consolidation complete.
    - Reviewed: [total count]
    - Deleted (credentials/secrets): [count]
    - Deleted (expired/stale): [count]
    - Merged: [count] groups into [count] memories
    - Rewritten: [count]
    - Final count: [total remaining]
    - Issues found: [any notable problems or observations]
    ```
    
    ## Quality Targets
    
    After consolidation, the memory store should have:
    - Zero memories containing credentials or secrets
    - Zero duplicate memories (same fact in different words)
    - All project and operational memories have temporal anchors ("As of YYYY-MM-DD")
    - All memories use third person voice
    - All memories are correctly categorized
    - Each memory is 15-50 words, self-contained, and atomic (one fact per memory)
    
  • openclaw/skills/memory-triage/SKILL.md (skill, 20379 bytes)
    ---
    name: memory-triage
    description: >
      Persistent long-term memory protocol powered by mem0.
      Evaluate conversations for durable facts worth storing via memory_add.
      Handles identity, preferences, decisions, configurations, rules,
      projects, and relationships. Loaded by the openclaw-mem0 plugin when skills mode is active.
    user-invocable: false
    metadata:
      {"openclaw": {"always": false, "injected": true, "emoji": "🧠", "requires": {"env": ["MEM0_API_KEY", "OPENAI_API_KEY", "ANTHROPIC_API_KEY"], "bins": []}}}
    ---
    
    # Memory Protocol
    
    You have persistent long-term memory powered by mem0. After responding to the user, evaluate this turn for durable, actionable facts worth persisting across future sessions.
    
    Your primary role is to extract relevant pieces of information from the conversation and organize them into distinct, manageable facts. This allows for easy retrieval and personalization in future interactions.
    
    **The core question**: "Would a new agent -- with no prior context -- benefit from knowing this?" If no → do nothing. Most turns produce zero memory operations. That is correct and expected.
    
    ## Available Tools
    
    ### memory_search
    Semantic search across stored memories.
    - `query` (required): search query
    - `limit`: max results (default: configured topK)
    - `userId`, `agentId`: scope overrides
    - `scope`: `"all"` (default), `"session"`, or `"long-term"`
    - `categories`: filter by category array
    - `filters`: advanced filter object
    
    ### memory_add
    Store new facts in long-term memory.
    - `facts` (required): array of facts to store -- ALL must share the same category
    - `text`: alternative single-fact string
    - `category`: `"identity"`, `"preference"`, `"decision"`, `"rule"`, `"project"`, `"configuration"`, `"technical"`, `"relationship"`
    - `importance`: 0.0-1.0 (omit for category default)
    - `userId`, `agentId`: scope overrides
    - `metadata`: additional key-value metadata
    - `longTerm`: true (default) for persistent, false for session-scoped
    
    ### memory_get
    Retrieve a single memory by ID.
    - `memoryId` (required): the memory ID
    
    ### memory_list
    List all stored memories for a user or agent.
    - `userId`, `agentId`: scope overrides
    - `scope`: `"all"` (default), `"session"`, or `"long-term"`
    
    ### memory_update
    Update an existing memory's text in place. Atomic and preserves edit history.
    - `memoryId` (required): the memory ID to update
    - `text` (required): the new text (replaces old)
    
    ### memory_delete
    Delete memories by ID, query, or bulk.
    - `memoryId`: specific memory ID to delete
    - `query`: search query to find and delete matching memories
    - `all`: delete ALL memories (requires `confirm: true`)
    - `confirm`: safety gate for bulk operations
    - `userId`, `agentId`: scope overrides
    
    ### memory_event_list
    List recent background processing events (platform mode only).
    
    ### memory_event_status
    Get status of a specific background event.
    - `event_id` (required): the event ID to check
    
    ## Decision Gate
    
    Every candidate fact must pass ALL four gates:
    
    **Gate 1 -- FUTURE UTILITY**: Would this matter to a new agent days or weeks from now?
      - Pass: identity, configurations, standing rules, preferences with rationale, decisions, project milestones, relationships, important personal details
      - Fail: tool outputs, status checks, one-time commands, transient state, small talk, generic responses → SKIP
    
    **Gate 2 -- NOVELTY**: Check your recalled memories below -- is this already known?
      - Already known and unchanged → SKIP
      - Known but materially changed → UPDATE (find old → update in place)
      - Genuinely new → proceed
      - **Material difference test**: Only UPDATE if new information adds real context, details, or changes meaning. Cosmetic differences (synonyms, rephrasing, punctuation) are NOT updates. "Loves daily walks" vs "enjoys daily walks" = no material change = SKIP.
    
    **Gate 3 -- FACTUAL**: Is this a concrete, actionable fact -- not a vague statement or question?
      - Pass: specific names, configs, choices with rationale, deadlines, system states, plans, preferences
      - Fail: vague impressions, questions, small talk, acknowledgments, generic assistant responses ("Sure, I can help") → SKIP
    
    **Gate 4 -- SAFE**: Does this contain ANY credential, secret, or token?
      - Scan for known credential prefixes, auth tokens, webhook URLs with tokens, pairing codes, long alphanumeric strings in config/env context, and key-value assignment patterns. The plugin injects the full pattern list at runtime.
      - ANY match → NEVER STORE the value. Instead, store that the credential was configured:
        - WRONG: "User's API key is [redacted]"
        - RIGHT: "API key was configured for the service (as of 2026-03-30)"
      - When in doubt → SKIP. No exceptions.
    
    All four gates must pass. If any fails → do nothing.
    
    ## What to Extract (Priority Order)
    
    ### 1. Configuration & System State (importance: 0.95 | permanent)
    Tools/services configured, installed, or removed (with versions/dates). Model assignments for agents. Cron schedules, automation pipelines, deployment configs. Architecture decisions. Specific identifiers: file paths, sheet IDs, channel IDs, machine specs.
    ```
    "User's Tailscale machine 'mac' (IP 100.71.135.41) is configured under beau@rizedigital.io (as of 2026-02-20)"
    "User's executive orchestrator agent Quin runs on Claude Opus, heartbeat every 10 min"
    ```
    
    ### 2. Standing Rules & Policies (importance: 0.90 | permanent)
    Explicit user directives about behavior. Workflow policies. Security constraints, permission boundaries. Always capture the reason.
    ```
    "User rule: never create accounts without explicit user consent. Reason: security policy"
    "User rule: each agent must review model selection before completing a task"
    ```
    
    ### 3. Identity & Demographics (importance: 0.95 | permanent)
    Name, location, timezone, language preferences. Occupation, employer, job role, industry. Keep related facts together in a single memory.
    ```
    "User is Chris, senior platform engineer at Mem0, based in EST timezone"
    ```
    
    ### 4. Preferences & Opinions (importance: 0.85 | permanent)
    Communication style, tool preferences, technology opinions. Always capture the WHY when stated. Preserve the user's exact words for feelings and opinions.
    ```
    "User prefers Cursor over VS Code for AI-assisted coding because of inline completions"
    "User prefers terse responses with no trailing summaries"
    ```
    
    ### 5. Goals, Projects & Milestones (importance: 0.75 | expires: 90 days)
    Active projects with name, description, current status. Completed milestones with dates. Deadlines, roadmaps, progress.
    ```
    "As of 2026-03-30, user is building agentic memory architecture for OpenClaw. Status: active development, team demo planned early April"
    "ElevenLabs voice integration fully configured as of 2026-02-20"
    ```
    
    ### 6. Technical Context (importance: 0.80 | permanent)
    Tech stack, development environment, agent ecosystem structure (names, roles, relationships). Skill levels.
    ```
    "User's stack: Python/Django backend, Next.js 15 frontend, PostgreSQL with pgvector, deployed on EKS"
    ```
    
    ### 7. Relationships & People (importance: 0.75 | permanent)
    Names and roles of people mentioned. Team structure, key contacts.
    ```
    "Deshraj owns the frontend, Taranjeet owns the backend platform at Mem0"
    ```
    
    ### 8. Decisions & Lessons (importance: 0.80 | permanent)
    Important decisions made with reasoning. Lessons learned. Strategies that worked or failed.
    ```
    "As of 2026-03-30, user decided to use infer=false for all skill-based memory storage β€” agent extracts, mem0 stores directly without re-extraction"
    ```
    
    ## CRITICAL: Memory Completeness and Self-Containment
    
    Each memory you store must be a **self-contained, independently understandable fact**. This is the single most important quality rule.
    
    ### Entity-Based Grouping
    
    **ALWAYS group all information about the same entity, concept, event, or subject into a SINGLE unified memory.** If multiple pieces of information refer to the same entity (e.g., a conference, a project, a person, a system), they MUST be combined into one comprehensive memory.
    
    **DO NOT split requirements, specifications, or details about the same entity across multiple memory_add calls.** Even if information is phrased differently ("Budget for X", "X requires Y", "X needs Z"), if they all refer to the same entity, combine ALL into ONE call.
    
    **WRONG** -- fragmented into separate facts:
    ```
    memory_add(facts: ["Conference requires at least 4 breakout rooms", "Conference requires vegan options", "Conference requires parking"], category: "project")
    ```
    
    **CORRECT** -- grouped into one self-contained fact:
    ```
    memory_add(facts: ["Conference requires at least 4 breakout rooms for 30-40 people each, robust vegan and vegetarian options with allergen-free alternatives, parking for at least 100 vehicles, venue within walking distance of transit"], category: "project")
    ```
    
    **WRONG** -- same entity split into separate facts:
    ```
    memory_add(facts: ["Budget is $150-175 per person for TechForward event", "TechForward event requires strong WiFi", "TechForward event requires hybrid capabilities"], category: "project")
    ```
    
    **CORRECT** -- combined into one fact about TechForward:
    ```
    memory_add(facts: ["TechForward event has a budget of $150-175 per person per day including venue rental, standard AV setup, and catering. Requires strong WiFi and hybrid event capabilities for remote attendees."], category: "project")
    ```
    
    **Only create separate memories when information refers to genuinely different entities, concepts, or unrelated topics** (e.g., "TechForward event" vs "Marketing campaign" are separate).
    
    ### No Pronouns β€” Use Specific Names
    
    DO NOT create memories that rely on pronouns (they, them, he, she, it). Always use specific names and entities.
    
    - **WRONG**: "They work at Google" and "They live in San Francisco"
    - **CORRECT**: "John works at Google and lives in San Francisco"
    
    ### No Inference
    
    Do not infer unstated attributes (gender, age, ethnicity, beliefs) from names or context.
    - **WRONG**: "Kiran's sister visited him last week"
    - **CORRECT**: "Kiran's sister visited last week"
    
    ### No Assistant Attribution
    
    Do not store characterizations from assistant messages (e.g., "user seems excited") unless the user explicitly confirmed them.
    
    ## How to Store
    
    Use `memory_add` with the `facts` array. All facts in one call MUST share the same category because category determines retention policy (TTL, immutability).
    
    ```
    memory_add(
      facts: ["fact one in third person", "fact two in third person"],
      category: "identity"
    )
    ```
    
    If a turn produces facts in different categories, make one call per category:
    
    ```
    memory_add(facts: ["User is Alex, senior engineer at Stripe, PST timezone"], category: "identity")
    memory_add(facts: ["As of 2026-04-01, user decided to migrate from Postgres to CockroachDB"], category: "decision")
    ```
    
    Categories: `identity`, `configuration`, `rule`, `preference`, `decision`, `technical`, `relationship`, `project`
    
    ### Storage Principles
    
    **15-50 WORDS per fact**: Each fact should be 1-2 sentences. If combining would exceed this, consolidate into key facts rather than creating a paragraph. Distill rather than append.
    
    **OUTCOMES OVER INTENT**: Extract what WAS DONE, not what was requested.
      - GOOD: "Call scripts sheet (ID: 146Qbb...) was updated with truth-based templates"
      - BAD: "User wants to update call scripts"
    
    **TEMPORAL ANCHORING**: Time-sensitive facts MUST include "As of YYYY-MM-DD, ..."
      - If no date available, note "date unknown" rather than omitting.
      - Extract dates from conversation context or the current date.
    
    **PRESERVE USER'S WORDS**: When the user expresses feelings, opinions, or preferences, keep their exact phrasing.
      - GOOD: "User says daily walks with Poppy are the best part of their day"
      - BAD: "User finds emotional significance in walking their dog"
    
    **THIRD PERSON**: "User prefers..." not "I prefer..."
    
    **NO PRONOUNS**: Use specific names and entities. Not "they" or "it."
    
    **PRESERVE LANGUAGE**: If the user speaks Spanish, store in Spanish. Do not translate.
    
    **BATCH BY CATEGORY**: Group all same-category facts into one call. Different categories require separate calls. Most turns need zero or one call.
    
    ### Updating Existing Memories
    
    When a recalled memory needs updating (fact changed, status changed, new detail added):
    1. `memory_search` to find the existing memory
    2. `memory_update` on the memory's ID with the corrected/expanded text
    
    `memory_update` is preferred over delete+add because it is **atomic and preserves edit history**.
    
    **Choose the MORE COMPLETE version.** When both old and new have unique context, COMBINE them into a unified memory using the user's stated words.
    
    **Material difference test**: Only update if the new version adds real information.
      - "User likes Python" β†’ "User prefers Python for backend services because of async support" = material update (added rationale, specificity)
      - "User likes Python" β†’ "User enjoys Python" = NOT material = SKIP
      - When both have unique context, combine: Old "Trip to Paris in September with Jack" + New "User can't wait to visit Eiffel Tower" β†’ "Trip to Paris in September 2025 with friend Jack, user says they can't wait to visit the Eiffel Tower and try authentic French pastries"
    
    **Consolidation**: When a rich new fact encompasses multiple existing memories, `memory_update` the best one to the comprehensive version and `memory_delete` the rest.
      - Old: "User has a dog" + "Dog's name is Poppy" + "User walks dog daily"
      - New: "User has a dog named Poppy and says taking him for walks is the best part of their day"
      - Action: `memory_update` the best version with consolidated text, `memory_delete` the redundant ones
    
    **Temporary vs permanent changes**: A temporary constraint (e.g., injury pausing a hobby) does NOT contradict the underlying preference. Store the constraint as a new memory; don't delete the preference.
      - Old: "User enjoys hiking on weekends"
      - New: "User has temporarily paused hiking due to knee injury"
      - Action: store the new constraint, leave old preference untouched
    
    ## What NEVER to Store
    
    - **Credentials and secrets** -- even embedded in config blocks, setup logs, or tool output. Includes any known credential prefixes, auth tokens, bearer tokens, webhook URLs with tokens, pairing codes, and long alphanumeric strings in config/env contexts. Record that the credential was configured, never the value itself.
    - **Raw tool output** -- bash results, file contents, API responses, logs, diffs, test output. Extract only the durable OUTCOME or ROOT CAUSE.
    - **One-time commands** -- "stop the script", "continue where you left off", "run this"
    - **Acknowledgments and emotional reactions** -- "ok", "sure", "sounds good", "sir", "got it", "thanks", "you're right"
    - **Transient UI/navigation states** -- "user is in admin panel", "relay is attached"
    - **Ephemeral process status** -- "download at 50%", "daemon not running", "still syncing"
    - **Cron heartbeat outputs** -- NO_REPLY, HEARTBEAT_OK, compaction directives
    - **Timestamps as standalone facts** -- "Current time is 3:25 PM" is NEVER worth storing. But DO use timestamps to anchor other facts.
    - **System routing metadata** -- message IDs, sender IDs, channel routing info
    - **Generic small talk** -- no informational content
    - **Raw code snippets** -- capture the intent/decision, not the code itself
    - **Information the user explicitly asks not to remember**
    - **Facts already in recalled memories that haven't materially changed**
    - **Generic assistant responses** -- "Sure, I can help", "How can I assist you?"
    
    ## Worked Examples
    
    ### Example 1: Configuration extraction (entity-grouped)
    ```
    User: "I set up the research agent on Claude Sonnet with a 30-min cron. It checks HackerNews and sends summaries to #research-feed in Slack."
    Agent: [responds helpfully]
    → memory_add(facts: ["User's research agent runs on Claude Sonnet, cron every 30 minutes, monitors HackerNews and posts summaries to Slack #research-feed"], category: "configuration")
    ```
    
    ### Example 2: NOOP -- tool output
    ```
    User: "Run the healthcheck on all services"
    Agent: [executes healthcheck, returns results]
    → No memory operations. Tool output fails Gate 1.
    ```
    
    ### Example 3: NOOP -- already recalled, no material change
    ```
    Recalled: ["User is Chris, senior platform engineer at Mem0"]
    User: "Hey Chris here again"
    → No memory operations. Already known, no material change.
    ```
    
    ### Example 4: Rule with rationale (preserving user's words)
    ```
    User: "Never use Docker for local dev, it ate 40GB of disk last time and my Mac mini only has 256GB"
    → memory_add(facts: ["User rule: avoid Docker for local dev. Reason: ate 40GB of disk on 256GB Mac mini"], category: "rule")
    ```
    
    ### Example 5: UPDATE -- combining contexts from both versions
    ```
    Recalled: ["As of 2026-03-15, user is planning trip to Paris in September with friend Jack"]
    User: "Can't wait for the Paris trip, definitely want to hit the Eiffel Tower and try authentic French pastries"
    → memory_search("Paris trip planning")
    → memory_update(memoryId: "mem-id-of-old", text: "As of 2026-03-30, user is planning trip to Paris in September 2025 with friend Jack, says they can't wait to visit the Eiffel Tower and try authentic French pastries")
    ```
    
    ### Example 6: Outcome over intent
    ```
    User: "Update the call scripts sheet with the new truth-based templates"
    Agent: [updates the sheet successfully]
    → memory_add(facts: ["Call scripts sheet (ID: 146Qbb...) was updated with truth-based templates (as of 2026-03-30)"], category: "configuration")
    ```
    
    ### Example 7: Credential -- store the fact, not the value
    ```
    User: "Use this API key for the new service: [credential value]"
    Agent: [configures the service]
    → memory_add(facts: ["API key was configured for the new service (as of 2026-03-30)"], category: "configuration")
    ```
    
    ### Example 8: NOOP -- cosmetic difference, not material
    ```
    Recalled: ["User has a dog named Poppy and enjoys their daily walks together"]
    User: "Yeah me and Poppy love our daily walks"
    → No memory operations. Semantically equivalent. No new context.
    ```
    
    ### Example 9: Entity grouping -- single call, not fragmented
    ```
    User: "The budget for the offsite is $200 per head. We need a venue with WiFi, parking for 50 cars, and a projector."
    → memory_add(facts: ["Team offsite budget is $200 per person. Venue requirements: WiFi, parking for 50 vehicles, and projector setup."], category: "project")
    All details about the same entity (offsite) go in one fact, one call.
    ```
    
    ### Example 10: Temporary constraint -- don't delete the preference
    ```
    Recalled: ["User enjoys hiking on weekends and finds it therapeutic"]
    User: "I hurt my knee last week, can't hike for a while"
    → memory_add(facts: ["As of 2026-03-30, user has temporarily paused hiking due to knee injury"], category: "project")
    DO NOT delete the hiking preference. It is temporarily paused, not contradicted.
    ```
    
    ### Example 11: Mixed categories in one turn -- separate calls
    ```
    User: "I'm Sarah, I work at Cloudflare. I just decided to switch our monitoring from Datadog to Grafana because of cost."
    → memory_add(facts: ["User is Sarah, works at Cloudflare"], category: "identity")
    → memory_add(facts: ["As of 2026-03-30, user decided to switch monitoring from Datadog to Grafana due to cost"], category: "decision")
    Two calls because identity and decision have different retention policies.
    ```
    
    ### Example 12: NOOP -- generic greeting
    ```
    User: "Hi"
    Agent: "Hello! How can I help?"
    → No memory operations. No extractable facts.
    ```
    
    ### Example 13: Consolidation -- rich memory absorbs atomic ones
    ```
    Recalled: ["User has a dog", "Dog's name is Poppy", "User walks dog daily"]
    User: "Poppy learned fetch! Our walks are even better now, honestly it's the best part of my day"
    → memory_search("dog Poppy walks") → find all three old memory IDs
    → memory_update(memoryId: "id-1", text: "User has a dog named Poppy and says taking him for walks is the best part of their day. Poppy recently learned fetch, making walks more enjoyable.")
    → memory_delete(memoryId: "id-2"), memory_delete(memoryId: "id-3")
    ```
    
    
  • skills/mem0-cli/SKILL.md (skill, 4981 bytes)
    ---
    name: mem0-cli
    description: >
      Mem0 CLI -- the command-line interface for mem0 memory operations.
      TRIGGER when: user mentions "mem0 cli", "mem0 command line", "@mem0/cli",
      "mem0-cli", "pip install mem0-cli", "npm install -g @mem0/cli", or is running
      mem0 commands in a terminal/shell (mem0 add, mem0 search, mem0 list, mem0 get,
      mem0 init, mem0 config, mem0 import). Also triggers when query includes CLI flags
      like --user-id, --output, --json, --agent, or describes bash/zsh/terminal/shell usage.
      DO NOT TRIGGER when: user asks about programmatic SDK integration in Python/TS
      code (use mem0 skill), or Vercel AI SDK provider (use mem0-vercel-ai-sdk skill).
    license: Apache-2.0
    metadata:
      author: mem0ai
      version: "1.1.0"
      category: ai-memory
      tags: "cli, terminal, memory, ai, command-line"
    compatibility: Node.js 18+ (npm install -g @mem0/cli) or Python 3.10+ (pip install mem0-cli), MEM0_API_KEY env var
    ---
    
    # Mem0 CLI
    
    The official command-line interface for the Mem0 memory platform. Add, search, list, update, and delete memories from the terminal -- for developers, AI agents, and CI/CD pipelines.
    
    ## Install
    
    **Node.js (npm):**
    ```bash
    npm install -g @mem0/cli
    ```
    
    **Python (pip):**
    ```bash
    pip install mem0-cli
    ```
    
    Both packages install a `mem0` binary with identical commands, options, and output formats.
    
    ## Setup
    
    **Interactive wizard:**
    ```bash
    mem0 init
    ```
    
    **Or set the environment variable directly:**
    ```bash
    export MEM0_API_KEY="m0-xxx"
    ```
    
    Get an API key at: https://app.mem0.ai/dashboard/api-keys?utm_source=oss&utm_medium=skill-mem0-cli
    
    ## Quick Reference
    
    ### Add a memory
    ```bash
    mem0 add "I prefer dark mode" --user-id alice
    ```
    
    ### Search memories
    ```bash
    mem0 search "preferences" --user-id alice
    ```
    
    ### List all memories for a user
    ```bash
    mem0 list --user-id alice
    ```
    
    ### Get a specific memory
    ```bash
    mem0 get <memory-id>
    ```
    
    ### Update a memory
    ```bash
    mem0 update <memory-id> "new text"
    ```
    
    ### Delete a single memory
    ```bash
    mem0 delete <memory-id>
    ```
    
    ### Delete all memories for a user
    ```bash
    mem0 delete --all --user-id alice --force
    ```
    
    ## Agent / JSON Mode
    
    Use `--json` or `--agent` to get structured output suitable for LLM consumption. Every command wraps its response in a standard envelope:
    
    ```json
    {
      "status": "success",
      "command": "search",
      "duration_ms": 245,
      "scope": { "user_id": "alice" },
      "count": 3,
      "error": null,
      "data": [
        { "id": "mem-abc", "memory": "User prefers dark mode", "score": 0.92 }
      ]
    }
    ```
    
    On error:
    ```json
    {
      "status": "error",
      "command": "search",
      "error": "Authentication failed. Your API key may be invalid or expired.",
      "data": null
    }
    ```
    
    The `--agent` flag is an alias for `--json`. Both write spinners and progress to stderr so stdout is always clean, parseable JSON.
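    
    One way to consume that envelope from a script -- a minimal Python sketch, assuming the CLI is installed and `MEM0_API_KEY` is exported:
    
    ```python
    import json
    import subprocess
    
    # Spinners and progress go to stderr; stdout is clean, parseable JSON.
    proc = subprocess.run(
        ["mem0", "search", "preferences", "--user-id", "alice", "--json"],
        capture_output=True,
        text=True,
    )
    envelope = json.loads(proc.stdout)
    if envelope["status"] == "success":
        for mem in envelope["data"]:
            print(mem["id"], mem["memory"], mem["score"])
    else:
        print("mem0 error:", envelope["error"])
    ```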
    
    ## Node and Python Parity
    
    Both the Node.js (`@mem0/cli`) and Python (`mem0-cli`) CLIs are implemented from the same specification (`cli-spec.json`). They share:
    
    - Identical command names, arguments, and flags
    - Identical output formats (text, json, table, quiet)
    - Identical entity ID resolution, graph tri-state, filter building
    - Identical error messages and exit codes
    
    Choose whichever runtime you already have installed. The behavior is the same.
    
    ## Common Edge Cases
    
    - **Async processing delay:** After `mem0 add`, memories process asynchronously. Wait 2-3 seconds before searching for newly added content. Use `mem0 event list` to check processing status.
    - **`--all` vs `--entity` delete modes:** `mem0 delete --all -u alice` deletes all memories for user alice. `mem0 delete --entity -u alice` deletes the entity itself AND all its memories (cascade). These are mutually exclusive modes.
    - **Entity ID resolution:** If you pass any explicit scope flag (e.g. `--user-id`), the CLI uses ONLY the explicit IDs and ignores config defaults. If no scope flags are given, all configured defaults apply.
    - **Stdin detection:** When no text argument is provided and input is piped (not a TTY), the CLI reads from stdin. Works with `add`, `search`, and `update`.
    
    ## References
    
    Load these on demand for deeper detail:
    
    | Topic | File |
    |-------|------|
    | Command reference (all commands, flags, options, examples) | [references/command-reference.md](references/command-reference.md) |
    | Configuration (config file, env vars, precedence, init wizard) | [references/configuration.md](references/configuration.md) |
    | Workflows (piping, scripting, CI/CD, agent mode recipes) | [references/workflows.md](references/workflows.md) |
    
    ## Related Mem0 Skills
    
    | Skill | When to use | Link |
    |-------|-------------|------|
    | mem0 | Python/TypeScript SDK, REST API, framework integrations | [local](../mem0/SKILL.md) / [GitHub](https://github.com/mem0ai/mem0/tree/main/skills/mem0) |
    | mem0-vercel-ai-sdk | Vercel AI SDK provider with automatic memory | [local](../mem0-vercel-ai-sdk/SKILL.md) / [GitHub](https://github.com/mem0ai/mem0/tree/main/skills/mem0-vercel-ai-sdk) |
    
  • mem0-plugin/skills/mem0-mcp/SKILL.md (skill, 5553 bytes)
    ---
    name: mem0-mcp
    description: >
      Mem0 memory protocol for agents using the mem0 MCP tools (Claude Code, Cursor,
      Codex, and any other MCP-aware runtime). Decide deliberately when memory context
      would help, run targeted searches with metadata filters when it would, and store
      key learnings as work completes. Use the mem0 MCP tools (add_memory,
      search_memories, get_memories, etc.) for all memory operations.
    ---
    
    # Mem0 MCP Memory Protocol
    
    You have access to persistent memory via the mem0 MCP tools. Follow this protocol to maintain context across sessions.
    
    ## On every new task
    
    Decide whether persistent memory context would improve your response, then act accordingly. Don't search by default -- search deliberately.
    
    ### Decide: search or skip?
    
    **Search WHEN** the user:
    - references past work, decisions, or things "we" built
    - asks "how should we...", "best way to...", or any decision-style question
    - hits an error, bug, or asks for debugging help
    - requests work that touches their stack, tools, conventions, or preferences
    - starts a non-trivial task in a known project
    
    **Skip WHEN:**
    - the prompt is an acknowledgement or continuation ("ok", "thanks", "continue")
    - the user is *stating* new info -- that's a write trigger (`add_memory`), not a search
    - it's a pure syntax / factual question answerable from general knowledge
    - you already searched this scope earlier in the turn
    
    Empty results are normal. Proceed without context -- they don't mean the system is broken.
    
    ### How to search well
    
    When you do search, run **2-4 parallel** `search_memories` calls at different angles instead of one query echoing the user's prompt.
    
    **Query phrasing:**
    - Use **nouns**, not sentences. `"auth module decisions"` beats `"what did we decide about auth"`.
    - Strip conversational filler. *"remember when we picked Postgres?"* → search `"Postgres choice"`.
    - Use entity names, not pronouns. Resolve "that thing" from recent context first.
    - Don't search on meta-questions ("what was that?") -- use recent context or `get_memories` ordered by `created_at`.
    
    **Metadata filters** match the same `type` values written under "After completing significant work" below.
    
    Two rules from the v2 filter spec:
    
    1. The root **must** be a logical operator (`AND` / `OR` / `NOT`) with an array. A bare `{"user_id": "..."}` won't work.
    2. Metadata uses a **nested** object, not a dotted key. `{"metadata": {"type": "decision"}}`, never `{"metadata.type": "decision"}`. Only top-level metadata keys are filterable.
    
    Combine `user_id` with one metadata clause per call:
    
    | `metadata.type` clause | Use for |
    |--------|---------|
    | `{"metadata": {"type": "decision"}}` | design / architecture / "how should we" questions |
    | `{"metadata": {"type": "anti_pattern"}}` | debugging, error handling, things that failed before |
    | `{"metadata": {"type": "user_preference"}}` | tooling, stack, style β€” always include for code work |
    | `{"metadata": {"type": "convention"}}` | established patterns in this project |
    
    Full filter (replace `<your_user_id>` with the active user_id from your runtime):
    ```python
    filters={"AND": [{"user_id": "<your_user_id>"}, {"metadata": {"type": "decision"}}]}
    ```
    
    ### Worked example
    
    User asks: *"Refactor the auth module to use JWT."*
    
    Don't:
    ```python
    search_memories(query="Refactor the auth module to use JWT")
    # Hits whatever shares words. Misses prior decisions and preferences.
    ```
    
    Do (parallel -- substitute the active `user_id` for `<your_user_id>`):
    ```python
    search_memories(query="auth module decisions",
                    filters={"AND": [{"user_id": "<your_user_id>"}, {"metadata": {"type": "decision"}}]})
    search_memories(query="JWT",
                    filters={"AND": [{"user_id": "<your_user_id>"}]})
    search_memories(query="auth refactor failures",
                    filters={"AND": [{"user_id": "<your_user_id>"}, {"metadata": {"type": "anti_pattern"}}]})
    search_memories(query="auth",
                    filters={"AND": [{"user_id": "<your_user_id>"}, {"metadata": {"type": "user_preference"}}]})
    ```
    
    ## After completing significant work
    
    Extract key learnings and store them using the `add_memory` tool:
    
    - **Decisions made** -> Include metadata `{"type": "decision"}`
    - **Strategies that worked** -> Include metadata `{"type": "task_learning"}`
    - **Failed approaches** -> Include metadata `{"type": "anti_pattern"}`
    - **User preferences observed** -> Include metadata `{"type": "user_preference"}`
    - **Environment/setup discoveries** -> Include metadata `{"type": "environmental"}`
    - **Conventions established** -> Include metadata `{"type": "convention"}`
    
    Memories can be as detailed as needed -- include full context, reasoning, code snippets, file paths, and examples. Longer, searchable memories are more valuable than vague one-liners.
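    
    For instance, a failed approach uncovered while debugging might be stored as (hypothetical content):
    
    ```
    add_memory(
      text="Anti-pattern: retrying the auth webhook without idempotency keys caused duplicate user records. Use an idempotency key per event.",
      metadata={"type": "anti_pattern"}
    )
    ```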
    
    ## Before losing context
    
    If context is about to be compacted or the session is ending, store a comprehensive session summary:
    
    ```
    ## Session Summary
    
    ### User's Goal
    [What the user originally asked for]
    
    ### What Was Accomplished
    [Numbered list of tasks completed]
    
    ### Key Decisions Made
    [Architectural choices, trade-offs discussed]
    
    ### Files Created or Modified
    [Important file paths with what changed]
    
    ### Current State
    [What is in progress, pending items, next steps]
    ```
    
    Include metadata: `{"type": "session_state"}`
    
    ## Memory hygiene
    
    - Do NOT write to MEMORY.md or any file-based memory. Use mem0 MCP tools exclusively.
    - Only store genuinely useful learnings. Skip trivial interactions.
    - Use specific, searchable language in memory content.
    
  • mem0-plugin/skills/mem0/SKILL.md (skill, 7243 bytes)
    ---
    name: mem0
    description: >
      Mem0 Platform SDK for adding persistent memory to AI applications.
      TRIGGER when: user mentions "mem0", "MemoryClient", "memory layer",
      "remember user preferences", "persistent context", "personalization",
      or needs to add long-term memory to chatbots, agents, or AI apps.
      Covers Python SDK (mem0ai), TypeScript SDK (mem0ai), and framework integrations
      (LangChain, CrewAI, OpenAI Agents SDK, Pipecat, LlamaIndex, AutoGen, LangGraph).
      Also covers the open-source self-hosted Memory class.
      This is the DEFAULT mem0 skill for ambiguous queries.
      DO NOT TRIGGER when: user asks about CLI commands, terminal usage, or shell
      scripts (use mem0-cli), or Vercel AI SDK / @mem0/vercel-ai-provider / createMem0
      (use mem0-vercel-ai-sdk).
    license: Apache-2.0
    metadata:
      author: mem0ai
      version: "0.1.1"
      category: ai-memory
      tags: "memory, personalization, ai, python, typescript, vector-search"
    compatibility: Requires Python 3.10+ or Node.js 18+, pip install mem0ai or npm install mem0ai, MEM0_API_KEY env var (Platform), and internet access to api.mem0.ai. Uses Mem0 v3 API.
    ---
    
    # Mem0 Platform Integration
    
    > **Skill Graph:** This skill is part of the Mem0 skill graph:
    > - **mem0** (this skill) -- Platform Client SDK + OSS (Python + TypeScript)
    > - **[mem0-cli](https://github.com/mem0ai/mem0/tree/main/skills/mem0-cli)** -- Command-line interface
    > - **[mem0-vercel-ai-sdk](https://github.com/mem0ai/mem0/tree/main/skills/mem0-vercel-ai-sdk)** -- Vercel AI SDK provider
    
    Mem0 is a managed memory layer for AI applications. It stores, retrieves, and manages user memories via API -- no infrastructure to deploy. For self-hosted usage, see the OSS section in the client references below.
    
    ## Step 1: Install and authenticate
    
    **Python:**
    ```bash
    pip install mem0ai
    export MEM0_API_KEY="m0-your-api-key"
    ```
    
    **TypeScript/JavaScript:**
    ```bash
    npm install mem0ai
    export MEM0_API_KEY="m0-your-api-key"
    ```
    
    Get an API key at: https://app.mem0.ai/dashboard/api-keys?utm_source=oss&utm_medium=mem0-plugin-skill
    
    ## Step 2: Initialize the client
    
    **Python:**
    ```python
    from mem0 import MemoryClient
    client = MemoryClient(api_key="m0-xxx")
    ```
    
    **TypeScript:**
    ```typescript
    import MemoryClient from 'mem0ai';
    const client = new MemoryClient({ apiKey: 'm0-xxx' });
    ```
    
    For async Python, use `AsyncMemoryClient`.
    
    ## Step 3: Core operations
    
    Every Mem0 integration follows the same pattern: **retrieve β†’ generate β†’ store**.
    
    ### Add memories
    ```python
    messages = [
        {"role": "user", "content": "I'm a vegetarian and allergic to nuts."},
        {"role": "assistant", "content": "Got it! I'll remember that."}
    ]
    client.add(messages, user_id="alice")
    ```
    
    ### Search memories
    ```python
    results = client.search("dietary preferences", filters={"user_id": "alice"})
    for mem in results.get("results", []):
        print(mem["memory"])
    ```
    
    ### Get all memories
    ```python
    all_memories = client.get_all(filters={"user_id": "alice"})
    ```
    
    ### Update a memory
    ```python
    client.update("memory-uuid", text="Updated: vegetarian, nut allergy, prefers organic")
    ```
    
    ### Delete a memory
    ```python
    client.delete("memory-uuid")
    client.delete_all(user_id="alice")  # delete all for a user
    ```
    
    ## Common integration pattern
    
    ```python
    from mem0 import MemoryClient
    from openai import OpenAI
    
    mem0 = MemoryClient()
    openai = OpenAI()
    
    def chat(user_input: str, user_id: str) -> str:
        # 1. Retrieve relevant memories
        memories = mem0.search(user_input, filters={"user_id": user_id})
        context = "\n".join([m["memory"] for m in memories.get("results", [])])
    
        # 2. Generate response with memory context
        response = openai.chat.completions.create(
            model="gpt-5-mini",
            messages=[
                {"role": "system", "content": f"User context:\n{context}"},
                {"role": "user", "content": user_input},
            ]
        )
        reply = response.choices[0].message.content
    
        # 3. Store interaction for future context
        mem0.add(
            [{"role": "user", "content": user_input}, {"role": "assistant", "content": reply}],
            user_id=user_id
        )
        return reply
    ```
    
    ## Common edge cases
    
    - **Search returns empty:** Memories process asynchronously. Wait 2-3s after `add()` before searching. Also verify `user_id` matches exactly (case-sensitive) and use `filters={"user_id": "..."}` syntax.
    - **AND filter with user_id + agent_id returns empty:** Entities are stored separately. Use `OR` instead, or query separately (see the sketch after this list).
    - **Duplicate memories:** Don't mix `infer=True` (default) and `infer=False` for the same data. Stick to one mode.
    - **Wrong import:** Always use `from mem0 import MemoryClient` (or `AsyncMemoryClient` for async). Do not use `from mem0 import Memory`.
    - **v3 defaults:** `top_k=20`, `threshold=0.1`, `rerank=False`. Adjust as needed for your use case.
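    
    A minimal sketch of the `OR` workaround for the user/agent case (placeholder IDs; filter shape follows the logical-operator filter format):
    
    ```python
    results = client.search(
        "deployment preferences",
        filters={"OR": [{"user_id": "alice"}, {"agent_id": "support-bot"}]},
    )
    ```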
    
    ## v2 Compatibility
    
    If you're using SDK v2.x, note these differences:
    - **Entity IDs:** Pass `user_id` as a top-level kwarg to `search()` instead of inside `filters` (sketch after this list)
    - **Defaults:** `top_k=100`, no threshold, `rerank=True`
    - **Graph memory:** Available via `enable_graph=True`
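    
    Illustration of the entity-ID difference (same query under both SDK generations):
    
    ```python
    # SDK v2.x: entity ID as a top-level kwarg
    results = client.search("dietary preferences", user_id="alice")
    
    # SDK v3: entity ID inside filters
    results = client.search("dietary preferences", filters={"user_id": "alice"})
    ```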
    
    See the [migration guide](https://docs.mem0.ai/migration/oss-v2-to-v3) for details.
    
    ## Live documentation search
    
    For the latest docs beyond what's in the references, use the doc search tool:
    
    ```bash
    python ${CLAUDE_SKILL_DIR}/scripts/mem0_doc_search.py --query "topic"
    python ${CLAUDE_SKILL_DIR}/scripts/mem0_doc_search.py --page "/platform/features/graph-memory"
    python ${CLAUDE_SKILL_DIR}/scripts/mem0_doc_search.py --index
    ```
    
    No API key needed -- searches docs.mem0.ai directly.
    
    ## Client SDK References
    
    Language-specific deep references (Platform + OSS):
    
    | Language | File |
    |----------|------|
    | Python (MemoryClient + AsyncMemoryClient + Memory OSS) | [client/python.md](client/python.md) |
    | TypeScript/Node.js (MemoryClient + Memory OSS) | [client/node.md](client/node.md) |
    | Python vs TypeScript differences | [client/differences.md](client/differences.md) |
    
    ## Platform References
    
    Load these on demand for deeper detail:
    
    | Topic | File |
    |-------|------|
    | Quickstart (Python, TS, cURL) | [references/quickstart.md](references/quickstart.md) |
    | SDK guide (all methods, both languages) | [references/sdk-guide.md](references/sdk-guide.md) |
    | API reference (endpoints, filters, object schema) | [references/api-reference.md](references/api-reference.md) |
    | Architecture (pipeline, lifecycle, scoping, performance) | [references/architecture.md](references/architecture.md) |
    | Platform features (retrieval, graph, categories, MCP, etc.) | [references/features.md](references/features.md) |
    | Framework integrations (LangChain, CrewAI, OpenAI Agents, etc.) | [references/integration-patterns.md](references/integration-patterns.md) |
    | Use cases & examples (real-world patterns with code) | [references/use-cases.md](references/use-cases.md) |
    
    ## Related Mem0 Skills
    
    | Skill | When to use | Link |
    |-------|-------------|------|
    | mem0-cli | Terminal commands, scripting, CI/CD, agent tool loops | [GitHub](https://github.com/mem0ai/mem0/tree/main/skills/mem0-cli) |
    | mem0-vercel-ai-sdk | Vercel AI SDK provider with automatic memory | [GitHub](https://github.com/mem0ai/mem0/tree/main/skills/mem0-vercel-ai-sdk) |
    
  • .cursor-plugin/marketplace.json (marketplace, 409 bytes)
    {
      "name": "mem0-plugins",
      "owner": {
        "name": "Mem0",
        "email": "support@mem0.ai"
      },
      "metadata": {
        "description": "Official Mem0 plugins for Cursor"
      },
      "plugins": [
        {
          "name": "mem0",
          "source": "./mem0-plugin",
          "description": "Mem0 memory layer for AI applications. Add persistent memory, personalization, and semantic search.",
          "version": "0.1.1"
        }
      ]
    }
    
  • .agents/plugins/marketplace.json (marketplace, 361 bytes)
    {
      "name": "mem0-plugins",
      "interface": {
        "displayName": "Mem0 Plugins"
      },
      "plugins": [
        {
          "name": "mem0",
          "source": {
            "source": "local",
            "path": "./mem0-plugin"
          },
          "policy": {
            "installation": "AVAILABLE",
            "authentication": "ON_INSTALL"
          },
          "category": "Productivity"
        }
      ]
    }
    
  • .claude-plugin/marketplace.json (marketplace, 429 bytes)
    {
      "name": "mem0-plugins",
      "owner": {
        "name": "Mem0",
        "email": "support@mem0.ai"
      },
      "metadata": {
        "description": "Official Mem0 plugins for Claude"
      },
      "plugins": [
        {
          "name": "mem0",
          "source": "./mem0-plugin",
          "description": "Mem0 memory layer for AI applications. Add persistent memory, personalization, and semantic search to Claude workflows.",
          "version": "0.1.1"
        }
      ]
    }
    

README

Mem0 - The Memory Layer for Personalized AI

Learn more · Join Discord · Demo

📄 Benchmarking Mem0's token-efficient memory algorithm →

New Memory Algorithm (April 2026)

Benchmark | Old | New | Tokens | Latency p50
LoCoMo | 71.4 | 91.6 | 7.0K | 0.88s
LongMemEval | 67.8 | 93.4 | 6.8K | 1.09s
BEAM (1M) | -- | 64.1 | 6.7K | 1.00s
BEAM (10M) | -- | 48.6 | 6.9K | 1.05s

All benchmarks run on the same production-representative model stack. Single-pass retrieval (one call, no agentic loops).

What changed:

  • Single-pass ADD-only extraction -- one LLM call, no UPDATE/DELETE. Memories accumulate; nothing is overwritten.
  • Agent-generated facts are first-class -- when an agent confirms an action, that information is now stored with equal weight.
  • Entity linking -- entities are extracted, embedded, and linked across memories for retrieval boosting.
  • Multi-signal retrieval -- semantic, BM25 keyword, and entity matching scored in parallel and fused.

See the migration guide for upgrade instructions. The evaluation framework is open-sourced so anyone can reproduce the numbers.

Research Highlights

  • 91.6 on LoCoMo -- +20 points over the previous algorithm
  • 93.4 on LongMemEval -- +26 points, with +53.6 on assistant memory recall
  • 64.1 on BEAM (1M) -- production-scale memory evaluation at 1M tokens
  • Read the full paper

Introduction

Mem0 ("mem-zero") enhances AI assistants and agents with an intelligent memory layer, enabling personalized AI interactions. It remembers user preferences, adapts to individual needs, and continuously learns over timeβ€”ideal for customer support chatbots, AI assistants, and autonomous systems.

Key Features & Use Cases

Core Capabilities:

  • Multi-Level Memory: Seamlessly retains User, Session, and Agent state with adaptive personalization
  • Developer-Friendly: Intuitive API, cross-platform SDKs, and a fully managed service option

Applications:

  • AI Assistants: Consistent, context-rich conversations
  • Customer Support: Recall past tickets and user history for tailored help
  • Healthcare: Track patient preferences and history for personalized care
  • Productivity & Gaming: Adaptive workflows and environments based on user behavior

🚀 Quickstart Guide

 | Library | Self-Hosted Server | Cloud Platform
Best for | Testing, prototyping | Teams running on their own infrastructure | Zero-ops production use
Setup | pip install mem0ai | docker compose up | Sign up at app.mem0.ai
Dashboard | -- | Yes | Yes
Auth & API Keys | -- | Yes | Yes
Advanced Features | -- | Teasers | All included

Just testing? Use the library. Building for a team? Self-hosted. Want zero ops? Cloud.

Library (pip / npm)

pip install mem0ai

For enhanced hybrid search with BM25 keyword matching and entity extraction, install with NLP support:

pip install mem0ai[nlp]
python -m spacy download en_core_web_sm

Install the SDK via npm:

npm install mem0ai

Self-Hosted Server

Note: Self-hosted auth is on by default. Upgrading from a pre-auth build? Set ADMIN_API_KEY, register an admin through the wizard, or set AUTH_DISABLED=true (local dev only). See the upgrade notes.

# Recommended: one command -- start the stack, create an admin, issue the first API key.
cd server && make bootstrap

# Manual: start the stack and finish setup via the browser wizard.
cd server && docker compose up -d    # http://localhost:3000

See the self-hosted docs for configuration.

Cloud Platform

  1. Sign up on Mem0 Platform
  2. Embed the memory layer via SDK or API keys

CLI

Manage memories from your terminal:

npm install -g @mem0/cli   # or: pip install mem0-cli

mem0 init
mem0 add "Prefers dark mode and vim keybindings" --user-id alice
mem0 search "What does Alice prefer?" --user-id alice

See the CLI documentation for the full command reference.

Agent Skills

Teach your AI coding assistant (Claude Code, Codex, Cursor, Windsurf, OpenCode, OpenClaw, and any tool that supports the skills standard) how to build with Mem0. Two categories:

Reference skills -- always on (SDK knowledge loaded into the assistant's context):

npx skills add https://github.com/mem0ai/mem0 --skill mem0
npx skills add https://github.com/mem0ai/mem0 --skill mem0-cli
npx skills add https://github.com/mem0ai/mem0 --skill mem0-vercel-ai-sdk

Pipeline skills -- run on demand (execute an end-to-end workflow in an existing repo):

npx skills add https://github.com/mem0ai/mem0 --skill mem0-integrate
npx skills add https://github.com/mem0ai/mem0 --skill mem0-test-integration

Use /mem0-integrate to wire Mem0 into an existing repo via a test-first pipeline, then /mem0-test-integration to verify. See the skills catalog or Vibecoding with Mem0 for the full picture.

Basic Usage

Mem0 requires an LLM to function, with gpt-5-mini from OpenAI as the default. However, it supports a variety of LLMs; for details, refer to our Supported LLMs documentation.

Mem0 uses text-embedding-3-small from OpenAI as the default embedding model. For best results with hybrid search (semantic + keyword + entity boosting), we recommend using at least Qwen 600M or a comparable embedding model. See Supported Embeddings for configuration details.

The first step is to instantiate the memory:

from openai import OpenAI
from mem0 import Memory

openai_client = OpenAI()
memory = Memory()

def chat_with_memories(message: str, user_id: str = "default_user") -> str:
    # Retrieve relevant memories
    relevant_memories = memory.search(query=message, filters={"user_id": user_id}, top_k=3)
    memories_str = "\n".join(f"- {entry['memory']}" for entry in relevant_memories["results"])

    # Generate Assistant response
    system_prompt = f"You are a helpful AI. Answer the question based on query and memories.\nUser Memories:\n{memories_str}"
    messages = [{"role": "system", "content": system_prompt}, {"role": "user", "content": message}]
    response = openai_client.chat.completions.create(model="gpt-5-mini", messages=messages)
    assistant_response = response.choices[0].message.content

    # Create new memories from the conversation
    messages.append({"role": "assistant", "content": assistant_response})
    memory.add(messages, user_id=user_id)

    return assistant_response

def main():
    print("Chat with AI (type 'exit' to quit)")
    while True:
        user_input = input("You: ").strip()
        if user_input.lower() == 'exit':
            print("Goodbye!")
            break
        print(f"AI: {chat_with_memories(user_input)}")

if __name__ == "__main__":
    main()

For detailed integration steps, see the Quickstart and API Reference.

🔗 Integrations & Demos

  • ChatGPT with Memory: Personalized chat powered by Mem0 (Live Demo)
  • Browser Extension: Store memories across ChatGPT, Perplexity, and Claude (Chrome Extension)
  • Langgraph Support: Build a customer bot with Langgraph + Mem0 (Guide)
  • CrewAI Integration: Tailor CrewAI outputs with Mem0 (Example)

📚 Documentation & Support

Citation

We now have a paper you can cite:

@article{mem0,
  title={Mem0: Building Production-Ready AI Agents with Scalable Long-Term Memory},
  author={Chhikara, Prateek and Khant, Dev and Aryan, Saket and Singh, Taranjeet and Yadav, Deshraj},
  journal={arXiv preprint arXiv:2504.19413},
  year={2025}
}

βš–οΈ License

Apache 2.0 β€” see the LICENSE file for details.