USP
It uniquely combines an MCP server and shell hook with graph-powered intelligence, offering adaptive file reading modes and pattern-based CLI output compression for 60-99% token savings. Its cross-session memory and PR context packs furthe…
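The pattern-based CLI compression described above can be pictured as a small filter pipeline: drop lines matching known noise patterns, keep the signal. A minimal sketch (the two patterns and the sample output are invented for illustration; lean-ctx's real rule set is far larger and command-specific):

```python
import re

# Hypothetical noise patterns, standing in for lean-ctx's 95+ real ones.
NOISE_PATTERNS = [
    re.compile(r"^npm (WARN deprecated|timing|http fetch)"),  # npm chatter
    re.compile(r"^\s*$"),                                     # blank lines
]

def compress(output: str) -> str:
    """Drop every line that matches a known noise pattern."""
    kept = [line for line in output.splitlines()
            if not any(p.search(line) for p in NOISE_PATTERNS)]
    return "\n".join(kept)

raw = "npm timing idealTree Completed in 310ms\n\nadded 12 packages in 2s"
print(compress(raw))  # -> added 12 packages in 2s
```

In practice the savings come from per-command rules (git, npm, docker, ...) rather than one generic filter.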
Use cases
- Reducing token costs for AI coding agents
- Compressing verbose CLI output (git, npm, docker)
- Optimizing file reads for LLMs (map, signatures, diff modes)
- Building PR-ready context packs
- Persisting AI agent session memory
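The optimized read modes above work by keeping only high-information content. One mode lean-ctx documents, `entropy`, filters lines by Shannon entropy; a toy sketch of the idea (the threshold and sample text are invented for illustration):

```python
import math
from collections import Counter

def shannon_entropy(line: str) -> float:
    """Bits per character of the line's character distribution."""
    if not line:
        return 0.0
    counts = Counter(line)
    n = len(line)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def entropy_filter(text: str, threshold: float = 3.0) -> str:
    """Keep lines whose entropy exceeds the threshold; low-entropy lines
    (separators, padding, repeated characters) carry little information."""
    kept = [l for l in text.splitlines() if shannon_entropy(l) >= threshold]
    return "\n".join(kept)

src = "# ----------------------------\ndef parse_config(path): ...\n#####"
print(entropy_filter(src))  # -> def parse_config(path): ...
```

Separator and banner lines score near zero bits per character, so they are cheap to discard without losing meaning.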
Detected files (5)
rust/src/templates/SKILL.md (skill, 7575 bytes)
---
name: lean-ctx
description: Context Runtime for AI Agents – 56 MCP tools, 10 read modes, 95+ shell patterns, tree-sitter AST for 18 languages. Compresses LLM context by up to 99%. Use when reading files, running shell commands, searching code, or exploring directories. Auto-installs if not present.
---

# LeanCTX – Context Runtime for AI Agents

LeanCTX optimizes LLM context through 56 MCP tools, 95+ shell compression patterns, and tree-sitter AST parsing for 18 languages. It provides adaptive file reading, cross-session memory (CCP), task-conditioned relevance scoring, and a feedback loop for learning optimal compression.

## Setup (run first)

Before using lean-ctx, verify it is installed:

```bash
which lean-ctx || bash scripts/install.sh
```

If the install script is not available locally, install manually:

```bash
curl -fsSL https://raw.githubusercontent.com/yvgude/lean-ctx/main/skills/lean-ctx/scripts/install.sh | bash
```

After installation, run the one-command setup (installs shell hook + editor wiring + rules + skills):

```bash
lean-ctx setup
```

lean-ctx supports two integration styles:

- **CLI-redirect (preferred when shell access exists)**: no MCP tool schema overhead.
- **MCP (required for some IDE extensions)**: cached reads + tools via MCP.
## When to use lean-ctx

Always prefer `lean-ctx -c <command>` over running commands directly when:

- The command produces verbose output (build logs, git diffs, dependency trees, test results)
- You are reading files and only need the structure or API surface
- You want to check token savings for the current session

## Shell commands (use instead of raw exec)

```bash
lean-ctx -c git status              # Compressed git output
lean-ctx -c git diff                # Only meaningful diff lines
lean-ctx -c git log --oneline -10
lean-ctx -c npm install             # Strips progress bars, noise
lean-ctx -c cargo build
lean-ctx -c cargo test
lean-ctx -c docker ps
lean-ctx -c kubectl get pods
lean-ctx -c aws ec2 describe-instances
lean-ctx -c helm list
lean-ctx -c prisma migrate dev
lean-ctx -c curl -s <url>           # JSON schema extraction
lean-ctx -c ls -la <dir>            # Grouped directory listing
```

Supported: git, npm, pnpm, yarn, bun, deno, cargo, docker, kubectl, helm, gh, pip, ruff, go, eslint, prettier, tsc, aws, psql, mysql, prisma, swift, zig, cmake, ansible, composer, mix, bazel, systemd, terraform, make, maven, dotnet, flutter, poetry, rubocop, playwright, curl, wget, and more.

## File reading (compressed modes)

```bash
lean-ctx read <file>                # Full content with structured header
lean-ctx read <file> -m map         # Dependency graph + exports + API (~5-15% tokens)
lean-ctx read <file> -m signatures  # Function/class signatures only (~10-20% tokens)
lean-ctx read <file> -m aggressive  # Syntax-stripped (~30-50% tokens)
lean-ctx read <file> -m entropy     # Shannon entropy filtered (~20-40% tokens)
lean-ctx read <file> -m diff        # Only changed lines since last read
```

Use `map` mode when you need to understand what a file does without reading every line. Use `signatures` mode when you need the API surface of a module (tree-sitter for 18 languages). Use `full` mode only when you will edit the file.
## AI Tool Integration

```bash
lean-ctx init --global                             # Install shell aliases
lean-ctx init --agent cursor --mode cli-redirect   # CLI-first (no MCP schema overhead)
lean-ctx init --agent claude --mode cli-redirect   # CLI-first (Claude Code)
lean-ctx init --agent codex --mode cli-redirect    # CLI-first (Codex)
lean-ctx init --agent opencode --mode cli-redirect # CLI-first (OpenCode)
lean-ctx init --agent copilot                      # MCP (VS Code / Copilot)
lean-ctx init --agent jetbrains                    # MCP (JetBrains)
lean-ctx init --agent windsurf                     # MCP/Hybrid (Windsurf)
```

## Multi-Agent & Knowledge (v2.7.0+)

CLI (works in CLI-redirect and MCP setups):

```bash
lean-ctx knowledge remember "value" --category <c> --key <k>
lean-ctx knowledge recall "query"
lean-ctx knowledge search "query"
lean-ctx session task "what you're doing"
lean-ctx session finding "what you found"
lean-ctx session decision "what you decided"
lean-ctx session save
```

If MCP is enabled for your IDE, the same capabilities are also available as MCP tools (`ctx_knowledge`, `ctx_session`, `ctx_agent`, ...).
## Additional Intelligence Tools (v2.19.0)

- `ctx_edit(path, old_string, new_string)` – search-and-replace file editing without native Read/Edit
- `ctx_overview(task)` – task-relevant project map at session start
- `ctx_preload(task)` – proactive context loader, caches task-relevant files
- `ctx_semantic_search(query)` – BM25 code search by meaning across the project
- `ctx_intent` now supports multi-intent detection and complexity classification
- Semantic cache: TF-IDF + cosine similarity for finding similar files across reads

## Session Continuity (CCP)

```bash
lean-ctx sessions list          # List all CCP sessions
lean-ctx sessions show          # Show latest session state
lean-ctx wrapped                # Weekly savings report card
lean-ctx wrapped --month        # Monthly savings report card
lean-ctx benchmark run          # Real project benchmark (terminal output)
lean-ctx benchmark run --json   # Machine-readable JSON output
lean-ctx benchmark report       # Shareable Markdown report
```

MCP tools for CCP:

- `ctx_session status` – show current session state (~400 tokens)
- `ctx_session load` – restore previous session (cross-chat memory)
- `ctx_session task "description"` – set current task
- `ctx_session finding "file:line → summary"` – record key finding
- `ctx_session decision "summary"` – record architectural decision
- `ctx_session save` – force persist session to disk
- `ctx_session role` – list/switch agent roles (governance)
- `ctx_session budget` – show budget status vs role limits
- `ctx_session slo` – show SLO status/violations (value=reload|history|clear)
- `ctx_session diff` – compare two sessions (value="<id_a> <id_b> [json]")
- `ctx_session verify` – show output verification statistics
- `ctx_session episodes` – episodic memory (value=record | "search <q>" | "file <path>" | "outcome <label>")
- `ctx_session procedures` – procedural memory (value=detect | "suggest <task>")
- `ctx_intent` – intent classification + model routing (returns dimension/tier/reasoning)
- `ctx_graph build` – index code into unified graph
- `ctx_graph related` – find connected files via graph
- `ctx_graph symbol` – lookup symbol definitions/usages
- `ctx_graph impact` – blast radius analysis
- `ctx_graph enrich` – add commits, tests, knowledge to graph
- `ctx_graph context` – task-based graph query for relevant context
- `ctx_wrapped` – generate savings report card in chat

## Analytics

```bash
lean-ctx gain        # Visual token savings dashboard
lean-ctx dashboard   # Web dashboard at localhost:3333
lean-ctx session     # Adoption statistics
lean-ctx discover    # Find uncompressed commands in shell history
```

## Tips

- The output suffix `[lean-ctx: 5029→197 tok, -96%]` shows original vs compressed token count
- For large outputs, lean-ctx automatically truncates while preserving relevant context
- JSON responses from curl/wget are reduced to schema outlines
- Build errors are grouped by type with counts
- Test results show only failures with summary counts
- Cached re-reads cost only ~13 tokens
skills/lean-ctx/SKILL.md (skill, 6596 bytes)

.github/copilot/mcp.json (mcp_server, 75 bytes)
```json
{ "servers": { "lean-ctx": { "command": "lean-ctx" } } }
```

.vscode/mcp.json (mcp_server, 239 bytes)
```json
{
  "servers": {
    "lean-ctx": {
      "args": [],
      "command": "lean-ctx",
      "env": {
        "LEAN_CTX_DATA_DIR": "/var/folders/bb/cyj_m1m12kn6vlhn45fw70w80000gn/T/tmp.crc7n5WisU/data"
      },
      "type": "stdio"
    }
  }
}
```

rust/.vscode/mcp.json (mcp_server, 198 bytes)
```json
{
  "servers": {
    "lean-ctx": {
      "args": [],
      "command": "lean-ctx",
      "env": { "LEAN_CTX_DATA_DIR": "/Users/yvesgugger/.lean-ctx" },
      "type": "stdio"
    }
  }
}
```
README
Context Runtime for AI Agents
The context layer for AI coding agents
Reduce token waste in Cursor, Claude Code, Copilot, Windsurf, Codex, Gemini & more by 60–95% (up to 99% on cached reads)
Shell Hook + MCP Server · 56 tools · 10 read modes · 95+ patterns · Single Rust binary
Website · Docs · Install · Demo · Benchmarks · Cookbook · Security · Changelog · Discord
lean-ctx is a local-first context runtime that compresses file reads + shell output before they reach the LLM. Cached re-reads drop to ~13 tokens.
See it in action:
- Read + Shell: map-mode reads + compressed CLI output
- Gain (live): tokens + USD savings in real time
- Benchmark proof: measure compression by language + mode
All GIFs are generated from reproducible VHS tapes in demo/.
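The ~13-token cached re-read mentioned above can be pictured as content hashing: if a file's content hash is unchanged since the last read, return a tiny marker instead of the full body. A sketch under assumed semantics (the class and marker format are invented, not lean-ctx internals):

```python
import hashlib

class ReadCache:
    """Toy model of cached re-reads: unchanged files cost a few tokens."""
    def __init__(self):
        self.seen = {}  # path -> content hash

    def read(self, path: str, content: str) -> str:
        digest = hashlib.sha256(content.encode()).hexdigest()
        if self.seen.get(path) == digest:
            return f"[cached:{digest[:8]}]"  # a few tokens, not the whole file
        self.seen[path] = digest            # first read (or changed): full body
        return content

cache = ReadCache()
body = "fn main() {}\n" * 200
assert cache.read("src/main.rs", body) == body               # first read: full
assert cache.read("src/main.rs", body).startswith("[cached:")  # re-read: tiny
```

The real runtime layers delta reads (`-m diff`) on top of this, returning only changed lines when the hash differs.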
What it does
- File reads (MCP): cached + mode-aware reads (`full`, `map`, `signatures`, `diff`, …) with graph-aware related-file hints
- Shell output (hook): compresses noisy CLI output via 95+ patterns (git, npm, cargo, docker, …)
- Graph-Powered Intelligence: multi-edge Property Graph (imports, calls, exports, type_ref) with weighted impact analysis, hybrid search (BM25 + embeddings + graph proximity via RRF), and incremental git-diff updates
- PR Context Packs: `lean-ctx pack --pr` builds a PR-ready context pack (changed files, related tests, impact, artifacts)
- Context Packages: `lean-ctx pack create` bundles Knowledge + Graph + Session + Gotchas into portable `.lctxpkg` files – share context across projects/teams with SHA-256 integrity, auto-load on session start, and smart merge (dedup facts, overlay graph)
- Session memory (CCP): persist task/facts/decisions across chats with structured recovery queries surviving compaction
- HTTP mode: `lean-ctx serve` for Streamable HTTP MCP + `/v1/tools/call` (used by the Cookbook + SDK)
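The hybrid search above fuses three rankers (BM25, embeddings, graph proximity) with Reciprocal Rank Fusion. RRF itself is a standard technique and easy to sketch; the file names and ranked lists below are made up for illustration:

```python
def rrf_fuse(rankings, k=60):
    """Reciprocal Rank Fusion: score(d) = sum over lists of 1 / (k + rank(d)).
    k=60 is the commonly used default; higher k flattens rank differences."""
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Three hypothetical ranked result lists for one query.
bm25      = ["graph.rs", "search.rs", "cache.rs"]
embedding = ["search.rs", "graph.rs", "main.rs"]
proximity = ["graph.rs", "cache.rs", "search.rs"]
print(rrf_fuse([bm25, embedding, proximity]))
# -> ['graph.rs', 'search.rs', 'cache.rs', 'main.rs']
```

RRF needs no score normalization across rankers, which is why it is a popular way to combine lexical and vector search.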
How it works (30 seconds)
AI tool → (MCP tools + shell commands) → lean-ctx → your repo + CLI
- MCP server: exposes `ctx_*` tools (read modes, caching, deltas, search, memory, multi-agent)
- Shell hook: transparently compresses common commands so the LLM sees less noise
- Property Graph: multi-edge code graph powers impact analysis, related file discovery, and search ranking
- CCP: persists session state with structured recovery queries so long-running work doesn't "cold start" every chat
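The Property Graph's impact analysis amounts to a reverse-dependency traversal: anything that transitively imports a changed file is in its blast radius. A minimal sketch (the graph shape and file names are invented; lean-ctx's graph also carries calls/exports/type_ref edges with weights):

```python
from collections import deque

def blast_radius(imports, changed):
    """BFS over reversed import edges from a changed file.
    `imports` maps each file to the files it imports."""
    reverse = {}
    for src, deps in imports.items():
        for dep in deps:
            reverse.setdefault(dep, []).append(src)
    impacted, queue = set(), deque([changed])
    while queue:
        node = queue.popleft()
        for importer in reverse.get(node, []):
            if importer not in impacted:
                impacted.add(importer)
                queue.append(importer)
    return impacted

graph = {
    "api.rs":  ["db.rs", "auth.rs"],
    "cli.rs":  ["api.rs"],
    "db.rs":   [],
    "auth.rs": ["db.rs"],
}
print(sorted(blast_radius(graph, "db.rs")))  # -> ['api.rs', 'auth.rs', 'cli.rs']
```

Changing a leaf like `cli.rs` impacts nothing, while changing `db.rs` ripples through every transitive importer.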
Get started (60 seconds)
```bash
# 1) Install (pick one)
curl -fsSL https://leanctx.com/install.sh | sh    # universal (no Rust needed)
brew tap yvgude/lean-ctx && brew install lean-ctx # macOS / Linux
npm install -g lean-ctx-bin                       # Node.js
cargo install lean-ctx                            # Rust
pi install npm:pi-lean-ctx                        # Pi Coding Agent

# 2) Setup (shell + auto-detected AI tools)
lean-ctx setup

# 3) Verify
lean-ctx doctor

# 4) See the payoff
lean-ctx gain --live
lean-ctx wrapped --week
```
After setup, restart your shell and your editor/AI tool once so the MCP + hooks are active.
Troubleshooting / Safety
- Disable immediately (current shell): `lean-ctx-off`
- Run a single command uncompressed: `lean-ctx -c --raw "git status"`
- Update: `lean-ctx update`
- Diagnose (shareable): `lean-ctx doctor --json`
Supported IDEs & AI tools
lean-ctx is a standard MCP server, so it works with any MCP-compatible client. Three integration modes are auto-selected per agent:
| Mode | How it works | Best for |
|---|---|---|
| CLI-Redirect | Agent calls lean-ctx directly via shell – zero MCP schema overhead | Agents with shell access |
| Hybrid | MCP for cached reads (13 tokens), CLI for shell + search | Mixed environments |
| Full MCP | All 56 tools via MCP protocol | Protocol-only agents |
Agent compatibility matrix
| Agent | Setup |
|---|---|
| Cursor | `lean-ctx init --agent cursor` |
| Codex CLI | `lean-ctx init --agent codex` |
| Gemini CLI | `lean-ctx init --agent gemini` |
| Claude Code | `lean-ctx init --agent claude` |
| CRUSH | `lean-ctx init --agent crush` |
| Hermes | `lean-ctx init --agent hermes` |
| OpenCode | `lean-ctx init --agent opencode` |
| Pi | `lean-ctx init --agent pi` |
| Qoder | `lean-ctx init --agent qoder` |
| Windsurf | `lean-ctx init --agent windsurf` |
| GitHub Copilot | `lean-ctx init --agent copilot` |
| Amp | `lean-ctx init --agent amp` |
| Cline | `lean-ctx init --agent cline` |
| Roo Code | `lean-ctx init --agent roo` |
| Kiro | `lean-ctx init --agent kiro` |
| Antigravity | `lean-ctx init --agent antigravity` |
| Amazon Q | `lean-ctx init --agent amazonq` |
| Qwen | `lean-ctx init --agent qwen` |
| Trae | `lean-ctx init --agent trae` |
| Verdent | `lean-ctx init --agent verdent` |
| JetBrains IDEs | `lean-ctx init --agent jetbrains` |
| QoderWork | `lean-ctx init --agent qoderwork` |
| VS Code | `lean-ctx init --agent vscode` |
| Zed | `lean-ctx init --agent zed` |
| Neovim | `lean-ctx init --agent neovim` |
| Emacs | `lean-ctx init --agent emacs` |
| Sublime Text | `lean-ctx init --agent sublime` |
Any MCP-compatible client works out of the box – the table above shows agents with first-class auto-setup.
When to use (and when not to)
Great fit if you…
- use AI coding tools daily and your sessions are shell-heavy (git/tests/builds)
- work in medium/large repos (50+ files / monorepos)
- want a local-first layer with no telemetry by default
Skip it if you…
- mostly work in tiny repos and rarely call the shell from your AI tool
- always need raw/unfiltered logs (you can still use `--raw`, but ROI is lower)
Demo
Try these in any repo:
```bash
lean-ctx read rust/src/server/mod.rs -m map
lean-ctx -c "git log -n 5 --oneline"
lean-ctx gain --live
lean-ctx benchmark report .
```
- The repo ships the exact tapes used to render the GIFs in `demo/`
- Regenerate locally: `vhs demo/leanctx.tape`, `vhs demo/gain.tape`, `vhs demo/benchmark.tape`
Benchmarks
- Latest snapshot: BENCHMARKS.md
- Reproduce: `lean-ctx benchmark report .`
Docs
- Getting started: https://leanctx.com/docs/getting-started
- Tools reference: https://leanctx.com/docs/tools/
- CLI reference: https://leanctx.com/docs/cli-reference/
- FAQ: discord-faq.md
- Feature catalog (SSOT snapshot): LEANCTX_FEATURE_CATALOG.md
- Architecture: ARCHITECTURE.md
- Vision: VISION.md
Privacy & security
- No telemetry by default
- Optional anonymous stats sharing (opt-in during setup)
- Disableable update check (config `update_check_disabled = true` or `LEAN_CTX_NO_UPDATE_CHECK=1`)
- Runs locally; your code never leaves your machine unless you explicitly enable cloud sync
See SECURITY.md.
Uninstall
```bash
lean-ctx-off            # disable immediately (current shell session)
lean-ctx uninstall      # remove hooks + editor configs + data dir

# Remove the binary (pick your install method)
brew uninstall lean-ctx
npm uninstall -g lean-ctx-bin
cargo uninstall lean-ctx
pi uninstall npm:pi-lean-ctx   # Pi Coding Agent
```
Contributing
Start with CONTRIBUTING.md. Easy first PR: propose a new CLI compression pattern via the issue template.
License
Apache License 2.0 – see LICENSE.
Portions of this software were originally released under the MIT License. See LICENSE-MIT and NOTICE.