Curated Claude Code catalog
Updated 07.05.2026 · 19:39 CET
01 / Skill
gzoonet

cortex

Quality
9.0

GZOO Cortex is a local-first knowledge orchestrator designed for developers working across multiple projects. It automatically watches your project files (md, ts, js, json, yaml), extracts entities such as decisions, patterns, and components using LLMs, and infers relationships between them. The result is a knowledge graph you can query in natural language to recall past decisions, avoid re-solving problems, and keep context across your entire codebase. It supports cloud and local LLM providers (including Ollama) with intelligent routing, and preserves privacy by keeping restricted projects on your machine.

USP

Never lose context across projects again. Cortex builds a local-first knowledge graph from your code and docs, enabling natural language queries, contradiction detection, and direct Claude Code integration for AI-assisted recall and decision-making.

Use cases

  • Recalling architecture decisions made months ago across different repositories.
  • Identifying common patterns or components used in various projects.
  • Detecting conflicting decisions or dependencies across your codebase.
  • Querying your entire knowledge base in natural language to get answers with source citations.
  • Integrating AI agents (Claude Code) to leverage your project knowledge for development tasks.

Detected files (1)

  • CLAUDE.md (9732 bytes)
    # CLAUDE.md — GZOO Cortex Project Instructions
    
    > **Read this file on every session.** It is the single source of truth for working on GZOO Cortex.
    > Detailed specs live in `/docs/` — reference them before implementing any component.
    
    ## What Is GZOO Cortex
    
    GZOO Cortex is a **local-first knowledge orchestrator** that watches your project files, extracts entities and relationships using LLMs, stores them in a knowledge graph, and lets you query your own decisions, patterns, and context via natural language CLI and web interface.
    
    **Core value prop:** You work across 5+ projects. GZOO Cortex remembers what you decided, why, and where — so you never lose context switching between projects.
    
    ## Tech Stack
    
    | Layer | Technology | Why |
    |-------|-----------|-----|
    | Runtime | Node.js 20+ with TypeScript (strict mode) | Fast I/O, native file watching |
    | Database | SQLite via better-sqlite3 (WAL mode) | Zero-config, single-file, fast |
    | Vector DB | LanceDB (embedded) | Columnar vectors, no server |
    | LLM (Cloud) | Anthropic Claude Sonnet 4.5 / Haiku 4.5 | Primary cloud provider |
    | LLM (Local) | Ollama + Mistral 7B | Local inference for hybrid/local modes |
    | File Watching | Chokidar | Battle-tested, cross-platform |
    | Parsing | tree-sitter (code), unified/remark (markdown) | AST-level extraction |
    | CLI | Commander.js + Ink (React for terminals) | Rich interactive CLI |
    | Web UI | React + Vite | Localhost dashboard |
    | Monorepo | npm workspaces | Simple, no extra tooling |
    
    ## Monorepo Structure
    
    ```
    cortex/
    ├── CLAUDE.md                    ← YOU ARE HERE
    ├── package.json                 ← root workspace config
    ├── tsconfig.base.json           ← shared TypeScript config
    ├── packages/
    │   ├── core/                    ← shared types, event bus, config, errors
    │   │   └── src/
    │   │       ├── types/           ← ALL TypeScript interfaces (see docs/types.md)
    │   │       ├── events/          ← EventBus implementation
    │   │       ├── config/          ← Config loader + Zod validation
    │   │       └── errors/          ← CortexError class + error codes
    │   ├── ingest/                  ← file watcher, parsers, chunker
    │   │   └── src/
    │   │       ├── watcher.ts       ← Chokidar wrapper
    │   │       ├── parsers/         ← markdown, typescript, json, yaml, conversation parsers
    │   │       └── chunker.ts       ← content splitting for LLM context
    │   ├── graph/                   ← SQLite store, LanceDB vectors, query engine
    │   │   └── src/
    │   │       ├── sqlite-store.ts  ← entity/relationship CRUD
    │   │       ├── vector-store.ts  ← LanceDB embedding storage
    │   │       └── query-engine.ts  ← context assembly for LLM queries
    │   ├── llm/                     ← LLM provider abstraction, prompts
    │   │   └── src/
    │   │       ├── providers/       ← anthropic.ts, ollama.ts
    │   │       ├── prompts/         ← versioned prompt templates (see docs/prompts.md)
    │   │       ├── router.ts        ← smart routing (cloud, hybrid, local-first, local-only)
    │   │       └── cache.ts         ← response caching
    │   ├── cli/                     ← all CLI commands
    │   │   └── src/
    │   │       ├── commands/        ← init, watch, query, find, status, costs, config, etc.
    │   │       └── index.ts         ← Commander.js entry point
    │   ├── mcp/                     ← MCP server (stdio transport)
    │   │   └── src/
    │   │       └── index.ts         ← 4 tools: get_status, list_projects, find_entity, query_cortex
    │   ├── server/                  ← Express API backend for web dashboard
    │   └── web/                     ← React dashboard (Vite)
    ├── docs/                        ← SPEC FILES (read before implementing)
    │   ├── types.md                 ← ALL TypeScript interfaces
    │   ├── prompts.md               ← ALL LLM prompts with schemas
    │   ├── api-contracts.md         ← REST API, WebSocket, event bus contracts
    │   ├── cli-commands.md          ← Every CLI command spec
    │   ├── config.md                ← Full config schema + defaults
    │   ├── errors.md                ← Error codes, recovery, degradation chain
    │   └── security.md              ← Privacy model, threat model, data classification
    └── tests/
        ├── unit/
        └── integration/
    ```
    
    ## Architecture Rules
    
    1. **Packages communicate via the EventBus only.** No direct imports between packages except `@cortex/core` types.
    2. **All LLM calls go through the Router** (`packages/llm/src/router.ts`). No package calls a provider directly.
    3. **Privacy check runs before every cloud API call.** See `docs/security.md` for the pre-transmission pipeline.
    4. **Every entity and relationship has a source trail.** `sourceFile`, `sourceRange`, `extractedBy` (prompt+model+version).
    5. **Errors use CortexError class** with typed codes. See `docs/errors.md` for the full registry.
    6. **Config validated with Zod schemas.** See `docs/config.md` for every field.
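
    As a sketch only (docs/errors.md is authoritative; field names here are illustrative), rule 5's `CortexError` could be shaped like:

    ```typescript
    // Sketch of the typed error class; the real definition lives in @cortex/core.
    type ErrorCode =
      | "LLM_PROVIDER_UNAVAILABLE"
      | "INGEST_PARSE_FAILED"
      | "GRAPH_DB_ERROR";

    class CortexError extends Error {
      constructor(
        public readonly code: ErrorCode,
        message: string,
        public readonly details?: unknown, // original error when wrapping external calls
      ) {
        super(message);
        this.name = "CortexError";
      }
    }

    // Per the coding standards: wrap external calls, rethrow with a typed code.
    function connectDb(path: string): void {
      try {
        throw new Error(`cannot open ${path}`); // stand-in for a real driver failure
      } catch (err) {
        throw new CortexError("GRAPH_DB_ERROR", "Failed to open SQLite store", err);
      }
    }
    ```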
    
    ## Core Functionality
    
    The system delivers a complete pipeline: **ingest files, extract entities/relationships via LLM, store in knowledge graph, query via CLI or web dashboard.**
    
    ### CLI Commands
    
    - `cortex init` — Interactive setup (routing mode, API key, directories)
    - `cortex watch` — Start file watcher + ingestion pipeline (with live contradiction alerts)
    - `cortex query "<question>"` — Natural language query with citations
    - `cortex find <name>` — Direct entity lookup with relationship expansion
    - `cortex status` — System dashboard (graph stats, LLM status, costs, local provider info)
    - `cortex costs` — Detailed cost reporting
    - `cortex config` — Read/write/validate configuration
    - `cortex privacy` — Privacy classification management
    - `cortex contradictions` — View detected contradictions
    - `cortex resolve` — Resolve contradictions
    - `cortex projects` — List tracked projects
    - `cortex ingest <file>` — One-shot file ingestion (`--project`, `--dry-run`)
    - `cortex models list/pull/test/info` — Manage Ollama models
    - `cortex serve` — Start web dashboard (default port 3710)
    
    ### LLM Routing Modes
    
    | Mode | Behavior |
    |------|----------|
    | `cloud-first` | All tasks go to Anthropic API |
    | `hybrid` | Entity extraction via Ollama, reasoning tasks via cloud |
    | `local-first` | Prefer Ollama, escalate to cloud if confidence < 0.6 |
    | `local-only` | All tasks use Ollama, no cloud calls ever |
    
    Task routing:
    - Entity extraction → Ollama (hybrid/local modes) or Claude Haiku (cloud)
    - Relationship inference → Claude Sonnet (reasoning-heavy)
    - Contradiction detection → Claude Sonnet (except restricted projects → Ollama)
    - Context ranking → Ollama (local preferred)
    - Conversational queries → Claude Sonnet (streaming)
    - Embeddings → local via LanceDB built-in or Ollama nomic-embed-text
    - Budget exhausted → all tasks auto-route to Ollama
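
    The mode and task rules above could translate into code roughly like this (illustrative; the real logic lives in `packages/llm/src/router.ts`, and all names here are assumptions):

    ```typescript
    type Mode = "cloud-first" | "hybrid" | "local-first" | "local-only";
    type Task = "extract" | "relate" | "detect" | "rank" | "query";
    type Provider = "anthropic" | "ollama";

    // Reasoning-heavy tasks default to the cloud provider in hybrid mode.
    const REASONING_TASKS: ReadonlySet<Task> = new Set(["relate", "detect", "query"]);

    interface RouteOpts {
      restricted?: boolean;      // restricted projects never go to the cloud
      budgetExhausted?: boolean; // budget exhausted → everything local
      localConfidence?: number;  // score of a prior local attempt (local-first)
    }

    function route(mode: Mode, task: Task, opts: RouteOpts = {}): Provider {
      if (mode === "local-only" || opts.restricted || opts.budgetExhausted) return "ollama";
      if (mode === "cloud-first") return "anthropic";
      if (mode === "hybrid") return REASONING_TASKS.has(task) ? "anthropic" : "ollama";
      // local-first: prefer Ollama, escalate to cloud when confidence < 0.6
      return (opts.localConfidence ?? 1) < 0.6 ? "anthropic" : "ollama";
    }
    ```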
    
    ### MCP Server
    
    The MCP server (`packages/mcp`) exposes 4 tools via stdio transport for use with Claude Code and other MCP clients:
    - `get_status` — System status
    - `list_projects` — Tracked projects
    - `find_entity` — Entity lookup
    - `query_cortex` — Natural language query
    
    ### Web Dashboard
    
    `cortex serve` starts an Express API + React SPA at `localhost:3710` with 5 views:
    - Dashboard Home — overview and stats
    - Knowledge Graph — D3-force visualization with smart clustering
    - Live Feed — real-time events via WebSocket
    - Query Explorer — natural language queries in the browser
    - Contradictions — view and manage detected contradictions
    
    ## Coding Standards
    
    - **TypeScript strict mode.** No `any` types. No `ts-ignore`.
    - **Zod for all external data validation** (config files, LLM responses, API inputs).
    - **No classes for data.** Use interfaces + plain objects. Classes only for services (EventBus, SQLiteStore, etc.).
    - **Async/await everywhere.** No callbacks. No `.then()` chains.
    - **Error handling:** Wrap external calls in try/catch. Throw `CortexError` with typed codes. Never throw raw `Error`.
    - **Logging:** Use the structured logger from `@cortex/core`. Never `console.log` in production code.
    - **Tests:** Unit tests for parsers, prompt output validation, config validation. Integration tests for the full ingest-extract-store pipeline.
    - **No lodash/underscore.** Use native Array methods. Keep dependencies minimal.
    - **File size limit:** No single file over 400 lines. Split into focused modules.
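
    For example, the "no classes for data" rule looks like this in practice (a toy sketch; the real service is `SQLiteStore`, and the file path shown is hypothetical):

    ```typescript
    // Data: a plain interface plus object literal, never a class.
    interface Decision {
      id: string;
      title: string;
      sourceFile: string; // every entity carries a source trail
    }

    const decision: Decision = {
      id: "dec-001",
      title: "Use SQLite in WAL mode",
      sourceFile: "docs/adr/storage.md", // hypothetical path, for illustration
    };

    // Services: classes are fine (toy in-memory stand-in for SQLiteStore).
    class EntityStore {
      private readonly byId = new Map<string, Decision>();
      upsert(d: Decision): void {
        this.byId.set(d.id, d);
      }
      find(id: string): Decision | undefined {
        return this.byId.get(id);
      }
    }
    ```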
    
    ## Key Spec Files (Read Before Coding)
    
    | Before building... | Read this spec |
    |---|---|
    | Any TypeScript interface | `docs/types.md` |
    | Any LLM prompt or extraction | `docs/prompts.md` |
    | Any CLI command | `docs/cli-commands.md` |
    | Config loading or validation | `docs/config.md` |
    | Error handling or recovery | `docs/errors.md` |
    | Privacy checks or API calls | `docs/security.md` |
    | REST API or WebSocket | `docs/api-contracts.md` |
    
    ## Quick Reference: Entity Types
    
    `Decision`, `Requirement`, `Pattern`, `Component`, `Dependency`, `Interface`, `Constraint`, `ActionItem`, `Risk`, `Note`
    
    ## Quick Reference: Relationship Types
    
    `depends_on`, `implements`, `contradicts`, `evolved_from`, `relates_to`, `uses`, `constrains`, `resolves`, `documents`, `derived_from`
    
    ## Quick Reference: Error Code Format
    
    `LAYER_CATEGORY_DETAIL` — e.g., `LLM_PROVIDER_UNAVAILABLE`, `INGEST_PARSE_FAILED`, `GRAPH_DB_ERROR`
    
    Layers: `INGEST`, `GRAPH`, `LLM`, `INTERFACE`, `CONFIG`, `PRIVACY`
    

README

GZOO Cortex

Local-first knowledge graph for developers. Watches your project files, extracts entities and relationships using LLMs, and lets you query across all your projects in natural language.

“What architecture decisions have I made across projects?”

Cortex finds decisions from your READMEs, TypeScript files, config files, and conversation exports — then synthesizes an answer with source citations.

Why

You work on multiple projects. Decisions, patterns, and context are scattered across hundreds of files. You forget what you decided three months ago. You re-solve problems you already solved in another repo.

Cortex watches your project directories, extracts knowledge automatically, and gives it back to you when you need it.

What It Does

  • Watches your project files (md, ts, js, json, yaml) for changes
  • Extracts entities: decisions, patterns, components, dependencies, constraints, action items
  • Infers relationships between entities across projects
  • Detects contradictions when decisions conflict
  • Queries in natural language with source citations
  • Routes intelligently between cloud and local LLMs
  • Respects privacy — restricted projects never leave your machine
  • Web dashboard with knowledge graph visualization, live feed, and query explorer
  • MCP server for direct integration with Claude Code

Quick Start

1. Install

```bash
npm install -g @gzoo/cortex
```

Or install from source:

```bash
git clone https://github.com/gzoonet/cortex.git
cd cortex
npm install && npm run build && npm link
```

2. Setup

Run the interactive wizard:

```bash
cortex init
```

This walks you through:

  • LLM provider — Anthropic, Google Gemini, DeepSeek, Groq, OpenRouter, or Ollama (local)
  • API key — saved securely to ~/.cortex/.env
  • Routing mode — cloud-first, hybrid, local-first, or local-only
  • Watch directories — which directories Cortex should monitor
  • Budget limit — monthly LLM spend cap

Config is stored at ~/.cortex/cortex.config.json. API keys go in ~/.cortex/.env.
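
For reference, a minimal ~/.cortex/cortex.config.json might look like this (illustrative only; `llm.mode` and `llm.budget.monthlyLimitUsd` appear in the Configuration section below, while the other keys are assumptions; see docs/configuration.md for the real schema):

```json
{
  "llm": {
    "mode": "hybrid",
    "budget": { "monthlyLimitUsd": 10 }
  },
  "watch": {
    "directories": ["~/projects"],
    "exclude": ["node_modules", "dist", ".git"]
  }
}
```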

3. Register Projects

```bash
cortex projects add my-app ~/projects/app
cortex projects add api ~/projects/api
cortex projects list                       # verify
```

4. Watch & Query

```bash
cortex watch                               # start watching for changes
cortex query "what caching strategies am I using?"
cortex query "what decisions have I made about authentication?"
cortex find "PostgreSQL" --expand 2
cortex contradictions
```

5. Web Dashboard

```bash
cortex serve                               # open http://localhost:3710
```

Excluding Files & Directories

Cortex ignores node_modules, dist, .git, and other common directories by default. To add more:

```bash
cortex config exclude add docs             # exclude a directory
cortex config exclude add "*.log"          # exclude by pattern
cortex config exclude list                 # see all excludes
cortex config exclude remove docs          # remove an exclude
```

How It Works

Cortex runs a pipeline on every file change:

  1. Parse — file content is chunked by a language-aware parser (tree-sitter for code, remark for markdown)
  2. Extract — LLM identifies entities (decisions, components, patterns, etc.)
  3. Relate — LLM infers relationships between new and existing entities
  4. Detect — contradictions and duplicates are flagged automatically
  5. Store — entities, relationships, and vectors go into SQLite + LanceDB
  6. Query — natural language queries search the graph and synthesize answers

All data stays local in ~/.cortex/. Only LLM API calls leave your machine (and never for restricted projects).
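
The six steps can be sketched as one async pass over a changed file (illustrative; real implementations live in @cortex/ingest and @cortex/graph, and every name here is an assumption):

```typescript
interface Entity {
  name: string;
  type: string;
  sourceFile: string; // source trail for citations
}

// Hypothetical pipeline wiring; each stage is injected so the sketch stays self-contained.
async function onFileChanged(
  path: string,
  deps: {
    parse: (p: string) => Promise<string[]>;       // 1. chunk via language-aware parser
    extract: (chunk: string) => Promise<Entity[]>; // 2. LLM entity extraction
    relate: (es: Entity[]) => Promise<void>;       // 3. infer relationships
    detect: (es: Entity[]) => Promise<Entity[]>;   // 4. flag contradictions/duplicates
    store: (es: Entity[]) => Promise<void>;        // 5. persist to SQLite + LanceDB
  },
): Promise<Entity[]> {
  const chunks = await deps.parse(path);
  const entities = (await Promise.all(chunks.map((c) => deps.extract(c)))).flat();
  await deps.relate(entities);
  const flagged = await deps.detect(entities);
  await deps.store(entities);
  return flagged; // step 6 (query) runs later, on demand
}
```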

LLM Providers

Cortex is provider-agnostic. It supports:

  • Anthropic Claude (Sonnet, Haiku) — via native Anthropic API
  • Google Gemini — via OpenAI-compatible API
  • DeepSeek (Reasoner, Chat) — strong reasoning, very affordable
  • Groq — fast inference with free tier
  • Any OpenAI-compatible API — OpenRouter, local proxies, etc.
  • Ollama (Mistral, Llama, etc.) — fully local, no cloud required

Routing Modes

| Mode | Cloud Cost | Quality | GPU Required |
|------|------------|---------|--------------|
| `cloud-first` | Varies by provider | Highest | No |
| `hybrid` | Reduced | High | Yes (Ollama) |
| `local-first` | Minimal | Good | Yes (Ollama) |
| `local-only` | $0 | Good | Yes (Ollama) |

Hybrid mode routes high-volume tasks (entity extraction, ranking) to Ollama and reasoning-heavy tasks (relationship inference, queries) to your cloud provider.

Requirements

  • Node.js 20+
  • LLM API key for cloud modes — Anthropic, Google Gemini, DeepSeek, Groq, or any OpenAI-compatible provider
  • Ollama (for hybrid/local modes)

Configuration

All config lives in ~/.cortex/cortex.config.json. API keys are in ~/.cortex/.env.

```bash
cortex config list                       # see all non-default settings
cortex config set llm.mode hybrid        # switch routing mode
cortex config set llm.budget.monthlyLimitUsd 10  # set budget
cortex config exclude add vendor         # exclude a directory from watching
cortex privacy set ~/clients restricted  # mark directory as restricted
```

Full configuration reference: docs/configuration.md

Commands

| Command | Description |
|---------|-------------|
| `cortex init` | Interactive setup wizard |
| `cortex projects add <name> [path]` | Register a project directory |
| `cortex projects list` | List registered projects |
| `cortex projects remove <name>` | Unregister a project |
| `cortex projects show <name>` | Show project details |
| `cortex watch [project]` | Start watching for file changes |
| `cortex stop` | Stop a running watch process |
| `cortex query <question>` | Natural language query with citations |
| `cortex find <term>` | Find entities by name |
| `cortex ingest <file-or-glob>` | One-shot file ingestion |
| `cortex status` | Graph stats, costs, provider status |
| `cortex costs` | Detailed cost breakdown |
| `cortex contradictions` | List active contradictions |
| `cortex resolve <id>` | Resolve a contradiction |
| `cortex models list/pull/test/info` | Manage Ollama models |
| `cortex serve` | Start web dashboard (localhost:3710) |
| `cortex mcp` | Start MCP server for Claude Code |
| `cortex report` | Post-ingestion summary |
| `cortex privacy set <dir> <level>` | Set directory privacy |
| `cortex config list/get/set` | Read/write configuration |
| `cortex config exclude add/remove/list` | Manage file/directory exclusions |
| `cortex db` | Database operations |

Full CLI reference: docs/cli-reference.md

Web Dashboard

Run cortex serve to open a full web dashboard at http://localhost:3710 with:

  • Dashboard Home — graph stats, recent activity, entity type breakdown
  • Knowledge Graph — interactive D3-force graph with clustering, click to explore
  • Live Feed — real-time file change and entity extraction events via WebSocket
  • Query Explorer — natural language queries with streaming responses
  • Contradiction Resolver — review and resolve conflicting decisions

MCP Server (Claude Code Integration)

Cortex includes an MCP server so Claude Code can query your knowledge graph directly:

```bash
claude mcp add cortex --scope user -- npx @gzoo/cortex mcp
```

This gives Claude Code 12 tools:

| Tool | Description |
|------|-------------|
| `cortex_ask` | Natural language questions about your projects |
| `get_status` | System status and graph stats |
| `list_projects` | List registered projects |
| `find_entity` | Look up entities by name |
| `query_cortex` | Structured knowledge graph queries |
| `get_contradictions` | List detected contradictions |
| `resolve_contradiction` | Resolve a contradiction |
| `search_entities` | Search entities with filters |
| `ingest_file` | Trigger file ingestion |
| `add_project` | Register a new project |
| `remove_project` | Unregister a project |
| `session_brief` | Context summary for current session |
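
Under the hood, MCP clients invoke these tools over JSON-RPC via the stdio transport. A query_cortex call from Claude Code would look roughly like this (the `tools/call` envelope is standard MCP; the `question` argument name is an assumption about Cortex's tool schema):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "query_cortex",
    "arguments": { "question": "What did I decide about caching?" }
  }
}
```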

Architecture

Monorepo with eight packages:

  • @cortex/core — types, EventBus, config loader, error classes
  • @cortex/ingest — file parsers (tree-sitter + remark), chunker, watcher, pipeline
  • @cortex/graph — SQLite store, LanceDB vectors, query engine
  • @cortex/llm — Anthropic/Gemini/OpenAI-compatible/Ollama providers, router, prompts, cache
  • @cortex/cli — Commander.js CLI with 18 commands
  • @cortex/mcp — Model Context Protocol server (stdio transport, 12 tools)
  • @cortex/server — Express REST API + WebSocket relay
  • @cortex/web — React + Vite + D3 web dashboard

Architecture docs: docs/

Privacy & Security

  • Files classified as restricted are never sent to cloud LLMs
  • Sensitive files (.env, .pem, .key) are auto-detected and blocked
  • API key secrets are scanned and redacted before any cloud transmission
  • All data stored locally in ~/.cortex/ — nothing phones home

Full security architecture: docs/security.md

Built With

  • SQLite via better-sqlite3 — entity and relationship storage
  • LanceDB — vector embeddings for semantic search
  • Anthropic Claude — cloud LLM provider
  • Google Gemini — cloud LLM provider (via OpenAI-compatible API)
  • DeepSeek — cloud LLM provider (reasoning + chat)
  • Groq — fast cloud inference
  • Ollama — local LLM inference
  • tree-sitter — language-aware file parsing
  • Chokidar — cross-platform file watching
  • Commander.js — CLI framework
  • React + Vite — web dashboard
  • D3 — knowledge graph visualization

Contributing

See CONTRIBUTING.md for guidelines.

License

MIT — see LICENSE

About

Built by GZOO — an AI-powered business automation platform.

Cortex started as an internal tool to maintain context across multiple client projects. We open-sourced it because every developer who works on more than one thing loses context, and we think this approach — automatic file watching + knowledge graph + natural language queries — is the right way to solve it.