Your AI coding history, indexed across every tool
xtctx ingests conversations from Claude Code, Cursor, Copilot, Codex, and Gemini, indexes them locally with hybrid search (BM25 + embeddings), and exposes recall and writeback over MCP. Switch tools mid-project without re-briefing the model.
What it actually does
Five tools, one local index, one continuity policy. Full README →
Cross-tool recall via MCP
xtctx serve runs an MCP server over stdio. Your assistant calls xtctx_search and xtctx_project_knowledge at session start to recall what you already decided, debugged, and shipped — regardless of which tool you used last week.
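Under the hood such a recall is an ordinary MCP tools/call request over stdio; a sketch (the query text is made up):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "xtctx_search",
    "arguments": { "query": "why did we pick LanceDB over sqlite-vss?" }
  }
}
```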
Hybrid search (BM25 + vector)
Conversations and structured knowledge land in LanceDB. Queries fuse full-text and embedding similarity via Reciprocal Rank Fusion, with hybrid, semantic, and keyword modes selectable per call. The same pipeline backs the MCP recall tools and the runtime web UI search bar.
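The fusion step itself is small. A minimal sketch of generic Reciprocal Rank Fusion — the k = 60 constant and function names are illustrative, not xtctx internals:

```typescript
// Generic Reciprocal Rank Fusion (RRF) sketch — not xtctx internals.
// Each ranker contributes 1 / (k + rank) per document; scores are summed,
// so a document placed well by both BM25 and the vector search rises to the top.
function rrfFuse(rankings: string[][], k = 60): string[] {
  const scores = new Map<string, number>();
  for (const ranking of rankings) {
    ranking.forEach((docId, i) => {
      const rank = i + 1; // ranks are 1-based
      scores.set(docId, (scores.get(docId) ?? 0) + 1 / (k + rank));
    });
  }
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([docId]) => docId);
}

const bm25Hits = ["doc-a", "doc-b", "doc-c"];   // keyword ranking
const vectorHits = ["doc-a", "doc-c", "doc-d"]; // embedding ranking
// doc-a tops both lists; doc-c appears in both, so it outranks doc-b and doc-d.
console.log(rrfFuse([bm25Hits, vectorHits]));
```

Keyword-only and semantic-only modes fall out of the same shape: pass a single ranking instead of fusing two.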
Continuity policy across five tools
One shared.yaml declares the context feed, skills, commands, agents, MCP servers, slash commands, and whitelist policy you want present in every tool. xtctx sync renders that into Claude Code, Cursor, Copilot, Codex, and Gemini in their native formats. xtctx serve auto-reconciles drift on a timer.
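A shared.yaml might look roughly like this — the section names and paths below are illustrative, not the documented schema:

```yaml
# Illustrative sketch only — consult the README for the actual shared.yaml schema.
context:
  feed: docs/project-context.md   # hypothetical path
skills:
  - code-review
commands:
  - name: test
    run: npm test
mcpServers:
  xtctx:
    command: xtctx
    args: [serve]
whitelist:
  - xtctx_search
  - xtctx_project_knowledge
```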
Local-first, no SaaS
Everything lives under .xtctx/ in your repo and ~/.xtctx/ for the global baseline. Embeddings run in-process via @xenova/transformers; the index is LanceDB on disk. No telemetry, no cloud calls, no account. Your conversation history never leaves your machine.
Get started
Install, init, then serve. Quick start →
Install puts the xtctx CLI on your PATH. Init creates .xtctx/ and renders continuity blocks into your AI tools. Serve starts the MCP + API + runtime UI; from a source checkout you can also run node dist/src/cli/index.js serve. Or read the full README for hooks, policy merging, and MCP client config.
FAQ
The questions people actually ask before installing.
Does it call out to any cloud service?
No. Embeddings run in-process via @xenova/transformers, the search index is LanceDB on disk, and conversation history is read from the AI tools' own local storage. There is no telemetry, no account, and no outbound network calls during normal operation; the only network access is the GitHub stars/downloads counter on this landing page itself.
Which tools are supported?
Claude Code, Cursor, Copilot, Codex, and Gemini. xtctx sync renders the shared continuity policy into each tool's native format (CLAUDE.md, .cursor/rules, MCP server config, etc.).
How does it handle drift in tool storage formats?
tests/drift/ snapshots the parser output against fixtures from each tool, so a format change shows up as a failing test on the next CI run. A nightly canary workflow runs the live CLIs (Claude Code, Codex, Gemini) end-to-end against current versions to catch breakage that fixtures alone would miss. Cursor and Copilot are GUI tools, so the canary covers three of the five; the mutation suite covers all five.
What about Cursor and Copilot — those are GUI tools, not CLIs?
How is xtctx different from just having a long context window?
What does the MCP integration look like in practice?
Your MCP client config gets an entry under mcpServers.xtctx (with command: "xtctx" and args: ["serve"]), and the assistant gets recall and writeback tools: xtctx_search, xtctx_project_knowledge, and xtctx_recent_sessions for reading; xtctx_save_decision, xtctx_save_error_solution, and xtctx_save_faq for writing. xtctx sync also generates SessionStart hooks for Claude Code so recall fires automatically at the top of every session.
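For clients configured by hand, that entry looks like this in a standard MCP client config file:

```json
{
  "mcpServers": {
    "xtctx": {
      "command": "xtctx",
      "args": ["serve"]
    }
  }
}
```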