See Cabinet in action
Real demos, community builds, and coverage of Cabinet from around the web.
Stop Losing Context in AI Conversations — Meet Cabinet, the Open Source Fix
by Tom Granot · SyntaxGTM on YouTube
Tom Granot walks through the problem every AI-native developer hits: context evaporates between sessions, prompts, and tools. Cabinet is the open-source fix — a persistent, file-based knowledge base your agents can actually read from and write to.
Churn is just graduation you didn't design for.
- “Cabinet, Mem, and NotebookLM show what happens when a personal knowledge base hits its ceiling.”
- “Cabinet is optimized for what I'd call autonomous compounding — a system where the knowledge base writes itself, continuously, without the user's active participation.”
- “Hila Shmuel is a former Engineering Manager at Apple who left to build Cabinet in public, with the open-source community.”
- “Cabinet was her answer to that. Launched in 2026, it's free, self-hosted, and built on the belief that your AI context should live on your machine — not in a vendor's cloud.”
- “The tagline says it plainly: one knowledge base, AI agents that remember everything.”
- “Cabinet needs a quality signal — some way for users to see whether agent outputs are being used, or quietly accumulating unread.”
by Daria Littlefield in Do Not Churn
Daria Littlefield analyses why personal knowledge bases churn when users outgrow them — and groups Cabinet with Mem and NotebookLM as products that have to design for that ceiling.
LLM Knowledge Bases, The Karpathy Effect & The Solution
by Tom Granot · SyntaxGTM on YouTube
A deep dive into why LLM knowledge bases are the next frontier for AI-powered development — exploring Andrej Karpathy's vision for Software 2.0, why context quality determines AI output quality, and how Cabinet solves the knowledge gap for developer teams.
Why knowledge bases matter now
LLMs don't know your codebase, your team's decisions, or your project history. Every time you start a new session, that context is gone. The Karpathy Effect — the compounding value of feeding rich, structured context into a model — only works if you have a place to store and retrieve that context reliably.
Most developers are still copy-pasting files and hoping for the best. There's a better way.
Cabinet is the solution
Cabinet gives your AI agents a persistent, structured memory of everything that matters: your docs, decisions, architecture, and tribal knowledge — all indexed and ready to inject into any LLM context window.
Stop re-explaining your stack on every prompt. Let Cabinet handle the context so you can focus on building.
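The idea is simple enough to sketch. Cabinet's actual API isn't shown here, but a minimal Python illustration of the file-based pattern — local markdown notes gathered into one context block and prepended to a prompt — looks roughly like this (all file names and function names are hypothetical):

```python
from pathlib import Path

def build_context(kb_dir: str, topics: list[str], max_chars: int = 4000) -> str:
    """Collect notes matching any topic keyword from a file-based
    knowledge base into one block ready to prepend to an LLM prompt."""
    chunks = []
    for path in sorted(Path(kb_dir).glob("**/*.md")):
        text = path.read_text(encoding="utf-8")
        if any(t.lower() in text.lower() for t in topics):
            chunks.append(f"## {path.stem}\n{text.strip()}")
    # Truncate so the block fits the model's context budget.
    return "\n\n".join(chunks)[:max_chars]

# Hypothetical knowledge base: two notes, only one relevant to the query.
kb = Path("kb")
kb.mkdir(exist_ok=True)
(kb / "deploys.md").write_text("We deploy via blue-green releases.")
(kb / "style.md").write_text("Prefer small, focused pull requests.")

context = build_context("kb", ["deploy"])
prompt = f"Project context:\n{context}\n\nQuestion: how do we ship to prod?"
```

Because the notes live as plain files on your machine, agents can both read this context and append new decisions back into the same directory — which is the read/write loop the demos above walk through.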
Want to discuss? Join the community.
Join Discord