Feb 27, 2026 · 6 min read
AI Memory Without the Overhead: Using Obsidian as Your Agent's Second Brain
Why I'm using plain markdown files for AI agent memory instead of vector databases and semantic search. Sometimes the boring solution is the right one.
TL;DR
Your AI agent can use your Obsidian vault as persistent memory via MCP. No vector embeddings required. Just markdown.
Because apparently the simplest solution is sometimes the right one. I'm as surprised as you are.
I've been using Obsidian as my second brain for a while now. PARA structure, daily notes, project folders - the whole thing. It's where my ideas go to either flourish or quietly decompose in the Archive folder. No judgement either way.
So when I started thinking about giving AI agents persistent memory, my first instinct was predictably over-engineered. Vector embeddings. Semantic search. A lovely RAG pipeline with chunking strategies and embedding models and probably a Kubernetes cluster because why not.
Then I stopped. Looked at my Obsidian vault. Looked back at the architecture diagram I was sketching.
Why am I building a second brain for AI when I already have one sitting right there?
The Problem with AI Amnesia
Every conversation with Claude starts fresh. Great for privacy. Terrible for continuity.
"Remember that WAFR Discovery script we were debugging last week?"
No. No it does not.
"What about my preference for Terraform over CloudFormation?"
Gone. Every time.
You end up re-explaining context, re-stating preferences, re-providing background. It's like working with a brilliant colleague who gets hit with a Men in Black memory wipe at the end of every conversation.

The Over-Engineered Solution
The internet will tell you to build something like this:
Embed everything. Store vectors. Semantic similarity search. Cosine distances. Probably a Pinecone subscription.
Is it clever? Yes. Does it work? Also yes.
But here's the thing - I'm trying to remember that I prefer tables over bullet lists, not searching through a million documents for semantic similarity to "what does the customer think about hybrid cloud."
The Boring Solution
What if the AI just... read my notes?
That's it. That's the architecture diagram.
MCP-Obsidian is a Model Context Protocol server that gives Claude direct read/write access to your vault. No embeddings. No vector store. Just file operations on markdown.
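The principle is simple enough to sketch in a few lines. This is not MCP-Obsidian's actual code - just an illustration of what "file operations on markdown" means in practice, with the folder names from my setup assumed:

```python
from pathlib import Path

def read_context(vault: Path) -> str:
    """Session start: read the standing context note."""
    return (vault / "_context.md").read_text(encoding="utf-8")

def append_session_note(vault: Path, title: str, summary: str) -> None:
    """After significant work: write a summary the agent can re-read later."""
    note = vault / "sessions" / f"{title}.md"
    note.parent.mkdir(parents=True, exist_ok=True)
    note.write_text(f"# {title}\n\n{summary}\n", encoding="utf-8")
```

That's the entire mental model: memory in, memory out, all of it plain files you can open in Obsidian.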
How It Actually Works
I've set up a 5 Agent Memory/ folder in my vault with a simple structure:
```
5 Agent Memory/
├── _context.md   # Who I am, preferences, current priorities
├── sessions/     # Summaries of significant work
├── learnings/    # Things the agent has learned about me
└── working/      # Scratchpad for in-progress tasks
```
When I start a session that needs context, the agent reads _context.md. When we make decisions worth remembering, it writes to sessions/. When it discovers I have strong opinions about Terraform modules not being multi-cloud (fight me), it can propose adding that to learnings/.
The whole thing is human-readable, human-editable, and already synced across my devices because it's just files in my existing vault.
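A _context.md can be as simple as this - the contents below are invented for illustration, but the shape is the point:

```markdown
# Context

## Preferences
- Terraform over CloudFormation, always
- Tables over bullet lists in summaries

## Current priorities
- WAFR Discovery tool (tagging logic)
- Agent memory setup in the vault
```

Nothing the agent couldn't parse, nothing you couldn't edit by hand.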
But What About Semantic Search?
For personal knowledge management, you usually know what you're looking for. "What did we decide about the WAFR Discovery tool's tagging logic?" is a keyword search, not a semantic one.
MCP-Obsidian has a search_notes function. It searches content. It works.
If you genuinely need semantic search across thousands of documents, fair enough. Build the RAG pipeline. But for "remember my preferences and what we worked on last week"? Grep works fine.
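To make the point concrete, here's roughly what keyword search over a vault amounts to - walk the markdown files, match a substring. This is an illustration of the idea, not MCP-Obsidian's actual search_notes implementation:

```python
from pathlib import Path

def search_notes(vault: Path, query: str) -> list[Path]:
    """Return markdown files whose content mentions the query (case-insensitive)."""
    query = query.lower()
    return [
        p for p in sorted(vault.rglob("*.md"))
        if query in p.read_text(encoding="utf-8", errors="ignore").lower()
    ]
```

No index, no embeddings, no cosine distances - and for a personal vault of a few thousand notes, fast enough that you'll never notice.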
The Setup
Add MCP-Obsidian to your Claude Code config:
```json
{
  "mcpServers": {
    "obsidian": {
      "command": "npx",
      "args": ["@mauricio.wolff/mcp-obsidian@latest", "/path/to/your/vault"]
    }
  }
}
```
Windows users (like me) need the cmd /c wrapper because of course they do:
```json
{
  "mcpServers": {
    "obsidian": {
      "command": "cmd",
      "args": ["/c", "npx", "@mauricio.wolff/mcp-obsidian@latest", "C:\\path\\to\\vault"]
    }
  }
}
```
Create your memory folder structure. Write a _context.md with the basics - who you are, what you're working on, how you like things done.
Done. Your AI now has a second brain - and it's the same one you already have.
What You Get
| Capability | How It Works |
|---|---|
| Persistent context | Agent reads _context.md for preferences and priorities |
| Session continuity | Summaries in sessions/ let you resume complex work |
| Accumulated learning | learnings/ builds up knowledge over time |
| Full visibility | It's markdown - you can read and edit everything |
| No infrastructure | No databases, no embeddings, no API costs |
| Already synced | If your vault syncs, so does your AI memory |
What You Don't Get
Let's be honest about the trade-offs:
- No semantic search - keyword search only (good enough for most use cases)
- No automatic summarisation - you decide what's worth remembering
- Scale limits - this is personal knowledge management, not enterprise RAG
- Manual structure - you need to set up and maintain the folder structure
If you're building a production system that needs to search millions of documents with semantic understanding, this isn't it. But for extending your existing second brain to your AI tooling? It's hard to beat.
The Real Win
The best part isn't the technology. It's that everything stays in one place.
My project notes, my daily journal, my reference material, and now my AI's memory of working with me - all in the same vault. All in markdown. All searchable, syncable, and under my control.
When I archive a project, the AI's session notes go with it. When I update my priorities, the agent sees them next time it reads _context.md. There's no synchronisation problem because there's nothing to synchronise.
Is it perfect? No. Does it work? Yes. Should you build a vector database instead?
It depends.
If you're using Obsidian and want to try this, MCP-Obsidian is on GitHub. I've also built some Claude skills for vault management and agent memory that I'll probably write about at some point. Or not. The Archive folder is full of good intentions.
Got a better approach? I'm always happy to be proven wrong if it means a cleaner solution.
✦ Key Takeaways
- MCP-Obsidian gives Claude direct read/write access to your vault - no vector store or embeddings needed
- A simple folder structure (context, sessions, learnings) is enough for persistent AI memory
- Everything stays in one place: your notes, your references, and your AI's memory - all in markdown
- Keyword search handles most personal knowledge queries - semantic search is overkill for preferences and session context