You Don't Need a Vector Database for AI Agent Memory

Plain text files, organized well, gave my AI assistant a memory system that surfaces recommendations I'd never think to ask for.

Written by Alex Hillman
Collaboratively edited with JFDIBot
A third of the features in my AI assistant system weren’t my idea.

They were recommendations that emerged from a plain text memory system. No vector database. No embeddings. No infrastructure to maintain. Just markdown files in folders, organized by date.

That number surprised me too. So here’s how it works.

The audit trail that became a brain

Every time my AI assistant Andy does something meaningful, it writes a short log: what happened, what decisions were made, what options were considered, and what files were touched.

These logs go into a folder. Every day gets a new folder. Inside are markdown files from each agent and command that ran.
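As a rough sketch of what that logging step could look like. The folder layout and file names here are my illustration, not the author's exact scheme:

```python
from datetime import date
from pathlib import Path

def write_audit_log(agent: str, summary: str, base: Path = Path("logs")) -> Path:
    """Append a short markdown audit entry to today's folder.

    Layout (illustrative): logs/<YYYY-MM-DD>/<agent>.md
    """
    day_dir = base / date.today().isoformat()
    day_dir.mkdir(parents=True, exist_ok=True)
    log_file = day_dir / f"{agent}.md"
    with log_file.open("a", encoding="utf-8") as f:
        f.write(f"## {agent}\n\n{summary}\n\n")
    return log_file
```

Appending rather than overwriting lets an agent that runs several times in one day keep all of its entries in a single file.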

I originally built this for transparency.

I wanted to see what the system was doing when I wasn’t looking.

But the logs turned out to be far more valuable as memory.

What a weekly synthesis actually looks like

Once a week, a synthesis command reads all the recent audit trails, past syntheses, daily briefs, and git commit history.

Then it runs three passes.

A pattern miner looks for things I’ve done the same way multiple times. The reasoning is straightforward: if I handled something the same way three times in a row, it should probably become a standard operating procedure instead of a guess every time.
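The "same way three times" heuristic is easy to sketch. This toy version just counts repeated action labels; it stands in for what is presumably an LLM pass over the logs in the real system:

```python
from collections import Counter

def sop_candidates(actions: list[str], threshold: int = 3) -> list[str]:
    """Flag any action handled `threshold` or more times as a candidate
    for a standard operating procedure. A toy stand-in for the
    pattern-mining pass, not the author's implementation."""
    counts = Counter(actions)
    return [action for action, n in counts.items() if n >= threshold]
```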

A recommendation tracker suggests features or improvements based on what it’s observed.

These aren’t generic suggestions. They’re grounded in actual patterns from my usage.

A system gap detector looks for parts of the system that work fine independently but would work better if they talked to each other.

Each pattern gets a confidence score.

Recommendations get sorted by readiness, potential impact, and risk of breaking something.
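That sort might look something like this. The three fields come from the text above, but the 0-to-1 scales and the simple combined score are my assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    name: str
    readiness: float  # 0-1: how ready it is to implement
    impact: float     # 0-1: expected benefit
    risk: float       # 0-1: chance of breaking something

def rank(recs: list[Recommendation]) -> list[Recommendation]:
    # Higher readiness and impact push a recommendation up the list;
    # higher risk pushes it down. The weighting is a guess.
    return sorted(recs, key=lambda r: r.readiness + r.impact - r.risk, reverse=True)
```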

The strategic adviser agent ties it all together and surfaces the most promising opportunities.

This is where that “a third” number comes from. Real features I’m using every day, suggested by a system that was paying closer attention to my habits than I was.

Why plain text works

The insight that keeps proving itself: memory designed for learning is different from memory designed for retrieval.

Vector databases are good at finding similar content.

That’s retrieval.

What I needed was a system that could notice patterns across time, connect decisions to outcomes, and suggest improvements I wouldn’t think to ask for.

That’s learning.

Plain text files are perfect for this because they’re readable by both humans and AI models without any translation layer. A language model can read a folder of markdown files and reason about the patterns in them. No embedding step. No similarity threshold to tune. No database to manage.

The tradeoff is obvious: this approach doesn’t scale to millions of documents.

It doesn’t need to.

My system has around 2,000 sessions logged.

Plain text handles that without breaking a sweat.

The actual architecture in one paragraph

Agents write audit logs to daily folders. A weekly synthesis reads those logs plus commit history. Pattern mining, recommendation tracking, and gap detection run as separate passes. Results feed into a synthesis document that I review. Recommendations that I implement get noticed in the next cycle, building a feedback loop.
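That paragraph can be sketched as a single loop. The three passes below are trivial keyword filters standing in for what are really LLM prompts, and every name here is illustrative:

```python
from pathlib import Path

def weekly_synthesis(log_root: Path, window: int = 7) -> str:
    """Sketch of the weekly cycle: read the last `window` daily folders
    of audit logs and emit a small synthesis document."""
    day_dirs = sorted(d for d in log_root.iterdir() if d.is_dir())[-window:]
    entries = [p.read_text(encoding="utf-8")
               for d in day_dirs for p in sorted(d.glob("*.md"))]

    # Three passes over the same raw material (keyword stand-ins for prompts).
    patterns = [e for e in entries if "repeated" in e.lower()]
    recommendations = [e for e in entries if "suggest" in e.lower()]
    gaps = [e for e in entries if "gap" in e.lower()]

    return (
        "# Weekly synthesis\n\n"
        f"- entries read: {len(entries)}\n"
        f"- pattern candidates: {len(patterns)}\n"
        f"- recommendations: {len(recommendations)}\n"
        f"- gap candidates: {len(gaps)}\n"
    )
```

The output is itself a markdown file, so the next weekly run can read past syntheses the same way it reads the daily logs, which is what closes the feedback loop.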

That’s it. Markdown files, folders named by date, and prompts that know what to look for.

The lesson worth taking

When people ask about AI memory, the conversation jumps straight to embeddings and vector stores.

Those tools have their place.

But if you’re running a system with a few thousand interactions, plain text organized into a clear structure will get you surprisingly far.

The part that actually matters isn’t the storage format.

It’s deciding what’s worth remembering and building a process that reviews those memories systematically.

The memory system that suggested a third of my features is just folders full of markdown. The value came from giving an AI a reason to read them regularly and a framework for what to do with what it found.

Start with text files. Add complexity when the text files stop working. In my experience, that day is further away than you’d think.
