Midbrain

AI can't learn what it can't remember.
We're building the memory and continual learning layer for agents.

Agents run in short loops.

Context in. Inference out. Reset.

They recall, but they don't update.
Twice the memory, same mistakes.

Storage isn't intelligence.
Learning is.

To improve, a system has to change because of what it saw — not just remember seeing it.

Today
context → inference → reset

What's missing
experience → memory → update → behavior

Without the update step, memory is just storage.

We don't ship a vector store. We ship three memory substrates under a learned controller — mirroring how human memory is actually organised.

Three substrates: episodic, semantic, procedural — one store per time horizon
The controller: ADD, UPDATE, LINK, FORGET as a learned policy

Episodic holds raw traces. Semantic consolidates them into notes and a temporal knowledge graph. Procedural captures the patterns the agent starts to reuse.

A learned controller decides — on every event — what to add, update, link, or forget. Memory is a policy, not a write.
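In pseudocode terms, that loop could look like the sketch below. This is purely illustrative: the names, the rule-based scoring, and the thresholds are stand-ins for the learned policy, not Midbrain's actual controller.

```python
from dataclasses import dataclass, field
from difflib import SequenceMatcher

@dataclass
class Memory:
    text: str
    strength: float = 1.0
    links: list = field(default_factory=list)

@dataclass
class MemoryStore:
    episodic: list = field(default_factory=list)    # raw traces
    semantic: dict = field(default_factory=dict)    # consolidated notes / graph
    procedural: list = field(default_factory=list)  # reusable patterns

def similarity(a: str, b: str) -> float:
    # Stand-in for a learned relevance model.
    return SequenceMatcher(None, a, b).ratio()

def controller_step(store: MemoryStore, event: str) -> str:
    """Pick one of ADD / UPDATE / LINK for an incoming event."""
    best = max(store.episodic, key=lambda m: similarity(m.text, event), default=None)
    score = similarity(best.text, event) if best else 0.0
    if best is None or score < 0.3:
        store.episodic.append(Memory(event))        # ADD: novel experience
        return "ADD"
    if score > 0.9:
        best.strength += 1.0                        # UPDATE: reinforce the trace
        return "UPDATE"
    store.episodic.append(Memory(event, links=[best]))  # LINK: related, distinct
    return "LINK"

def forget(store: MemoryStore, floor: float = 0.5) -> None:
    """FORGET: decay everything, drop what falls below the floor."""
    for m in store.episodic:
        m.strength *= 0.9
    store.episodic = [m for m in store.episodic if m.strength >= floor]
```

The point the sketch makes: every event routes through a decision, and forgetting is a first-class operation, not a cache eviction.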

Every memory product on the market answers the same question: what did we see?

We think the question that matters is different.

Retrieval asks 'what did we see?' Experience asks 'what did we learn?'

Retrieval makes the agent faster at being wrong.
Experience makes it right.

You don't earn the right to build the learning layer without first winning the retrieval layer. SmartSearch is ours — an index-free, structured retrieval system for agents operating over long horizons.

93.5% LoCoMo
88.4% LongMemEval-S
8.5x Token Efficiency
~650ms CPU Latency

See It In Action

We pointed SmartSearch at the Linux kernel (~2GB) and raced it against an LLM doing grep-and-tool-use. As tasks get longer, SmartSearch keeps reasoning grounded by ranking the most relevant memories instead of expanding context. No massive semantic index — stable performance across long execution chains.

SmartSearch on the Linux kernel (~2GB)
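The core idea, reduced to a sketch: rank candidate memories against the task and admit only what fits a fixed token budget, instead of letting context grow with the task. The scoring function and names here are hypothetical, not SmartSearch internals.

```python
def rank_into_budget(query_terms, memories, budget_tokens):
    """Keep the most relevant memories that fit a fixed token budget,
    rather than expanding context with everything seen so far."""
    def score(mem: str) -> float:
        words = set(mem.split())
        return len(words & set(query_terms)) / (len(words) or 1)

    relevant = [m for m in memories if score(m) > 0]
    relevant.sort(key=score, reverse=True)

    picked, used = [], 0
    for mem in relevant:
        cost = len(mem.split())            # crude token estimate
        if used + cost <= budget_tokens:
            picked.append(mem)
            used += cost
    return picked
```

Because the budget is fixed, cost per step stays flat as the execution chain grows, which is what "stable performance across long execution chains" requires.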

Benchmark Comparison
System      LoCoMo   LongMemEval-S
EverMemOS   92.3%    82.0%
Memora      86.3%    —
MemOS       80.8%    77.8%
Mem0        68.4%    66.4%
Zep         71.2%    —

Phase 1 — Memory infra. Best-in-class retrieval and structure. Shipping now.

Phase 2 — Experience graph. Every interaction becomes a structured trace: who, what, where, when, why, outcome.
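A structured trace along those six dimensions might look like this. The field names and example values are illustrative, not the product's schema.

```python
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class ExperienceTrace:
    who: str       # actor: user, agent, or tool
    what: str      # action taken
    where: str     # surface or environment
    when: str      # ISO-8601 timestamp
    why: str       # intent or triggering goal
    outcome: str   # result, including failures

trace = ExperienceTrace(
    who="agent",
    what="retried API call with backoff",
    where="billing-service",
    when="2025-01-15T09:30:00Z",
    why="previous call timed out",
    outcome="success after 2 retries",
)
```

Recording outcome alongside intent is what turns a log into an experience: the graph can later answer "what worked" rather than only "what happened."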

Phase 3 — Continual learning engine. User corrections become training signal. The controller updates. The agent adapts.
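One simple way a correction becomes training signal, as a sketch: a perceptron-style update that shifts the controller's weights toward the action the user corrected to. The update rule and every name here are hypothetical, not the actual learning engine.

```python
def update_policy(weights, features, predicted, corrected, lr=0.1):
    """A user correction shifts weight toward the corrected action
    and away from the action the controller actually chose."""
    if predicted == corrected:
        return weights                      # no error, no signal
    for name, value in features.items():
        weights[(corrected, name)] = weights.get((corrected, name), 0.0) + lr * value
        weights[(predicted, name)] = weights.get((predicted, name), 0.0) - lr * value
    return weights
```

The shape of the loop is what matters: corrections are not stored as more context, they change the policy that writes memory in the first place.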

We're not building better retrieval.
We're building agents that get better with use.

We're working with a small number of design partners building long-running AI agents. If that's you — we want to hear about it.