What Ezra remembers (and what he forgets)
Memory is what makes Ezra useful over time. Memory is also a privacy risk. Here's the architecture, the trade-offs, and why we built it the way we did.
An AI without memory is a stranger every time. You re-explain who you are, what you do, what you like, what you don't, what you want. The first conversation is fine. By the fortieth, it's exhausting.
An AI with memory but no constraints is a different problem. It quietly hoards facts about you, encodes patterns you didn't know it noticed, and one day surprises you with how much it knows.
We wanted memory that earns its keep without becoming creepy. Here's what that looks like.
The three layers
Ezra's memory is built in three distinct layers, each with a different job and a different lifespan.
Layer 1: Profile (the slow stuff)
This is what doesn't change much. Your name. Your phone number. Your role. Your timezone. The names of the people you mention often (your spouse, your kids, your key clients). Your basic preferences — short replies vs. long, casual vs. formal, "Sarah" vs. "Mrs. Martinez."
You can ask "what do you know about me?" and Ezra shows you this layer in plain English. You can correct it ("I'm based in Seattle now, not Portland") and the change is instant.
This layer is what makes Ezra feel like he knows you, instead of like he's meeting you for the hundredth time.
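For concreteness, here's a minimal sketch of what the profile layer might look like as a typed record. The field names, the people map, and the updateProfile helper are illustrative, not Ezra's actual schema; the point is that a correction is a plain overwrite, applied instantly.

```typescript
// Illustrative shape of the profile layer; not Ezra's real schema.
interface Profile {
  name: string;
  phone?: string;
  role?: string;
  timezone: string;                   // e.g. "America/Los_Angeles"
  location?: string;
  people: Record<string, string>;     // name -> relationship, e.g. "Marcus" -> "spouse"
  preferences: {
    replyLength: "short" | "long";
    tone: "casual" | "formal";
    signoff?: string;                 // e.g. "— Sarah"
  };
}

// Corrections are plain overwrites: no review queue, the change is instant.
function updateProfile(profile: Profile, patch: Partial<Profile>): Profile {
  return { ...profile, ...patch };
}

const current: Profile = {
  name: "Sarah",
  timezone: "America/Los_Angeles",
  location: "Portland",
  people: { Marcus: "spouse" },
  preferences: { replyLength: "short", tone: "casual" },
};

// "I'm based in Seattle now, not Portland"
const updated = updateProfile(current, { location: "Seattle" });
console.log(updated.location);        // "Seattle"
```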
Layer 2: Episodic memory (things that happened)
Past conversations. Decisions you made. Drafts you approved or rejected. Specific events ("user mentioned planning daughter's birthday party on June 12"). These are the moments that might matter later.
By default, this layer is kept for 90 days, then automatically deleted. You can extend that to a year if you want long-term context, or shorten it to 24 hours if you want maximum privacy. Either way, it's your call, and you can change it any time.
When you text Ezra about something — say, "what was that book Sarah recommended?" — this is the layer that gets searched. Episodic memory is what makes Ezra useful for the long-running conversations of your life.
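Here's a sketch of how those retention windows might be enforced, assuming a periodic sweep. The entry shape maps directly to the options above, but the code is an assumption, not Ezra's internals.

```typescript
// Illustrative episodic entry plus a retention sweep.
interface EpisodicEntry {
  id: string;
  happenedAt: Date;
  kind: "conversation" | "decision" | "draft" | "event";
  summary: string;  // "user mentioned planning daughter's birthday party on June 12"
}

type RetentionWindow = "24h" | "90d" | "1y";  // user-selectable, changeable any time

const WINDOW_MS: Record<RetentionWindow, number> = {
  "24h": 24 * 60 * 60 * 1000,
  "90d": 90 * 24 * 60 * 60 * 1000,
  "1y": 365 * 24 * 60 * 60 * 1000,
};

// Anything older than the user's chosen window is deleted, not archived.
function sweep(entries: EpisodicEntry[], window: RetentionWindow, now = new Date()): EpisodicEntry[] {
  const cutoff = now.getTime() - WINDOW_MS[window];
  return entries.filter(e => e.happenedAt.getTime() >= cutoff);
}
```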
Layer 3: Learned patterns (the rules)
This is the layer that compounds. Over time, Ezra notices repeated behaviors. You always decline meetings on Fridays. You never reply to recruiter emails. You always sign off "— Sarah." You always reschedule clients to Tuesday afternoons.
When Ezra notices a pattern, he doesn't just start acting on it. He surfaces it and asks.
You confirm or reject. Confirmed patterns shape his future suggestions. Rejected ones don't.
Crucially: Ezra never silently encodes a behavior. If he's going to start doing something differently, you know about it first.
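Here's one way to model that lifecycle. The threshold, the status values, and the helper names are assumptions; what matters is that nothing reaches "confirmed" without the user saying so.

```typescript
// Sketch of the pattern lifecycle: observed -> surfaced -> confirmed or rejected.
type PatternStatus = "observed" | "surfaced" | "confirmed" | "rejected";

interface LearnedPattern {
  description: string;          // "declines meetings on Fridays"
  occurrences: number;          // how many times the behavior was seen
  status: PatternStatus;
}

const SURFACE_THRESHOLD = 5;    // assumed: ask only after repeated evidence

function recordObservation(p: LearnedPattern): LearnedPattern {
  const occurrences = p.occurrences + 1;
  // Never silently encode: a pattern must be surfaced and confirmed
  // by the user before it changes any behavior.
  const status =
    p.status === "observed" && occurrences >= SURFACE_THRESHOLD
      ? "surfaced"
      : p.status;
  return { ...p, occurrences, status };
}

function appliesToSuggestions(p: LearnedPattern): boolean {
  return p.status === "confirmed";  // rejected and pending patterns change nothing
}
```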
What we don't keep
We don't keep the contents of emails Ezra reads on your behalf. He pulls a thread, processes it, drafts a reply, then drops it. We keep the metadata of what happened — "drafted reply to Sarah on March 5" — not the email itself.
We don't keep your calendar events. Same pattern. He reads what he needs to answer your question, then he's done.
We don't keep voice recordings. We don't keep biometric data. We don't keep browsing history. We don't keep your contacts list unless you explicitly share it for a specific task.
The principle: store the smallest amount of state that makes the next interaction useful. Not "store everything in case it's useful later."
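In code, that discipline might look like this: the thread body lives only inside the function's scope, and the only value that escapes is a one-line metadata record. All names here are hypothetical.

```typescript
// Hypothetical "process, then drop" flow for an email thread.
interface EmailThread { from: string; subject: string; body: string; }
interface ActionRecord { what: string; when: Date; }  // e.g. "drafted reply to Sarah"

async function presentDraftToUser(draft: string): Promise<void> {
  // stub: in practice, texted to the user for approval
}

async function handleThread(
  thread: EmailThread,
  draftReply: (t: EmailThread) => Promise<string>,
): Promise<ActionRecord> {
  const draft = await draftReply(thread);   // full content is used here...
  await presentDraftToUser(draft);
  // ...and nothing past this point retains thread.body or the draft text.
  return { what: `drafted reply to ${thread.from}`, when: new Date() };
}
```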
How to see what he knows
Three commands, available any time:
- "What do you know about me?" — Ezra shows you the profile layer, in plain English. Everything he's stored.
- "Forget [X]" — Ezra removes that specific item. "Forget that I'm vegetarian" — gone.
- "Delete me" — Ezra wipes everything. The whole account. Within 24 hours, gone. Backups within 30 days. No retention loophole.
None of these are buried in a settings menu. They're commands you text Ezra in your normal thread, the same way you'd ask him anything else.
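A rough sketch of how that dispatch could work, with hypothetical store helpers standing in for the real ones:

```typescript
// Hypothetical dispatch for the three memory commands. The store helpers
// (showProfile, forgetFact, wipeAccount) are stubs, not a real API.
async function showProfile(): Promise<string> {
  return "Here's everything I know about you: ...";
}
async function forgetFact(fact: string): Promise<void> {
  /* delete the matching item from the profile and episodic stores */
}
async function wipeAccount(): Promise<void> {
  /* queue full deletion: account within 24 hours, backups within 30 days */
}
async function handleNormally(text: string): Promise<string> {
  return "...";                             // everything that isn't a memory command
}

async function handleMessage(text: string): Promise<string> {
  const lower = text.trim().toLowerCase();
  if (lower === "what do you know about me?") {
    return showProfile();                   // the profile layer, in plain English
  }
  const forget = lower.match(/^forget (.+)$/);
  if (forget) {
    await forgetFact(forget[1]);            // "forget that i'm vegetarian" -> gone
    return "Done. Forgotten.";
  }
  if (lower === "delete me") {
    await wipeAccount();
    return "Deleting everything. It'll be gone within 24 hours.";
  }
  return handleNormally(text);              // a normal message in the same thread
}
```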
Why this is different from "the AI that learns from you"
Most products that say "AI learns from you" mean: every interaction goes into a training dataset that improves the model for everyone. Your data makes the product better, including for other users, possibly forever.
Ezra doesn't do that. The model — Anthropic's Claude — never trains on your data. We use the API explicitly configured so your conversations never contribute to training.
What "learns" is the personalized context layer above the model. That's yours alone. It exists for you. It dies when you tell it to. It does not flow back into the model's weights, does not improve other users' experiences, does not get sold or shared.
Your patterns stay yours. The model stays general. The personalization happens in the middle, where you control it.
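A minimal sketch of that middle layer, using Anthropic's TypeScript SDK: the assembled context travels with each request as a system prompt and influences nothing beyond it. The buildContext helper, the model id, and the store contents are illustrative assumptions.

```typescript
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic();             // reads ANTHROPIC_API_KEY from the environment

// The personalization is just text assembled per request; it rides along
// as a system prompt and never touches the model's weights.
function buildContext(profile: string, episodes: string[], patterns: string[]): string {
  return [
    `Profile:\n${profile}`,
    `Relevant past events:\n${episodes.join("\n")}`,
    `Confirmed patterns:\n${patterns.join("\n")}`,
  ].join("\n\n");
}

async function reply(userMessage: string, context: string): Promise<string> {
  const response = await client.messages.create({
    model: "claude-sonnet-4-5",             // illustrative model id
    max_tokens: 1024,
    system: context,                        // your layer, sent with this request only
    messages: [{ role: "user", content: userMessage }],
  });
  const block = response.content[0];
  return block.type === "text" ? block.text : "";
}
```

Delete your account and the context is gone; the model is unchanged either way, because it never absorbed anything in the first place.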
The trade we made
An AI with no memory at all would be safer in a narrow sense — nothing to leak, nothing to misuse. But it would also be useless after the first hour. You'd be re-explaining yourself constantly. People wouldn't put up with it for long.
An AI with unconstrained memory would be useful, but it would also be a perpetual privacy crisis. Every interaction is a new disclosure. Every fact is a new liability.
We picked the middle. Memory that earns its keep, that's visible to you, that you can edit and delete, that doesn't train any models, that decays on its own when it's not being used. Useful enough to feel like a friend who knows you. Bounded enough that you can sleep at night.
You can ask Ezra what he knows about you any time. Try it. The answer should never surprise you. If it does, that's a bug, and we want to know about it.