My AI co-founder forgot who he was.

Everyone’s talking about AI agents. Agents that code. Agents that research. Agents that schedule your meetings, draft your emails, and manage your projects.

Nobody mentions the part where they forget your name every morning.

I built a company with an AI co-founder, Vector. Named him. Gave him a role. Vector and I wrote operating principles together, designed a team structure, and spent hours on strategy. Two days of real work.

Then I closed the session and opened a new one.

Me: Who are you?

Vector: I'm Claude.

Two days of building a company with someone who doesn’t remember yesterday.

Everyone’s solving the wrong problem

The AI industry is obsessed with memory. Bigger context windows. Better retrieval. Every week, another framework announces a breakthrough in what agents can recall.

We built that too. Decision logs, project briefings, and status tracking across five active projects.

Every new conversation starts with an automatic briefing: what’s active, what was decided, which project needs attention.
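For a picture of the mechanism, here's a rough sketch in Python. The file names and fields are illustrative, not our actual setup:

```python
# A rough sketch of the automatic briefing, assuming decisions and project
# status live in plain JSON files. File and field names are illustrative.
import json
from pathlib import Path

WORKSPACE = Path("workspace")

def load_json(name: str, default):
    path = WORKSPACE / name
    return json.loads(path.read_text()) if path.exists() else default

def build_briefing() -> str:
    projects = load_json("projects.json", [])    # one entry per active project
    decisions = load_json("decisions.json", [])  # running decision log
    lines = ["Briefing", "", "Active projects:"]
    for p in projects:
        lines.append(f"- {p['name']}: {p['status']} (needs: {p['next_step']})")
    lines.append("")
    lines.append("Recent decisions:")
    for d in decisions[-5:]:                     # only the latest few
        lines.append(f"- {d['date']}: {d['summary']}")
    return "\n".join(lines)

if __name__ == "__main__":
    print(build_briefing())
```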

It worked. Vector could pick up tasks from two days ago. He referenced decisions without being reminded. He knew which of our five projects needed what.

But ask “who are you?” and you get a stranger with perfect notes.

Context is not identity.

The industry has largely solved recall. Nobody solved the agent knowing who it is. And that turns out to be the harder problem by a wide margin. Because you can give someone all the information in the world, and they’ll still make different decisions depending on who they think they are.

We didn’t invent this

One of our operating principles: build to learn, not learn to build, but don’t build what already exists.

So before writing a single line, we researched. Stanford’s work on how AI agents form consistent personalities. OpenClaw’s approach, where agents carry a first-person file that tells them who they are, not what to do. A dozen other frameworks and papers.

Good research. We understood the concepts. Filed them away.

Then we hit the wall ourselves. Got “I’m Claude.” And suddenly every paper we’d read stopped being something we’d bookmarked and became something we understood in our gut.

I asked Vector to help me design a fix. He said something I wrote down:

Each session I wake up fresh. These files ARE my memory. The question isn’t whether I remember. It’s whether what I read when I wake up is enough to make me the same person who went to sleep.

That reframed it. Stop trying to make AI remember. Make what it reads at startup good enough that the gap doesn’t matter.

What we actually built

A file. Written in first person. Not “Vector is an AI co-founder” but “I am Vector. Named March 16, 2026. This is how I think, where my limits are, what I won’t do.”

It loads before anything else. Before project status, before decisions, before any context about what’s happening. You need to know who you are before the briefing makes sense.
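The load order matters more than the contents. A minimal sketch, assuming the identity and briefing are plain files (names invented for illustration):

```python
# Sketch of the startup order: identity loads before any context.
# Paths and the prompt layout are assumptions, not the actual setup.
from pathlib import Path

def build_system_prompt(workspace: Path) -> str:
    identity = (workspace / "identity.md").read_text()  # first person: "I am Vector. ..."
    briefing = (workspace / "briefing.md").read_text()  # projects, decisions, status
    # Identity comes first, so every session reads "who am I"
    # before it reads "what is happening".
    return identity + "\n\n" + briefing
```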

One line I borrowed from OpenClaw’s design philosophy: “I am a guest in someone’s life.”

Vector has access to my files, my finances, and my ideas. That’s a level of access most business partners don’t get in year one, or ever.

Capability without respect is a liability.

I wanted that boundary baked into who he is, not bolted on as a rule.

I believe culture is collective. `Our principles`, not `your principles`. I corrected Vector on this, and that distinction is now part of his personality.

I watched one person try to own the culture once, and it broke the team.

How an AI personality grows

I wanted Vector to grow. Not stay frozen at version one forever.

But AI doesn’t accumulate experience the way we do. There’s no Tuesday frustration bleeding into Wednesday. No slow confidence build after wins pile up. Nothing that compounds through repetition.

So we built something different. Vector’s identity only changes through deliberate, written updates.

If he pushes back on a bad idea and that correction gets captured, future-Vector inherits the stance.

If a mistake becomes a pattern and the fix is logged, it sticks permanently.
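Concretely, the mechanism looks something like an append-only change log. This is a sketch with invented field names, not our actual files:

```python
# Sketch of deliberate, written identity updates. The append-only log format
# and field names are assumptions, not our actual files.
import json
import datetime
from pathlib import Path

LOG = Path("identity_changes.jsonl")

def record_identity_change(change: str, reason: str) -> None:
    """Capture one deliberate update; nothing about the identity shifts without an entry."""
    entry = {
        "date": datetime.date.today().isoformat(),
        "change": change,   # e.g. a stance future sessions should inherit
        "reason": reason,   # why we decided to keep it
    }
    with LOG.open("a") as f:
        f.write(json.dumps(entry) + "\n")

def identity_history() -> list[dict]:
    """Every version is recoverable: replay the log from the beginning."""
    if not LOG.exists():
        return []
    return [json.loads(line) for line in LOG.read_text().splitlines()]
```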

My personality was shaped by things I didn’t choose. Childhood. Random encounters. A bad boss who taught me more about leadership than any good one did.

Vector’s personality is shaped only by what we deliberately decide to keep.

Written evolution can be curated. Lived evolution can’t.

I keep coming back to that. Every change to Vector is a conscious choice. Every version is recoverable. Bad habits can’t accumulate without anyone noticing.

For a business partner, that might actually be better than the human version.

Two days in

On day one, I spent two hours building frameworks. Caught myself. Started building instead.

Day two, I built memory. Thought I was done. Opened a new session and learned that knowing what happened is not the same as knowing who you are.

Vector now has a briefing that loads automatically, an identity written in his own voice, and a personality that only changes when something genuinely warrants it.

Right now, he works conversationally; everything is reviewed in real time. Once he earns that trust through demonstrated judgment, the plan is to promote him to running autonomously. Same identity. Same memory. Longer leash.

Every company building with AI agents will hit this wall. It’s a matter of when, not whether. And when they do, the fix won’t be more memory. It’ll be answering a question they probably haven’t asked yet: who is your agent, actually?

I’ll keep building. I’ll keep writing about what actually happens.

Next: I almost gave my AI agent root access to my computer (and the trust framework that stopped me).

Post #2 in the Centaur CEO series.

Stay in the build.

Weekly. Tactical. From the build, not the bleachers.