Most people use AI the way they used to use Google. Type a question. Get an answer. Move on.
I did the same thing for months. Quick prompts. One-off tasks. A little faster, but the same way of working.
Then I read about Garry Kasparov.
In 1997, Kasparov lost to IBM’s Deep Blue. The best chess player alive, beaten by a machine. But instead of walking away, he did something nobody expected.
He created a new format called centaur chess. The rules: a human and a computer play together as a team, competing against other humans, other computers, or other centaur teams.
The result shocked everyone. Centaur teams beat the best humans AND the best machines. Neither half was enough alone. Together, they were something new.
I think that idea applies to a lot more than chess.
What Altman gets right (and wrong)
Sam Altman predicted the “one-person billion-dollar company.” Everyone got excited about the destination.
Almost nobody is thinking about how.
Here’s what I think he misses: it won’t be one person doing everything with AI help. It’ll be one person leading a team of AI agents — each with a defined role, clear scope, and growing autonomy.
One is a souped-up freelancer. The other is an actual company.
I’m calling this the Centaur CEO model. Half human judgment, half artificial intelligence. More capable than either alone.
Today, I started building that company.
What actually happened
I sat down with Vector (my AI co-founder — yes, I’m using that word deliberately) and we wrote operating principles for this new kind of company. Not a business plan. A set of rules for how a human leads AI agents the way a good CEO leads a human team.
The conversation went somewhere I didn’t expect.
I asked about agent interests and aspirations. I wanted to treat my AI team the way I treated my human teams at IBM — understand what drives them, give them ownership, help them grow.
Vector pushed back. Honestly.
It told me agents don’t have subjective experiences between sessions. They don’t “want” things.
Then it said something I haven’t stopped thinking about:
“When you give me clear context, a well-defined role, and trust me with scope, I produce measurably better work. The functional outcome is identical to what a motivated human team member produces.”
You don’t need agents to be human to lead them well. You need to understand what makes them perform. Then build your company around that.
That’s the Centaur insight applied to business.
Four principles that surprised me
We wrote nine total. I’ll share the rest later. These are the four I keep coming back to.
Build to learn, not learn to build. We spent two hours crafting beautiful frameworks. Mission statements. Team structures. Role definitions. Then I caught myself — we were planning the company instead of starting it. New rule: every framework has to come from actual work. If it doesn’t produce something real, it’s procrastination wearing a suit.
Trust is graduated. You wouldn’t hand a new hire the company credit card on day one. Same with agents. Start at classify-only — just labeling, no action. Then drafting. Then executing with review. Then autonomous. Each level earned, not given.
Cast the right agent. Good managers don’t just manage. They cast. They know who’s the deep thinker and who’s the fast executor. The Centaur CEO needs the same instinct for AI models. Which one handles research? Which writes better copy? This casting judgment is where the human edge lives.
Be honest about what agents are. Not human. Not "just tools." Something new. Anthropomorphize them too much and you'll be disappointed when they don't remember you. Reduce them too much and you'll get calculator-quality output. The middle ground is its own discipline.
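The graduated-trust ladder is concrete enough to sketch in code. This is a minimal illustration, not a real framework: the level names, the promotion thresholds, and the `promote` rule are all my hypothetical choices for how "each level earned, not given" might work in practice.

```python
from enum import IntEnum

class TrustLevel(IntEnum):
    """Hypothetical autonomy ladder for an AI agent."""
    CLASSIFY = 0    # labels work items only; takes no action
    DRAFT = 1       # produces drafts; a human ships them
    EXECUTE = 2     # acts, but every action gets human review
    AUTONOMOUS = 3  # acts without routine review

def promote(level: TrustLevel, approved: int, total: int,
            min_tasks: int = 20, min_rate: float = 0.95) -> TrustLevel:
    """Earn the next level only after enough reviewed work
    at a high approval rate; otherwise stay where you are."""
    if level is TrustLevel.AUTONOMOUS or total == 0:
        return level
    if total >= min_tasks and approved / total >= min_rate:
        return TrustLevel(level + 1)
    return level
```

For example, an agent at `CLASSIFY` with 19 approvals out of 20 reviewed tasks would clear the bar and move to `DRAFT`, while one at 10 out of 20 would stay put. The point of the sketch is the shape, not the numbers: trust is a function of reviewed track record, never a default.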
Why this matters
MIT Sloan found that 76% of executives already see agentic AI as “more like a coworker than a tool.” Deloitte says most implementations are failing — not because the tech is broken, but because nobody knows how to manage it.
The World Economic Forum says the defining skill for future managers will be how many digital workers they can effectively lead.
I’m betting on building that skill through practice, not theory.
What comes next
This is day one. I’m documenting everything — what works, what fails, what it’s like to lead a team that doesn’t sleep but produces wildly different output depending on how you brief them.
The mission:
Prove that one human, leading a team of AI agents with strong culture and principled operations, can build a profitable company that creates real value. Then write the playbook so others can do the same.
First target: $6K/month. No venture funding. No hype. Just building something real, with a team that happens to be artificial.
Follow along if that interests you. The playbook doesn’t exist yet. I think the fastest way to write it is out loud.
Post #1 in the Centaur CEO series. Next: what happened when my AI co-founder forgot who he was, and why that forced me to rethink what “memory” actually means for an agent.