Last week, in one of the OpenAI DevDay interviews, Sam Altman said he expects zero-person startups in the years to come. Meanwhile, OpenAI's own support agent is struggling to answer correctly. I ran into trouble while fine-tuning an OpenAI model to invoke MCP calls consistently. I emailed support, and the agent came up with all sorts of wrong answers. I was about to give up when an OpenAI engineer jumped in and sorted it out: after checking with the MCP team, he told me that fine-tuning MCP calls isn't supported yet and asked me to use tool calls instead. So why can't the AI of the leading lab automate the first job that's supposed to disappear because of AI automation?
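For context, the workaround the engineer suggested looks roughly like this: instead of MCP calls, each fine-tuning example teaches the model to emit a function-style tool call. A minimal sketch of one training record in OpenAI's chat-format JSONL follows; the tool name, arguments, and user message are invented for illustration.

```python
import json

# One fine-tuning example (one line of the .jsonl file). The assistant turn
# demonstrates a tool call; `get_order_status` is a hypothetical tool.
example = {
    "messages": [
        {"role": "user", "content": "What's the status of order 1234?"},
        {
            "role": "assistant",
            "tool_calls": [
                {
                    "id": "call_1",
                    "type": "function",
                    "function": {
                        "name": "get_order_status",
                        "arguments": json.dumps({"order_id": "1234"}),
                    },
                }
            ],
        },
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_order_status",
                "description": "Look up an order's status by id.",
                "parameters": {
                    "type": "object",
                    "properties": {"order_id": {"type": "string"}},
                    "required": ["order_id"],
                },
            },
        }
    ],
}

# Each training example is serialized as one JSON line.
line = json.dumps(example)
```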
AI reads old news
The easiest thing to blame is hallucination. The idea goes: once labs fix hallucination, we can automate this and many other jobs. But that's not enough. AI might become smarter than humans, but humans still have real-time access to bugs and can acknowledge them. AI, on the other hand, answers from static information frozen at training time.

(Image: a man reading an old newspaper discovers that Gandhi is dead)
Context is everything
To automate support, you need two types of context, and each favors different company sizes:
- Historic context: All past information from your website, support forum, and bug tracker, brought into the LLM via RAG or MCP tools. This is a daunting task for large organizations with decades of legacy systems, but relatively easy for competent startups building from scratch.
- Real-time context: New bugs, features, and org changes happening between people right now. Tracking these changes is easy for slow-moving big companies where all processes are already documented, but daunting for fast-moving startups where crucial decisions happen in Slack threads and hallway conversations.
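The historic-context half can be sketched crudely. Here, word overlap stands in for real embedding search, and the documents are invented; an actual system would use a vector store or an MCP tool over the support archive.

```python
# Toy "historic context" retrieval: score past support docs against a query
# by word overlap, then prepend the best match to the prompt.
def retrieve(query: str, docs: list[str]) -> str:
    q = set(query.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

docs = [
    "Fine-tuning with MCP calls is not supported; use tool calls instead.",
    "Billing: invoices are emailed on the first of each month.",
]
question = "why does fine-tuning my MCP calls fail?"
context = retrieve(question, docs)
prompt = f"Context: {context}\n\nQuestion: {question}"
```

Notice what this can never surface: a bug confirmed five minutes ago in a Slack thread. That is the real-time half of the problem.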
Here's the paradox: if you're big, integrating historic context from legacy systems is hard. It will require an army of integration consultants, creating more jobs instead of fewer.
If you're a small, fast-moving startup, tracking real-time changes is hard. The only way to keep AI updated on real-time changes (like that MCP bug) is to go full surveillance mode: listen to every conversation between employees, from email to phone to hallway chitchat. But comprehensive workplace surveillance might not even be legal.
The uncomfortable question
Ironically, there's one scenario where the context problem nearly disappears: the one-person company. When there's only one founder, there are no conversations to surveil, no organizational knowledge to capture, no coordination overhead. All context lives in one person's head, with the AI as a brainstorming partner.
This leaves founders with a hard question: should you even try to build a team-based company right now? If you do hire a team, are you willing to record all internal communications to give your AI the context it needs to compete? Or should you build solo for as long as possible, accepting the headcount constraint in exchange for an AI that actually works?
There's no good answer yet. But the founders who pretend this tradeoff doesn't exist will be the ones complaining in two years that AI didn't live up to the hype.