From Chat Interface to Agent Platform
The architectural evolution from a simple AI chat app to an autonomous agent platform — told through the decisions that shaped it.
Every platform starts as a feature
Google started as a search box. Slack started as a chat room. AWS started as a server rental.
LikeClaw started as a chat interface. You typed a message. An AI responded. That was it.
Eighty-four days later, it’s an autonomous agent platform with sandboxed execution, multi-model support, a skills marketplace, background task scheduling, and a self-healing CI pipeline.
This is the story of that evolution — not as a planned roadmap, but as a series of architectural decisions, each one opening doors we didn’t know existed.
Stage 1: Chat (November 21-22)
The first two days produced a chat interface. Not a toy. A real chat system with:
- Session management (multiple conversations, each preserved)
- Streaming responses (tokens appear in real-time)
- Agent switching (different AI personalities for different tasks)
- File system integration (the AI can read and write files)
At this stage, LikeClaw was a better ChatGPT. Same interaction model — human asks, AI answers — but with persistent files and specialized agents.
What made this foundation work for everything that came later: the separation between the chat layer and the agent layer. The chat UI didn’t know or care what the agent was doing behind the scenes. It just displayed messages. This meant we could make the agents dramatically more capable without touching the chat code.
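To make that concrete, here is a minimal TypeScript sketch of that boundary. The names (ChatMessage, Agent, renderReply) are illustrative, not LikeClaw's actual code; the point is that the chat layer only consumes a stream of tokens, so the agent behind it can change freely.

```typescript
// Illustrative sketch of the chat/agent boundary (names are hypothetical).
// The chat layer only knows how to display streamed tokens; the agent layer
// can grow arbitrarily capable behind the same interface.

interface ChatMessage {
  role: "user" | "assistant";
  content: string;
}

// The agent layer: anything that turns a conversation into a stream of tokens.
interface Agent {
  respond(history: ChatMessage[]): AsyncIterable<string>;
}

// The chat layer: renders whatever the agent streams, nothing more.
async function renderReply(
  agent: Agent,
  history: ChatMessage[],
  onToken: (token: string) => void, // the UI's only job: show what arrives
): Promise<ChatMessage> {
  let content = "";
  for await (const token of agent.respond(history)) {
    content += token;
    onToken(token);
  }
  return { role: "assistant", content };
}

// A trivial agent to show the contract; a real agent would call a model,
// read workspace files, use tools, and so on.
const echoAgent: Agent = {
  async *respond(history) {
    const last = history[history.length - 1]?.content ?? "";
    for (const word of `You said: ${last}`.split(" ")) {
      yield word + " ";
    }
  },
};

renderReply(echoAgent, [{ role: "user", content: "hello" }], (t) => console.log(t));
```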
Stage 2: Workspace (November 22-28)
By the end of week one, we had workspaces. A workspace is a persistent environment: conversations, files, agents, settings, all grouped together.
This sounds simple. It was actually the most important architectural decision of the project.
Without workspaces, every conversation is ephemeral. The AI doesn’t remember context between sessions. Files aren’t organized. There’s no concept of “the project I’m working on.”
With workspaces, users create durable contexts. A “Marketing” workspace has marketing files, marketing agents, and marketing conversation history. A “Development” workspace has code files, coding agents, and technical conversations. Each workspace is a world.
The key insight: workspaces are the unit of context. Everything else — agents, files, schedules, settings — lives inside a workspace. This made later features trivially easy to scope. “Add scheduling” became “add scheduling per workspace.” “Add settings” became “add settings per workspace.” The workspace boundary contained complexity.
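A rough sketch of that idea in TypeScript. The field names are assumptions, but they show how every later feature becomes data keyed by the workspace rather than a global concern.

```typescript
// Hypothetical shape of a workspace record; field names are illustrative.
// The important property: everything is scoped to a workspace id, so new
// features ("add scheduling", "add settings") become new fields or tables
// keyed by workspaceId instead of platform-wide concerns.

interface Workspace {
  id: string;
  name: string;                      // "Marketing", "Development", ...
  conversations: Conversation[];     // chat history lives here
  files: WorkspaceFile[];            // persistent files the agents can read and write
  agents: AgentConfig[];             // which agents are available in this context
  schedules: ScheduledTask[];        // added in stage 4, scoped for free
  settings: Record<string, unknown>; // per-workspace configuration
}

interface Conversation { id: string; messages: { role: string; content: string }[] }
interface WorkspaceFile { path: string; content: string }
interface AgentConfig { id: string; name: string }
interface ScheduledTask { id: string; cron: string; prompt: string }
```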
Stage 3: Multi-agent (December 12-27)
Month two brought the agent explosion. Instead of one AI personality, we built specialized agents:
- Chat agent: General-purpose conversations with file awareness
- UX agent: Design-focused with Perplexity integration for research
- PM agent: Project management with task delegation capabilities
- Studio agent: Creative work with image generation tools
- Image master: Specialized in visual content creation
Each agent has its own system prompt, its own tools, and its own personality. The chat-ui component didn’t change — it still just displayed messages. But the agents behind it became dramatically more capable.
The architectural win: agents are configuration, not code. Adding a new agent doesn’t require changing the platform. It requires defining a system prompt, selecting tools, and configuring behavior. We store agents in the database. Users can eventually create their own. The platform doesn’t care how many agents exist or what they do.
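Sketched in TypeScript, with hypothetical field names, an agent definition might be nothing more than a record like this:

```typescript
// Sketch of "agents are configuration, not code" (all names are illustrative).
// Adding an agent means inserting a row like these, not shipping new platform code.

interface AgentConfig {
  id: string;
  name: string;         // "UX agent", "PM agent", ...
  systemPrompt: string; // the personality and instructions
  tools: string[];      // which tools this agent may call
  model: string;        // which model backs it
}

// Example rows as they might live in the database.
const agents: AgentConfig[] = [
  {
    id: "ux",
    name: "UX agent",
    systemPrompt: "You are a design-focused assistant...",
    tools: ["read_file", "write_file", "web_research"],
    model: "example-model",
  },
  {
    id: "pm",
    name: "PM agent",
    systemPrompt: "You are a project manager...",
    tools: ["read_file", "delegate_task"],
    model: "example-model",
  },
];

// The platform code stays generic: it looks up a config and runs it.
function getAgent(id: string): AgentConfig | undefined {
  return agents.find((a) => a.id === id);
}
```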
Stage 4: Autonomous (December 28 - January 31)

This is where things got interesting. Every stage before this assumed a human was sitting at a keyboard, watching the AI work. Stage 4 removed that assumption.
Scheduling came first. Users define tasks that run on a schedule. The AI wakes up, loads the workspace, executes the task, saves results, and notifies the user. No human present.
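A simplified sketch of that loop, with made-up helper names, looks roughly like this:

```typescript
// Sketch of the scheduled-task loop described above (all names hypothetical).
// The real system presumably uses a proper scheduler; this only shows the shape:
// wake up, load the workspace, run the task, persist results, notify.

interface ScheduledTask {
  id: string;
  workspaceId: string;
  prompt: string;
  nextRunAt: Date;
}

async function runDueTasks(now: Date, tasks: ScheduledTask[]) {
  for (const task of tasks) {
    if (task.nextRunAt > now) continue;

    const workspace = await loadWorkspace(task.workspaceId);         // files, agents, settings
    const result = await executeTask(workspace, task.prompt);        // the agent does the work
    await saveResult(task.workspaceId, task.id, result);             // results land in the workspace
    await notifyUser(task.workspaceId, `Task ${task.id} finished`);  // inbox / notification
  }
}

// Placeholder implementations so the sketch stands alone.
async function loadWorkspace(id: string) { return { id }; }
async function executeTask(_ws: { id: string }, prompt: string) { return `ran: ${prompt}`; }
async function saveResult(_workspaceId: string, _taskId: string, _result: string) {}
async function notifyUser(_workspaceId: string, _message: string) {}
```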
Background tasks came next. During a conversation, the AI can delegate long-running work to a background agent. “Analyze this codebase” might take 20 minutes. The user doesn’t need to wait. The background agent runs in its own E2B sandbox, and the results appear when ready.
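The delegation itself can be sketched as follows. The sandbox interface here is a stand-in rather than the actual E2B SDK; what matters is that the conversation returns immediately and the result arrives later.

```typescript
// Sketch of delegating long-running work to a background agent.
// SandboxLike is a hypothetical abstraction, not the E2B API.

interface SandboxLike {
  run(task: string): Promise<string>; // isolated execution, however it is provided
  close(): Promise<void>;
}

async function delegateToBackground(
  createSandbox: () => Promise<SandboxLike>,
  task: string,
  onDone: (result: string) => void,
): Promise<string> {
  // Fire and forget: the user keeps chatting while this runs.
  void (async () => {
    const sandbox = await createSandbox();
    try {
      const result = await sandbox.run(task); // e.g. "Analyze this codebase"
      onDone(result);                         // surfaces later as an event / inbox item
    } finally {
      await sandbox.close();
    }
  })();

  return "Working on it in the background. You'll be notified when it's done.";
}
```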
Event-driven architecture tied it together. Every action — task completed, file created, agent finished — generates an event. Events appear in the user’s inbox. Events trigger notifications. Events are the connective tissue between autonomous agents and the humans who manage them.
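A minimal sketch of that event flow, with illustrative event names; the inbox and the notifier are just independent consumers of the same stream.

```typescript
// Sketch of the event shape and fan-out (names are illustrative).
// Every autonomous action emits one of these.

type PlatformEvent =
  | { kind: "task_completed"; workspaceId: string; taskId: string; summary: string }
  | { kind: "file_created"; workspaceId: string; path: string }
  | { kind: "agent_finished"; workspaceId: string; agentId: string };

type EventHandler = (event: PlatformEvent) => void;

class EventBus {
  private handlers: EventHandler[] = [];

  subscribe(handler: EventHandler) {
    this.handlers.push(handler);
  }

  emit(event: PlatformEvent) {
    for (const handler of this.handlers) handler(event);
  }
}

// The inbox and the notifier subscribe independently.
const bus = new EventBus();
bus.subscribe((e) => console.log("inbox:", e));
bus.subscribe((e) => { if (e.kind === "task_completed") console.log("notify:", e.summary); });

bus.emit({ kind: "task_completed", workspaceId: "w1", taskId: "t1", summary: "Report ready" });
```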
The hard part wasn’t building the features. It was rethinking assumptions. When a user is present, errors can be shown in a dialog box. When no user is present, errors need to be captured, logged, and surfaced later. When a user is watching, partial progress is informative. When no one is watching, only the final result matters.
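The same failure therefore takes one of two paths, sketched here with hypothetical types:

```typescript
// Sketch of the interactive vs. autonomous error paths (names hypothetical).
// The same failure is either shown immediately or captured for later,
// depending on whether anyone is watching.

interface ErrorRecord {
  workspaceId: string;
  message: string;
  occurredAt: Date;
}

async function handleFailure(
  error: unknown,
  ctx: { workspaceId: string; userPresent: boolean },
  ui: { showDialog(message: string): void },
  store: { save(record: ErrorRecord): Promise<void> },
) {
  const message = error instanceof Error ? error.message : String(error);

  if (ctx.userPresent) {
    // Interactive: the user can decide what to do right now.
    ui.showDialog(message);
  } else {
    // Autonomous: capture, log, and surface it later as an inbox event.
    console.error(`[${ctx.workspaceId}]`, message);
    await store.save({ workspaceId: ctx.workspaceId, message, occurredAt: new Date() });
  }
}
```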
Stage 5: Platform (February 1-13)
The final evolution: from a product that we control to a platform that extends itself.
E2B sandboxes made execution safe and scalable. Every task runs in isolation. We can run a hundred tasks simultaneously without them interfering with each other.
The skills marketplace lets users (and eventually third parties) create reusable automation packages. Skills are reviewed for security before publishing — unlike open marketplaces with documented security issues.
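As a rough illustration (the real manifest format may differ), a skill could be a declarative package whose requested permissions are visible to reviewers before it ships:

```typescript
// Hypothetical skill manifest; all fields are assumptions, not the actual
// LikeClaw format. Declaring permissions up front is what makes review possible.

interface SkillManifest {
  name: string;
  version: string;
  description: string;
  author: string;
  // Declared up front so reviewers and users can see what the skill touches.
  permissions: Array<"read_files" | "write_files" | "network" | "schedule_tasks">;
  // The instructions the agent follows when the skill is invoked.
  instructions: string;
}

const exampleSkill: SkillManifest = {
  name: "weekly-report",
  version: "1.0.0",
  description: "Summarize workspace activity into a weekly report file.",
  author: "example-author",
  permissions: ["read_files", "write_files"],
  instructions: "Read recent conversations and files, then write report.md ...",
};
```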
Self-healing CI means the platform literally fixes its own bugs. Claude reads GitHub issues, writes fixes, creates PRs, and reviews code. The development process is partially automated.
The eval framework measures agent quality systematically. We don’t just hope agents work well — we measure it across dozens of scenarios and track quality over time.
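A stripped-down sketch of what "measure it" can mean in practice, assuming a simple pass/fail scenario format; quality becomes a pass rate tracked over time rather than a gut feeling.

```typescript
// Sketch of systematic agent evaluation (structure is an assumption).
// Each scenario pairs an input with a check the output must satisfy.

interface EvalScenario {
  name: string;
  input: string;
  passes(output: string): boolean;
}

async function runEvals(
  agent: (input: string) => Promise<string>,
  scenarios: EvalScenario[],
): Promise<{ passed: number; total: number }> {
  let passed = 0;
  for (const scenario of scenarios) {
    const output = await agent(scenario.input);
    if (scenario.passes(output)) passed++;
    else console.warn(`FAIL: ${scenario.name}`);
  }
  return { passed, total: scenarios.length };
}

// Example scenario and a stub agent, just to show the shape of a run.
const scenarios: EvalScenario[] = [
  { name: "writes report", input: "Write a report to report.md", passes: (o) => o.includes("report.md") },
];

runEvals(async (input) => `I will write ${input}`, scenarios).then((r) => console.log(r));
```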
The pattern behind the evolution
Looking back, the pattern is clear: each stage expanded the boundaries of what “AI agent” means.
Stage 1: AI responds to human input. Stage 2: AI works within durable contexts. Stage 3: AI specializes for different tasks. Stage 4: AI works without human supervision. Stage 5: AI extends the platform itself.
None of these stages required rewriting the previous ones. The chat-ui from week one still works. The workspace model from week two still organizes everything. The agent system from month two still powers all interactions.
Good architecture isn’t about predicting the future. It’s about building foundations that can support a future you haven’t imagined yet.
We started with a chat interface. We ended with a platform. And the foundation we built on day one is still holding it all up.
The five stages of becoming a platform
- Stage 1: Chat (Week 1). Users talk to an AI. It responds. The core interaction loop. Simple but essential.
- Stage 2: Workspace (Week 2). Users organize conversations into workspaces with persistent files. Context becomes durable.
- Stage 3: Multi-agent (Week 4). Different agents for different tasks. A coding agent. A creative agent. A PM agent. Specialization.
- Stage 4: Autonomous (Week 6). Scheduled tasks. Background execution. The AI works without the user present.
- Stage 5: Platform (Week 10). Sandboxed execution. Skills marketplace. Self-healing CI. The product extends itself.
Questions about platform evolution
Did you plan all five stages from the beginning?
We planned the first three. Stages 4 and 5 emerged from user needs and technical opportunities. The key was building an architecture in stages 1-3 that could absorb stages 4-5 without a rewrite.
What was the hardest transition?
Stage 3 to 4 — going from interactive to autonomous. Interactive systems assume a user is present to handle errors and make decisions. Autonomous systems need to handle everything on their own. That required rethinking error handling, state management, and result delivery.
Is the architecture done evolving?
Not even close. We're working on multi-agent collaboration, where multiple agents coordinate on complex tasks. The platform architecture is designed to keep evolving.