17 Releases in 6 Days: How We Shipped MCP Servers

From v2.2 to v2.19 in one week — MCP servers, locale awareness, video pipeline, and why shipping speed matters for AI agents.

One week. Seventeen releases.

- 17 releases shipped
- 8 major features
- 6 days elapsed
- v2.2 → v2.19 version jump

The week that wasn’t planned

March 5, 2026. We’d just published a blog post about teaching our AI to schedule itself. Schedules were working. The agent could wake up at 6 AM, run a task, and deliver results before your morning coffee. We were supposed to spend the next week on polish and stability.

That didn’t happen.

Instead, we shipped 17 releases in 6 days. Version 2.2.0 became version 2.19.0. Eight major features landed. And the most important one — MCP servers as first-class entities — wasn’t even on the roadmap 10 days ago.

Why MCP servers matter right now

Model Context Protocol is becoming the standard for how AI agents connect to tools. Think of it as USB for AI: a universal plug that lets any agent talk to any service without custom integration code.

The timing isn’t accidental. The AI agent market is consolidating around a few key standards, and MCP is winning. Anthropic published the spec. OpenAI adopted it. Every serious AI agent platform either supports it or is scrambling to.

OpenClaw added MCP support months ago. But their implementation requires local installation, manual YAML configuration, and runs without sandboxing — meaning a malicious MCP server has the same unrestricted access to your machine as everything else in their security model. We took a different approach.

What we built

MCP servers in LikeClaw are first-class entities. They show up in the Skills page with their own dedicated tab. You can browse, add, configure, and remove them from a clean UI — no config files, no terminal commands.

In the chat input, typing # now autocompletes both skills and MCP servers. Want your agent to search the web mid-task? Reference the DuckDuckGo MCP server. Need to query a database? Add the PostgreSQL MCP server and reference it. The agent handles the rest.
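Under the hood, this kind of autocomplete is conceptually one prefix match over the merged list of skills and MCP servers. A minimal sketch of the idea — the function and the example entries are illustrative, not LikeClaw's actual data model:

```python
def autocomplete(prefix: str, skills: list[str], mcp_servers: list[str]) -> list[str]:
    """Return skill and MCP server names matching a '#' prefix query."""
    query = prefix.lstrip("#").lower()
    candidates = skills + mcp_servers  # one merged namespace for both kinds
    return sorted(name for name in candidates if name.lower().startswith(query))

# Hypothetical entries for illustration only.
skills = ["summarize", "translate"]
servers = ["duckduckgo-search", "postgres"]
print(autocomplete("#duck", skills, servers))  # ['duckduckgo-search']
```

Merging both kinds into one namespace is what makes `#` feel uniform in chat: the user doesn't need to know whether a capability is a skill or an MCP server.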

Every MCP server runs inside the same sandboxed environment as everything else on LikeClaw. A compromised MCP server can’t read your files, steal your credentials, or install malware. It can’t do anything outside its sandbox. When the task ends, the sandbox is destroyed.

That’s the difference between “we support MCP” and “we support MCP safely.”

The other seven features

MCP servers were the headline, but the rest of the week wasn’t idle.

Locale and timezone awareness landed on March 8. Your agent now knows what language you prefer and what time zone you’re in. This sounds trivial until you realize that an agent scheduling a task for “tomorrow morning” needs to know whether “morning” means 8 AM in Tokyo or 8 AM in Berlin. The user’s locale is persisted and injected directly into the agent’s system prompt.
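Mechanically, this kind of injection is just string assembly at prompt-build time: resolve the current time in the user's zone, prepend it as context. A sketch of the idea, with hypothetical field names — not the actual prompt format:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def build_system_prompt(base: str, locale: str, timezone: str) -> str:
    """Prepend persisted locale/timezone context so relative phrases
    like 'tomorrow morning' resolve in the user's local time."""
    now = datetime.now(ZoneInfo(timezone))
    context = (
        f"User locale: {locale}. "
        f"User timezone: {timezone} (current local time: {now:%Y-%m-%d %H:%M})."
    )
    return f"{context}\n\n{base}"

print(build_system_prompt("You are a helpful agent.", "de-DE", "Europe/Berlin"))
```

Injecting the resolved local time (not just the zone name) is what lets the model compute "tomorrow morning" without a tool call.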

Profile and memory settings shipped on March 4. Users can now control what the agent remembers across sessions — a new settings page accessible from the sidebar. This is the foundation for persistent agent memory that actually respects user preferences.

Stop inference went live on March 6. You can now halt a running LLM response mid-stream. The message queue ensures that stopping a response doesn’t corrupt the conversation state. This seems obvious in hindsight, but the engineering was tricky: you’re interrupting an active stream while maintaining consistency across the message history.
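The invariant worth spelling out: the tokens already received must still be committed to the conversation history, whether the stream completes or is halted. A simplified sketch of that invariant — there is no real LLM client here; the iterator stands in for the token stream:

```python
def consume_stream(tokens, should_stop):
    """Accumulate streamed tokens; on stop, commit the partial
    response so conversation state stays consistent."""
    parts = []
    stopped = False
    for tok in tokens:
        parts.append(tok)
        if should_stop():
            stopped = True
            break  # halt mid-stream, keeping what we already have
    # Whether stopped or completed, the accumulated text is committed.
    return {"text": "".join(parts), "stopped": stopped}

stream = iter(["The ", "answer ", "is ", "42."])
result = consume_stream(stream, should_stop=lambda: False)
print(result)  # {'text': 'The answer is 42.', 'stopped': False}
```

The production version also has to drain or cancel the underlying network stream and fence off any queued follow-up messages, which is where the message queue mentioned above comes in.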

Schedule hardening continued from the previous week. We fixed an infinite loop bug, resolved a conflict between TTL indexes and environment-context date injection, and added a new endpoint for creating feeds with attached schedules. The scheduling system is now significantly more robust.

Workspace-first intelligence was a single commit on March 6 that changed agent behavior meaningfully. Before, the agent would sometimes search the web for information that was already in the user’s workspace files. Now, a “workspace-first” prompt rule ensures the agent checks your uploaded files before going external.
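A prompt rule like this is typically a fixed instruction block plus a listing of what's in the workspace, so the model knows what it can check before going external. A hypothetical sketch — the rule text and helper are illustrative, not the shipped prompt:

```python
WORKSPACE_FIRST_RULE = (
    "Before searching the web, check whether the answer is already in the "
    "user's workspace files listed below. Only search externally if it is not."
)

def with_workspace_rule(system_prompt: str, workspace_files: list[str]) -> str:
    """Append the workspace-first rule and a file listing to the prompt."""
    listing = "\n".join(f"- {name}" for name in workspace_files)
    return f"{system_prompt}\n\n{WORKSPACE_FIRST_RULE}\nWorkspace files:\n{listing}"

print(with_workspace_rule("You are a helpful agent.", ["q3-report.pdf", "notes.md"]))
```

Listing the filenames matters: without them, the model can't tell whether "the Q3 numbers" plausibly live in the workspace, so it defaults back to searching.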

The video pipeline is internal tooling, but worth mentioning because it reflects how we think about shipping speed. Automated subtitles, logo intros, thumbnail generation, and YouTube upload — all scripted. We’re producing demo videos for every major feature, and the pipeline means a 2-minute demo takes 10 minutes to produce end-to-end, not 2 hours.

Billing domain matching added subdomain-based pre-approval for enterprise billing flows. Smaller feature, but it unblocked a specific enterprise deployment.

The velocity question

Seventeen releases in six days raises the obvious question: are you moving too fast?

The answer depends on your architecture. If every release requires a QA cycle, a staging deploy, a manual sign-off, and a coordinated push — then yes, 17 in 6 days would be reckless.

But we don’t work that way. Every feature ships behind a sandboxed execution model. Code runs in isolated containers. A bug in the MCP server handler can’t take down the scheduling system. A bad deploy doesn’t corrupt user data because user data lives in encrypted storage, not in the application layer.

The cost of shipping a broken feature is low because the blast radius is contained. The cost of not shipping is high because the market is moving fast. So we ship.

Auto version bumps on push to main. CI that catches regressions before they reach production. A deploy pipeline that went from Node 20 to Node 24 this week without breaking a single user session.
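The auto-bump itself can be as small as a script CI runs on every push to main. A hedged sketch, assuming a plain `MAJOR.MINOR.PATCH` version string — this is the arithmetic of the week, not our actual pipeline:

```python
def bump_minor(version: str) -> str:
    """Bump the minor version and reset patch, e.g. 2.2.0 -> 2.3.0."""
    major, minor, _patch = (int(p) for p in version.split("."))
    return f"{major}.{minor + 1}.0"

v = "2.2.0"
for _ in range(17):  # seventeen releases in six days
    v = bump_minor(v)
print(v)  # 2.19.0
```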

This isn’t about moving fast and breaking things. It’s about moving fast because you’ve built the infrastructure that makes speed safe.

What this means for OpenClaw users

If you’re evaluating AI agent platforms right now, MCP support is probably on your checklist. Most platforms have it or are adding it. The question isn’t whether MCP is supported. It’s how.

Does it run locally or in the cloud? Is it sandboxed or unrestricted? Can you manage it from a UI or do you need to edit YAML files? Does the platform ship improvements weekly or quarterly?

We shipped MCP servers, locale awareness, stop inference, memory settings, schedule fixes, agent intelligence upgrades, a video pipeline, and billing improvements — in six days. Version 2.2.0 to 2.19.0.

That’s not a release cycle. That’s a Tuesday through Sunday.

What shipped in 6 days

  1. MCP servers as first-class entities

     Standalone MCP servers with dedicated UI tab, # autocomplete in chat, and DuckDuckGo web search built in.

  2. Locale and timezone awareness

     Persisted user locale injected into agent prompts. Browser timezone sent via header for time-sensitive tasks.

  3. Profile and memory settings

     New settings page where users control what their agent remembers across sessions.

  4. Stop inference on demand

     Halt a running LLM response mid-stream. Message queue ensures nothing gets lost.

  5. Video production pipeline

     Automated subtitles, logo intros, thumbnails, and YouTube upload — built for internal content creation.

  6. Schedule execution hardening

     Fixed infinite loops, TTL index conflicts, and added feeds-with-schedules endpoint.

  7. Agent intelligence upgrade

     Workspace-first prompt rules so the agent checks your files before searching the web.

  8. Billing domain matching

     Subdomain-based pre-approval for enterprise billing flows.

Questions about MCP servers and shipping speed

What are MCP servers and why do they matter?

MCP (Model Context Protocol) is a standard for connecting AI agents to external tools and data sources. Instead of building custom integrations for every service, MCP lets you plug in any compatible server — web search, databases, APIs — and your agent can use it immediately. It's becoming the default way AI agents talk to the outside world.

How does LikeClaw's MCP support compare to OpenClaw's?

OpenClaw supports MCP servers but requires local installation, manual configuration, and runs everything on your machine without sandboxing. LikeClaw runs MCP servers as first-class entities in the cloud, inside sandboxed environments, with a dedicated UI for management. You don't install anything locally.

Can I add my own MCP servers?

Yes. The Skills page now has a dedicated MCP Servers tab where you can add, configure, and manage servers. You can also reference them in chat using # autocomplete, just like skills.

Why ship 17 releases in 6 days instead of batching them?

Small, frequent releases mean each change is tested in production quickly and problems are isolated. A single large release hides bugs behind other bugs. We'd rather ship a fix in an hour than batch it into a release next week.