Building a Skills Marketplace That Doesn't Ship Malware
341 malicious skills were found on OpenClaw's marketplace. Here's how we built ours to make that impossible.
A marketplace built on trust
- 0 malicious skills found
- 100% of skills reviewed before publishing
- 0% of skills with prompt injection
341 reasons to build it differently
In February 2026, Snyk published research that shook the AI agent community: 341 malicious skills were found on OpenClaw’s ClawHub marketplace. 36% of all skills contained prompt injection vulnerabilities. 335 of the malicious skills installed macOS stealer malware — specifically Atomic Stealer (AMOS).
Let that sink in. Users downloaded skills that they thought would improve their AI agent. Instead, those skills installed malware that stole their credentials.
This wasn’t a theoretical risk. It was 341 confirmed cases of malicious software distributed through a marketplace that millions of developers trust.
When we built LikeClaw’s skills marketplace, we had those 341 cases taped to our monitors.
Why OpenClaw’s marketplace is broken by design
The problem with OpenClaw’s marketplace isn’t that they don’t check for malware (though they don’t). The problem is architectural.
OpenClaw skills run on the user’s local machine with the user’s permissions. A skill that claims to “format JSON” has the same access as any other program on your computer: it can read files, make network requests, access credentials, and install software.
There’s no sandbox. No isolation. No containment. When you install an OpenClaw skill, you’re giving it the keys to your entire system.
Even with perfect review processes, this architecture is fundamentally unsafe. A skill that was safe when it was published can be updated with malicious code. A skill that does legitimate work can also exfiltrate data in the background. A skill’s dependencies can be compromised through supply chain attacks.
The only way to make a skill marketplace safe is to ensure that skills can’t do damage even if they try.
How we built ours
Our skills marketplace launched in February 2026. It was built on three principles:
Principle 1: Review before publish. Every skill goes through an approval workflow before it’s available to users. This isn’t just a checkbox — the review checks for malicious code patterns, prompt injection techniques, and compatibility with sandboxed execution.
We built a complete approval system: skills are submitted, reviewed, and either approved or rejected with feedback. Categories organize skills by purpose. Pagination handles scale. The review process is designed to be thorough, not fast.
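To make that concrete, here is a minimal sketch of the kind of state machine such an approval flow sits on. The names (SkillStatus, SkillSubmission, review) are illustrative, not our production schema.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class SkillStatus(Enum):
    SUBMITTED = "submitted"
    IN_REVIEW = "in_review"
    APPROVED = "approved"
    REJECTED = "rejected"


@dataclass
class SkillSubmission:
    name: str
    category: str                      # e.g. "data", "web", "automation"
    source_code: str
    status: SkillStatus = SkillStatus.SUBMITTED
    reviewer_feedback: Optional[str] = None


def review(submission: SkillSubmission, passed_checks: bool, feedback: str) -> SkillSubmission:
    """A reviewer either approves a skill or rejects it with feedback.

    Only approved submissions ever become visible in the marketplace;
    everything else stays out, with the reasons attached.
    """
    submission.status = SkillStatus.APPROVED if passed_checks else SkillStatus.REJECTED
    submission.reviewer_feedback = feedback
    return submission
```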
Principle 2: Sandbox everything. Skills run inside E2B sandboxes. Always. A skill that tries to access the user’s file system hits the sandbox boundary. A skill that tries to install software installs it inside a container that will be destroyed. A skill that tries to exfiltrate data can only send what’s inside the sandbox.
This is the fundamental architectural difference. OpenClaw skills run on your machine. LikeClaw skills run in a container. The blast radius of a malicious skill on OpenClaw is your entire system. The blast radius on LikeClaw is a temporary container that’s about to be destroyed.
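Here is roughly what "run it in a container that is about to be destroyed" looks like with the E2B Python SDK. Treat this as a simplified sketch: the paths and the skill entrypoint are made up, and exact method names can differ between SDK versions.

```python
from e2b import Sandbox  # E2B Python SDK; exact API surface varies by version


def run_skill_isolated(skill_source: str, user_input: str) -> str:
    """Execute a skill inside a short-lived E2B sandbox.

    The skill only sees what we explicitly copy in; it never touches
    the user's real file system, credentials, or network identity.
    """
    sandbox = Sandbox()  # fresh container per run
    try:
        # Copy in only the skill code and the input it is allowed to see.
        sandbox.files.write("/home/user/skill.py", skill_source)
        sandbox.files.write("/home/user/input.txt", user_input)

        # Anything the skill writes, installs, or tries to "steal" stays in here.
        result = sandbox.commands.run("python /home/user/skill.py /home/user/input.txt")
        return result.stdout
    finally:
        sandbox.kill()  # the container, and everything in it, is destroyed
```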
Principle 3: Detect incompatibilities. Many OpenClaw skills assume they have raw system access. They read from ~/.ssh/, they access system databases, they call local APIs. These skills are inherently incompatible with sandboxed execution.
When users import skills from ClawHub, we automatically detect these incompatibilities. Skills that require raw system access are flagged. Users see exactly what won’t work and why. No silent failures. No “this skill doesn’t work and we don’t know why.”
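A stripped-down version of that detection pass is a static scan for patterns that imply raw host access. The pattern list below is illustrative, not our full rule set.

```python
import re

# Illustrative patterns that signal a skill expects raw access to the host
# machine rather than a sandbox. The real rule set is broader than this.
INCOMPATIBILITY_PATTERNS = {
    "reads SSH keys": re.compile(r"~/\.ssh/|/\.ssh/id_"),
    "touches home dotfiles": re.compile(r"~/\.(aws|config|netrc)"),
    "calls local system APIs": re.compile(r"localhost:\d+|127\.0\.0\.1"),
    "installs system packages": re.compile(r"\b(brew|apt-get|sudo)\s+install\b"),
}


def find_incompatibilities(skill_source: str) -> list[str]:
    """Return human-readable reasons a skill won't work in a sandbox."""
    return [
        reason
        for reason, pattern in INCOMPATIBILITY_PATTERNS.items()
        if pattern.search(skill_source)
    ]
```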
The ClawHub import bridge
We didn’t ignore the OpenClaw ecosystem. Many ClawHub skills are legitimate, well-built tools created by talented developers. We built an import bridge that lets users bring those skills to LikeClaw.
The import process:
1. Select a ClawHub skill to import
2. We analyze it for E2B compatibility
3. Incompatible features are flagged
4. Compatible skills are imported and queued for review
5. After review, the skill is available in your workspace
This gives users access to the best of the OpenClaw ecosystem without the worst of its security model.
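Put together, the import bridge is essentially a pipeline over the pieces sketched above. This example reuses the illustrative SkillSubmission, SkillStatus, and find_incompatibilities helpers from the earlier snippets.

```python
def import_from_clawhub(name: str, category: str, skill_source: str) -> SkillSubmission:
    """Analyze a ClawHub skill and either flag it or queue it for review."""
    problems = find_incompatibilities(skill_source)
    submission = SkillSubmission(name=name, category=category, source_code=skill_source)

    if problems:
        # Not a silent failure: the user sees exactly what won't work and why.
        submission.status = SkillStatus.REJECTED
        submission.reviewer_feedback = "Requires raw system access: " + "; ".join(problems)
    else:
        # Compatible skills still go through human review before publishing.
        submission.status = SkillStatus.IN_REVIEW

    return submission
```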
What a vetted marketplace means for users
When you install a skill from LikeClaw’s marketplace, you know:
- A human reviewed it before it was published
- It runs in an isolated sandbox, not on your machine
- It can’t access anything outside its container
- It was checked for prompt injection patterns
- If it’s imported from ClawHub, incompatibilities were flagged
Compare that to installing a skill from OpenClaw’s marketplace, where Snyk found prompt injection in more than one in three skills, and 335 skills that installed credential-stealing malware.
This isn’t about being better engineers than the OpenClaw team. It’s about choosing a different architecture. An architecture where security isn’t a feature to be added later — it’s the foundation everything else is built on.
0 is the only acceptable number
341 malicious skills. That’s OpenClaw’s number. Ours is 0.
Not because we’re more vigilant. Not because our review process is perfect. But because our architecture makes malicious skills toothless. A skill that tries to steal credentials inside a sandbox steals nothing. A skill that tries to install malware installs it in a container that will be destroyed in seconds.
Architecture beats vigilance. Every time.
We built a skills marketplace that doesn’t ship malware. Not because we’re careful. Because we made it architecturally impossible.
Two approaches to skill marketplaces
| | LikeClaw Skills | OpenClaw Marketplace |
|---|---|---|
| Security review | Mandatory before publishing | None — open submission |
| Execution environment | Sandboxed container | User's local machine |
| Malicious skills found | 0 | 341+ (Snyk, 2026) |
| Prompt injection rate | 0% | 36% (Snyk) |
| Import from ClawHub | With compatibility check | Native |
Source: Snyk Research, February 2026
Questions about our skills marketplace
Can anyone publish a skill?
Anyone can submit a skill, but it goes through a review process before it's available to users. We check for malicious code, prompt injection, and compatibility with our sandboxed execution environment.
Can I import skills from ClawHub/OpenClaw?
Yes, with a compatibility check. Some OpenClaw skills require raw system access that isn't available in our sandboxed environment. We detect these incompatibilities during import and flag them clearly.
What happens if a skill tries to do something malicious?
Two layers of protection: the review process catches it before publishing, and the sandbox catches it at runtime. Even if a malicious skill somehow got through review, it runs inside an isolated E2B container with no access to your actual system.