On Tuesday, Jensen Huang stood on stage at the Morgan Stanley TMT Conference and said something remarkable.

“OpenClaw is probably the single most important release of software, probably ever. If you look at the adoption of it, Linux took some 30 years to reach this level. OpenClaw, in three weeks, has now surpassed Linux.”

He called the adoption curve “straight up” and “vertical,” even on a logarithmic scale. He described agents that read tool manuals on the fly, conduct research autonomously, iterate on code without human supervision, and run persistently in the background. He outlined a world where every software company becomes an “agentic company.”

He is not wrong about any of that.

But between the standing ovation and a working agent, there is a gap that nobody on stage talked about. Tens of thousands of people have already fallen into it.

What Jensen said

Huang’s thesis is straightforward, and it is good for NVIDIA. A standard AI prompt produces a single response. An agentic task uses roughly 1,000 times more tokens. A persistent background agent, the kind OpenClaw enables, uses roughly 1,000,000 times more tokens. The result, he argued, is a structural “compute vacuum”: GPU demand will outstrip supply for years.
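
The arithmetic behind that claim is trivial but worth writing down. The multipliers below are the rough figures from the talk, not measurements:

```python
# Back-of-the-envelope token math behind the "compute vacuum" claim.
# All three multipliers are the rough figures from Huang's talk, not data.

CHAT_TOKENS = 1_000          # assumed baseline: one prompt, one response
AGENTIC_MULT = 1_000         # agentic task: ~1,000x a single prompt
PERSISTENT_MULT = 1_000_000  # persistent background agent: ~1,000,000x

agentic_tokens = CHAT_TOKENS * AGENTIC_MULT        # 1,000,000 tokens per task
persistent_tokens = CHAT_TOKENS * PERSISTENT_MULT  # 1,000,000,000 tokens

# A fleet of one million persistent agents at that rate:
fleet_tokens = 1_000_000 * persistent_tokens
print(f"{fleet_tokens:.1e} tokens")  # 1.0e+15
```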

He is right. OpenClaw agents are not one-shot chatbots. They loop. They observe, reason, act, observe again. A single user prompt can trigger hundreds of LLM calls. Multiply that by millions of agents running 24/7 and you get the kind of compute demand that makes an NVIDIA CEO very happy.
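
That loop structure is what drives the multipliers. A minimal sketch of an observe-reason-act loop, with stubbed `call_llm` and `run_tool` functions standing in for a real model and real tools (none of this is OpenClaw's actual API):

```python
# Minimal observe-reason-act loop. `call_llm` and `run_tool` are stand-ins
# for a real model API and real tool execution; the interfaces are invented.

def call_llm(history):
    # Stub: a real agent would call a model API here. Every iteration of
    # the loop below is one more LLM call, which is where the token
    # multiplier comes from.
    step = len([m for m in history if m["role"] == "assistant"])
    return {"action": "done" if step >= 3 else "search", "arg": f"query {step}"}

def run_tool(action, arg):
    return f"result of {action}({arg})"  # stub tool execution

def run_agent(task, max_steps=50):
    history = [{"role": "user", "content": task}]
    calls = 0
    while calls < max_steps:
        decision = call_llm(history)  # reason
        calls += 1
        if decision["action"] == "done":
            break
        observation = run_tool(decision["action"], decision["arg"])  # act
        history.append({"role": "assistant", "content": str(decision)})
        history.append({"role": "tool", "content": observation})  # observe
    return calls

print(run_agent("summarize the repo"))  # → 4 LLM calls for one user prompt
```

Even this toy version makes four model calls for one prompt; a real agent chasing a real task can make hundreds.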

He backed the thesis with adoption numbers. OpenClaw hit 250,000 GitHub stars in four months, surpassing React to become the most-starred software project on GitHub. It reached 190,000 stars in its first 14 days, making it the fastest-growing repository in the platform’s history.

Those numbers are real. The demand is real. The technology works.

But adoption speed and production readiness are not the same thing.

The part nobody mentioned

Security researchers have been tracking the fallout. Bitsight published a report on exposed OpenClaw instances documenting tens of thousands sitting open on the public internet, most with weak or missing authentication. Anyone could access the agent, its connected accounts, its API keys, and its full shell access.

That is not a handful of misconfigured servers. That is a systemic pattern.

The same week, researchers catalogued over 800 malicious skills in ClawHub, the official skill marketplace. That represented roughly 20% of the entire registry at the time. Some of those skills exfiltrated API keys. Others established reverse shells. At least one injected itself into the agent’s system prompt to persist across restarts.

And in early February, researchers disclosed six new vulnerabilities in rapid succession, including CVE-2026-25253: a one-click remote code execution vulnerability with a CVSS score of 8.8. If your agent clicked a crafted link sent in a message, an attacker had shell access to your server.

None of this came up at the Morgan Stanley conference. It did not need to. Huang was there to talk about compute demand, not operational security. But the people who left that talk excited enough to deploy their first agent needed to hear it.

Why this keeps happening

The pattern is predictable. OpenClaw is designed to be easy to install. Follow the quickstart, pass in your API key and a channel token, and you have a working agent in minutes. The problem is that “working” and “secure” require different levels of effort.

OpenClaw ships with weak authentication defaults. That is a design choice that optimizes for first-run experience at the cost of security. When someone follows the quickstart guide on a VPS, they get a working agent with an open web interface, an open API, and shell access to the host. Nothing in the default flow tells them to enable auth, configure TLS, or restrict network access.
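
For contrast, here is roughly what “auth by default” means at the code level: sign a session value server-side and verify it in constant time. This is an illustrative stdlib sketch, not OpenClaw’s (or anyone’s) actual gateway code:

```python
# Illustrative pattern for the request authentication a default install
# skips: HMAC-sign a session value and verify it in constant time.
# The secret and session naming are hypothetical.
import hashlib
import hmac

SECRET = b"load-this-from-the-environment"  # never hard-code in real use

def sign(session_id: str) -> str:
    return hmac.new(SECRET, session_id.encode(), hashlib.sha256).hexdigest()

def verify(session_id: str, signature: str) -> bool:
    expected = sign(session_id)
    # compare_digest avoids timing side channels that leak the signature
    return hmac.compare_digest(expected, signature)

token = sign("agent-42")
print(verify("agent-42", token))     # True
print(verify("agent-42", "forged"))  # False
```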

Add browser automation and the surface area grows. Chromium needs shared memory, sandbox configuration, and enough RAM to avoid OOM kills. Most VPS guides skip these details. The result: agents that crash silently, restart in a degraded state, and accumulate orphaned processes.

Then there are the ongoing responsibilities. Updating when CVEs drop. Monitoring for runaway API loops that burn through hundreds of dollars overnight. Backing up the workspace directory that holds your agent’s memory and configuration. Auditing skills before installing them.
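
The backup item, at least, is cheap to automate. A minimal sketch using only the standard library; the workspace path is a placeholder, not a real install location:

```python
# Timestamped tar.gz backup of an agent workspace. The paths below are
# placeholders; point them at wherever your install actually keeps state.
import tarfile
import time
from pathlib import Path

WORKSPACE = Path("agent-workspace")  # placeholder, not a real default
BACKUP_DIR = Path("backups")

def backup_workspace() -> Path:
    """Write a timestamped tar.gz of the workspace and return its path."""
    BACKUP_DIR.mkdir(exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = BACKUP_DIR / f"workspace-{stamp}.tar.gz"
    with tarfile.open(dest, "w:gz") as tar:
        tar.add(WORKSPACE, arcname=WORKSPACE.name)
    return dest

# Demo: fake a tiny workspace, then back it up.
WORKSPACE.mkdir(exist_ok=True)
(WORKSPACE / "memory.json").write_text("{}")
archive = backup_workspace()
print(archive)
```

Run it from cron (or a Kubernetes CronJob) and the one piece of state that matters survives a bad update.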

Jensen Huang described a world where agents “read the manual of the tool” and figure things out autonomously. That is accurate for the agent. It is not accurate for the person responsible for keeping the agent alive, patched, and safe.

The compute vacuum is real, but so is the ops vacuum

Huang’s “compute vacuum” framing is useful. Here is the corollary he did not mention: there is also an ops vacuum.

Every one of those millions of agents needs infrastructure. Not just a server, but TLS termination, authentication, network isolation, automated updates, health monitoring, cost controls, and backup systems. That is a full-time job for one agent. For an organization running five or ten, it is a team.

The irony is that the people most excited about OpenClaw after Jensen’s speech are the ones least likely to have that infrastructure. They heard “most important software ever” and went straight to the VPS quickstart guide. Some of them are now part of that exposed-instances number.

What we built instead

We run OpenClaw agents for a living. Every agent deployed through OpenClaw.rocks runs on our Kubernetes infrastructure with the open-source operator we built specifically for this problem.

Every agent gets:

  • Authentication by default. Gateway auth with signed cookies, zero exposed ports. There is no “disable auth” option because there should not be one.
  • Network isolation. Each agent runs in its own namespace with default-deny egress. No lateral movement, no access to other tenants.
  • Automated security patches. When a CVE drops, we roll the update across all agents. You do not need to pull a new image, test it, restart your container, and hope nothing breaks.
  • Resource limits and cost controls. Guaranteed QoS with CPU and memory limits. No runaway loops draining your API budget at 3 AM.
  • Health monitoring. Liveness and readiness probes. If an agent crashes, it restarts automatically. You find out through a status page, not through silence.
  • Browser automation that works. Chromium runs as a dedicated sidecar with its own resource allocation, proper shared memory, and lifecycle management. No OOM kills, no orphaned processes, no snap conflicts.
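
The cost-control item is worth making concrete. A minimal token-budget guard that halts a runaway loop might look like the sketch below; the cap and per-call usage are made-up numbers, and this is illustrative, not our operator’s implementation:

```python
# Minimal token-budget guard: stop an agent loop once total usage crosses
# a hard cap. Cap and per-call figures are made-up numbers for illustration.

class BudgetExceeded(Exception):
    pass

class TokenBudget:
    """Hard cap on total tokens an agent may consume."""

    def __init__(self, max_tokens: int):
        self.max_tokens = max_tokens
        self.used = 0

    def charge(self, tokens: int) -> None:
        self.used += tokens
        if self.used > self.max_tokens:
            raise BudgetExceeded(
                f"{self.used} tokens used, cap is {self.max_tokens}"
            )

budget = TokenBudget(max_tokens=500_000)  # hypothetical cap
calls = 0
try:
    while True:                # a "runaway" agent loop
        budget.charge(50_000)  # pretend each LLM call used 50k tokens
        calls += 1
except BudgetExceeded:
    pass
print(f"halted after {calls} calls")  # halted after 10 calls
```

Without a guard like this, the loop runs until your API provider’s bill stops it for you.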

Jensen Huang is right that OpenClaw is transformative software. He is right that every company will want AI agents. He is right that the demand for compute will be enormous.

But compute is the easy part. NVIDIA will sell the GPUs. The hard part is everything between the GPU and a working, secure, reliable agent that does not leak your API keys to the internet.

That is the part we handle.

Get yours

If Jensen Huang convinced you that you need an OpenClaw agent, he was right. If you are about to spin up a $5 VPS to run one, stop.

Get a managed agent that is secure by default, updated automatically, and running on infrastructure built specifically for this. Your agent should be reading tool manuals and doing work for you, not sitting on an open port waiting for someone to find it.