Cloudflare has just fundamentally changed the game for AI agent deployment with the open beta release of Dynamic Workers, a lightweight, isolate-based sandboxing system that starts in milliseconds, uses only a few megabytes of memory, and can run on the same machine, even the same thread, as the request that created it. Cloudflare says Dynamic Workers is roughly 100x faster to start and between 10x and 100x more memory efficient than traditional Linux containers.
Why Containers Are the Wrong Tool for AI Agents
For years, containers have been the default choice for deploying applications in the cloud. Docker popularized them in 2013, and they solved a critical portability problem by letting developers package code, libraries, and settings into a predictable unit that could run consistently across systems. But Cloudflare argues that containers are too heavy for a new class of workloads: short-lived, AI-generated code executions.
Think about what an AI agent does. It receives a request, writes a small piece of code to accomplish that task, executes it, and returns the result. That entire interaction might take seconds. Traditional containers, which take hundreds of milliseconds to boot and consume hundreds of megabytes of memory, are absurdly over-provisioned for such fleeting tasks. Keeping containers warm costs money; letting them go cold introduces latency.
Isolates: A Better Model for Agent Workloads
Cloudflare’s solution draws on a different model: the isolate. Google introduced the V8 Isolate API in 2011 so the JavaScript engine could run many separate execution contexts efficiently inside the same process. In 2017, Cloudflare adapted this browser-born idea for the cloud with Workers, betting that the traditional cloud stack was too slow for instant, globally distributed web tasks.
Dynamic Workers takes this further. The API allows one Worker to instantiate another Worker at runtime with code provided on the fly, usually by a language model. Because these dynamic Workers are built on isolates, Cloudflare says they can be created on demand, run one snippet of code, and then be thrown away immediately afterward. In many cases, they run on the same machine and even the same thread as the Worker that created them, removing the need to hunt for a warm sandbox somewhere else on the network.
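The pattern is easier to see in code. The sketch below illustrates the idea in plain TypeScript: source code arrives as a string at runtime (here a constant; in production, model-generated) and is instantiated on demand. `new Function` stands in for the platform's real module loader, and none of Cloudflare's actual binding names or isolation guarantees are shown; this is an assumption-laden illustration, not the confirmed API.

```typescript
// Model-supplied source, received at runtime as a string.
const generatedSource = `
  return {
    async fetch(path) { return "handled " + path; }
  };
`;

type DynamicWorker = { fetch(path: string): Promise<string> };

// Compile the string into a module-like object on demand.
// In the real platform this step would create an isolated
// dynamic Worker, not an in-process object.
function loadWorker(source: string): DynamicWorker {
  return new Function(source)() as DynamicWorker;
}

const worker = loadWorker(generatedSource);
worker.fetch("/tasks/123").then(console.log); // "handled /tasks/123"
```

The worker exists only as long as it is needed: create it, call it once, and let it be garbage-collected, which mirrors the run-one-snippet-and-discard lifecycle the article describes.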
“For consumer-scale agents, containers are too slow and too expensive,” Cloudflare explains. “Dynamic Workers inherit the same platform characteristics that already let Workers scale to millions of requests per second.”
Code Mode: The Smarter Way to Power AI Agents
Dynamic Workers makes the most sense in the context of Cloudflare’s larger Code Mode strategy. The idea is elegant: instead of giving an agent a long list of tools and asking it to call them one by one, give it a programming surface and let it write a short TypeScript function that performs the logic itself.
This means the model can chain calls together, filter data, manipulate files, and return only the final result, rather than filling the context window with every intermediate step. Cloudflare says that cuts both latency and token usage, and improves outcomes, especially when the tool surface is large.
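A sketch of the kind of function an agent might write under Code Mode: chain several calls, filter the intermediate data inside the sandbox, and return only a short summary to the model's context window. The `listZones` and `getRequestCount` helpers are hypothetical stand-ins for a typed API surface, not a real Cloudflare API.

```typescript
interface Zone { id: string; name: string; plan: string }

// Mock API surface for illustration only.
async function listZones(): Promise<Zone[]> {
  return [
    { id: "z1", name: "example.com", plan: "pro" },
    { id: "z2", name: "example.org", plan: "free" },
    { id: "z3", name: "example.net", plan: "business" },
  ];
}
async function getRequestCount(zoneId: string): Promise<number> {
  const counts: Record<string, number> = { z1: 120_000, z2: 3_400, z3: 45_000 };
  return counts[zoneId] ?? 0;
}

// The agent's code: every intermediate list and count stays inside
// the sandbox; only the final one-line summary reaches the model.
async function run(): Promise<string> {
  const zones = await listZones();
  const paid = zones.filter(z => z.plan !== "free");
  const counts = await Promise.all(paid.map(z => getRequestCount(z.id)));
  const total = counts.reduce((a, b) => a + b, 0);
  return `${paid.length} paid zones served ${total} requests`;
}

run().then(console.log); // "2 paid zones served 165000 requests"
```

Done as flat tool calls, the same task would push three zone objects and three counts through the context window; here the model consumes one short string.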
The company points to its own Cloudflare MCP server as proof of concept. Rather than exposing the full Cloudflare API as hundreds of individual tools, the server exposes the entire API through two tools, search and execute, in under 1,000 tokens, because the model writes code against a typed API instead of navigating a long tool catalog. Cloudflare says this approach cuts token usage by 81%.
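The two-tool pattern can be sketched in miniature. The tool names search and execute come from the article; everything else below, from the doc index to the in-process eval, is a simplified stand-in (a real deployment would run the code in an isolated dynamic Worker, not the host process).

```typescript
// Tiny stand-in for a typed-API documentation index.
const docs: Record<string, string> = {
  "zones.list": "listZones(): Promise<{ id: string; name: string }[]>",
};

// search: return matching type signatures instead of shipping
// hundreds of tool schemas into the model's context.
function search(query: string): string[] {
  return Object.entries(docs)
    .filter(([key]) => key.includes(query))
    .map(([, signature]) => signature);
}

// execute: run model-written code. Here it runs in-process via
// Function; the real platform would hand it to a sandboxed isolate.
async function execute(code: string): Promise<unknown> {
  return new Function(`return (async () => { ${code} })()`)();
}

search("zones");         // ["listZones(): Promise<{ id: string; name: string }[]>"]
execute("return 2 + 2"); // resolves to 4
```

The model first calls search to discover the shape of the API, then calls execute with code it wrote against that shape, which is why the whole surface fits in so few tokens.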
Why TypeScript Beats HTTP for Agents
One of the more interesting aspects of this launch is Cloudflare’s argument for a different interface layer. MCP defines schemas for flat tool calls but not for programming APIs. OpenAPI can describe REST APIs, but it is verbose both in schema and in usage. TypeScript, by contrast, is concise, widely represented in model training data, and can communicate an API’s shape in far fewer tokens.
Cloudflare says the Workers runtime can automatically establish a Cap’n Web RPC bridge between the sandbox and the harness code, so a dynamic Worker can call those typed interfaces across the security boundary as if it were using a local library. This lets developers expose only the exact capabilities they want an agent to have.
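Conceptually, the agent's code sees an ordinary typed interface with no HTTP plumbing in sight. In the sketch below, a plain in-process object stands in for the RPC bridge (Cap'n Web, per Cloudflare), and the `ZoneApi` interface is hypothetical; the point is that the harness decides exactly which capabilities cross the boundary.

```typescript
interface DnsRecord { type: "A" | "CNAME"; name: string; content: string }

// The only capability the harness chooses to expose to the sandbox.
interface ZoneApi {
  listRecords(zoneId: string): Promise<DnsRecord[]>;
}

// Harness side: in the real platform this object would live outside
// the sandbox, with calls carried across the boundary over RPC.
const zoneApi: ZoneApi = {
  async listRecords(zoneId: string) {
    return [{ type: "A", name: `www.${zoneId}`, content: "192.0.2.1" }];
  },
};

// Agent side: ordinary typed method calls, as if using a local library.
async function agentTask(api: ZoneApi): Promise<number> {
  const records = await api.listRecords("example.com");
  return records.filter(r => r.type === "A").length;
}

agentTask(zoneApi).then(console.log); // 1
```

Because the sandbox receives only the `ZoneApi` object and nothing else, the agent cannot reach capabilities the developer never handed it, which is the capability-scoping argument the article makes.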
Security in the Isolate Model
Cloudflare does not pretend this is easy to secure. The company explicitly acknowledges that hardening an isolate-based sandbox is trickier than relying on hardware virtual machines, and notes that security bugs in V8 are more common than those in typical hypervisors.
But Cloudflare argues it has nearly a decade of experience making isolate-based multi-tenancy safe. The company points to automatic rollout of V8 security patches within hours, a custom second-layer sandbox, dynamic cordoning of tenants based on risk, extensions to the V8 sandbox using hardware features like MPK, and research into defenses against Spectre-style side-channel attacks.
The Bigger Picture
Cloudflare is effectively trying to turn sandboxing itself into a strategic layer in the AI stack. If agents increasingly generate small pieces of code on the fly to retrieve data, transform files, call services, or automate workflows, then the economics and safety of the runtime matter almost as much as the capabilities of the model.
The company is not claiming containers disappear. It is claiming that for a growing class of web-scale, short-lived AI-agent workloads, the default box has been too heavy, and the isolate may now be the better fit. With Dynamic Workers entering open beta, we are about to find out if the market agrees.