Author: openx_editor

  • Anthropic’s Claude Now Controls Your Mac: The Dawn of True AI Agents

    In a move that fundamentally redefines what an AI assistant can do, Anthropic has launched a major update that gives its Claude chatbot the ability to directly control a user’s Mac — clicking buttons, opening applications, typing into fields, and navigating software on the user’s behalf while they step away from their desk.

    The update, available immediately as a research preview for paying subscribers, transforms Claude from a conversational assistant into something closer to a remote digital operator. It arrives inside both Claude Cowork, the company’s agentic productivity tool, and Claude Code, its developer-focused command-line agent.

    Anthropic is also extending Dispatch — a feature introduced last week that lets users assign Claude tasks from a mobile phone — into Claude Code for the first time, creating an end-to-end pipeline where a user can issue instructions from anywhere and return to a finished deliverable.

    How Computer Use Works

    The computer use feature works through a layered priority system. When a user assigns Claude a task, it first checks whether a direct connector exists — integrations with services like Gmail, Google Drive, Slack, or Google Calendar. These connectors are the fastest and most reliable path. If no connector is available, Claude falls back to navigating the Chrome browser. Only as a last resort does Claude interact directly with the user’s screen — clicking, typing, scrolling, and opening applications the way a human operator would.
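
    The fallback order described above can be sketched as a simple priority check. This is an illustrative sketch only; the service sets and function below are hypothetical, not Anthropic’s actual routing logic:

    ```typescript
    // Hypothetical sketch of the connector → browser → screen hierarchy.
    type Mode = "connector" | "browser" | "screen";

    // Services with direct connectors (fastest, most reliable path).
    const connectorServices = new Set(["gmail", "google-drive", "slack", "google-calendar"]);
    // Web apps reachable through the Chrome fallback (hypothetical examples).
    const browserReachable = new Set(["notion", "github"]);

    function pickMode(service: string): Mode {
      if (connectorServices.has(service)) return "connector";
      if (browserReachable.has(service)) return "browser";
      return "screen"; // last resort: click, type, and scroll like a human operator
    }
    ```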

    This hierarchy matters. As Anthropic’s documentation explains, “pulling messages through your Slack connection takes seconds, but navigating Slack through your screen takes much longer and is more error-prone.”

    Dispatch: Your iPhone as a Remote Control

    The real strategic play may not be computer use itself but how Anthropic is pairing it with Dispatch. Dispatch creates a persistent, continuous conversation between Claude on your phone and Claude on your desktop. A user pairs their mobile device with their Mac by scanning a QR code, and from that point forward, they can text Claude instructions from anywhere.

    The use cases Anthropic envisions range from mundane to ambitious: having Claude check your email every morning, pull weekly metrics into a report template, organize a cluttered Downloads folder, or even compile a competitive analysis from local files into a formatted document.

    The Competitive Landscape

    Anthropic’s timing is not accidental. The company is shipping computer use capabilities into a market that has been rapidly reshaped by the viral rise of OpenClaw, the open-source framework that enables AI models to autonomously control computers. OpenClaw exploded earlier this year and proved that users wanted AI agents capable of taking real actions on their computers.

    Anthropic is now entering a market that the open-source community essentially created, betting that its advantages — tighter integration, a consumer-friendly interface, and an existing subscriber base — can compete with free.

    Security Considerations

    The announcement has naturally raised security concerns. When Claude interacts with the screen, it takes screenshots of the user’s desktop to understand what it’s looking at. That means Claude can see anything visible on the screen, including personal data, sensitive documents, or private information.

    Anthropic has built several layers of defense. Claude requests permission before accessing each application. Some sensitive apps — investment platforms, cryptocurrency tools — are blocked by default. Users can maintain a blocklist of applications Claude is never allowed to touch. The system scans for signs of prompt injection during computer use sessions. And users can stop Claude at any point.

    But the company is remarkably forthright about the limits of these protections. “Computer use is still early compared to Claude’s ability to code or interact with text,” Anthropic’s blog post states. “Claude can make mistakes, and while we continue to improve our safeguards, threats are constantly evolving.”

    The Bottom Line

    The new features are available to Claude Pro subscribers (starting at $17 per month) and Max subscribers ($100 or $200 per month), but only on macOS for now. Early hands-on testing suggests the feature works well for information retrieval and summarization but struggles with more complex, multi-step workflows.

    As one early user on social media captured the broader ambition: “combine this with /schedule and you’ve basically got a background worker that can interact with any app on a cron job. that’s not an AI assistant anymore, that’s infrastructure.”

    Whether consumers are ready to hand their computers over to an AI remains to be seen. But with this launch, Anthropic has made it clear: the era of AI agents that can actually do work is no longer a distant promise. It’s a present reality, available for download today.

  • Cloudflare’s Dynamic Workers Promise 100x Faster AI Agent Execution

    Web infrastructure giant Cloudflare has unveiled Dynamic Workers, a new lightweight, isolate-based sandboxing system that starts in milliseconds, uses only a few megabytes of memory, and can run on the same machine — even the same thread — as the request that created it.

    Compared with traditional Linux containers, the company says that makes Dynamic Workers roughly 100x faster to start and between 10x and 100x more memory efficient.

    The Problem with Containers

    Cloudflare’s argument is blunt: for “consumer-scale” agents, containers are too slow and too expensive. In the company’s framing, a container is fine when a workload persists, but it is a bad fit when an agent needs to run one small computation, return a result, and disappear. Developers either keep containers warm, which costs money, or tolerate cold-start delay, which hurts responsiveness.

    Dynamic Worker Loader is Cloudflare’s answer. The API allows one Worker to instantiate another Worker at runtime with code provided on the fly, usually by a language model. Because these dynamic Workers are built on isolates, Cloudflare says they can be created on demand, run one snippet of code, and then be thrown away immediately afterward.

    Code Mode: From Tool Orchestration to Generated Logic

    The release makes the most sense in the context of Cloudflare’s larger Code Mode strategy. The idea is simple: instead of giving an agent a long list of tools and asking it to call them one by one, give it a programming surface and let it write a short TypeScript function that performs the logic itself.

    That means the model can chain calls together, filter data, manipulate files and return only the final result, rather than filling the context window with every intermediate step. Cloudflare says that cuts both latency and token usage, and improves outcomes especially when the tool surface is large.
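
    To make the contrast concrete, here is a hedged illustration of the pattern; the metrics API and function names are invented for this example, not part of Cloudflare’s actual surface:

    ```typescript
    // Instead of three separate tool calls whose intermediate results all
    // flow through the model's context window, the model writes one
    // function that chains, filters, and returns only the final answer.
    interface Metric { name: string; value: number }

    // Stand-in for an API the model would code against.
    const fetchMetrics = (): Metric[] => [
      { name: "signups", value: 120 },
      { name: "errors", value: 3 },
      { name: "latency_p99_ms", value: 840 },
    ];

    function weeklyReport(threshold: number): string {
      const notable = fetchMetrics().filter(m => m.value > threshold);
      return notable.map(m => `${m.name}: ${m.value}`).join("; ");
    }

    weeklyReport(100); // "signups: 120; latency_p99_ms: 840"
    ```

    Only the final string would return to the model; the raw metrics never enter the context window.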

    The company points to its own Cloudflare MCP server as proof of concept. Rather than exposing the full Cloudflare API as hundreds of individual tools, it says the server exposes the entire API through two tools — search and execute — in under 1,000 tokens because the model writes code against a typed API instead of navigating a long tool catalog.

    Security Remains the Hardest Part

    Cloudflare does not pretend this is easy to secure. In fact, the company explicitly says hardening an isolate-based sandbox is trickier than relying on hardware virtual machines, and notes that security bugs in V8 are more common than those in typical hypervisors.

    Cloudflare’s response is that it has nearly a decade of experience doing exactly that. The company points to automatic rollout of V8 security patches within hours, a custom second-layer sandbox, dynamic cordoning of tenants based on risk, and research into defenses against Spectre-style side-channel attacks.

    Isolates vs. MicroVMs: Two Different Homes for Agents

    Cloudflare’s launch highlights a growing split in the AI-agent market. One side emphasizes fast, disposable, web-scale execution. The other emphasizes deeper, more persistent environments with stronger machine-like boundaries.

    Docker Sandboxes offers a useful contrast. Rather than using standard containers alone, it uses lightweight microVMs to give each agent its own private Docker daemon. Cloudflare is optimizing for something different: short-lived, high-volume execution on the global web.

    Early Use Cases

    Cloudflare is pitching Dynamic Workers for much more than quick code snippets. One example is Zite, which is building an app platform where users interact through chat while the model writes TypeScript behind the scenes to build CRUD apps, connect to services like Stripe, Airtable and Google Calendar, and run backend logic. Cloudflare says Zite now handles “millions of execution requests daily” using the system.

    Pricing and Availability

    Dynamic Worker Loader is now in open beta and available to all users on the Workers Paid plan. Cloudflare says dynamically loaded Workers are priced at $0.002 per unique Worker loaded per day, in addition to standard CPU and invocation charges, though that per-Worker fee is waived during the beta period.
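
    At the stated rate, the per-Worker fee is easy to estimate. The workload size below is hypothetical, and standard CPU and invocation charges are excluded:

    ```typescript
    // Back-of-envelope cost at $0.002 per unique Worker loaded per day.
    const perWorkerPerDay = 0.002;
    const uniqueWorkersPerDay = 10_000; // hypothetical workload
    const dailyCost = perWorkerPerDay * uniqueWorkersPerDay; // ≈ $20/day
    const monthlyCost = dailyCost * 30;                      // ≈ $600/month
    ```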

    Cloudflare’s launch lands at a moment when AI infrastructure is becoming more opinionated. Some vendors are leaning toward long-lived agent environments. Cloudflare is taking the opposite angle: for many workloads, the right agent runtime is not a persistent container or a tiny VM, but a fast, disposable isolate that appears instantly, executes one generated program, and vanishes.

  • ruflo: The Open-Source Agent Orchestration Platform That’s Turning Claude Into Infrastructure

    While the big AI labs fight over foundation models, a new category of tooling is emerging to coordinate what those models actually do once they’re deployed. ruflo — currently trending on GitHub — is positioning itself as the leading agent orchestration platform for Claude, and it’s attracting serious attention from developers tired of stitching together fragile chains of prompts.

    The project, from ruvnet, describes itself as an enterprise-grade platform for deploying intelligent multi-agent swarms, coordinating autonomous workflows, and building conversational AI systems. It’s built with TypeScript and integrates deeply with both Claude Code and Codex.

    What Agent Orchestration Actually Means

    If you’ve been following the AI agent space, you’ve probably encountered a frustrating pattern: take an LLM, give it a set of tools, and hope it figures out when to use which one. That approach works for demos. It falls apart in production.

    Agent orchestration platforms like ruflo take a different approach. Instead of relying on a single model to decide everything, they decompose complex tasks across multiple specialized agents, each with defined roles, tools, and escalation paths. A research task might involve one agent for web search, another for document synthesis, another for fact-checking. A coding task might have agents specialized in reading, writing, testing, and deployment.

    The orchestration layer — what ruflo is building — coordinates these agents, manages their communication, handles failures, and ensures that the right information flows to the right agent at the right time.
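
    A minimal sketch of that decomposition, assuming invented agent names and a deliberately simplified interface (ruflo’s real API adds swarms, fault tolerance, and escalation paths):

    ```typescript
    interface Agent { role: string; handle: (input: string) => string }

    const searcher: Agent = { role: "search", handle: q => `results for ${q}` };
    const synthesizer: Agent = { role: "synthesize", handle: r => `summary of ${r}` };
    const checker: Agent = { role: "fact-check", handle: s => `${s} [verified]` };

    // The orchestration layer's core job: route each agent's output to
    // the next agent in order. A production system would also add
    // retries and escalation on failure.
    function runPipeline(query: string, agents: Agent[]): string {
      return agents.reduce((acc, agent) => agent.handle(acc), query);
    }

    const report = runPipeline("competitor pricing", [searcher, synthesizer, checker]);
    // "summary of results for competitor pricing [verified]"
    ```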

    Key Features of ruflo

    ruflo’s GitHub description highlights several capabilities that distinguish it from simpler agent frameworks. “Distributed swarm intelligence” suggests the platform can coordinate agents across multiple machines or processes, enabling parallelism and fault tolerance. “RAG integration” means agents can retrieve relevant context from large document stores before responding — critical for enterprise use cases where the model’s training data doesn’t include proprietary information.

    The native Claude Code and Codex integration is particularly interesting. Rather than building its own model interaction layer from scratch, ruflo leverages the tooling that Anthropic and OpenAI have already built for their coding agents. That suggests a platform designed to extend existing investments rather than replace them.

    The Enterprise Bet

    The timing of ruflo’s traction is notable. We’re entering a phase where enterprises are moving past the question of whether to use AI agents to the harder question of how to deploy them reliably. Simple prompt chains don’t scale. Human-in-the-loop approval processes don’t work when you’re running thousands of tasks. What the enterprise market is starting to demand is infrastructure — the kind of reliable, observable, controllable systems that have characterized enterprise software for decades.

    ruflo appears to be one of several bets that the answer lies in sophisticated orchestration layers that treat individual model calls as commodities and competitive differentiation as something that lives above the model layer. Whether ruflo specifically becomes the standard or simply informs what a future standard might look like, it’s worth watching.

    The project is open source and available on GitHub, where it currently has over 24,000 stars and continues to attract contributors. For developers building serious agentic workflows, it’s worth a look.

  • Anthropic’s Claude Can Now Control Your Mac: The Agent Era Just Got Real

    Anthropic has just made the abstract concept of AI agents viscerally concrete. On Monday, the company launched the ability for Claude to directly control a user’s Mac — clicking buttons, opening applications, typing into fields, and navigating software on the user’s behalf while they step away from their desk.

    Available immediately as a research preview for paying subscribers (Claude Pro at $17 per month or Max at $100 or $200 per month), the feature transforms Claude from a conversational assistant into something closer to a remote digital operator. And it’s available inside both Claude Cowork — the company’s agentic productivity tool — and Claude Code, its developer-focused command-line agent.

    How Computer Use Actually Works

    The system operates through a layered priority hierarchy that reveals how Anthropic is thinking about reliability versus reach. When a user assigns Claude a task, it first checks whether a direct connector exists — integrations with Gmail, Google Drive, Slack, or Google Calendar. These connectors are the fastest and most reliable path. If no connector is available, Claude falls back to navigating Chrome via Anthropic’s browser extension. Only as a last resort does Claude interact directly with the user’s screen — clicking, typing, scrolling, and opening applications the way a human operator would.

    This hierarchy matters. As Anthropic’s documentation explains, pulling messages through your Slack connection takes seconds, but navigating Slack through your screen takes much longer and is more error-prone. Screen-level interaction is the most flexible mode — it can theoretically work with any application — but it’s also the slowest and most fragile.

    Dispatch: Your iPhone as a Remote Control

    The strategic play may not be computer use itself, but how Anthropic is pairing it with Dispatch — a feature that launched last week and now extends to Claude Code. Dispatch creates a persistent conversation between Claude on your phone and Claude on your desktop. A user pairs their mobile device with their Mac by scanning a QR code, and from that point forward, they can text Claude instructions from anywhere. Claude executes those instructions on the desktop — which must remain awake — and sends back the results.

    The use cases Anthropic envisions range from mundane to ambitious: having Claude check your email every morning, pull weekly metrics into a report, organize a cluttered Downloads folder, or compile a competitive analysis from local files into a formatted document. Scheduled tasks allow users to set a cadence once — every Friday, every morning — and let Claude handle the rest without further prompting.

    The Production Reality

    Anthropic is calling this a research preview for a reason. Early hands-on testing suggests the feature works well for information retrieval and summarization but struggles with more complex, multi-step workflows that require interacting with multiple applications. Screen-level interaction is inherently fragile — anything that changes the UI can derail a task. And the fact that Claude takes screenshots of your desktop to navigate raises obvious privacy considerations, even with Anthropic’s documented guardrails.

    But the trajectory is clear. As one early user on social media put it: “combine this with /schedule and you’ve basically got a background worker that can interact with any app on a cron job. that’s not an AI assistant anymore, that’s infrastructure.”

    The competition is heating up accordingly. Reuters reported that OpenAI is actively courting private equity firms in what it described as an enterprise turf war with Anthropic — a battle in which the ability to ship working agents is becoming the decisive weapon. With Claude now physically capable of operating your desktop, Anthropic has fired a significant shot.

  • Cloudflare Dynamic Workers: Isolate-Based AI Agent Runtime Promises 100x Speed Boost

    Cloudflare has launched an open beta of Dynamic Workers, a lightweight isolate-based sandboxing system that starts in milliseconds, uses only a few megabytes of memory, and can run on the same machine — even the same thread — as the request that created it.

    In plain terms: Cloudflare is arguing that containers are the wrong tool for AI agent workloads, and it has published numbers to back up the claim.

    Why Containers Are the Wrong Fit

    Containers solve a real portability problem: package your code, libraries, and settings into a unit that runs consistently everywhere. But Cloudflare says containers generally take hundreds of milliseconds to boot and consume hundreds of megabytes of memory. For an AI-generated task that needs to execute for a moment, return a result, and disappear, that’s expensive and slow.

    The alternative is isolates — a concept Google introduced in 2011 with the V8 JavaScript engine. Instead of spinning up a full container, you create a lightweight execution compartment within the same process. Cloudflare adapted this for the cloud in 2017 with Workers, and now it’s applying that architecture to AI agents.

    Dynamic Workers: The Technical Details

    Dynamic Worker Loader is the new API that lets one Worker instantiate another Worker at runtime, with code provided on the fly by a language model. Because these dynamic Workers are built on isolates, they can be created on demand, run a snippet of code, and be thrown away immediately after. In many cases, they run on the same machine and even the same thread as the Worker that created them.
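
    The lifecycle is easier to see in a self-contained simulation. The snippet below is not the Worker Loader API; it compiles a code string into a function inside the current process, mirroring how an isolate can share the creating thread, then simply drops it:

    ```typescript
    type DynamicWorker = { run: (input: number) => number };

    function loadWorker(source: string): DynamicWorker {
      // In Cloudflare's system the source would typically be generated
      // by a language model at request time.
      const fn = new Function("input", source) as (input: number) => number;
      return { run: fn };
    }

    const worker = loadWorker("return input * 2;");
    const result = worker.run(21); // 42: run one snippet, then let it be garbage-collected
    ```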

    Compared with traditional Linux containers, Cloudflare says Dynamic Workers are roughly 100x faster to start and between 10x and 100x more memory efficient. For consumer-scale AI agents, that’s not a marginal improvement — it’s a different economic equation.

    The Security Question

    Cloudflare doesn’t pretend this is easy to secure. The company explicitly acknowledges that hardening an isolate-based sandbox is trickier than relying on hardware virtual machines. Its counterargument is nearly a decade of experience making isolate-based multi-tenancy safe for the public web — automatic V8 security patches within hours, a custom second-layer sandbox, and defenses against Spectre-style side-channel attacks.

    Code Mode: The Bigger Picture

    Cloudflare has spent months promoting what it calls Code Mode — the idea that LLMs often perform better when given an API and asked to write code against it, rather than being forced through tool calls. The company has claimed that converting an MCP server into a TypeScript API can cut token usage by 81%. Dynamic Workers is the secure execution layer that makes that approach practical at scale.

    Whether isolate-based sandboxing is secure enough for untrusted AI-generated code remains an open question. But Cloudflare’s Dynamic Workers represent the most serious challenge yet to the container-centric view of AI agent infrastructure.