AI Agents, AI News

Anthropic’s Claude Can Now Control Your Mac: The Future of AI Agents Gets Real

Anthropic on Monday launched the most ambitious consumer AI agent to date, giving its Claude chatbot the ability to directly control a user’s Mac — clicking buttons, opening applications, typing into fields, and navigating software on the user’s behalf while they step away from their desk.

The update, available immediately as a research preview for paying subscribers, transforms Claude from a conversational assistant into something closer to a remote digital operator. The move thrusts Anthropic into the center of the most heated competition in artificial intelligence: the scramble to build agents that can act, not just talk.

How Computer Use Works: A Layered Priority System

The computer use feature works through a layered priority system that reveals how Anthropic is thinking about reliability versus reach. When a user assigns Claude a task, it first checks whether a direct connector exists — integrations with services like Gmail, Google Drive, Slack, or Google Calendar. These connectors are the fastest and most reliable path to completing a task.

If no connector is available, Claude falls back to navigating the Chrome browser via Anthropic’s Claude for Chrome extension. Only as a last resort does Claude interact directly with the user’s screen — clicking, typing, scrolling, and opening applications the way a human operator would.

This hierarchy matters. As Anthropic’s help center documentation explains, “pulling messages through your Slack connection takes seconds, but navigating Slack through your screen takes much longer and is more error-prone.” Screen-level interaction is the most flexible mode — it can theoretically work with any application — but it is also the slowest and most fragile.

When Claude does interact with the screen, it takes screenshots of the user’s desktop to understand what it’s looking at and determine how to navigate. That means Claude can see anything visible on the screen, including personal data, sensitive documents, or private information.

Dispatch: Your iPhone as a Remote Control for AI

The real strategic play may not be computer use itself but how Anthropic is pairing it with Dispatch — a feature introduced last week that lets users assign Claude tasks from a mobile phone, and which now extends to Claude Code.

Dispatch creates a persistent, continuous conversation between Claude on your phone and Claude on your desktop. A user pairs their mobile device with their Mac by scanning a QR code, and from that point forward, they can text Claude instructions from anywhere. Claude executes those instructions on the desktop — which must remain awake and running the Claude app — and sends back the results.

The use cases Anthropic envisions range from mundane to ambitious: having Claude check your email every morning, pull weekly metrics into a report template, organize a cluttered Downloads folder, or compile a competitive analysis from local files into a formatted document.

Early Testing: It Works About Half the Time

Anthropic is calling this a research preview for a reason. Early hands-on testing suggests the feature works well for information retrieval and summarization but struggles with more complex, multi-step workflows — particularly those that require interacting with multiple applications.

One detailed hands-on evaluation found that while Claude successfully located a specific screenshot on a Mac, summarized notes from Notion, added a URL to a database, and recalled a screenshot from earlier in the session, it failed to open the Shortcuts app, send a screenshot via iMessage, list unfinished Todoist tasks (due to an authorization error), or fetch a URL from Safari using AppleScript.

The verdict was measured: the feature is “not good enough to rely on when you’re away from your desk” but “a step in the right direction.”

The Competitive Landscape: OpenClaw, NemoClaw, and the Startup Swarm

Anthropic is shipping computer use capabilities into a market that the open-source community essentially created. OpenClaw — the open-source framework that enables AI models to autonomously control computers — exploded earlier this year and proved that users wanted AI agents capable of taking real actions on their computers. The framework spawned an entire ecosystem of derivative tools that turned autonomous computer control from a research curiosity into a product category almost overnight.

Nvidia entered the fray last week with NemoClaw, its own framework designed to simplify the setup and deployment of OpenClaw with added security controls. Anthropic is now entering a market that the open-source community created, betting that its advantages — tighter integration, a consumer-friendly interface, and an existing subscriber base — can compete with free.

Security Considerations: The Unsolved Problems

Computer use runs outside the virtual machine that Cowork normally uses for file operations and commands. That means Claude is interacting with the user’s actual desktop and applications — not an isolated sandbox. A misclick, a misunderstood instruction, or a prompt injection attack could have real consequences on a user’s live system.

Anthropic has built several layers of defense: Claude requests permission before accessing each application, some sensitive apps are blocked by default, users can maintain a blocklist, and the system scans for signs of prompt injection. But the company is remarkably forthright about the limits of these protections. “Computer use is still early compared to Claude’s ability to code or interact with text,” Anthropic’s blog post states. “Claude can make mistakes, and while we continue to improve our safeguards, threats are constantly evolving.”

For enterprise and team customers, there is an additional concern: enterprise features like audit logs, compliance APIs, and data exports do not currently capture Cowork activity. Organizations subject to regulatory oversight have no centralized record of what Claude did on a user’s machine — a gap that could be a dealbreaker for compliance-sensitive industries.

As one commenter on social media put it: “When the agent IS the user (same mouse, keyboard, screen), traditional forensic markers won’t distinguish human vs AI actions. How are we thinking about audit trails here?”

The question is not academic. As AI agents gain the ability to take real-world actions — sending emails, modifying documents, placing orders — the gap between a chatbot and an autonomous digital worker narrows rapidly. Anthropic’s computer use is the most visible step yet toward that world, and whether the industry is ready for the implications is a question nobody has fully answered.