Anthropic on Monday gave Claude a capability that moves it out of the chat window and onto your desktop: the ability to control a Mac by clicking buttons, opening applications, typing into fields, and navigating software on the user’s behalf. It is available immediately as a research preview for paying subscribers (Claude Pro at $17 per month and Max at $100 or $200 per month), but only on macOS for now.
The feature arrives inside two products: Claude Cowork, the company’s agentic productivity tool, and Claude Code, its developer-focused command-line agent. Both now include what Anthropic calls computer use: the ability to take screenshots of the user’s desktop, interpret what is on screen, and interact with applications the way a human operator would. A companion feature called Dispatch, introduced last week for Cowork and now extended to Claude Code, turns your iPhone into a remote control for desktop automation: text Claude instructions from anywhere, and it executes them on the Mac while you are away.
How It Works: A Hierarchy of Approaches
Anthropic has built computer use with a layered priority system. When a user assigns Claude a task, it first checks whether a direct connector exists: integrations with Gmail, Google Drive, Slack, or Google Calendar. These are the fastest and most reliable paths. If no connector is available, Claude falls back to navigating Chrome via Anthropic’s browser extension. Only as a last resort does it interact directly with the screen, clicking, typing, scrolling, and opening applications.
This hierarchy matters because screen-level interaction is the most flexible mode (it can theoretically work with any application) but also the slowest and most fragile. As Anthropic’s own documentation puts it, pulling messages through a Slack connection takes seconds, but navigating Slack through your screen takes much longer and is more error-prone.
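The layered fallback described above can be sketched as a simple dispatch function. Everything here is illustrative: the function names and the string results are assumptions for the sketch, not Anthropic's actual implementation.

```python
# Minimal sketch of the connector -> browser -> screen fallback hierarchy.
# All names here are illustrative assumptions, not Anthropic's real API.
from typing import Callable, Optional

Handler = Callable[[str], str]

def choose_access_path(task: str,
                       connector: Optional[Handler],
                       browser: Optional[Handler],
                       screen: Handler) -> str:
    """Prefer fast, structured paths; fall back to screen control last."""
    if connector is not None:   # e.g. a Gmail or Slack integration
        return connector(task)
    if browser is not None:     # Chrome via the browser extension
        return browser(task)
    return screen(task)         # slowest, most fragile: click/type/scroll
```

The design point is that the cheapest reliable path always wins; raw screen control is reached only when no structured integration exists for the task.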
When Claude does operate at the screen level, it takes screenshots to understand what it is looking at. That means it can see anything visible on the screen, including personal data, sensitive documents, or private information. Anthropic trains Claude to avoid stock trading, sensitive data input, and facial image gathering, but the company is candid: these guardrails are part of how Claude is trained and instructed, but they are not absolute.
The Dispatch Feature: Your iPhone as an AI Remote
The more strategically interesting feature may be Dispatch. A user pairs their mobile device with their Mac by scanning a QR code. From that point, they can text Claude instructions from anywhere (on a commute, on a beach) and Claude executes those instructions on the desktop, which must remain awake and running the Claude app. The results come back when Claude finishes.
Anthropic’s examples range from mundane to ambitious: having Claude check email every morning and summarize it, pulling weekly metrics into a report template, organizing a cluttered Downloads folder, compiling a competitive analysis from local files and connected tools. Scheduled tasks let users set a cadence once (every Friday, every morning) and let Claude handle the rest without further prompting.
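The "set a cadence once" idea amounts to a simple scheduling predicate. The sketch below is an assumption about how such cadences could be modeled; the cadence strings and the eight-o'clock trigger hour are invented for illustration and are not Anthropic's scheduling API.

```python
# Illustrative cadence check for a Dispatch-style scheduled task.
# The cadence vocabulary and the 8 a.m. trigger are assumptions.
import datetime

TRIGGER_HOUR = 8  # arbitrary morning trigger for the sketch

def due_now(cadence: str, now: datetime.datetime) -> bool:
    """Return True when a plain-language cadence matches the current time."""
    if cadence == "every morning":
        return now.hour == TRIGGER_HOUR
    if cadence == "every friday":
        return now.weekday() == 4 and now.hour == TRIGGER_HOUR
    return False
```

A background worker would evaluate such a predicate once an hour and hand the matching task to the agent, which is exactly the cron-like pattern users have noticed.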
One user on social media captured the broader implication: “Combine this with the scheduling that just dropped and you’ve basically got a background worker that can interact with any app on a cron job. That’s not an AI assistant anymore, that’s infrastructure.”
Early Testing: It Works About Half the Time
Anthropic is calling this a research preview for a reason. Early hands-on evaluations offer a measured assessment. John Voorhees of MacStories published a detailed test of Dispatch on announcement day. Claude successfully located a specific screenshot on his Mac, summarized a recent Notion note, added a URL to Notion, summarized his most recent email, and recalled a screenshot from earlier in the session. But it failed to open the Shortcuts app, send a screenshot via iMessage, list unfinished Todoist tasks due to an authorization error, list Terminal sessions, or fetch a URL from Safari using AppleScript.
Voorhees’s verdict: roughly a 50/50 shot that what you try will work. He called it a step in the right direction but noted it is not good enough to rely on when you are away from your desk.
On GitHub, users are already surfacing technical issues. One bug report describes a scenario where Claude Code’s Read tool attempts to process multiple large PDF files in a single turn without checking whether the combined payload exceeds the 20MB API limit, causing the request to fail outright.
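The missing guard the bug report describes is straightforward to sketch: measure the combined size of the attachments up front and split them into batches that stay under the limit. The function below is a hypothetical fix, not code from Claude Code itself.

```python
# Hypothetical guard for the bug described above: batch files so no single
# request exceeds the 20MB payload limit cited in the report.
import os

MAX_PAYLOAD = 20 * 1024 * 1024  # 20MB API limit from the bug report

def batch_files(paths, limit=MAX_PAYLOAD):
    """Group file paths into batches whose combined size stays under limit."""
    batches, current, size = [], [], 0
    for path in paths:
        n = os.path.getsize(path)
        if n > limit:
            # One oversized file can never fit; surface it instead of failing late.
            raise ValueError(f"{path} alone exceeds the {limit}-byte limit")
        if size + n > limit:
            batches.append(current)
            current, size = [], 0
        current.append(path)
        size += n
    if current:
        batches.append(current)
    return batches
```

Each returned batch could then be sent as a separate request, turning a hard failure into several slower but successful turns.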
The Competitive Landscape: OpenClaw, NemoClaw, and the Startup Swarm
Anthropic is not entering an empty market. The release lands in a landscape rapidly reshaped by the rise of OpenClaw, the open-source framework that enables AI models to autonomously control computers and interact with tools. OpenClaw went viral earlier this year and spawned an entire ecosystem of derivative tools (community members call them “claws”) that turned autonomous computer control from a research curiosity into a product category almost overnight. Nvidia entered the fray last week with NemoClaw, its own framework designed to simplify OpenClaw deployment with added security controls.
Smaller startups are pushing into the space too. Coasty offers both a desktop app and a browser-based AI agent for Mac and Windows, marketing itself as providing full browser, desktop, and terminal automation. One user directly pitched it in response to Anthropic’s announcement, claiming it offers a much better user experience and more accurate results.
The competition extends beyond consumer features. Reuters reported that OpenAI is actively courting private equity firms in what it described as an enterprise turf war with Anthropic. Both companies are locked in an escalating battle for enterprise customers, and the ability to ship agents that actually operate within a company’s existing software stack is increasingly the differentiator.
Security: The Hard Problem
Computer use runs outside the virtual machine that Cowork normally uses for file operations. That means Claude is interacting with the user’s actual desktop and applications, not a sandbox. The implications are significant: a misclick, a misunderstood instruction, or a prompt injection attack could have real consequences on a live system.
Anthropic has built several layers of defense. Claude requests permission before accessing each application. Sensitive apps, such as investment platforms and cryptocurrency tools, are blocked by default. Users can maintain a blocklist. The system scans for prompt injection during sessions. But the company is remarkably direct about the limits: computer use is still early compared to Claude’s ability to code or interact with text, and while Anthropic says it continues to improve its safeguards, Claude can make mistakes and threats are constantly evolving.
The help center explicitly advises against using computer use for financial accounts, legal documents, medical information processing, or any app containing other people’s personal information. It also recommends against using Cowork for HIPAA, FedRAMP, or FSI-regulated workloads.
For enterprise customers, there is an additional wrinkle: Cowork conversation history is stored locally on the device, not on Anthropic’s servers. Enterprise features like audit logs, compliance APIs, and data exports do not currently capture Cowork activity. Organizations subject to regulatory oversight have no centralized record of what Claude did on a user’s machine.
The Broader Picture
What Anthropic shipped this week is not a polished product. It is a research preview of a capability that represents a genuine shift in what AI assistants can do. The line between AI that talks about doing things and AI that actually does them is blurring fast. Whether the reliability, security, and compliance questions can be resolved in time to satisfy enterprise buyers is an open question, but the direction is clear.
The era of AI as a passive conversational partner is ending. The era of AI as an active operator is beginning. Claude’s Mac control is one of the first consumer-facing glimpses of that future. It is rough around the edges, limited in scope, and hedged with caveats. But it works, about half the time, in the best-case scenarios. And the pace of improvement suggests that “about half the time” may not be the verdict for long.