Anthropic, the company behind the Claude family of AI models, has been rocked by a significant security lapse that exposed nearly 3,000 internal digital assets — including details of an unreleased model internally called ‘Mythos’ that the company describes as its most capable AI system to date.
The incident, first reported by Fortune, occurred when Anthropic’s content management system (CMS) was left publicly accessible without authentication. Cybersecurity researcher Alexandre Pauwels at the University of Cambridge discovered the cache while investigating the company’s public-facing infrastructure. The unsecured data included draft blog posts, internal images, PDFs, and — most significantly — details of upcoming product announcements.
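Misconfigurations of this kind are typically found by probing whether endpoints that should require authentication return substantive content to anonymous requests. A minimal sketch of that heuristic, with invented endpoint names and thresholds (this is illustrative, not Anthropic's actual infrastructure or the researcher's method):

```python
from http import HTTPStatus

def looks_publicly_exposed(status: int, content_type: str, body_length: int) -> bool:
    """Heuristic: an endpoint that should require auth is 'exposed' if it
    returns a successful response with substantive, non-login content."""
    if status != HTTPStatus.OK:
        return False            # 401/403/redirects suggest auth is enforced
    if "text/html" not in content_type and "application/json" not in content_type:
        return False            # skip images, fonts, etc. for this check
    return body_length > 512    # tiny bodies are often just login stubs

# Simulated responses from hypothetical CMS draft endpoints
responses = [
    ("/api/drafts",  HTTPStatus.OK,           "application/json", 48_211),
    ("/api/drafts",  HTTPStatus.UNAUTHORIZED, "application/json", 87),
    ("/admin/login", HTTPStatus.OK,           "text/html",        302),
]

exposed = [path for path, s, ct, n in responses if looks_publicly_exposed(s, ct, n)]
print(exposed)  # only the first entry passes all three checks
```

In a real audit the responses would come from HTTP requests rather than a hard-coded list; the point is that the check itself is trivially automatable, which is why these exposures rarely stay hidden for long.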
Among the most sensitive discoveries: documentation describing Anthropic’s next flagship model. According to internal materials reviewed by Fortune, the model represents a ‘step change’ in capabilities, with ‘significantly better performance in reasoning, coding, and cybersecurity’ than any previous Anthropic model. The company has since confirmed it is developing and testing the model with early access customers.
Also exposed: details of an invite-only retreat for CEOs of large European companies, scheduled to take place in the United Kingdom, with Anthropic CEO Dario Amodei listed as a featured attendee.
‘Human Error,’ Not AI Failure
Anthropic was quick to attribute the breach to old-fashioned human misconfiguration rather than any failure of its own AI tools. ‘An issue with one of our external CMS tools led to draft content being accessible,’ a company spokesperson told Fortune. ‘These materials were early drafts of content considered for publication and did not involve our core infrastructure, AI systems, customer data, or security architecture.’
The spokesperson confirmed the issue was ‘unrelated to Claude, Cowork, or any Anthropic AI tools.’ The company moved quickly to restrict access after being notified by Fortune on Thursday.
The Anthropic leak joins a growing list of high-profile tech companies that have inadvertently exposed pre-release material through misconfigured systems. Apple has suffered similar fates twice — once in 2018 when upcoming iPhone names appeared in a publicly accessible sitemap file hours before launch, and again in late 2025 when a developer found debugging files left active on the redesigned App Store. Google, Tesla, Epic Games, and Nintendo have all experienced comparable incidents.
AI Tools Lowering the Bar for Discovery
What makes the Anthropic case particularly noteworthy is the role AI tools now play in discovering such exposures. Cybersecurity researchers note that AI-powered crawling and pattern-detection tools have dramatically lowered the effort required to identify publicly accessible data caches. Scripts can scan entire datasets, correlate file naming conventions, and flag anomalies that a human analyst might miss.
Anthropic’s own Claude Code — a coding agent the company has promoted as transforming software development — exemplifies this double-edged dynamic. While Anthropic says Claude Code was not involved in this particular incident, the broader point stands: the same AI tools that accelerate legitimate development also accelerate the discovery of exposed data.
Mythos: What We Know
Details about Mythos remain limited, but the internal documentation paints a picture of a substantially more capable system than Claude 3.5 Sonnet, which currently powers Claude’s most advanced deployments. The model is described as excelling particularly in reasoning, coding tasks, and cybersecurity applications — three areas where Anthropic has been competing aggressively against OpenAI, Google, and xAI.
Anthropic has not announced a release date for Mythos, and it remains unclear whether the model will launch under that internal codename or with a different product name. The company is expected to make an official announcement in the coming months, likely at one of the CEO events that the leaked documentation referenced.
The incident is likely to fuel ongoing debates about the security practices of AI companies, particularly as they race to ship products and announcements. Critics have argued that the rush to market in the AI sector has come at the expense of basic security hygiene — a charge Anthropic, for all its stated commitment to safety, now has to reckon with publicly.
Meanwhile, the company is also dealing with the fallout of a separate court case involving the Department of Defense, in which it accused the government of ‘classic illegal First Amendment retaliation.’ That case ended with a judge blocking the Trump administration’s attempt to designate Anthropic as a military supply-chain risk.
One thing is clear: the Mythos leak has pulled back the curtain on one of the most anticipated AI releases of the year — and demonstrated, once again, that the biggest threat to AI companies may be the humans who configure their CMSes.