In a significant security incident that has sent ripples through the AI industry, Anthropic inadvertently revealed, through an unsecured content management system, details of an upcoming AI model that the company internally calls “Mythos,” alongside plans for an exclusive CEO event.
Cybersecurity researcher Alexandre Pauwels, working with Fortune, discovered approximately 3,000 assets linked to Anthropic’s blog that had been stored in a publicly accessible data cache. The exposed materials included unpublished blog posts, internal documents, images, and PDFs, many of which were clearly not meant for public consumption.
What Was Leaked?
Among the most significant revelations was documentation of a new AI model that Anthropic described internally as representing a “step change” in capabilities. The model, known internally as “Mythos,” appears to be the most capable AI system Anthropic has ever trained, according to the leaked materials.
The security lapse also exposed details about an exclusive CEO event that Anthropic had been planning, suggesting the company intended to make a major announcement related to the new model. Images, presentation materials, and internal communications regarding the event were all found in the exposed data cache.
Anthropic acknowledged the breach after being contacted by Fortune, stating that the issue stemmed from “human error in the CMS configuration.” The company moved quickly to secure the exposed data after being notified of the vulnerability.
The Root Cause: A CMS Configuration Error
The incident appears to stem from how Anthropic’s content management system handled digital assets. All materials uploaded to the company’s central data store, including logos, graphics, research papers, and draft content, were public by default unless explicitly set to private.
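The failure mode described above, assets that are public unless someone remembers to flag them private, is a well-known misconfiguration pattern. The sketch below illustrates it in miniature; all names and the store design are hypothetical, not a description of Anthropic’s actual CMS:

```python
from dataclasses import dataclass


@dataclass
class Asset:
    """A file uploaded to a hypothetical CMS asset store."""
    name: str
    # Risky default: every upload is public unless the uploader
    # explicitly opts out. A safe design would default to False.
    public: bool = True


class AssetStore:
    """Minimal stand-in for a CMS's central data store."""

    def __init__(self):
        self._assets = {}

    def upload(self, asset: Asset) -> None:
        self._assets[asset.name] = asset

    def publicly_visible(self) -> list[str]:
        # Anything never flagged private is reachable by anyone.
        return sorted(a.name for a in self._assets.values() if a.public)


store = AssetStore()
store.upload(Asset("logo.png"))                        # uploader never considered visibility
store.upload(Asset("draft-announcement.pdf"))          # sensitive draft gets the same default
store.upload(Asset("event-deck.pdf", public=False))    # only explicit opt-outs stay private

print(store.publicly_visible())  # → ['draft-announcement.pdf', 'logo.png']
```

The point of the sketch is that exposure requires no attacker sophistication: the sensitive draft leaks simply because nobody flipped a flag, which is why secure-by-default (private unless published) is the conventional recommendation.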
Several of the company’s assets also had publicly accessible web addresses, meaning anyone who knew or guessed a URL could retrieve the files directly. “These materials were early drafts of content considered for publication and did not involve our core infrastructure, AI systems, customer data, or security architecture,” an Anthropic spokesperson told Fortune.
Industry Implications
The Anthropic breach comes at a time when AI companies are under intense scrutiny over security practices. The incident highlights the complex challenge that AI labs face in managing the vast amounts of sensitive data associated with frontier model development.
Despite Anthropic’s attempts to downplay the significance, the incident has raised questions about how AI companies handle pre-announcement planning and the security of their internal systems. With the AI industry racing to develop increasingly capable systems, the protection of sensitive development information has become a critical competitive and security concern.
What Comes Next
Following the disclosure, Anthropic took immediate steps to restrict access to the exposed data. The company has not confirmed whether the Mythos model will be officially announced or when it might be released to the public.
The incident serves as a stark reminder that even the most advanced AI companies remain vulnerable to basic security configuration errors. As the AI arms race intensifies, the pressure to announce breakthroughs quickly must be balanced against the need for rigorous security practices.