Anthropic didn't plan to announce Claude Mythos on 26 March 2026. The announcement came via an unsecured data cache, discovered by security researchers and reported to Fortune. The incident itself was embarrassing for a company built on safety and responsibility.
But the leaked details revealed something more significant than the embarrassment: they showed us where frontier AI is heading, and why some of the world's most sophisticated defenders are worried.
The leak and the model
Anthropic's content management system left approximately 3,000 unpublished assets in a publicly searchable data store. Among them was a draft blog post announcing Claude Mythos, described internally as "by far the most powerful AI model we've ever developed."
Anthropic later acknowledged that the leak was the result of human error in configuration. But the damage was done, and the market reacted immediately: cybersecurity stocks crashed on the news.
The draft materials revealed that Mythos (also referred to internally as Capybara) represents a new tier above Claude Opus 4.6, Anthropic's current flagship model.
"Compared to our previous best model, Claude Opus 4.6, Capybara gets dramatically higher scores on tests of software coding, academic reasoning, and cybersecurity, among others," the leaked document stated.
This is more than an incremental improvement. It's a step change in capability that redefines what AI can do in highly technical domains. The market reaction reflects a genuine fear that these new capabilities could upend the current balance of power between cyber defenders and attackers.
