Three days ago, a misconfigured CMS at Anthropic left roughly 3,000 internal assets publicly accessible. Among them: a draft blog post announcing their next-generation AI model. The name varies between two leaked drafts — “Mythos” and “Capybara” — but what matters isn’t the name. What matters is what it can do.
And what it can do should make anyone in technology leadership stop and think very carefully.
## What Leaked
On March 26, security researchers Roy Paz (LayerX Security) and Alexandre Pauwels (University of Cambridge) discovered the exposed documents. Anthropic acknowledged the leak as “human error” and confirmed the model is real.
Here’s what we know:
Claude Mythos is not Claude Opus 4.7. It’s not an incremental update. It’s a new tier above Opus, in Anthropic’s own words: “a new name for a new tier of model: larger and more intelligent than our Opus models, which were, until now, our most powerful.” Reports suggest roughly 10 trillion parameters, a 5-10x jump from previous frontier models.
Training is complete. Select customers are already testing it.
## Why Cybersecurity Stocks Crashed
The morning after the leak, the market’s reaction was swift and brutal:
- iShares Cybersecurity ETF: -4.5%
- CrowdStrike, Palo Alto Networks, Zscaler, SentinelOne: -6% each
- Tenable: -9%
- Bitcoin dropped to $66,000
Why? Because the leaked draft describes Mythos as “currently far ahead of any other AI model in cyber capabilities.” It can discover and exploit software vulnerabilities at speeds that, by Anthropic’s own assessment, “far outpace human defenders.”
Read that again. The company that built it is telling you that human cybersecurity teams can’t keep up with it.
This isn’t hypothetical. Anthropic already caught a Chinese state-sponsored group using Claude Code to infiltrate approximately 30 organizations — tech companies, financial institutions, government agencies — before detection. Mythos reportedly makes that look like child’s play.
Stifel analyst Adam Borg put it plainly: “Mythos is an order of magnitude more powerful, and compute-intensive, than any other frontier model on the market.”
## The Rollout Strategy Tells You Everything
Anthropic’s deployment approach is perhaps the most revealing signal:
- First access: Not developers. Not enterprises. Cybersecurity organizations — “giving them a head start in improving the robustness of their codebases against the impending wave of AI-driven exploits.”
- No public launch date. They’re explicitly delaying broad release.
- Cost problem acknowledged. Anthropic says it’s “very expensive to serve” and they need to make it “much more efficient before any general release.”
When a company builds the most powerful AI model in the world and its first instinct is to hand it to defenders before attackers can get it, that’s not a product launch. That’s a controlled disclosure.
## What Mythos Means Beyond Cybersecurity
Let me be direct about what I think this represents.
Mythos posts “dramatically higher scores” than Opus 4.6 on coding and academic reasoning benchmarks. Opus 4.6 already led SWE-bench Verified at 80.8% and Terminal-Bench 2.0 at 65.4%. Whatever “dramatically higher” means, we’re talking about a model that can code better than most professional developers and reason through complex problems at a level that was science fiction five years ago.
But the cybersecurity capability is the real wake-up call, because vulnerability discovery requires something qualitatively different from text generation or code completion. It requires:
- Deep multi-step reasoning — chaining logical inferences across complex systems
- Adversarial creativity — finding attack vectors that weren’t designed or anticipated
- Autonomous execution — not just identifying a vulnerability but actively exploiting it
When a model can do all three at superhuman speed in a domain as complex as cybersecurity, the implications extend to every field that involves complex reasoning under uncertainty. Law. Medicine. Scientific research. Strategic planning. Finance.
## The AGI Question (Which Is the Wrong Question)
Is Mythos AGI? No. It doesn’t learn new tasks from minimal examples the way humans can. It has no persistent memory, no self-improvement loop, no autonomous goal-setting.
But here’s what I think matters more: we may be past the point where the AGI label matters practically.
A model that can autonomously find and exploit zero-day vulnerabilities — something that previously required teams of elite human researchers — changes the game regardless of whether we call it “general” intelligence. Narrow superintelligence in high-stakes domains is more immediately consequential than theoretical AGI.
The fact that Anthropic itself is alarmed enough to delay general release and prioritize defensive deployment tells you where we are on the capability curve.
## The Competitive Context Makes It Worse
Mythos doesn’t exist in isolation:
- OpenAI has finished pretraining a new model codenamed “Spud” — expected within weeks.
- Google DeepMind just launched Gemini 3.1 for real-time multimodal processing.
- Both Anthropic and OpenAI are timing major releases ahead of planned IPOs later in 2026.
This is an arms race with IPO pressure. The incentives to push capability boundaries are enormous and increasing. The incentives for caution are… well, we just saw how Anthropic’s caution played out. A CMS misconfiguration, and the whole world knows.
## What This Means for Institutions
For universities, for governments, for any organization making decisions about AI strategy:
The planning horizon just compressed. If you were thinking about AI governance frameworks as a 2027-2028 initiative, think again. Models with superhuman capabilities in specific domains are here now, not in a comfortable future.
Cybersecurity is no longer optional. It’s existential. Every institution needs to assume that AI-powered attacks will become the norm, not the exception. The defenders need AI too — and they need it first.
The talent equation is shifting. When a model can outperform human cybersecurity experts, the value isn’t in the technical execution — it’s in the judgment about when and how to deploy these capabilities. We need people who understand both the technology and its implications.
I keep coming back to the same conclusion I wrote in my previous post on AEO: digital transformation in 2026 means preparing institutions for a world where AI systems are colleagues, not tools. Mythos just made that statement feel uncomfortably literal.
Jensen Huang said AGI has arrived. He was wrong about the definition but right about the urgency. Whether we call it AGI or narrow superintelligence or just “really powerful AI” — the systems are here, they’re real, and the time to prepare was yesterday.
Carles Abarca is Vice President of Digital Transformation at Tecnológico de Monterrey.