<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Governance on Carles Abarca</title><link>https://carlesabarca.com/tags/governance/</link><description>Recent content in Governance on Carles Abarca</description><generator>Hugo -- gohugo.io</generator><language>en</language><copyright>© 2026 Carles Abarca</copyright><lastBuildDate>Thu, 09 Apr 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://carlesabarca.com/tags/governance/index.xml" rel="self" type="application/rss+xml"/><item><title>Claude Mythos: the model Anthropic chose not to release</title><link>https://carlesabarca.com/posts/claude-mythos-unreleased-frontier-model/</link><pubDate>Thu, 09 Apr 2026 00:00:00 +0000</pubDate><guid>https://carlesabarca.com/posts/claude-mythos-unreleased-frontier-model/</guid><description>Anthropic has done something extraordinary: publish technical documentation about its most advanced model while refusing to deploy it broadly. Claude Mythos Preview may mark a turning point in the relationship between capability, security, and frontier model release.</description><content:encoded>&lt;blockquote&gt;&lt;p&gt;“Claude Mythos Preview is a general-purpose, unreleased frontier model.”&lt;br&gt;
— Anthropic, &lt;em&gt;Project Glasswing&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;&lt;p&gt;Anthropic has just made a decision that, until very recently, would have seemed almost unthinkable in the race for frontier models: &lt;strong&gt;publicly present a new-generation model while simultaneously deciding not to make it broadly available to the market&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;This is not a product delay. Nor is it a conventional beta program. What Anthropic has done with &lt;strong&gt;Claude Mythos Preview&lt;/strong&gt; is something else: publish part of the technical documentation, describe extraordinary capabilities—especially in offensive cybersecurity—and restrict access to a very limited circle of defensive actors under a specific initiative: &lt;strong&gt;Project Glasswing&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;The important question is not only what Mythos is. The important question is &lt;strong&gt;what it means that Anthropic has decided not to launch it like a normal model&lt;/strong&gt;.&lt;/p&gt;
&lt;hr&gt;

&lt;h2 class="relative group"&gt;The extraordinary part is not the model. It is the decision.
 &lt;div id="the-extraordinary-part-is-not-the-model-it-is-the-decision" class="anchor"&gt;&lt;/div&gt;
 
 &lt;span
 class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100 select-none"&gt;
 &lt;a class="text-primary-300 dark:text-neutral-700 !no-underline" href="#the-extraordinary-part-is-not-the-model-it-is-the-decision" aria-label="Anchor"&gt;#&lt;/a&gt;
 &lt;/span&gt;
 
&lt;/h2&gt;
&lt;p&gt;In the AI industry, a fairly clear logic had taken hold: if a lab trains a better model, sooner or later it turns it into a product. It may do so gradually, via APIs, waitlists, enterprise agreements, or usage restrictions. But the overall direction was unmistakable: &lt;strong&gt;more capability eventually meant more availability&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;With Mythos, Anthropic introduces a break.&lt;/p&gt;
&lt;p&gt;On the one hand, it presents the model as a new frontier of capability. On the other, it implicitly admits that &lt;strong&gt;this capability crosses a threshold that makes broad deployment irresponsible&lt;/strong&gt;.&lt;/p&gt;
&lt;blockquote&gt;&lt;p&gt;“We formed Project Glasswing because of capabilities we’ve observed in a new frontier model trained by Anthropic that we believe could reshape cybersecurity.”&lt;br&gt;
— Anthropic, &lt;em&gt;Project Glasswing&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;&lt;p&gt;That is not routine marketing language. It is a governance signal. Anthropic is saying that, in its judgment, the model is not just better: &lt;strong&gt;it is dangerously better in one specific dimension&lt;/strong&gt;.&lt;/p&gt;
&lt;hr&gt;

&lt;h2 class="relative group"&gt;What Anthropic claims about Claude Mythos Preview
 &lt;div id="what-anthropic-claims-about-claude-mythos-preview" class="anchor"&gt;&lt;/div&gt;
 
 &lt;span
 class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100 select-none"&gt;
 &lt;a class="text-primary-300 dark:text-neutral-700 !no-underline" href="#what-anthropic-claims-about-claude-mythos-preview" aria-label="Anchor"&gt;#&lt;/a&gt;
 &lt;/span&gt;
 
&lt;/h2&gt;
&lt;p&gt;The documentation Anthropic has published paints a picture that is difficult to ignore.&lt;/p&gt;
&lt;p&gt;In its Frontier Red Team technical post, the company argues that Mythos Preview:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;identifies and exploits &lt;strong&gt;zero-days&lt;/strong&gt; in real software,&lt;/li&gt;
&lt;li&gt;does so across &lt;strong&gt;every major operating system&lt;/strong&gt; and &lt;strong&gt;every major browser&lt;/strong&gt;,&lt;/li&gt;
&lt;li&gt;produces complex exploits, including multi-vulnerability chains,&lt;/li&gt;
&lt;li&gt;and represents a radical leap beyond previous Claude generations.&lt;/li&gt;
&lt;/ul&gt;
&lt;blockquote&gt;&lt;p&gt;“During our testing, we found that Mythos Preview is capable of identifying and then exploiting zero-day vulnerabilities in every major operating system and every major web browser when directed by a user to do so.”&lt;br&gt;
— Anthropic, &lt;em&gt;Claude Mythos Preview&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;&lt;p&gt;If this is correct, we are not looking at an incremental improvement. We are looking at a &lt;strong&gt;regime change&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Anthropic goes further. It says internal engineers with no formal security training have asked the model to find remote code execution vulnerabilities overnight and woken up the next morning to a complete, working exploit.&lt;/p&gt;
&lt;blockquote&gt;&lt;p&gt;“Engineers at Anthropic with no formal security training have asked Mythos Preview to find remote code execution vulnerabilities overnight, and woken up the following morning to a complete, working exploit.”&lt;br&gt;
— Anthropic, &lt;em&gt;Claude Mythos Preview&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;&lt;p&gt;That detail matters. It suggests not only that the model amplifies expert capability. It also suggests that it &lt;strong&gt;dramatically lowers the barrier to entry&lt;/strong&gt; for advanced offensive capability.&lt;/p&gt;
&lt;hr&gt;

&lt;h2 class="relative group"&gt;The leap beyond Opus 4.6
 &lt;div id="the-leap-beyond-opus-46" class="anchor"&gt;&lt;/div&gt;
 
 &lt;span
 class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100 select-none"&gt;
 &lt;a class="text-primary-300 dark:text-neutral-700 !no-underline" href="#the-leap-beyond-opus-46" aria-label="Anchor"&gt;#&lt;/a&gt;
 &lt;/span&gt;
 
&lt;/h2&gt;
&lt;p&gt;One of the most striking elements of the technical documentation is the comparison with earlier generations.&lt;/p&gt;
&lt;p&gt;Anthropic notes that, only a month earlier, its read on &lt;strong&gt;Opus 4.6&lt;/strong&gt; was that the model was much better at finding and fixing vulnerabilities than at exploiting them. In other words, it was still strong in defensive cybersecurity, but not especially effective at autonomous offensive work.&lt;/p&gt;
&lt;p&gt;With Mythos, that changes.&lt;/p&gt;
&lt;blockquote&gt;&lt;p&gt;“Opus 4.6 generally had a near-0% success rate at autonomous exploit development. But Mythos Preview is in a different league.”&lt;br&gt;
— Anthropic, &lt;em&gt;Claude Mythos Preview&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;&lt;p&gt;The company cites a benchmark involving Firefox vulnerabilities where Opus 4.6 only managed to convert findings into working exploits a handful of times, while Mythos Preview did so &lt;strong&gt;181 times&lt;/strong&gt;, with register control in &lt;strong&gt;29 additional cases&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;If those numbers hold, we are not talking about “a stronger Claude.” We are talking about &lt;strong&gt;a different order of capability&lt;/strong&gt;.&lt;/p&gt;
&lt;hr&gt;

&lt;h2 class="relative group"&gt;It was not trained “to hack”
 &lt;div id="it-was-not-trained-to-hack" class="anchor"&gt;&lt;/div&gt;
 
 &lt;span
 class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100 select-none"&gt;
 &lt;a class="text-primary-300 dark:text-neutral-700 !no-underline" href="#it-was-not-trained-to-hack" aria-label="Anchor"&gt;#&lt;/a&gt;
 &lt;/span&gt;
 
&lt;/h2&gt;
&lt;p&gt;This point is critical.&lt;/p&gt;
&lt;p&gt;Anthropic says it did &lt;strong&gt;not explicitly train Mythos Preview to develop these offensive capabilities&lt;/strong&gt;. According to the company, what we are seeing is an emergent consequence of broader improvements in:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;reasoning,&lt;/li&gt;
&lt;li&gt;autonomy,&lt;/li&gt;
&lt;li&gt;code work,&lt;/li&gt;
&lt;li&gt;and multi-step planning.&lt;/li&gt;
&lt;/ul&gt;
&lt;blockquote&gt;&lt;p&gt;“We did not explicitly train Mythos Preview to have these capabilities. Rather, they emerged as a downstream consequence of general improvements in code, reasoning, and autonomy.”&lt;br&gt;
— Anthropic, &lt;em&gt;Claude Mythos Preview&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;&lt;p&gt;That sentence deserves to be read carefully, because it points to something bigger than Mythos. It suggests that &lt;strong&gt;as generalist models improve at useful code work and agentic behavior, offensive capability stops being a separate specialty&lt;/strong&gt;. It appears as a natural side effect of general progress.&lt;/p&gt;
&lt;p&gt;That makes governance much harder. It is no longer enough to avoid training “a model for cyberattack.” The real issue is that &lt;strong&gt;a sufficiently capable general model can become a first-rate offensive tool even if that was never the explicit objective of training&lt;/strong&gt;.&lt;/p&gt;
&lt;hr&gt;

&lt;h2 class="relative group"&gt;So why not release it?
 &lt;div id="so-why-not-release-it" class="anchor"&gt;&lt;/div&gt;
 
 &lt;span
 class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100 select-none"&gt;
 &lt;a class="text-primary-300 dark:text-neutral-700 !no-underline" href="#so-why-not-release-it" aria-label="Anchor"&gt;#&lt;/a&gt;
 &lt;/span&gt;
 
&lt;/h2&gt;
&lt;p&gt;Anthropic frames the answer in terms of a &lt;strong&gt;dangerous transition window&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Its thesis is that, in the long run, tools like this may benefit defenders more than attackers. But in the short run there is an obvious risk: offensive capability may diffuse faster than defenders can adapt to it.&lt;/p&gt;
&lt;blockquote&gt;&lt;p&gt;“In the short term, this could be attackers, if frontier labs aren’t careful about how they release these models.”&lt;br&gt;
— Anthropic, &lt;em&gt;Claude Mythos Preview&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;&lt;p&gt;That is why there is no broad release. Instead, Anthropic created &lt;strong&gt;Project Glasswing&lt;/strong&gt;, an initiative involving partners such as AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks, along with dozens of additional organizations.&lt;/p&gt;
&lt;blockquote&gt;&lt;p&gt;“By releasing this model initially to a limited group of critical industry partners and open source developers with Project Glasswing, we aim to enable defenders to begin securing the most important systems before models with similar capabilities become broadly available.”&lt;br&gt;
— Anthropic, &lt;em&gt;Claude Mythos Preview&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;&lt;p&gt;In other words: &lt;strong&gt;Anthropic is trying to turn a capability advantage into a temporary defensive advantage before the rest of the ecosystem catches up&lt;/strong&gt;.&lt;/p&gt;
&lt;hr&gt;

&lt;h2 class="relative group"&gt;What is really changing: publishing no longer means deploying
 &lt;div id="what-is-really-changing-publishing-no-longer-means-deploying" class="anchor"&gt;&lt;/div&gt;
 
 &lt;span
 class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100 select-none"&gt;
 &lt;a class="text-primary-300 dark:text-neutral-700 !no-underline" href="#what-is-really-changing-publishing-no-longer-means-deploying" aria-label="Anchor"&gt;#&lt;/a&gt;
 &lt;/span&gt;
 
&lt;/h2&gt;
&lt;p&gt;The most interesting thing about Mythos is not only the security argument. It is the precedent it sets.&lt;/p&gt;
&lt;p&gt;For years, many of us assumed that the most advanced model in a lab would also, sooner or later, be the one that ended up in the hands of customers, developers, or end users. With Mythos, that equivalence breaks.&lt;/p&gt;
&lt;p&gt;From now on, the most advanced model may:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;not be the main product,&lt;/li&gt;
&lt;li&gt;not be broadly offered via API,&lt;/li&gt;
&lt;li&gt;not reach the general market,&lt;/li&gt;
&lt;li&gt;and exist for some time in a kind of &lt;strong&gt;strategic quarantine&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;That changes a great deal.&lt;/p&gt;
&lt;p&gt;It changes how we think about competition between labs. It changes how we should read public announcements. And it changes the regulatory and geopolitical frame as well: &lt;strong&gt;if the most powerful models are no longer necessarily public, then the true frontier of capability may increasingly sit behind restricted-access programs, private agreements, and asymmetric deployments&lt;/strong&gt;.&lt;/p&gt;
&lt;hr&gt;

&lt;h2 class="relative group"&gt;But a critical reading is still necessary
 &lt;div id="but-a-critical-reading-is-still-necessary" class="anchor"&gt;&lt;/div&gt;
 
 &lt;span
 class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100 select-none"&gt;
 &lt;a class="text-primary-300 dark:text-neutral-700 !no-underline" href="#but-a-critical-reading-is-still-necessary" aria-label="Anchor"&gt;#&lt;/a&gt;
 &lt;/span&gt;
 
&lt;/h2&gt;
&lt;p&gt;That said, it would be a mistake to swallow the narrative whole.&lt;/p&gt;
&lt;p&gt;Anthropic is making extraordinary claims:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;thousands of high-severity vulnerabilities,&lt;/li&gt;
&lt;li&gt;zero-days in critical software,&lt;/li&gt;
&lt;li&gt;coverage across every major OS and browser,&lt;/li&gt;
&lt;li&gt;sophisticated exploits developed autonomously,&lt;/li&gt;
&lt;li&gt;and a security rationale strong enough to justify withholding the model.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The problem is that &lt;strong&gt;the public evidence is necessarily limited&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Anthropic itself says that more than 99% of the vulnerabilities it has found are still unpatched and therefore cannot be disclosed. In addition, the risk document is presented in &lt;strong&gt;redacted&lt;/strong&gt; form.&lt;/p&gt;
&lt;blockquote&gt;&lt;p&gt;“Over 99% of the vulnerabilities we’ve found have not yet been patched, so it would be irresponsible for us to disclose details about them.”&lt;br&gt;
— Anthropic, &lt;em&gt;Claude Mythos Preview&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;&lt;p&gt;That is reasonable from the standpoint of responsible disclosure. But it also means that much of this story depends on &lt;strong&gt;trusting the lab’s own interpretation and framing&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;So yes: Anthropic’s decision may be sensible, even admirable, while still being wrapped in a corporate narrative that deserves methodological skepticism.&lt;/p&gt;
&lt;hr&gt;

&lt;h2 class="relative group"&gt;My read: Mythos may mark a before and after
 &lt;div id="my-read-mythos-may-mark-a-before-and-after" class="anchor"&gt;&lt;/div&gt;
 
 &lt;span
 class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100 select-none"&gt;
 &lt;a class="text-primary-300 dark:text-neutral-700 !no-underline" href="#my-read-mythos-may-mark-a-before-and-after" aria-label="Anchor"&gt;#&lt;/a&gt;
 &lt;/span&gt;
 
&lt;/h2&gt;
&lt;p&gt;My impression is that this episode may ultimately be remembered less for the model’s name than for the strategic signal it sends.&lt;/p&gt;
&lt;p&gt;Anthropic is not only saying “we trained something very powerful.” It is saying something more uncomfortable:&lt;/p&gt;
&lt;blockquote&gt;&lt;p&gt;&lt;strong&gt;we have crossed a capability frontier where responsible behavior no longer automatically means publication&lt;/strong&gt;.&lt;/p&gt;
&lt;/blockquote&gt;&lt;p&gt;If that thesis holds, Mythos will matter for three reasons.&lt;/p&gt;

&lt;h3 class="relative group"&gt;1. Because it normalizes partial retention of frontier models
 &lt;div id="1-because-it-normalizes-partial-retention-of-frontier-models" class="anchor"&gt;&lt;/div&gt;
 
 &lt;span
 class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100 select-none"&gt;
 &lt;a class="text-primary-300 dark:text-neutral-700 !no-underline" href="#1-because-it-normalizes-partial-retention-of-frontier-models" aria-label="Anchor"&gt;#&lt;/a&gt;
 &lt;/span&gt;
 
&lt;/h3&gt;
&lt;p&gt;Not as an anecdotal exception, but as a legitimate governance tool.&lt;/p&gt;

&lt;h3 class="relative group"&gt;2. Because it shifts the debate from “what can the model do?” to “who should be allowed to use it, and when?”
 &lt;div id="2-because-it-shifts-the-debate-from-what-can-the-model-do-to-who-should-be-allowed-to-use-it-and-when" class="anchor"&gt;&lt;/div&gt;
 
 &lt;span
 class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100 select-none"&gt;
 &lt;a class="text-primary-300 dark:text-neutral-700 !no-underline" href="#2-because-it-shifts-the-debate-from-what-can-the-model-do-to-who-should-be-allowed-to-use-it-and-when" aria-label="Anchor"&gt;#&lt;/a&gt;
 &lt;/span&gt;
 
&lt;/h3&gt;
&lt;p&gt;That is a fundamental change.&lt;/p&gt;

&lt;h3 class="relative group"&gt;3. Because it suggests that the real frontier of capability may already sit several steps ahead of what we see in product
 &lt;div id="3-because-it-suggests-that-the-real-frontier-of-capability-may-already-sit-several-steps-ahead-of-what-we-see-in-product" class="anchor"&gt;&lt;/div&gt;
 
 &lt;span
 class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100 select-none"&gt;
 &lt;a class="text-primary-300 dark:text-neutral-700 !no-underline" href="#3-because-it-suggests-that-the-real-frontier-of-capability-may-already-sit-several-steps-ahead-of-what-we-see-in-product" aria-label="Anchor"&gt;#&lt;/a&gt;
 &lt;/span&gt;
 
&lt;/h3&gt;
&lt;p&gt;And that has major implications for strategy, technology policy, and security.&lt;/p&gt;
&lt;hr&gt;

&lt;h2 class="relative group"&gt;The uncomfortable conclusion
 &lt;div id="the-uncomfortable-conclusion" class="anchor"&gt;&lt;/div&gt;
 
 &lt;span
 class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100 select-none"&gt;
 &lt;a class="text-primary-300 dark:text-neutral-700 !no-underline" href="#the-uncomfortable-conclusion" aria-label="Anchor"&gt;#&lt;/a&gt;
 &lt;/span&gt;
 
&lt;/h2&gt;
&lt;p&gt;For years, the dominant AI narrative assumed that technical progress would eventually democratize access to ever more powerful capabilities.&lt;/p&gt;
&lt;p&gt;Claude Mythos introduces a different possibility: that some capabilities are so sensitive that technical progress will not lead to openness, but to &lt;strong&gt;containment&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Not because the model failed. Precisely because it worked too well.&lt;/p&gt;
&lt;blockquote&gt;&lt;p&gt;“Claude Mythos Preview reveals a stark fact: AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities.”&lt;br&gt;
— Anthropic, &lt;em&gt;Project Glasswing&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;&lt;p&gt;If Anthropic is right, this is not simply another model launch. It is the moment when a frontier lab explicitly decided that &lt;strong&gt;its most advanced system should not behave like a normal product&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;And in this industry, that is a much bigger story than any benchmark.&lt;/p&gt;
&lt;hr&gt;

&lt;h2 class="relative group"&gt;Main sources
 &lt;div id="main-sources" class="anchor"&gt;&lt;/div&gt;
 
 &lt;span
 class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100 select-none"&gt;
 &lt;a class="text-primary-300 dark:text-neutral-700 !no-underline" href="#main-sources" aria-label="Anchor"&gt;#&lt;/a&gt;
 &lt;/span&gt;
 
&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Anthropic — &lt;em&gt;Project Glasswing&lt;/em&gt;&lt;br&gt;
&lt;a href="https://www.anthropic.com/glasswing" target="_blank" rel="noreferrer"&gt;https://www.anthropic.com/glasswing&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Anthropic Frontier Red Team — &lt;em&gt;Claude Mythos Preview&lt;/em&gt;&lt;br&gt;
&lt;a href="https://red.anthropic.com/2026/mythos-preview/" target="_blank" rel="noreferrer"&gt;https://red.anthropic.com/2026/mythos-preview/&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Anthropic — &lt;em&gt;Alignment Risk Update: Claude Mythos Preview (Redacted)&lt;/em&gt;&lt;br&gt;
&lt;a href="https://www.anthropic.com/claude-mythos-preview-risk-report" target="_blank" rel="noreferrer"&gt;https://www.anthropic.com/claude-mythos-preview-risk-report&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;</content:encoded><media:content xmlns:media="http://search.yahoo.com/mrss/" url="https://carlesabarca.com/posts/claude-mythos-unreleased-frontier-model/featured.svg"/></item><item><title>Most Companies Don't Have an AI Problem. They Have an Organization Problem</title><link>https://carlesabarca.com/posts/companies-dont-have-ai-problem/</link><pubDate>Wed, 14 Jan 2026 00:00:00 +0000</pubDate><guid>https://carlesabarca.com/posts/companies-dont-have-ai-problem/</guid><description>Between 70% and 80% of AI initiatives fail. The problem is not technology: it is data, processes, and organizational culture.</description><content:encoded>&lt;p&gt;Everyone talks about models.
Everyone talks about agents.
Everyone talks about copilots.&lt;/p&gt;
&lt;p&gt;But when you analyze what actually happens inside companies, an uncomfortable truth emerges: &lt;strong&gt;AI is not failing; organizations are.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The data is consistent across multiple studies:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Between 70% and 80% of AI and advanced analytics initiatives fail.&lt;/li&gt;
&lt;li&gt;Only 23% of companies derive real, sustained value from AI.&lt;/li&gt;
&lt;li&gt;81% struggle to bring AI to production.&lt;/li&gt;
&lt;li&gt;70% of digital transformations fail due to culture and organization.&lt;/li&gt;
&lt;li&gt;The main blockers for AI are data, skills, and organizational complexity.&lt;/li&gt;
&lt;li&gt;Additionally, 63% of companies do not have AI-ready data, putting their initiatives at risk.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Yet many executives continue to say &amp;ldquo;the technology is not ready.&amp;rdquo;&lt;/p&gt;

&lt;h2 class="relative group"&gt;The Five Uncomfortable Truths
 &lt;div id="the-five-uncomfortable-truths" class="anchor"&gt;&lt;/div&gt;
 
 &lt;span
 class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100 select-none"&gt;
 &lt;a class="text-primary-300 dark:text-neutral-700 !no-underline" href="#the-five-uncomfortable-truths" aria-label="Anchor"&gt;#&lt;/a&gt;
 &lt;/span&gt;
 
&lt;/h2&gt;

&lt;h3 class="relative group"&gt;1. Non-Existent Governance
 &lt;div id="1-non-existent-governance" class="anchor"&gt;&lt;/div&gt;
 
 &lt;span
 class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100 select-none"&gt;
 &lt;a class="text-primary-300 dark:text-neutral-700 !no-underline" href="#1-non-existent-governance" aria-label="Anchor"&gt;#&lt;/a&gt;
 &lt;/span&gt;
 
&lt;/h3&gt;
&lt;p&gt;Models without owners, without policies, and without controls do not scale.&lt;/p&gt;

&lt;h3 class="relative group"&gt;2. Data in a Wild State
 &lt;div id="2-data-in-a-wild-state" class="anchor"&gt;&lt;/div&gt;
 
 &lt;span
 class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100 select-none"&gt;
 &lt;a class="text-primary-300 dark:text-neutral-700 !no-underline" href="#2-data-in-a-wild-state" aria-label="Anchor"&gt;#&lt;/a&gt;
 &lt;/span&gt;
 
&lt;/h3&gt;
&lt;p&gt;Silos, duplicates, poor quality, lack of lineage. AI amplifies disorganization.&lt;/p&gt;

&lt;h3 class="relative group"&gt;3. Invisible or Inconsistent Processes
 &lt;div id="3-invisible-or-inconsistent-processes" class="anchor"&gt;&lt;/div&gt;
 
 &lt;span
 class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100 select-none"&gt;
 &lt;a class="text-primary-300 dark:text-neutral-700 !no-underline" href="#3-invisible-or-inconsistent-processes" aria-label="Anchor"&gt;#&lt;/a&gt;
 &lt;/span&gt;
 
&lt;/h3&gt;
&lt;p&gt;You cannot automate what is not defined or integrate AI into workflows that do not exist.&lt;/p&gt;

&lt;h3 class="relative group"&gt;4. Unbalanced Teams
 &lt;div id="4-unbalanced-teams" class="anchor"&gt;&lt;/div&gt;
 
 &lt;span
 class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100 select-none"&gt;
 &lt;a class="text-primary-300 dark:text-neutral-700 !no-underline" href="#4-unbalanced-teams" aria-label="Anchor"&gt;#&lt;/a&gt;
 &lt;/span&gt;
 
&lt;/h3&gt;
&lt;p&gt;Lots of enthusiasm, little engineering. Many pilots, zero operations.&lt;/p&gt;

&lt;h3 class="relative group"&gt;5. Strategies Built Backwards
 &lt;div id="5-strategies-built-backwards" class="anchor"&gt;&lt;/div&gt;
 
 &lt;span
 class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100 select-none"&gt;
 &lt;a class="text-primary-300 dark:text-neutral-700 !no-underline" href="#5-strategies-built-backwards" aria-label="Anchor"&gt;#&lt;/a&gt;
 &lt;/span&gt;
 
&lt;/h3&gt;
&lt;p&gt;Starting with the model instead of the business case. Celebrating the prototype and burying it a month later.&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;AI is not going to replace those who work well. But it will expose organizations that work poorly.&lt;/p&gt;
&lt;p&gt;2026 will be the year when companies must confront their operational maturity: data, processes, governance, and culture.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Because AI works. What does not work is implementing it without organization.&lt;/strong&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;em&gt;Sources: Harvard Business Review, MIT Sloan Management Review, O&amp;rsquo;Reilly / VentureBeat, Boston Consulting Group, IBM Global AI Adoption Index, Gartner.&lt;/em&gt;&lt;/p&gt;</content:encoded><media:content xmlns:media="http://search.yahoo.com/mrss/" url="https://carlesabarca.com/posts/companies-dont-have-ai-problem/featured.png"/></item></channel></rss>