
Anthropic vs. the Pentagon: When AI Ethics Collides with Military Power

Carles Abarca · 4 min read
Writing about AI, digital transformation, and the forces reshaping technology.

Last week we witnessed something unprecedented in the history of artificial intelligence: an AI company standing up to the Pentagon and saying “no.”

Anthropic, creator of Claude — the only AI model currently authorized on the U.S. federal government’s classified systems — rejected the final terms of a $200 million contract with the Department of Defense. The consequences were immediate and brutal.

The Red Line

The conflict boiled down to two non-negotiable points for Anthropic:

  1. Mass surveillance of American citizens. The Pentagon wanted to use Claude to analyze bulk-collected data: search histories, GPS movements, credit card transactions, even the questions you ask your favorite chatbot. All cross-referenced to build profiles.

  2. Autonomous weapons. Systems that select and engage targets without a human making the final call. The 2026 military budget allocates $13.4 billion to these weapons alone.

Anthropic didn’t argue that such weapons shouldn’t exist. In fact, they offered to work directly with the Pentagon to improve their reliability. But they determined that current AI models aren’t reliable enough to make lethal decisions autonomously. The risk of indiscriminate fire, civilian casualties, or even harm to American troops was, in their analysis, too high.

The False Solution of “Cloud vs. Edge”

During negotiations, a proposal emerged: keep Anthropic’s AI in the cloud, out of the weapons themselves. The models would synthesize intelligence before an operation but wouldn’t make kill decisions. The AI’s hands would stay clean.

Anthropic rejected this with a devastating technical argument: in modern military AI architectures, the distinction between cloud and edge no longer exists. Drones operate through mesh networks connected to data centers, and the Pentagon is actively pushing computing closer to the battlefield. If a model on an AWS server in Virginia is making combat decisions, there is no ethical difference between that and the model running inside the drone itself.

The Response: The Hammer

When Anthropic held its ground, the response was swift:

  • Trump ordered all federal agencies to cease using Anthropic’s technology.
  • Defense Secretary Pete Hegseth designated Anthropic a supply chain risk to national security, barring any military contractor from doing business with the company.
  • OpenAI announced a Pentagon deal just hours later.

The message was clear: play by our rules, or we destroy you.

What Sam Altman Didn’t Explain

Here’s the most troubling part. Days before the collapse, Sam Altman had publicly declared that OpenAI would also refuse to let its models be used in autonomous weapons. Solidarity with Anthropic.

But while making those statements, he was already negotiating with the Pentagon. And he closed the deal hours after Anthropic’s fall, ensuring his AI would only be deployed “in the cloud” — exactly the solution Anthropic dismissed as insufficient.

Nearly 100 OpenAI employees signed an open letter supporting the same red lines as Anthropic. Altman will have to explain on Monday why he accepted, as a matter of business, what Anthropic rejected on principle.

What’s Really at Stake

This crisis transcends a contract. It reveals three fundamental fractures:

1. AI as a geopolitical weapon. AI technology is no longer just a commercial product. It’s a strategic military asset, and governments are willing to use their full power to control it.

2. The illusion of self-regulation. Anthropic tried to set ethical limits from within. The response was a national security risk designation. What company will dare say “no” after this?

3. The gap between words and action. OpenAI talked principles and then signed the deal. It's not the first time, and the industry should take note.

My Take

I’ve spent over 20 years in technology, and I’ve seen many inflection points. This is one of them.

Anthropic did something extraordinarily rare in the tech industry: they sacrificed $200 million and their access to the federal government for an ethical position. We can debate whether that was a smart business decision, but we can't deny it was brave.

What concerns me isn’t Anthropic — they’ll survive. What concerns me is the precedent. If an AI company that puts ethical limits on its technology can be designated a “national security risk,” we’re building a system where the only option is blind obedience.

And obedient AI without restrictions, in the hands of unchecked power, is exactly the scenario that every AI safety researcher has been warning about for years.

The question is no longer whether AI will transform warfare. The question is who decides the limits.


What do you think of Anthropic’s stance? Principles or naivety? I’d love to hear your perspective on LinkedIn or X.