OpenAI Took the Pentagon Deal Anthropic Refused. 2.5 Million Users Are Quitting ChatGPT. Claude Hit #1.

Abhishek Gautam · 7 min read

Quick summary

Anthropic was blacklisted after refusing the Pentagon access to Claude for autonomous weapons. OpenAI signed the same deal within hours. The backlash broke records and sent users to Claude.

The sequence of events this week reads like a case study in corporate ethics under pressure — and it has reshuffled the AI landscape faster than any benchmark or product launch.

What happened, in order

On February 27, 2026, the Trump administration designated Anthropic a "supply-chain risk to national security." The reason, according to reporting from Fortune and CNBC: Anthropic had refused to allow the Pentagon to use Claude for fully autonomous weapons systems or domestic mass surveillance programs.

Within hours of the Anthropic blacklisting, OpenAI signed a Pentagon contract — essentially stepping in to take the deal Anthropic had declined.

The backlash was immediate and larger than anything previously seen in the AI industry:

  • Over 2.5 million users signed pledges to cancel or delete ChatGPT accounts
  • ChatGPT uninstall rates jumped 200-295% above baseline
  • Claude hit the #1 position on the Apple App Store as users migrated
  • OpenAI employees circulated an internal open letter expressing solidarity with Anthropic
  • Sam Altman publicly called the rollout "opportunistic and sloppy" and announced renegotiation
  • An amended deal was published, now explicitly prohibiting "domestic surveillance of U.S. persons" — but the full contract text remains undisclosed

Why Anthropic was targeted

Anthropic's Constitutional AI approach builds refusal into the model at a fundamental level. The company has a published policy against use cases involving autonomous lethal targeting and mass population surveillance. This is not a terms-of-service soft limit — it is architecturally enforced and documented in Anthropic's model cards and usage policies.

The Pentagon wanted capabilities that Anthropic's architecture was explicitly designed to prevent. The blacklisting followed the refusal.

Why OpenAI accepted

OpenAI has been under significant financial pressure in 2026. The company is burning capital at scale, has delayed its IPO timeline, and is competing with Google DeepMind and Anthropic for enterprise and government contracts. The Pentagon deal represents both revenue and strategic positioning — government AI contracts tend to be long-term and come with follow-on work.

Sam Altman's after-the-fact admission that the execution "looked opportunistic" suggests internal disagreement about the optics, but not about the commercial decision itself.

What the amended deal says — and doesn't say

The published amendment explicitly bans:

  • "Domestic surveillance of U.S. persons without judicial authorisation"
  • "Autonomous lethal targeting without human in the loop review"

What it does not address:

  • Surveillance of non-U.S. persons
  • Autonomous targeting in declared conflict zones
  • Intelligence gathering and analysis use cases
  • The definition of "human in the loop": does reviewing a list of 50,000 AI-flagged targets at 2 per second, roughly seven hours of non-stop clicking, constitute meaningful human review?

The full contract text remains classified.

The developer decision: which AI tools do you actually trust?

This incident crystallised a question that has been building in the developer community for years: when you build on an AI platform, whose values are you inheriting?

Three frameworks for thinking about this:

*Framework 1 — Capability first.* The Pentagon deal does not affect the API you use to build apps. GPT-4o's code generation quality is unchanged. If you are building a consumer product or developer tool with no defence or surveillance exposure, the ethics of OpenAI's government contracts may be largely irrelevant to your work.

*Framework 2 — Platform risk.* Every API provider makes decisions that can affect your application's future. OpenAI has changed pricing, deprecated models without notice, altered rate limits, and now signed a government contract that created public controversy. Anthropic, Mistral, and open-source alternatives exist. Diversifying your AI provider dependencies reduces platform risk (a sketch of this pattern follows Framework 3 below): not because one company is "good" and another is "bad," but because concentration creates fragility.

*Framework 3 — Values alignment.* If you are building in regulated industries — healthcare, finance, legal, public infrastructure — your customers will increasingly ask about your AI vendor's ethics policies. An enterprise customer's legal or compliance team asking "which AI models does your product use, and what are their military use policies?" is now a realistic scenario. Having a defensible answer matters for enterprise sales.
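To make Framework 2 concrete, here is a minimal sketch of that provider-abstraction seam in TypeScript. It assumes the official `openai` and `@anthropic-ai/sdk` npm packages; the model names and the `AI_PROVIDER` environment variable are illustrative choices, not anything specified by either vendor.

```typescript
// Minimal provider abstraction: one interface, two adapters.
import OpenAI from "openai";
import Anthropic from "@anthropic-ai/sdk";

interface ChatProvider {
  complete(prompt: string): Promise<string>;
}

class OpenAIProvider implements ChatProvider {
  private client = new OpenAI(); // reads OPENAI_API_KEY from the environment
  async complete(prompt: string): Promise<string> {
    const res = await this.client.chat.completions.create({
      model: "gpt-4o", // illustrative; check current model names
      messages: [{ role: "user", content: prompt }],
    });
    return res.choices[0]?.message.content ?? "";
  }
}

class AnthropicProvider implements ChatProvider {
  private client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment
  async complete(prompt: string): Promise<string> {
    const res = await this.client.messages.create({
      model: "claude-3-5-sonnet-latest", // illustrative; check current model names
      max_tokens: 1024,
      messages: [{ role: "user", content: prompt }],
    });
    const block = res.content[0];
    return block?.type === "text" ? block.text : "";
  }
}

// The rest of the app depends only on ChatProvider, so switching vendors
// is a configuration change rather than a rewrite.
const provider: ChatProvider =
  process.env.AI_PROVIDER === "anthropic"
    ? new AnthropicProvider()
    : new OpenAIProvider();
```

The point is not these two SDKs specifically; it is the seam. If your application only ever talks to `ChatProvider`, a pricing change, a deprecation, or a boycott-driven migration becomes a one-line configuration change instead of a rewrite.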

What actually happened to Claude

The Claude App Store ranking tells a clear story about user sentiment. The people who moved from ChatGPT to Claude in the 48 hours following the Pentagon deal announcement were not primarily developers evaluating technical capability. They were consumers making a values statement.

Whether they stay depends on whether Claude matches their expectations for quality. The conversion event was the boycott; retention depends on the product.

For developers: Claude 3.5 Sonnet and Haiku remain strong alternatives for most code generation, analysis, and reasoning tasks. Anthropic's API pricing is competitive, and the 200K-token context window, tool use, and computer use capabilities are comparable or superior to GPT-4o's on many benchmarks.
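As a quick illustration of the tool-use capability mentioned above, here is a hedged sketch against Anthropic's Messages API via `@anthropic-ai/sdk`. The `get_weather` tool and its schema are hypothetical stand-ins for whatever function your application actually exposes.

```typescript
// Sketch: declaring a tool for Claude through the Messages API.
// `get_weather` is a hypothetical example tool, not a real API.
import Anthropic from "@anthropic-ai/sdk";

async function main() {
  const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

  const res = await client.messages.create({
    model: "claude-3-5-sonnet-latest", // illustrative; check current model names
    max_tokens: 1024,
    tools: [
      {
        name: "get_weather",
        description: "Get the current weather for a city",
        input_schema: {
          type: "object",
          properties: { city: { type: "string" } },
          required: ["city"],
        },
      },
    ],
    messages: [{ role: "user", content: "What's the weather in Delhi?" }],
  });

  // If the model chose to call the tool, the response contains a
  // tool_use block carrying the arguments the model selected.
  for (const block of res.content) {
    if (block.type === "tool_use") {
      console.log(block.name, block.input); // e.g. get_weather { city: "Delhi" }
    }
  }
}

main().catch(console.error);
```

Note the division of labour: the model only proposes structured arguments, and your code decides whether to execute the call. That is the "human in the loop" question from the amended contract, recast as an engineering decision.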

The broader question no one is answering

The Pentagon deal debate is a proxy for a larger structural question: should AI companies have the right to unilaterally decide which use cases their technology enables, even when national security customers are involved?

Anthropic's position says yes — companies should be able to refuse use cases that violate their values, even government contracts. The blacklisting was the government's response: if you won't serve us, we will create commercial and regulatory pressure until you do or are replaced.

OpenAI's position, implicit in its decision, says that AI companies are infrastructure providers with obligations to serve legitimate government customers, and that the government — not the company — should determine acceptable use within legal bounds.

Both positions have coherent arguments. Neither has been tested in court.

What is clear: this will not be the last time an AI company faces this choice. As AI becomes more capable and more embedded in critical systems, the pressure to grant government access to capabilities previously refused will only increase.

The scale of the response (2.5 million boycott pledges) suggests users believe companies should have the right to refuse. Whether that belief translates into sustained behaviour change remains to be seen.


Written by

Abhishek Gautam

Full Stack Developer & Software Engineer based in Delhi, India. Building web applications and SaaS products with React, Next.js, Node.js, and TypeScript. 8+ projects deployed across 7+ countries.
