Anthropic vs OpenAI Pentagon Deal: Dario Amodei Called It a Lie

Abhishek Gautam · 9 min read

Quick summary

In an internal memo, Dario Amodei called OpenAI's DoD messaging "mendacious" and "safety theater." Here is why Anthropic refused the Pentagon deal and OpenAI accepted it.

On March 4, 2026, Dario Amodei sent a memo to Anthropic staff. The Information obtained a copy and published excerpts. The language is not what you typically see from a CEO.

Amodei called OpenAI's messaging "mendacious" and "safety theater." He said Sam Altman's public comments were "straight-up lies" and "gaslighting."

This is the CEO of a $60 billion AI company, in writing, calling his primary competitor's CEO a liar — in a document that was always going to leak.

To understand why, you need the full sequence of events.

The DoD Contract: What Both Companies Were Offered

The US Department of Defense wanted access to frontier AI models, the kind OpenAI and Anthropic build, for a wide range of military and national security applications.

The contracts on offer were significant. Reports put the Anthropic negotiation at approximately $200 million in initial value, with potential for expansion.

The sticking point: the Pentagon wanted access under an "any lawful purpose" clause. In plain terms: the military wanted to use the AI for anything that does not break US law. That sounds reasonable until you think about what US law permits the military to do.

Specifically, Anthropic was concerned about two categories:

Autonomous lethal weapons — AI systems that identify and engage targets without a human in the decision loop. Fully autonomous weapons are not illegal under current US law. They are prohibited by Anthropic's usage policies.

Mass domestic surveillance — AI-assisted surveillance of US citizens at scale. Also legal under certain interpretations of US law. Also prohibited by Anthropic's usage policies.

Anthropic's position: we will work with the military on logistics, intelligence analysis, research, communications, and a range of other applications. We will not provide access that could be used for autonomous lethal decision-making or mass surveillance of civilians.

The Pentagon's position: we need "any lawful purpose" access. We cannot accept categorical carve-outs that limit operational flexibility.

The negotiations failed.

The Pentagon's Response

The Department of Defense did not simply walk away. On February 26, 2026, the Pentagon formally designated Anthropic a "supply-chain risk" — a classification typically applied to foreign vendors or companies with ties to adversarial governments.

Applying this designation to a US AI company, founded by former OpenAI researchers, headquartered in San Francisco, was extraordinary. It was also clearly a pressure tactic.

Dario Amodei's response, on the same day: "These threats do not change our position."

What OpenAI Did

OpenAI accepted the DoD contract.

The agreement, which OpenAI posted on its website, grants military access under a framework described as "all lawful purposes." OpenAI added language stating that the deal includes human oversight requirements and that OpenAI's policies continue to apply. OpenAI also referred to the Department of Defense as the "Department of War" in its own blog post, a messaging choice that signalled awareness of the political sensitivity.

Sam Altman framed the deal publicly as responsible AI deployment with appropriate safeguards. He argued that it was better for the US military to use safe, well-aligned AI than to use less safe alternatives.

Amodei's Memo: What He Actually Said

The Anthropic memo, as reported by The Information, characterises OpenAI's framing as dishonest in specific terms.

Amodei reportedly wrote that OpenAI's messaging was "mendacious", meaning designed to mislead. He called the safety framing "safety theater" — a performance of safety concern without actual safety constraints. He described multiple statements from Altman as "straight-up lies" and "gaslighting."

The core of the critique: OpenAI's "all lawful purposes" agreement does not actually prevent the military uses Anthropic refused. The human oversight language is non-binding. The claim that OpenAI's policies apply is meaningless if those policies permit the military to override them for national security purposes. By calling this responsible deployment, OpenAI is giving safety cover to an arrangement that has no real safety teeth.

OpenAI has not publicly addressed the memo's specific claims.

The Safety Philosophy Divide

This dispute makes the philosophical difference between Anthropic and OpenAI concrete.

Both companies were founded with AI safety as a stated mission. Anthropic was literally founded by former OpenAI researchers, including Dario and Daniela Amodei, who left because they believed OpenAI was prioritising capability over safety.

The Pentagon dispute is the clearest test of whether that safety commitment is real or rhetorical.

Anthropic's position: there are uses we will not support, regardless of commercial cost, regardless of government pressure, regardless of competitive disadvantage if a competitor accepts.

OpenAI's position: engagement with government and military is preferable to leaving that space to less safety-conscious alternatives. Staying in the room is safer than walking out.

Both positions are internally coherent. Both have serious problems.

Anthropic's position risks AI being deployed militarily in forms Anthropic cannot influence at all. If the Pentagon uses GPT-4o-based systems for autonomous targeting because Claude refused, has Anthropic made the world safer?

OpenAI's position risks the "all lawful purposes" framework becoming the precedent. If a leading AI company founded on a safety mission signs off on terms that permit autonomous military use, that use becomes normalised, and the normalisation will be hard to reverse.

The Negotiations Resume

The situation is not static. As of March 5-6, 2026, Anthropic and the Pentagon are back at the negotiating table.

Dario Amodei is reportedly in direct talks with Emil Michael, the undersecretary of defense for research and engineering. The FT reported that Anthropic is seeking contract language that explicitly prohibits autonomous lethal weapons use and mass domestic surveillance — the specific carve-outs the Pentagon had previously refused.

Whether the Pentagon will accept those specific carve-outs remains unresolved. The commercial stakes are significant. The strategic stakes are larger.

What This Means For The AI Industry

The Pentagon dispute is going to define how AI companies relate to military contracts for the next decade.

If Anthropic secures a deal with specific use-case exclusions, it establishes a precedent that AI companies can say no to specific military applications — and that governments will accept those terms.

If Anthropic fails to reach a deal while OpenAI succeeds, it establishes that the cost of maintaining meaningful safety limits is exclusion from government contracts. That creates enormous pressure on every future AI company to accept "any lawful purpose" terms or lose to competitors who will.

The internal memo calling OpenAI's deal "straight-up lies" is not just corporate rivalry. It is Amodei making explicit, in writing, that he believes OpenAI crossed a line that Anthropic will not cross.

Whether that line holds depends on what happens next in Washington.

Key Takeaways

  • $200 million — approximate value of the Pentagon contract Anthropic walked away from
  • March 4, 2026 — Dario Amodei internal memo calls OpenAI messaging "mendacious," "safety theater," and "straight-up lies"
  • February 26, 2026 — Pentagon formally designates Anthropic a supply-chain risk after negotiations collapse
  • The sticking point: Pentagon demanded "any lawful purpose" access; Anthropic refused autonomous weapons and mass domestic surveillance use cases
  • OpenAI accepted: "all lawful purposes" contract with non-binding human oversight language
  • March 5-6, 2026 — Anthropic back at table with Emil Michael, undersecretary of defense for research and engineering
  • For developers: Anthropic usage policies prohibit autonomous weapons and mass surveillance — these limits apply to all Claude API applications
  • What to watch: whether resumed talks produce explicit contractual carve-outs — sets the precedent for all future government AI contracts

For Developers Using Claude

If you are building on Claude through Anthropic's API, this dispute has a practical implication: Anthropic's usage policies — which exclude autonomous weapons and mass surveillance — apply to your applications too.
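
For context, here is a minimal sketch of what that looks like in practice, using Anthropic's official Python SDK. The model name and prompt below are illustrative placeholders, not details from this story; the point is that every API call is made under the same Usage Policy the Pentagon negotiation turned on.

```python
# pip install anthropic
import anthropic

# The client reads ANTHROPIC_API_KEY from the environment.
client = anthropic.Anthropic()

# Model name is a placeholder -- substitute whichever current model you use.
# Every request made through this API is governed by Anthropic's Usage Policy,
# which prohibits autonomous-weapons and mass-surveillance applications
# regardless of what your own application layer allows.
message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=256,
    messages=[{"role": "user", "content": "Summarise this logistics report: ..."}],
)
print(message.content[0].text)
```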

For most developers, this is invisible. Your application does not involve autonomous weapons.

But it is worth understanding that when you build on Claude, you are building on a model whose company has refused a $200 million government contract rather than allow its technology to be used in ways it considers unsafe. That is not marketing. It is a documented, costly decision made in public, under government pressure.

Whether that matters to you depends on what you are building and for whom.
