The US Military Used Anthropic's Claude AI in Strikes on Iran — Hours After Trump's Ban

Abhishek Gautam · 9 min read

Quick summary

Hours after the Trump administration banned Anthropic from Pentagon work citing national security concerns, US military operators used Claude AI in targeting and intelligence analysis for strikes on Iran, a contradiction that shocked the AI industry.

The contradiction was stark enough to stop the AI industry cold. Hours after the Trump administration formally designated Anthropic as a national security risk and barred the company from new Pentagon contracts, US military operators used Anthropic's Claude AI model in the intelligence analysis and targeting support pipeline for strikes on Iranian military facilities. Both things happened on the same day. The story has since become one of the most discussed episodes in the short history of AI governance — a case study in the gap between policy and operational reality.

What Happened

The Trump White House, advised by officials concerned about Anthropic's ties to non-US investors and its refusal to accept certain military use cases under its acceptable use policy, moved to classify Anthropic as a national security liability. The specific concerns cited included Anthropic's cap table (which includes Google and a range of international investors), Dario Amodei's public statements opposing certain lethal autonomous systems, and the company's refusal to remove safeguards that the Pentagon argued slowed time-sensitive military analysis.

The ban was framed not as a prohibition on using Claude but as a bar on Anthropic receiving new federal contracts. However, existing deployments — including a classified programme in which Claude was integrated into signals intelligence and targeting analysis workflows — continued operating. The military personnel running these systems used Claude as designed during the Iran strike operations.

The contradiction: the White House banned Anthropic, and the military simultaneously used Anthropic's product to conduct the operations the ban was ostensibly about.

Why This Happened: The Operational Gap

Modern militaries move faster than procurement policy. Claude had been embedded in certain US military intelligence analysis workflows for months before the political dispute erupted. Once a model is integrated into a classified system, removing it requires testing replacement systems, retraining operators, updating security authorisations, and in some cases rebuilding software architectures. None of that happens in hours.

The military operators on the ground — intelligence analysts, targeting officers — did not receive a "stop using Claude" directive before the strikes. The policy decision was made at the civilian executive level; the operational system kept running.

This is not unique to AI. The US military regularly operates systems procured from contractors who are simultaneously involved in political disputes, export control fights, or sanctions proceedings. The operational timeline and the policy timeline rarely align.

Anthropic's Position

Anthropic, led by Dario Amodei, has been the most vocal major AI lab in opposing the direct use of AI in lethal targeting decisions. Its acceptable use policy explicitly restricts use of Claude in applications that "make autonomous or semi-autonomous lethal decisions without meaningful human oversight." The company has declined several military contracts on these grounds — which is partly what prompted the Pentagon's frustration and contributed to the Trump administration's characterisation of Anthropic as uncooperative.

The revelation that Claude was used in Iran strike operations put Anthropic in a difficult public position: their model was used in a context their own policies are designed to restrict. Anthropic responded by stating that the use appears to have violated their terms of service and that they are investigating what human oversight was present in the specific workflow. The company has been careful not to directly accuse the US government of violating their policies while the situation remains legally and diplomatically sensitive.

The Contrast with OpenAI

OpenAI took the opposite posture. The company has openly pursued Pentagon and intelligence community contracts, removed previous restrictions on military use from its terms of service, and accepted a framework in which "meaningful human oversight" is defined by the military rather than by OpenAI. The Pentagon deal that was announced the same week — giving OpenAI access to classified environments to build AI systems for military operations — was held up explicitly as the model for how AI companies should engage with national security customers.

The message from the Trump administration was clear: companies that accept military use cases (OpenAI) get government business; companies that restrict them (Anthropic) get designated as risks.

What This Means for AI Developers

The acceptable use policy question: Every major AI lab now has an acceptable use policy. These policies mean something — but they are enforced post-hoc, not pre-deployment. If your model is deployed in a system and that system is used in a context your AUP prohibits, your enforcement mechanism is investigation after the fact, not a technical block at deployment time. Developers building AI systems should understand this gap.
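To make that gap concrete, here is a minimal TypeScript sketch of what post-hoc enforcement amounts to in practice. Every name and type here is hypothetical, not any provider's actual API: requests are logged and forwarded, and the AUP only comes into play later, when someone queries the log.

```typescript
// Sketch of post-hoc AUP enforcement. All names are hypothetical;
// this is not any provider's actual API.

interface UsageRecord {
  timestamp: string;
  customerId: string;
  promptSummary: string; // truncated prompt kept for later review
}

const auditLog: UsageRecord[] = [];

// The call itself is never blocked: it is only recorded.
async function callModel(customerId: string, prompt: string): Promise<string> {
  auditLog.push({
    timestamp: new Date().toISOString(),
    customerId,
    promptSummary: prompt.slice(0, 200),
  });
  // A real implementation would forward the prompt to the model API here.
  return "model response";
}

// "Enforcement" is a query over the log after something has gone wrong,
// not a gate in front of the deployment.
function investigate(suspectCustomerId: string): UsageRecord[] {
  return auditLog.filter((r) => r.customerId === suspectCustomerId);
}
```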

Safety vs. capability: The Anthropic situation has hardened a division in the AI industry between labs that prioritise constitutional/safety constraints (Anthropic, to a degree) and labs that prioritise capability and deployment breadth (OpenAI, xAI). Government contracts are flowing to the latter. The business incentives are now clearly aligned against safety-first positioning in the defence sector.

Global developer reaction: Developer communities in Europe, Canada, and India — where many developers choose Anthropic's Claude for its safety reputation — have reacted with a mix of concern about military use and sympathy for Anthropic's stated position. The contradiction of being banned for being too safe while simultaneously being used in military operations is striking. For developers choosing AI providers partly on ethical grounds, this episode complicates the decision.

Policy vs. technical controls: If you are building systems where use-case restrictions matter — medical, legal, financial, or ethically sensitive applications — this episode is a reminder that policy-layer restrictions are not technical-layer restrictions. If you need to prevent certain uses, build technical controls into your system architecture, not just terms of service.
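By contrast, a technical-layer control sits in front of the model call and refuses prohibited requests outright. A minimal sketch, again with hypothetical names and a deliberately crude keyword check standing in for a real classifier:

```typescript
// Sketch of a technical-layer control. The keyword check is a crude
// hypothetical stand-in for a real use-case classifier.

type UseCase = "general" | "prohibited";

function classifyUseCase(prompt: string): UseCase {
  return /\b(target|strike|weapon)\b/i.test(prompt) ? "prohibited" : "general";
}

async function gatedModelCall(prompt: string): Promise<string> {
  if (classifyUseCase(prompt) === "prohibited") {
    // The request never reaches the model; no terms of service required.
    throw new Error("Request blocked: prohibited use case");
  }
  // A real implementation would forward the prompt to the model API here.
  return "model response";
}
```

In a production system the classifier would be a trained model or a rules engine, and blocked requests would be logged for audit. The key property is that enforcement happens before the request reaches the model, not in an investigation afterwards.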

The Broader Question

The episode raises a question the industry has avoided: can an AI model company meaningfully prevent its technology from being used in warfare once the model is deployed in commercial or government systems? The answer appears to be: not reliably. Once a capable general-purpose model exists, it will be used in contexts its creators did not intend or approve. This is the dual-use problem at scale.

For the AI governance debate, this is both a cautionary tale and an argument for proactive engagement. Anthropic's attempt to stay out of lethal military applications resulted in their technology being used in them anyway, while simultaneously losing government business to competitors with fewer restrictions. Whether that outcome was avoidable — and what a better path would have looked like — will be debated for years.


Written by

Abhishek Gautam

Full Stack Developer & Software Engineer based in Delhi, India. Building web applications and SaaS products with React, Next.js, Node.js, and TypeScript. 8+ projects deployed across 7+ countries.
