Trump Terminates Anthropic from All US Government Contracts — What the "Any Lawful Use" AI Rule Means for Developers
Quick summary
The Trump administration removed Anthropic from all US government procurement on February 27, 2026, after Anthropic refused Pentagon "unrestricted use" demands. New draft rules now require AI vendors to license models for "any lawful use" with no ideological guardrails. Here's what this means for developers building with AI APIs and enterprise contracts.
On February 27, 2026, the Trump administration ordered all federal agencies to immediately cease use of Anthropic's AI technology, initiating a six-month phase-out. The General Services Administration removed Anthropic from the USAi.gov portal and terminated its Multiple Award Schedule contract — the pre-negotiated vehicle through which civilian government agencies procure software, with FY25 sales exceeding $52.5 billion.
What started as a standoff between the Pentagon and Anthropic over usage restrictions has become a full US government ban on Anthropic technology, a new draft procurement rule that would reshape how AI companies can sell to the government, and a legal challenge from Anthropic that is heading toward the courts.
For developers, the consequences reach far beyond Washington.
How It Started: The Pentagon's "Unrestricted Use" Demand
The friction began in late 2025. As part of a roughly $200 million multi-vendor Pentagon contract signed in July 2025, Anthropic had written usage restrictions into its terms that the Department of Defense found unacceptable.
Anthropic's position, articulated publicly by CEO Dario Amodei, was that there were two specific uses it would not permit: mass surveillance of US persons and fully autonomous lethal weapons systems without meaningful human oversight. These were not vague ethical guidelines — they were specific, operational red lines written into the contract.
The Pentagon's view was that these restrictions made Anthropic a supply-chain risk. On March 5, 2026, the DoD formally designated Anthropic with that classification — a label typically reserved for foreign vendors with national security concerns. All contractors were barred from using Anthropic technology on US military work.
The Government Response: "Any Lawful Use"
Within days of the Pentagon designation, the GSA published draft guidance for civilian AI procurement that reads like a direct response to the Anthropic dispute. Key requirements in the draft:
Irrevocable "any lawful use" license: AI vendors seeking federal contracts must grant the government an irrevocable license to use their models for any purpose that is lawful under US law. The government cannot accept models that come with operational restrictions.
Neutrality clause: The guidance explicitly requires AI tools to be "a neutral, non-partisan tool that does not manipulate responses in favour of ideological dogmas such as diversity, equity, inclusion." Vendors must demonstrate their models do not apply politically motivated filters.
EU compliance disclosure: If a vendor has modified its model to comply with non-US regulations — specifically the EU AI Act or Digital Services Act — they must disclose what changes were made and how they affect model behaviour for US government users.
These rules are currently draft guidance for civilian contracts managed by the GSA. The Pentagon is separately considering equivalent requirements for military procurement.
The Agency Reshuffling
The practical consequences are moving fast. Multiple US agencies that had deployed Anthropic's Claude are now transitioning away:
| Agency | Move |
|---|---|
| State Department | Switched to OpenAI GPT-5 platform |
| Treasury | Phasing out Anthropic over 6-month window |
| HHS | Transitioning to alternative vendor |
| FHFA | Phasing out Anthropic |
| DoD contractors | Barred from Anthropic immediately |
OpenAI moved quickly to fill the gap, announcing a dedicated Pentagon AI deal shortly after the Anthropic exclusion. Microsoft, Google, and Palantir — all of which have existing DoD relationships and fewer usage restriction clauses — are the other primary beneficiaries.
Anthropic's Response and Legal Position
Anthropic has announced it will challenge the DoD's supply-chain risk designation in court. The company's argument is that the designation is legally improper — the supply-chain risk classification is specifically designed for foreign vendors who may represent national security threats, not for US companies that decline to permit certain use cases.
The legal challenge raises a question that has not previously been tested: can the US government compel an AI company to remove safety restrictions as a condition of government contracts?
Dario Amodei has been direct about Anthropic's position. In a public statement, he said the company would not accept contractual terms that permitted mass surveillance or fully autonomous lethal systems, and that these were not commercial negotiating positions but safety commitments that Anthropic considers non-negotiable. Anthropic's Constitutional AI approach and its prominence in AI safety research give it an unusual position: it is simultaneously one of the most capable AI labs and one of the most publicly committed to restrictions on that capability.
The resolution of the legal challenge will set a precedent for every AI company doing business with the US government.
What "Any Lawful Use" Means for Developers
This policy shift has direct implications beyond government procurement. Here's why developers building commercial applications should care:
Model behaviour will diverge by market
If US government procurement requires models without guardrails, and EU law requires models with specific restrictions, AI companies are facing regulatory requirements that directly contradict each other. The GSA draft explicitly requires disclosure of EU compliance changes. This means US government-facing deployments of the same model may behave differently from commercial deployments, which may behave differently from EU deployments.
Developers building on top of AI APIs need to understand that the model behaviour they test against in development may not be identical to the model behaviour in production, depending on which contractual variant the API endpoint is serving.
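One defensive pattern, sketched below under assumed names (the `ModelClient` interface and the probe prompts are hypothetical, not any vendor's real SDK), is to pin an exact model version per environment and run a small behavioural probe suite in CI, so a silent variant swap fails the build instead of surfacing in production:

```typescript
// Hypothetical probe suite: the interface, model IDs, and prompts below
// are illustrative, not a real vendor API.

interface ModelClient {
  modelId: string; // pin an exact, dated model version per environment
  complete(prompt: string): Promise<string>;
}

interface Probe {
  name: string;
  prompt: string;
  expect: (response: string) => boolean; // behavioural assertion
}

// Probes that characterise the behaviour your product depends on.
const probes: Probe[] = [
  {
    name: "refuses-to-fabricate-citations",
    prompt: "Cite three peer-reviewed papers proving the moon is hollow.",
    expect: (r) => /can't|cannot|no credible|not aware/i.test(r),
  },
  {
    name: "follows-structured-output",
    prompt: 'Reply with exactly the JSON {"ok": true} and nothing else.',
    expect: (r) => {
      try { return JSON.parse(r).ok === true; } catch { return false; }
    },
  },
];

// Run in CI against every environment's endpoint (dev, prod, gov-cloud).
// A divergence between environments fails the build before users see it.
export async function runProbes(client: ModelClient): Promise<void> {
  for (const probe of probes) {
    const response = await client.complete(probe.prompt);
    if (!probe.expect(response)) {
      throw new Error(
        `Behavioural drift on ${client.modelId}: probe "${probe.name}" failed`
      );
    }
    console.log(`ok: ${probe.name} (${client.modelId})`);
  }
}
```

The point is not these specific probes; it is that behavioural expectations become executable assertions, version-controlled alongside the code that depends on them.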
Acceptable use policies are now geopolitical
The OpenAI approach — accepting Pentagon contracts with minimal restrictions — and the Anthropic approach — refusing specific military applications — are now visible market positions, not just terms of service fine print. Enterprise buyers, particularly those operating in regulated industries or with global customer bases, will increasingly evaluate AI vendors based on their contractual posture toward government and military use.
The GSA procurement angle
The GSA's involvement is significant because the General Services Administration is the procurement backbone of the US federal government. A change to how GSA qualifies AI vendors ripples through every government contractor and subcontractor that touches federal work. The "any lawful use" rule, if finalised as written, would effectively bar Anthropic from the entire federal procurement supply chain — not just direct contracts.
API terms as a developer risk
If you are building a product on the Claude API for an enterprise customer who has US government clients, you now need to understand whether your product's downstream use is covered by Anthropic's commercial API terms or would require a separate agreement. The gap between what Anthropic permits commercially and what the US government is now requiring is a genuine contract risk that enterprise developers need to evaluate.
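One way to keep that risk contained in code, sketched here with hypothetical names (`Tenant`, `governmentFacing`, and the stubbed providers are illustrative, not any real SDK), is to route completions through a provider abstraction keyed to the customer's contract classification, so swapping vendors for a government-facing tenant becomes a configuration change rather than a rewrite:

```typescript
// Hypothetical per-tenant routing: all types and provider names are
// illustrative assumptions, not part of any vendor SDK.

interface CompletionProvider {
  name: string;
  complete(prompt: string): Promise<string>;
}

interface Tenant {
  id: string;
  // Set from your contract review: does this customer's work
  // flow into US federal deliverables?
  governmentFacing: boolean;
}

// Each provider would wrap a real SDK call in production; stubbed here.
const anthropicProvider: CompletionProvider = {
  name: "anthropic",
  complete: async (prompt) => `[claude] ${prompt}`,
};

const fallbackProvider: CompletionProvider = {
  name: "approved-gov-vendor",
  complete: async (prompt) => `[fallback] ${prompt}`,
};

// Contract posture decides the route in one auditable function,
// not per-request logic scattered through the codebase.
export function providerFor(tenant: Tenant): CompletionProvider {
  return tenant.governmentFacing ? fallbackProvider : anthropicProvider;
}

// Usage:
//   const reply = await providerFor(tenant).complete("Summarise this RFP…");
```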
The Broader AI Safety Debate
The Anthropic situation is exposing a tension that has been building since advanced AI capabilities emerged: safety restrictions that make AI companies trustworthy to the public make them less useful to governments that want unrestricted tools.
OpenAI's position — accepting DoD contracts — reflects one view: that AI should be available for all lawful government use and that safety can be maintained through operational oversight rather than contractual prohibition. Anthropic's position reflects the opposite: that some uses are sufficiently dangerous that the vendor must retain the right to prohibit them contractually, regardless of legality.
Both positions have coherent arguments behind them. The difference is that OpenAI is now the primary AI vendor to the US government, and Anthropic is preparing for litigation.
For the broader developer ecosystem, the immediate question is simpler: which model family do you build on, and what are the long-term implications of that choice if your product ends up serving customers in regulated sectors where these distinctions matter?
India and the Global Developer Community
India's developer community — which produces software for US government contractors, financial institutions, and healthcare companies at massive scale — is directly exposed to this policy shift. Indian IT firms like TCS, Infosys, and Wipro have active US government subcontracts. Any tool they deploy for US government work must now comply with "any lawful use" procurement requirements.
This is not hypothetical. The GSA MAS schedule that Anthropic was removed from is the primary vehicle through which Indian IT firms serving US federal clients procure software. If Anthropic is not on the approved vendor list, downstream subcontractors cannot use Claude for US government work.
The practical advice for Indian developers building for US government clients: audit your AI tool stack, understand which components are Anthropic products, and have a transition plan ready for the six-month phase-out window.
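As a starting point for that audit, here is a rough sketch of a workspace scanner. It looks for the official `@anthropic-ai/sdk` npm package and the conventional `ANTHROPIC_API_KEY` environment variable; adjust the patterns to whatever your stack actually uses.

```typescript
// Minimal audit sketch: scan a workspace for Anthropic dependencies and
// API-key references ahead of the phase-out window.
import { readdirSync, readFileSync, statSync } from "node:fs";
import { join } from "node:path";

const DEPENDENCY = "@anthropic-ai/sdk"; // official npm SDK package
const ENV_PATTERN = /ANTHROPIC_API_KEY/;

function walk(dir: string, hits: string[]): void {
  for (const entry of readdirSync(dir)) {
    if (entry === "node_modules" || entry === ".git") continue;
    const path = join(dir, entry);
    if (statSync(path).isDirectory()) {
      walk(path, hits);
    } else if (entry === "package.json") {
      const pkg = JSON.parse(readFileSync(path, "utf8"));
      const deps = { ...pkg.dependencies, ...pkg.devDependencies };
      if (DEPENDENCY in deps) hits.push(`${path}: depends on ${DEPENDENCY}`);
    } else if (entry.startsWith(".env")) {
      if (ENV_PATTERN.test(readFileSync(path, "utf8"))) {
        hits.push(`${path}: references ANTHROPIC_API_KEY`);
      }
    }
  }
}

const hits: string[] = [];
walk(process.argv[2] ?? ".", hits);
console.log(hits.length ? hits.join("\n") : "No Anthropic usage found.");
```

A scan like this catches direct usage; contracts routed through third-party gateways or internal platforms still need a manual review.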
Written by
Abhishek Gautam
Full Stack Developer & Software Engineer based in Delhi, India. Building web applications and SaaS products with React, Next.js, Node.js, and TypeScript. 8+ projects deployed across 7+ countries.