Google Gave the Pentagon's 3 Million Military Employees an AI Agent Builder

Abhishek Gautam · 8 min read

Quick summary

Google launched Agent Designer inside Gemini for Government, letting 3 million US military employees build their own AI agents. This is what Anthropic refused to provide.

Google just handed the US military the ability to build its own AI agents at scale. The feature, called Agent Designer, lives inside Gemini for Government and is now available to approximately three million military personnel across the Department of Defense. Employees can use it to build custom AI agents for unclassified tasks without writing code, without IT approval for each deployment, and without waiting for a vendor to build the tool they need.

This is a significant capability expansion. It lands on the same day Nvidia declared the arrival of the inference era at GTC 2026, and in the same week Anthropic sued the DoD over its supply-chain-risk designation, a dispute rooted in exactly this kind of use case: giving military personnel direct access to AI capabilities without explicit human oversight on every operation.

What Agent Designer Actually Does

Agent Designer is a no-code interface inside Gemini for Government that lets users define an AI agent's objective, give it access to specific data sources and tools, and deploy it to run tasks autonomously. The initial scope is unclassified tasks — document summarisation, scheduling, research, communications drafting, workflow automation.
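Google has not published Agent Designer's internal schema, but a no-code agent definition of this kind typically reduces to a small structured record: an objective, the data sources the agent may read, and the tools it may invoke. A minimal sketch of that pattern (every name here is hypothetical, not taken from Google's product):

```python
from dataclasses import dataclass, field

@dataclass
class AgentDefinition:
    """Hypothetical shape of a no-code agent definition.

    Agent Designer's real schema is not public; this only illustrates
    the objective / data-sources / tools pattern described above.
    """
    name: str
    objective: str                      # natural-language task description
    data_sources: list[str] = field(default_factory=list)  # what the agent may read
    tools: list[str] = field(default_factory=list)         # actions the agent may take
    requires_human_review: bool = True  # whether outputs need sign-off

# Example: a productivity agent of the kind the unclassified scope describes
summarizer = AgentDefinition(
    name="weekly-report-summarizer",
    objective="Summarise incoming weekly status reports into one brief.",
    data_sources=["shared-drive:status-reports"],
    tools=["draft_document"],
)
```

The point of the no-code model is that the platform, not the user, turns a record like this into a running agent; the user only fills in the fields.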

The architecture follows the same pattern as Microsoft's Copilot Studio and Salesforce's Agentforce: a business user defines what the agent does, a platform handles the model and infrastructure, and the agent runs without a developer needing to write or maintain code. The difference in this case is the user base: three million military employees across the US Army, Navy, Air Force, Marines, Space Force, and their civilian support workforce.

Scale changes everything. A feature available to three million users who each create one agent generates three million active AI deployments. Agents interacting with internal military communications systems, logistics databases, and operational planning documents. Agents that summarise, prioritise, draft, and decide — autonomously, on unclassified infrastructure, with varying degrees of human review depending on how each user configures their deployment.

The Contrast with Anthropic's Position

The context here is important and the timing is deliberate. Anthropic refused to allow the DoD to use Claude for mass surveillance of Americans or for autonomously making weapons-related decisions without human oversight. The DoD designated Anthropic a supply-chain risk and signed with OpenAI. Anthropic is now suing the DoD over that designation.

Google's Agent Designer for military personnel is not the same thing as what Anthropic refused. The use cases are unclassified and civilian-facing. The agents are not weapons systems. The deployment is for administrative and productivity workflows, not targeting or surveillance.

But the direction matters. Today's Agent Designer enables three million users to build productivity agents. Tomorrow's version — or next year's version — could expand to classified systems, operational planning, or intelligence workflows. Google has positioned itself as the trusted AI infrastructure partner for the US government and military, and the relationship it builds now determines what it can offer later.

Anthropic drew a line and stayed behind it. Google is building the relationship. These are two different bets on how the AI-government dynamic will evolve, and both carry real consequences.

Why Three Million Users Changes the Threat Surface

Security researchers have been warning about agentic AI risks since autonomous agents entered the market in 2023. The McKinsey Lilli breach — where an autonomous AI agent accessed 46.5 million messages via SQL injection in two hours — illustrates exactly what can go wrong when enterprise AI platforms are deployed without adequate security architecture.

Now imagine that attack surface replicated across the US military's unclassified infrastructure. Three million users, each potentially deploying one or more agents, each agent with access to internal military data sources and communication systems. The security requirements for that deployment are fundamentally different from a corporate productivity tool.

The agents need to be isolated from each other to prevent lateral movement. The data sources they access need strict access controls that limit each agent to only the data its creator is authorised to see. The actions agents can take need to be logged, auditable, and reversible. The prompts defining agent behaviour need to be stored separately from the data they access — not in the same database, as happened at McKinsey.
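Two of those requirements, scoping each agent to its creator's own authorisations and logging every access, can be made concrete with a sketch. Assuming a hypothetical enforcement layer (none of these names come from Google's product), every read is checked against the creator's permissions and recorded before the agent sees any data:

```python
import datetime

class AgentSandbox:
    """Hypothetical enforcement layer: an agent is capped at its
    creator's permissions, and every access attempt is logged."""

    def __init__(self, agent_id: str, creator_permissions: set[str]):
        self.agent_id = agent_id
        self.creator_permissions = creator_permissions
        self.audit_log: list[dict] = []  # append-only store in a real system

    def read(self, source: str, fetch) -> str:
        entry = {
            "agent": self.agent_id,
            "source": source,
            "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        }
        if source not in self.creator_permissions:
            entry["outcome"] = "denied"
            self.audit_log.append(entry)  # denials are logged too
            raise PermissionError(f"{self.agent_id} may not read {source}")
        entry["outcome"] = "allowed"
        self.audit_log.append(entry)
        return fetch(source)

sandbox = AgentSandbox("scheduler-01", creator_permissions={"calendar"})
data = sandbox.read("calendar", fetch=lambda s: f"contents of {s}")
```

An attempt to read a source outside the creator's permissions raises `PermissionError` and still leaves a "denied" entry in the log, which is what makes the trail auditable. The prompt-storage requirement is the remaining piece: the agent's defining instructions would live in a separate store this layer never exposes as data.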

Google has the infrastructure capability to implement these controls at scale. Whether it has implemented them in the current Agent Designer deployment is not publicly confirmed.

What Google Gets From This

The strategic value of Gemini for Government is not the contract revenue from the initial deployment. It is the data on how three million users interact with AI agents in a complex, high-stakes operational environment. That behavioural data — at a scale no commercial AI deployment has achieved — trains Google's understanding of what enterprise AI agents actually do, what they fail at, and where the security boundaries need to be.

It also builds switching costs. Government agencies that standardise their AI workflows on Gemini infrastructure are not going to migrate to Claude or GPT when the next contract cycle comes around. The productivity tools people build with Agent Designer become embedded in operational processes. The institutional knowledge of how to use the system accumulates in the people using it. Switching costs in enterprise software are high; in government software, they are higher.

Google is buying a long-term infrastructure relationship with the US federal government at the price of building Agent Designer and offering it inside an existing Gemini for Government subscription. That is a remarkably efficient land-and-expand strategy.

The AI Industry's Government Positioning Race

The US federal government is one of the largest enterprise software buyers in the world. The DoD alone has an annual IT budget exceeding $50 billion. The AI contracts being signed now — by OpenAI, Google, Microsoft, and Oracle — are the opening moves in a decade-long competition for that spend.

Microsoft has the established position through its existing government cloud contracts (Azure Government) and the M365 integrations that already run across federal agencies. Google is using Gemini for Government to build a competing relationship with military users specifically. OpenAI signed the Pentagon deal to establish its position after Anthropic's refusal opened the door. Anthropic is betting that its ethics positioning makes it the preferred vendor for government use cases that require explicit human oversight.

Each company is making a different bet. Microsoft bets on infrastructure incumbency. Google bets on agent capability and scale. OpenAI bets on first-mover access. Anthropic bets on responsible AI positioning in a market that will eventually demand it.

Developers building tools for government or regulated-industry customers need to understand which AI vendors are in which government ecosystems — because the API you build on today determines the compliance requirements and the competitive dynamics you'll face when procurement teams ask who your AI provider is.

Key Takeaways

  • Google launched Agent Designer inside Gemini for Government — giving 3 million US military employees a no-code AI agent builder
  • Unclassified use cases today: document processing, scheduling, research, communications drafting, workflow automation
  • Direct contrast with Anthropic: Google built the relationship while Anthropic drew the ethical line — both are deliberate strategic choices
  • Security implications: 3 million agent deployments create a large attack surface requiring isolation, access controls, and audit logging the McKinsey breach shows enterprises often skip
  • Strategic value for Google: behavioural data at scale, switching costs, and a foothold for expanding into classified and operational AI workflows
  • The government AI race is underway: Microsoft (infrastructure), Google (agents), OpenAI (access), Anthropic (ethics positioning) are each betting on different paths to the same long-term contract value
  • For developers in regulated industries: your AI vendor's government relationships are now a procurement consideration


Written by

Abhishek Gautam

Full Stack Developer & Software Engineer based in Delhi, India. Building web applications and SaaS products with React, Next.js, Node.js, and TypeScript. 8+ projects deployed across 7+ countries.