Palantir Maven: DoD Deploys AI That Closes Kill Chains in Minutes

Abhishek Gautam · 12 min read

Quick summary

Palantir's Maven Smart System replaced nine military systems with one AI targeting platform, compressing kill chain decisions from hours to minutes.

In three weeks of Operation Epic Fury beginning February 28, 2026, the US military struck between 5,500 and 6,000 targets in Iran. The first 1,000 strikes happened in 24 hours. The system making those targeting decisions possible was not a new weapons platform. It was software: Palantir's Maven Smart System, a platform that fused nine separate military intelligence systems into one interface and compressed the kill chain -- the process from detecting a target to striking it -- from hours to minutes.

"Left click, right click, left click, magically it becomes a detection," said Cameron Stanley, the Pentagon's Chief Digital and AI Officer, demonstrating the system publicly in March 2026. "This is revolutionary."

What Is Maven Smart System?

Maven Smart System (MSS) is Palantir's AI-enabled platform for Combined Joint All-Domain Command and Control (CJADC2) -- the Pentagon's vision of connecting sensors, weapons, and decision-makers across all military domains (air, land, sea, space, cyber) in real time. It ingests data from satellite imagery, drone footage, signals intelligence, geolocation feeds, and other sources, fuses them into a single visualization interface, and provides decision support tools for targeting workflows.

As of March 2026, Maven Smart System has over 20,000 active users across 35 software tools spanning the military services and combatant commands, operating on three security classification domains. That user base has doubled since January 2026. The contract ceiling reached $1.3 billion through 2029.

From Google to Palantir: How Project Maven Was Born and Handed Off

Project Maven did not start with Palantir. It started with Google.

In 2017, the Pentagon contracted Google to develop AI for Project Maven -- computer vision models to analyze drone video footage for target detection and vehicle tracking. The contract was relatively small, around $9 million, but the implications were not. In April 2018, nearly 4,000 Google employees signed an open letter to CEO Sundar Pichai demanding the company withdraw. Around a dozen resigned. The letter stated: "We ask that Project Maven be cancelled, and that Google draft, publicize and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology."

Google did not renew the contract.

The vacuum Google left was Palantir's opportunity. Palantir had been building government and defense data infrastructure for over a decade, and it had no ethical constraint against military contracts -- in fact, its CEO Alex Karp had made clear that Palantir existed precisely to give Western democracies technological advantages over adversaries. In May 2024, Palantir won the Maven Smart System contract: $480 million over five years. By May 2025, the Pentagon had raised the ceiling to $1.3 billion, citing "growing demand."

How Maven Smart System Works

The core problem Maven solves is fragmentation. Before it existed, military decision-makers had to monitor eight or nine separate software systems simultaneously to develop a complete picture of a battlefield. Analysts pulled data from one system, cross-referenced it in another, and manually moved detections between platforms to build a targeting case. The process was slow, error-prone, and dependent on human coordination at every handoff.

Maven collapses that into one interface with five core capabilities (a minimal sketch of the overall pattern follows the list):

Data fusion: A single visualization tool that ingests multiple classified and unclassified data feeds simultaneously. Operators can select and deselect data layers, switch between analytical approaches, and see a unified operational picture without switching applications.

Detection to workflow: When an analyst identifies something of interest, a three-click sequence (left click, right click, left click, in Stanley's demo) converts it from a data point into a formal detection and moves it into a targeting workflow automatically.

Target classification: Once in the workflow, targets are classified by type across multiple columns, each representing a different decision-making process and rules of engagement framework.

Course of Action (COA) generation: The system automatically evaluates multiple factors to identify the best available asset to prosecute each target -- aircraft, missiles, or other systems -- and presents ranked options to the commanding officer.

Action: From COA selection, the system moves directly to execution. The entire sequence from detection to strike authorization happens within one platform.
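
None of Palantir's implementation is public, but the pattern these five capabilities describe -- a detection moving through classification, option generation, and a mandatory human gate -- is a generic human-in-the-loop workflow. Here is a minimal sketch of that pattern in Python; every name is invented for illustration and nothing reflects Maven's actual code:

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import Callable, Optional


class Stage(Enum):
    """Workflow states mirroring the five capabilities above."""
    RAW = auto()          # fused data point, not yet a detection
    DETECTION = auto()    # analyst promoted it into the workflow
    CLASSIFIED = auto()   # assigned a type / rules framework
    COA_READY = auto()    # ranked response options generated
    AUTHORIZED = auto()   # a human signed off


@dataclass
class WorkflowItem:
    source_layers: list[str]                  # which fused feeds produced it
    stage: Stage = Stage.RAW
    classification: Optional[str] = None
    coa_options: list[str] = field(default_factory=list)
    authorized_by: Optional[str] = None


def promote_to_detection(item: WorkflowItem) -> None:
    """The 'three clicks': a raw data point becomes a formal detection."""
    item.stage = Stage.DETECTION


def classify(item: WorkflowItem, category: str) -> None:
    """Assign the decision-making column / rules framework."""
    item.classification = category
    item.stage = Stage.CLASSIFIED


def generate_coas(item: WorkflowItem,
                  rank: Callable[[WorkflowItem], list[str]]) -> None:
    """Produce ranked options; `rank` stands in for a scoring model
    that is not publicly documented."""
    item.coa_options = rank(item)
    item.stage = Stage.COA_READY


def authorize(item: WorkflowItem, officer: str, choice: int) -> str:
    """The human gate: nothing proceeds without explicit sign-off."""
    if item.stage is not Stage.COA_READY:
        raise ValueError("cannot authorize before ranked options exist")
    item.authorized_by = officer
    item.stage = Stage.AUTHORIZED
    return item.coa_options[choice]
```

The point of the sketch is the last function: however fast everything upstream runs, the design question is what authorize actually demands of the human.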

The AI layer inside Maven also includes an LLM component. Claude, Anthropic's AI model, was used as an interface and synthesis layer -- helping analysts query massive datasets, summarize multi-source intelligence reporting, and translate raw data into assessable language for commanders. Claude also ranked targets by strategic importance and assessed the expected impact of strikes.
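
The article does not describe the integration details, but the generic "LLM as synthesis layer" pattern is reproducible with Anthropic's public Python SDK. The sketch below is an analogy only -- a civilian emergency-response scenario with placeholder reports, and the model name may need substituting for a current one:

```python
# "LLM as synthesis layer", sketched with Anthropic's public Python SDK.
# The scenario is a civilian emergency-response analogy, not the Maven
# integration; the reports are invented placeholders.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

reports = [
    "Sensor grid: river gauge 4 exceeded flood stage at 02:10.",
    "Field team B: two road closures reported on the east approach.",
    "Public feed: shelter at Lincoln High near capacity (unconfirmed).",
]

response = client.messages.create(
    model="claude-sonnet-4-5",  # substitute any current model
    max_tokens=400,
    system=(
        "Summarize multi-source reports for a duty officer. "
        "Flag conflicts between sources and mark anything unverified."
    ),
    messages=[{
        "role": "user",
        "content": "\n".join(f"- {r}" for r in reports),
    }],
)

print(response.content[0].text)  # a synthesized picture, not a decision
```

The value of the pattern is in the system prompt: the model is asked to flag conflicts and mark unverified claims, not to make the decision.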

The Kill Chain Compressed

The kill chain is the military targeting sequence known as F2T2EA: Find, Fix, Track, Target, Engage, Assess. In traditional warfare, a complete kill chain cycle could take 6 to 24 hours. Intelligence had to be gathered, analyzed, cross-referenced, passed up the chain of command, reviewed by legal advisors for compliance with rules of engagement, and then authorized before a strike could proceed.

At 1,000 strikes in 24 hours during Operation Epic Fury, the average time available per targeting decision was approximately 86 seconds.
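
That figure is straightforward arithmetic, reproduced here so the tempo claims can be checked; the three-week average, which works out to roughly five minutes per strike, follows from the same numbers:

```python
# Back-of-envelope check on the tempo figures quoted above.
seconds_per_day = 24 * 60 * 60                       # 86,400

# Opening surge: 1,000 strikes in the first 24 hours.
print(seconds_per_day / 1_000)                       # 86.4 s per decision

# Full operation: roughly 5,500-6,000 strikes over 21 days.
for total in (5_500, 6_000):
    strikes_per_day = total / 21
    print(round(seconds_per_day / strikes_per_day))  # ~330 s and ~302 s
```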

That compression is what makes Maven Smart System strategically significant -- and what makes it ethically contested. The speed advantage is real and documented. The concern, raised by AI researchers, international law experts, and members of Congress, is that when the kill chain shrinks from hours to seconds, the nature of human oversight changes fundamentally. A commander approving a strike in 86 seconds on an AI-generated recommendation is making a different kind of decision than one who has reviewed hours of intelligence analysis. Representative Sara Jacobs stated: "AI tools are not 100% reliable -- they can fail in subtle ways and yet operators continue to over-trust them."

Operation Epic Fury: MAVEN in Combat

Operation Epic Fury began February 28, 2026, with US and Israeli strikes on Iranian military infrastructure. Admiral Brad Cooper, CENTCOM Commander, described Maven Smart System as central to the operation's execution. The system's ability to fuse intelligence from multiple domains and compress targeting cycles allowed the military to strike at a pace that would have been logistically impossible with legacy systems.

Over three weeks, 5,500 to 6,000 targets were struck. The operation was notable not just for scale but for the speed of execution -- a pace the Pentagon attributed directly to AI-assisted targeting.

Palantir CEO Alex Karp said in the aftermath: "AI precision targeting has fundamentally shifted modern warfare. Our adversaries are targeting AI infrastructure they cannot produce themselves."

The Anthropic Contradiction

The AI powering part of Maven's analytical layer -- Claude -- became a flashpoint in March 2026. The Pentagon had demanded Anthropic modify Claude to support "all lawful purposes," which included fully autonomous weapons targeting and domestic surveillance. Anthropic CEO Dario Amodei refused, citing two lines the company would not cross: fully autonomous lethal targeting without human authorization, and domestic surveillance of US citizens.

On February 27, 2026 -- one day before Operation Epic Fury began -- the Trump administration designated Anthropic a "supply chain risk to national security" and ordered all federal agencies to phase out Claude use within six months.

CENTCOM used Claude for targeting analysis during the strikes anyway, as the six-month phase-out window had not yet closed.

The contradiction is stark: the AI company that refused to enable autonomous weapons was providing the analytical layer for the largest AI-assisted military operation in history, while being banned for refusing to go further.

International Law: Does Maven Cross a Legal Line?

The Law of Armed Conflict (LOAC) rests on three principles: distinction (combatants must be distinguished from civilians), proportionality (civilian harm must not be excessive relative to military advantage), and precaution (feasible steps must be taken to minimize civilian casualties).

Maven Smart System does not autonomously select or strike targets -- humans authorize each engagement. The Pentagon and Palantir maintain that this preserves "meaningful human control," the legal standard required under international humanitarian law.

Critics argue that meaningful human control at 86 seconds per decision is theoretical rather than real. The International Committee of the Red Cross (ICRC) defines Lethal Autonomous Weapon Systems (LAWS) as those that can "search for, detect, identify, select, and attack targets without meaningful human intervention." Maven technically requires human authorization. Whether a commander reviewing an AI recommendation in under two minutes constitutes meaningful intervention is the legal question no treaty has yet answered.

The Geneva Conventions (1949) and Additional Protocol I (1977) predate AI warfare entirely. UN discussions on LAWS have been ongoing for a decade with no binding treaty. The gap between the speed of AI-assisted military operations and the speed of international law is widening.

What Developers and Tech Workers Should Know

The Maven Smart System story is the clearest current example of where AI capability development leads when defense applications are the priority customer.

The architecture -- data fusion, agent-assisted analysis, automated workflow generation, decision support -- is not unique to military applications. The same patterns appear in financial trading systems, emergency response platforms, and supply chain management. What makes Maven different is the domain: the workflows end in kinetic action and the errors kill people.

For developers building AI systems that support human decision-making under time pressure, Maven raises a design question that is not abstract: at what speed does human oversight become nominal rather than real? A compliance officer approving 200 AI-generated flagging decisions per hour faces a structurally similar problem to a commander authorizing AI-recommended strikes at 86-second intervals. The human is in the loop. Whether they are meaningfully in the loop is a different question.
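
One way to make that concrete is to model oversight as throughput. A hedged sketch, in which the 120-second review floor is an assumption for illustration, not an established standard:

```python
# Models the oversight-throughput question: at what arrival rate does
# "human in the loop" become nominal? The review floor is illustrative.
from dataclasses import dataclass


@dataclass
class OversightLoad:
    decisions_per_hour: float
    minimum_review_seconds: float   # floor below which review is nominal

    @property
    def seconds_per_decision(self) -> float:
        return 3600.0 / self.decisions_per_hour

    @property
    def is_nominal(self) -> bool:
        return self.seconds_per_decision < self.minimum_review_seconds


# The two cases from the paragraph, with an assumed 120 s review floor.
cases = {
    "commander (86.4 s/strike)": OversightLoad(3600 / 86.4, 120),
    "compliance (200/hour)":     OversightLoad(200, 120),
}

for name, load in cases.items():
    verdict = "nominal" if load.is_nominal else "meaningful"
    print(f"{name}: {load.seconds_per_decision:.1f}s per decision -> {verdict}")
```

Both cases land on the same side of the line, which is the article's point: the structure of the problem, not the domain, is what erodes oversight.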

The tech worker movement response has been significant. An open letter demanding stricter limits on military AI use grew to nearly 900 signatures -- roughly 800 from Google employees and 100 from OpenAI. The 2018 Google Maven protest established a precedent: employee pressure ended a contract. Whether that lever still works at the scale and speed of 2026 military AI deployment is an open question.

Key Takeaways

  • Palantir Maven Smart System replaced nine separate DoD systems with one AI platform under a $1.3 billion contract, now used by 20,000+ active military personnel
  • During Operation Epic Fury (Feb-Mar 2026), Maven enabled 5,500 to 6,000 strikes over three weeks, compressing kill chain decisions from hours to minutes
  • The system uses data fusion, automated detection-to-workflow conversion, AI-generated course of action recommendations, and an LLM layer (Claude) for intelligence synthesis
  • Claude was used for targeting analysis during the operation even after Anthropic was designated a national security supply chain risk for refusing to enable fully autonomous targeting
  • International law has no binding treaty covering AI-assisted kill chains -- the question of whether 86-second human authorization constitutes "meaningful human control" remains legally unresolved
  • For developers: the same decision-support architecture used in Maven appears in financial, emergency-response, and supply chain AI systems -- the question of nominal vs. real human oversight applies across all of them
  • What to watch: Congressional legislation on DoD AI targeting safeguards currently in draft; UN LAWS treaty negotiations; whether Palantir expands Maven to NATO allies beyond current NATO MSS deployment

