In Washington, a feud has erupted between the US Department of War and the AI company Anthropic, maker of the Claude model. The kerfuffle sounds like a classic corporate-state standoff: Anthropic CEO Dario Amodei walked away from a $200 million deal rather than drop contractual red lines restricting how its AI could be used. The Pentagon labelled the company a “supply-chain risk,” effectively barring contractors from using its technology in military work.
The dispute is not about money or performance, but about ethics and control. Anthropic insisted that its models could not be used for mass domestic surveillance or fully autonomous weapons, while the Pentagon wanted the right to deploy AI for ‘all lawful purposes,’ which presumably includes both. When the company refused to loosen those restrictions, the relationship collapsed, the Pentagon blacklisted the firm, and Sam Altman’s OpenAI, seemingly less troubled by these issues, signed a deal the very same day.
The surprising twist in this story is that the model at the centre of the dispute, Claude, has reportedly already been used in real military operations, including a US raid in Venezuela targeting President Nicolás Maduro, and is reportedly being used in operations against Iran.
The most terrifying question is how a Large Language Model (LLM), a technology famous for ‘hallucinating’ facts and being ‘confidently wrong,’ can be trusted with the surgical precision required for high-stakes warfare. In matters of war and the fog surrounding it, no one will know exactly how. But based on reporting and the known capabilities of modern AI systems, several plausible uses emerge:
The Analyst: Claude’s long context window allows it to ingest thousands of satellite images, SIGINT (signals intelligence) intercepts, and open-source reports simultaneously. It doesn’t just ‘read’ them; it produces vulnerability scores, condensing days of human analysis into minutes, perhaps identifying which centrifuge hall at Natanz or Fordow, Iran’s uranium enrichment sites, is most susceptible to a kinetic strike.
The Spotter: In the operation that led to the capture of Nicolás Maduro in Venezuela, reports suggest AI was used to distinguish between real bunkers and elaborate decoys. In West Asia, the Times of Israel reports that US forces used Claude to assist in the lethal strike on Ayatollah Ali Khamenei. An LLM can process drone footage or satellite imagery and classify structures, distinguishing a bunker from a decoy, identifying vehicles, or spotting infrastructure hidden under camouflage.
The Wargamer: Before a single drone entered Iranian airspace, the military likely ran thousands of Monte Carlo simulations, which use repeated random sampling to model uncertainty and estimate outcomes in complex systems (a toy sketch of the idea follows this list). Claude can orchestrate these simulations rapidly, running probabilistic forecasts and producing scenario briefs for planners.
The Navigator: AI is excellent at optimisation problems. It can assist with the logistical puzzle of modern warfare: routing drones, coordinating aircraft, avoiding radar coverage, and synchronising strikes (the second sketch after this list shows the routing idea in miniature).
The Copilot: During an operation like the Iran strikes, commanders receive a firehose of information: sensor data, electronic warfare signals, and communications intercepts. An AI model can give real-time recommendations to pilots and commanders, suggesting instant adjustments to drone routes and flight patterns.
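To make ‘The Wargamer’ concrete, here is a deliberately toy Python sketch of Monte Carlo simulation, the repeated-random-sampling technique described above. Every probability, name, and number in it is invented for illustration and has nothing to do with any real planning system.

import random

def simulate_mission(p_detect=0.2, p_intercept=0.5, p_hit=0.85):
    # One random trial: the strike succeeds only if the drone evades
    # detection-plus-interception and the weapon then hits the target.
    # (All probabilities here are invented for illustration.)
    if random.random() < p_detect and random.random() < p_intercept:
        return False  # detected and intercepted before reaching the target
    return random.random() < p_hit

def monte_carlo(trials=100_000):
    # Repeated random sampling: the success ratio converges on a
    # probability estimate that planners can compare across scenarios.
    successes = sum(simulate_mission() for _ in range(trials))
    return successes / trials

print(f"Estimated mission success rate: {monte_carlo():.1%}")

Running many cheap random trials, rather than solving the problem analytically, is the whole trick: uncertainty becomes a number that can be compared across scenarios.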
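And for ‘The Navigator’, the classic toy version of the routing problem is a shortest-path search over a grid in which cells under ‘radar’ are expensive to cross. The grid, costs, and coordinates below are all made up; this is a sketch of the optimisation pattern, not of any military system.

import heapq

# Toy airspace: 0 = open cell, 9 = radar coverage (expensive to cross).
GRID = [
    [0, 0, 9, 0],
    [0, 9, 9, 0],
    [0, 0, 0, 0],
    [9, 9, 0, 0],
]

def cheapest_route(start, goal):
    # Dijkstra's algorithm: always expand the lowest-cost path so far,
    # which in practice routes around radar rather than through it.
    rows, cols = len(GRID), len(GRID[0])
    frontier = [(0, start, [start])]
    seen = set()
    while frontier:
        cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return cost, path
        if cell in seen:
            continue
        seen.add(cell)
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                step = 1 + GRID[nr][nc]  # radar cells cost ten times more
                heapq.heappush(frontier, (cost + step, (nr, nc), path + [(nr, nc)]))
    return None

cost, path = cheapest_route((0, 0), (3, 3))
print(f"Route cost {cost}: {path}")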
Popular imagination might envisage AI running wars like a sci-fi general, but the reality is far more prosaic. Militaries rarely rely on a single system to make battlefield decisions. What they deploy instead are complex decision-support ecosystems with image-analysis models, simulation software, logistics optimisers, radar algorithms, and human analysts working together. LLMs plug into that ecosystem as ‘cognitive middleware’ to summarise data, coordinate workflows, and help humans think faster. For example, Claude is being integrated into broader battle-management ecosystems like Palantir’s Maven Smart System.
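A crude way to picture that ‘cognitive middleware’ role in code: the model does not fly anything; it sits between specialist systems, decides which one a request belongs to, and hands back a digest. The tool names, canned outputs, and keyword dispatch below are a hypothetical skeleton, not any real battle-management interface such as Maven.

# Hypothetical specialist systems the LLM can route requests to.
def image_classifier(query): return "3 structures match 'bunker' profile"
def simulator(query): return "success estimate: 72% over 10,000 runs"
def logistics_optimiser(query): return "route found avoiding 2 radar zones"

TOOLS = {
    "identify": image_classifier,
    "forecast": simulator,
    "route": logistics_optimiser,
}

def cognitive_middleware(request):
    # The middleware role: parse a human request, dispatch it to the
    # right specialist system, and return a digest. A real system would
    # use the model itself for this routing; keyword matching stands in.
    for keyword, tool in TOOLS.items():
        if keyword in request.lower():
            return f"[{tool.__name__}] {tool(request)}"
    return "No tool matched; escalating to a human analyst."

print(cognitive_middleware("Forecast strike outcomes for target A"))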
The direction of travel is unmistakable: We have entered the era of the AI Copilot for War. For centuries, the decisive advantage in war came from better artillery, faster aircraft, or more accurate missiles. Now, the race may increasingly be about who can think faster and more accurately with machines.
The dispute between Anthropic and the DoW is only a precursor to a much larger philosophical question: not about whether machines can help us fight, but about how much of our morality we are willing to outsource to an algorithm. The safeguards exist. Retrieval-Augmented Generation (RAG) systems are meant to contain hallucinations by grounding the model in specific data provided by military sensors rather than letting it dream up facts. Humans, too, remain in the loop, at least for now: AI suggests the target, but a human colonel still authorises the trigger. Yet the most dangerous part of a ‘hallucinating’ AI isn’t that it makes a mistake; it is that no human may have the time to stop it. Because, in the wars of the future, the most powerful weapon may not be the missile or the drone, but the algorithm that decides where they go.
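For the technically curious, here is a bare-bones sketch of the RAG pattern mentioned above: the prompt instructs the model to answer only from retrieved ‘sensor’ documents. The reports, the naive word-overlap retriever, and the prompt wording are all invented for illustration; real pipelines use far more sophisticated retrieval.

SENSOR_REPORTS = [
    "Satellite pass 0413: two vehicles at grid N-7, thermal signature active.",
    "SIGINT intercept 0415: radio traffic ceased at grid N-7.",
]

def retrieve(question, documents, k=2):
    # Naive retrieval: rank documents by word overlap with the question.
    words = set(question.lower().split())
    ranked = sorted(documents, key=lambda d: -len(words & set(d.lower().split())))
    return ranked[:k]

def grounded_prompt(question):
    # RAG's core move: the model is told to answer ONLY from retrieved
    # context, which limits (but does not eliminate) hallucination.
    context = "\n".join(retrieve(question, SENSOR_REPORTS))
    return (
        "Answer using ONLY the sensor reports below. "
        "If they do not contain the answer, say so.\n\n"
        f"Reports:\n{context}\n\nQuestion: {question}"
    )

print(grounded_prompt("What is happening at grid N-7?"))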
Bindra is founder of AI & Beyond
Disclaimer: Views expressed above are the author's own.

