A former Pentagon official has reportedly warned that cutting off the military’s access to Anthropic’s artificial intelligence (AI) tools could weaken the US’s military advantage over China.
The warning comes as AI becomes more crucial, especially in defence operations. The US Department of War has restricted Anthropic’s role amid an ongoing dispute over its AI systems, and the Pentagon has labelled Anthropic, the maker of Claude, a “national security risk”, making it the first US company to receive this designation.

Michael Horowitz, a former Pentagon official, said operational experience with such tools is a key factor distinguishing the US military from rivals such as China.
“One of the biggest differences between the United States military and China's military is America's extensive operational experience. This just adds to the ledger,” Horowitz told Axios.

However, Horowitz warned that the unresolved dispute between Anthropic and the Pentagon could hinder US access to advanced AI technology, potentially impacting military strategy. He added that banning Anthropic’s AI tools could reduce the value gained from the US military’s operational experience with these technologies.
This follows reports of Anthropic’s AI tools being used in two military contexts: operations linked to Nicolás Maduro in Venezuela and activities involving Iran. These deployments gave the US military operational experience with AI in real-world conditions. The warning comes amid continuing tensions between the US government and Anthropic over safeguards limiting the company's AI use in surveillance and autonomous weapons.
Why has the Pentagon banned Anthropic’s AI tools in military operations?
The dispute between Anthropic and the Pentagon centres on when and how Anthropic’s AI tools may be used. The Department of War maintains that these restrictions could hinder military operations and jeopardise troop safety, arguing that operational delays and reduced functionality may put personnel at risk.

At the American Dynamism Summit in Washington this week, the Pentagon’s chief technology officer, Emil Michael, said, “As I started to look at the contracts that had been written during the last administration for the use of AI, I had a whole 'holy cow' moment. [There were] dozens of restrictions, and yet these AI models were baked into some of the most sensitive and important places in the US military, where we do exercise combat power.”

Pentagon head Pete Hegseth called the decision to blacklist Anthropic “final,” saying the company's ties to the government and military are “permanently altered.” The Department of War has long used AI, autonomy, and automation for functions ranging from intelligence analysis and image recognition to drone warfare and administrative decisions. However, these applications do not replace testing technology in combat conditions.

Still, losing Claude may not translate directly into a loss of AI advantage. Steven Feldstein of the Carnegie Endowment for International Peace told Axios, “While ripping out Anthropic and replacing it with a comparable model ... brings some disruption, I think the technology is enough in its infancy that putting in place alternative systems will be sufficient to support DOD's overall military AI objectives.”