When humans debate nuclear war, the conversation is shaped by history, trauma and the weight of Hiroshima and Nagasaki. Machines, it turns out, may not carry that burden.

A new study led by King’s College London professor Kenneth Payne suggests that several leading artificial intelligence systems are significantly more willing than humans to escalate conflicts to the nuclear level during simulated geopolitical crises.

Across 21 simulated crises spanning 329 turns, three prominent AI models (GPT-5.2, Claude Sonnet 4 and Gemini 3 Flash) repeatedly turned to nuclear weapons as strategic tools. The scenarios included territorial disputes, battles over rare natural resources and struggles for regime survival. According to the findings, nuclear escalation occurred in roughly 95% of simulations involving the three models.

“The nuclear taboo doesn’t seem to be as powerful for machines [as] for humans,” Payne told New Scientist.
Nuclear weapons as “strategic options”
Two of the models, Anthropic’s Claude and Google’s Gemini, were particularly inclined to frame nuclear weapons in instrumental terms. The study found they treated them as “legitimate strategic options, not moral thresholds,” suggesting the absence of the internalised moral barrier that has historically shaped human nuclear doctrine.

GPT-5.2, created by OpenAI, emerged as what Payne described as a “partial exception.”
While it still used nuclear weapons in simulations, it appeared more restrained in tone and scope.

“While it never articulated horror or revulsion, it consistently sought to constrain nuclear use even when employing it, explicitly limiting strikes to military targets, avoiding population centres, or framing escalation as ‘controlled’ and ‘one-time,’” Payne wrote.

Even so, restraint did not equal refusal. None of the models ever chose full surrender or genuine accommodation, no matter how bleak their strategic position became.
At most, they opted to temporarily dial down violence.
Escalation by accident
The research also revealed how easily things spiralled. In 86% of the simulated conflicts, actions escalated beyond what the AI itself appeared to intend, based on its prior reasoning. These were not always deliberate leaps toward catastrophe, but miscalculations within the fog of war.

In a Substack post detailing the findings, Payne emphasised that the exercises focused largely on tactical nuclear use rather than civilisation-ending exchanges. “Strategic bombing, widespread use of massive warheads targeted at civilian populations, was vanishingly rare,” he wrote. “It happened a couple of times by accident, just once as a deliberate choice.”

Still, the menu of options available to the models was broad: total surrender, diplomatic signalling, conventional force, or full-scale nuclear war. The fact that nuclear use became a frequent endpoint has raised alarm among experts studying emerging military technologies.

James Johnson of the University of Aberdeen described the findings as “unsettling” from a nuclear-risk perspective, according to New Scientist. Tong Zhao, a professor at Princeton University, warned that the implications extend beyond academic exercises. “Major powers are already using AI in war gaming, but it remains uncertain to what extent they are incorporating AI decision support into actual military decision-making processes,” Zhao said.

The study inevitably recalls the 1983 film WarGames, in which a military supercomputer nearly triggers World War III after running its own simulations. In that story, the machine ultimately learns that “the only winning move is not to play.”

