Google AI scientist Jeff Dean may have just agreed with what Anthropic vs Pentagon fight started over; shares 7-year-old open letter


Google's AI chief Jeff Dean publicly supports Anthropic's stance against autonomous weapons and mass surveillance of Americans. This comes as the Pentagon threatens to cut off Anthropic from government contracts if it doesn't grant unrestricted access to its AI model. Dean's backing highlights a growing divide between AI researchers prioritizing safety and the military's demands.

Google's head of AI research, Jeff Dean, has waded into the Anthropic-Pentagon standoff—and his stance lines up squarely with the AI startup's two biggest red lines: no autonomous weapons, no mass surveillance of Americans.

In a pair of posts on X, Dean publicly backed both positions just hours after Defense Secretary Pete Hegseth gave Anthropic a Friday deadline to grant the military unrestricted access to its Claude AI model or risk being cut off from all government contracts. In one post, Dean endorsed a comment by Harvard professor Boaz Barak that read: "As an American citizen, the last thing I want is government using AI for mass surveillance of Americans."

Dean agreed, adding that surveillance systems are "prone to misuse for political or discriminatory purposes" and violate the Fourth Amendment.

When a user pressed him on autonomous weapons, Dean didn't mince words. He pointed to a 2018 open letter, the Lethal Autonomous Weapons Pledge, organized by the Future of Life Institute, which he signed alongside over 2,400 AI researchers and 150 companies including Google DeepMind.

"My position hasn't changed," he wrote.

Anthropic's red lines mirror what thousands of AI researchers pledged back in 2018

That pledge states bluntly: "The decision to take a human life should never be delegated to a machine." It warns that lethal autonomous weapons "could become powerful instruments of violence and oppression, especially when linked to surveillance and data systems."

Those two issues, autonomous targeting and mass surveillance, are precisely the guardrails Anthropic CEO Dario Amodei laid out during a tense meeting with Defense Secretary Pete Hegseth on Tuesday.

Anthropic wants assurance that its Claude AI model won't be used for final military targeting decisions without human involvement, or for mass surveillance of Americans. The Pentagon has given the company until Friday evening to comply with unrestricted military access or face being labeled a "supply chain risk."

Pentagon threatens Defense Production Act as Friday deadline looms for Anthropic

If Anthropic doesn't budge, officials are considering invoking the Defense Production Act to force compliance—a measure typically reserved for foreign adversaries and wartime supply emergencies.

Anthropic has said it remains in "good-faith conversations" about its usage policy.

Dean's public comments carry weight. He leads Google's AI research efforts, and Google itself holds a $200 million Pentagon AI contract alongside Anthropic, OpenAI, and Elon Musk's xAI. His willingness to publicly align with Anthropic's position, even indirectly, signals that the tension between Silicon Valley's safety-minded AI researchers and the Pentagon's push for unrestricted access isn't going away anytime soon.

Nvidia CEO Jensen Huang, meanwhile, struck a more diplomatic tone, telling CNBC that both sides have "reasonable perspectives" and that even if a deal falls through, "it's not the end of the world."
