Is there still a way for Anthropic to work with the US government? FCC chief Brendan Carr answers


Artificial intelligence company Anthropic has been banned by the US government. Claude-maker Anthropic was blacklisted on Friday, February 28, after tense negotiations in which Anthropic sought specific restrictions on the use of its AI technology by the Department of Defense (DoD), which the agency did not agree to.

The startup reportedly asked the Pentagon for assurances that its technology would not be used for fully autonomous weapons or domestic mass surveillance of Americans. The DoD wanted Anthropic to agree to let the military use its models across all lawful use cases. Talks ended last week, and Anthropic CEO Dario Amodei said the company “cannot in good conscience” allow the use of its models under these conditions.

Speaking to CNBC, Federal Communications Commission Chairman Brendan Carr said that Anthropic “made a mistake” in its dealings with the DoD.

“I think it [Anthropic] probably made a mistake. There’s obviously rules of the road that are in place that are going to apply to every technology that the Department of War contracts with,” he said. When asked specifically whether the door was still open for Anthropic to work with the American government, Carr said the company should “try to correct course as best they can.” He added that the company was given many chances to improve but squandered them all.

“They were given lots of off ramps ... given lots of opportunities to find a great landing spot, and they chose not to do it and that’s a mistake for them,” Carr said.

Anthropic's statement on being blacklisted by the US government

Just an hour after the Pentagon chief announced the ban on Anthropic, the AI company said it was “saddened” by the move to blacklist it, saying the designation “would both be legally unsound and set a dangerous precedent for any American company that negotiates with the government.” The company published a long blog post in response, titled “Statement on the comments from Secretary of War Pete Hegseth.” Here's the post:

Earlier today, Secretary of War Pete Hegseth shared on X that he is directing the Department of War to designate Anthropic a supply chain risk. This action follows months of negotiations that reached an impasse over two exceptions we requested to the lawful use of our AI model, Claude: the mass domestic surveillance of Americans and fully autonomous weapons.

We have not yet received direct communication from the Department of War or the White House on the status of our negotiations.

We have tried in good faith to reach an agreement with the Department of War, making clear that we support all lawful uses of AI for national security aside from the two narrow exceptions above. To the best of our knowledge, these exceptions have not affected a single government mission to date.

We held to our exceptions for two reasons. First, we do not believe that today’s frontier AI models are reliable enough to be used in fully autonomous weapons.

Allowing current models to be used in this way would endanger America’s warfighters and civilians. Second, we believe that mass domestic surveillance of Americans constitutes a violation of fundamental rights.

Designating Anthropic as a supply chain risk would be an unprecedented action—one historically reserved for US adversaries, never before publicly applied to an American company. We are deeply saddened by these developments.

As the first frontier AI company to deploy models in the US government’s classified networks, Anthropic has supported American warfighters since June 2024 and has every intention of continuing to do so.

We believe this designation would both be legally unsound and set a dangerous precedent for any American company that negotiates with the government.

No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons.

We will challenge any supply chain risk designation in court.

What this means for our customers

Secretary Hegseth has implied this designation would restrict anyone who does business with the military from doing business with Anthropic. The Secretary does not have the statutory authority to back up this statement. Legally, a supply chain risk designation under 10 USC 3252 can only extend to the use of Claude as part of Department of War contracts—it cannot affect how contractors use Claude to serve other customers.

In practice, this means:

If you are an individual customer or hold a commercial contract with Anthropic, your access to Claude—through our API, claude.ai, or any of our products—is completely unaffected.

If you are a Department of War contractor, this designation—if formally adopted—would only affect your use of Claude on Department of War contract work. Your use for any other purpose is unaffected.

Our sales and support teams are standing by to answer any questions you may have.

We are deeply grateful to our users, and to the industry peers, policymakers, veterans, and members of the public who have voiced their support in recent days. Thank you.

Above all else, our priorities are to protect our customers from any disruption caused by these extraordinary events and to work with the Department of War to ensure a smooth transition—for them, for our troops, and for American military operations.
