Report claims Pentagon got first-ever access to OpenAI products via Microsoft, when ...


OpenAI’s models were reportedly accessible to the US Department of War well before it signed an official deal for them. A recent report claims that the Pentagon gained access to OpenAI’s technology through Microsoft’s Azure OpenAI service in 2023, when the department was still known as the Department of Defense.

What makes this more concerning is that, at the time, the ChatGPT maker’s usage policy barred the military from using its AI models. According to a Wired report, some OpenAI employees discovered that the Pentagon began experimenting with Azure OpenAI, a version of OpenAI’s models available on Microsoft’s cloud platform, well before the department signed a deal with the company last week. This was possible because Microsoft, which has long held contracts with the Pentagon and is OpenAI’s largest investor, holds broad rights to commercialise the startup’s technology.

The report comes as OpenAI CEO Sam Altman faces employee criticism over the company’s recent deal with the US military. The agreement was signed after a roughly $200 million Pentagon contract with Anthropic collapsed, prompting some staff to ask Altman for more details about the arrangement. Altman later said in a social media post that the situation looked "sloppy." Sources told Wired that the issue had already created confusion inside OpenAI in 2023.

Some employees recalled seeing Pentagon officials walking through the company’s San Francisco offices, while others questioned whether OpenAI’s restrictions applied to Microsoft’s Azure OpenAI products. Meanwhile, spokespersons for OpenAI and Microsoft said that Azure OpenAI services were never subject to OpenAI’s usage policies.

What Microsoft and OpenAI said about US military getting access to banned AI tools

In a statement to Wired, Microsoft spokesperson Frank Shaw said that the Azure OpenAI service became available to the US government in 2023 under Microsoft’s own terms of service. “Microsoft has a product called the Azure OpenAI Service that became available to the US Government in 2023 and is subject to Microsoft's terms of service,” he said. The company declined to specify when the service was first made available to the US Department of Defense, but noted that it was not approved for “top secret” government workloads until 2025.

An OpenAI spokesperson said the company believes it is important to keep participating in discussions about the use of AI in national security. “AI is already playing a significant role in national security and we believe it’s important to have a seat at the table to help ensure it’s deployed safely and responsibly,” spokesperson Liz Bourgeois told Wired. She added that the company had informed employees about the work and created channels for them to raise questions. “We've been transparent with our employees as we’ve approached this work, providing regular updates and dedicated channels where teams can ask questions and engage directly with our national security team,” Bourgeois said.

OpenAI’s stance on working with the military has also shifted over time. In January 2024, OpenAI employees learned of the company’s decision to remove its general prohibition on military use of its technology not through internal communications, but via a news article. In December 2024, OpenAI announced a partnership with Anduril covering unclassified national security work. OpenAI declined Palantir’s approach to join its "FedStart" program, citing risk concerns, though it now works with Palantir in other capacities.
This contrasted with Anthropic, which signed a deal with Palantir allowing its AI to be used for classified military work.

The latest Pentagon deal has also created internal divisions. Some employees questioned whether the company's models were reliable enough for battlefield use, while others felt the Anduril partnership demonstrated a responsible approach. A current OpenAI researcher told Wired: "OpenAI's approach thus far has been 'measure twice, cut once' when it comes to broad classified deployments. Employees are engaged on the question of what approach to national security is in line with the mission."

Outside observers raised concerns about the scope of the agreement. Charlie Bullock of the Institute for Law and AI noted that it may have enabled forms of legal surveillance, such as the purchase and analysis of Americans' data. OpenAI later amended the terms. Researcher Noam Brown acknowledged: "Over the weekend, it became clear that the original language in the OpenAI/DoW agreement left legitimate questions unanswered, especially around some novel ways that AI could potentially enable legal surveillance."

Former OpenAI geopolitics head Sarah Shoker also wrote: "The biggest losers in all of this are everyday people and civilians in conflict zones. Our ability to understand the effects of military AI in war is and will be severely hindered due to layers of opacity caused by technical design and policy. It's black boxes all the way down."

At a recent all-hands, Altman told employees that OpenAI does not control what the defense department does with its AI and expressed interest in selling models to NATO.
