OpenAI launches new open-source AI models gpt-oss-120b and gpt-oss-20b; CEO Sam Altman says ‘this release will…’


OpenAI has rolled out two new open-weight AI models, gpt-oss-120b and gpt-oss-20b, to help developers run advanced AI systems at lower costs. For those unaware, an open-weight model is one whose trained weights (parameters) are publicly released by the company, allowing anyone – developers, researchers – to download, run, modify and fine-tune the model on their own infrastructure.

Announcing the new models in an X post, CEO Sam Altman said: “gpt-oss is out! We made an open model that performs at the level of o4-mini and runs on a high-end laptop (WTF!!) (and a smaller one that runs on a phone). super proud of the team; big triumph of technology.”

In another post, Altman wrote: “We believe in individual empowerment. Although we believe most people will want to use a convenient service like ChatGPT, people should be able to directly control and modify their own AI when they need to, and the privacy benefits are obvious.”

He added: “we are quite hopeful that this release will enable new kinds of research and the creation of new kinds of products. We expect a meaningful uptick in the rate of innovation in our field, and for many more people to do important work than were able to before.”

OpenAI’s gpt-oss-120b and gpt-oss-20b models are available under the Apache 2.0 license. The larger model, gpt-oss-120b, is claimed to match the performance of OpenAI’s proprietary o4-mini model, while the smaller gpt-oss-20b is designed for devices with limited hardware. Both models are optimized for reasoning tasks and can be used to build chatbots, coding assistants, and other AI-powered tools.

OpenAI’s gpt-oss-120b and gpt-oss-20b: Details

OpenAI says the gpt-oss-120b model can run on a single 80GB GPU, while gpt-oss-20b is designed for devices with just 16GB of memory. Both models are capable of reasoning, function calling, and tool use, and can adjust their reasoning effort depending on the task to balance speed and performance.

The company said it has conducted safety training and evaluations for these models using its internal safety standards. An adversarially fine-tuned version of gpt-oss-120b was also tested to ensure it meets OpenAI’s Preparedness Framework for the safe deployment of AI systems, it added.

OpenAI says the models were trained on large text datasets focusing on coding, STEM subjects, and general knowledge. The company worked with early partners such as AI Sweden and Orange to test the models for tasks like secure on-premise deployments.

Both models use a Transformer architecture with advanced techniques for memory efficiency. The larger model has 117 billion parameters, while the smaller has 21 billion.
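The hardware figures above suggest a simple decision rule for deployments. Below is a minimal sketch of that rule as a helper function; the function name and the selection logic are illustrative assumptions, not part of any OpenAI tooling, and only the 80GB and 16GB thresholds come from the reported requirements.

```python
def pick_gpt_oss_model(available_memory_gb: float) -> str:
    """Pick the largest gpt-oss model that fits in the given memory.

    Thresholds follow OpenAI's stated requirements: gpt-oss-120b
    targets a single 80GB GPU, gpt-oss-20b targets ~16GB devices.
    (Helper name and logic are illustrative, not an OpenAI API.)
    """
    if available_memory_gb >= 80:
        return "gpt-oss-120b"
    if available_memory_gb >= 16:
        return "gpt-oss-20b"
    raise ValueError("Neither gpt-oss model fits in the available memory")

# Example: a datacenter GPU vs. a high-end laptop
print(pick_gpt_oss_model(96))  # gpt-oss-120b
print(pick_gpt_oss_model(24))  # gpt-oss-20b
```

In practice, actual memory use also depends on quantization and context length, so treat the thresholds as a starting point rather than a guarantee.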
