Demis Hassabis, the CEO of Google DeepMind, believes that a major obstacle preventing AI from reaching true Artificial General Intelligence (AGI) is a lack of consistency. While current models can perform incredibly complex tasks, they can also make simple, trivial errors that a human could easily avoid.
The Problem with "Jagged" Intelligence
In a recent "Google for Developers" podcast episode, Hassabis explained that while advanced models like Google's Gemini, when enhanced with techniques like DeepThink, can win gold medals in prestigious math competitions, they can still struggle with basic high school math problems. He described this as "uneven" or "jagged" intelligence. The term, which Google CEO Sundar Pichai has also used, highlights how these systems can be highly skilled in some areas while being surprisingly weak in others.

Hassabis said that simply giving these models more data and computing power won't solve the problem. To achieve AGI, he argued, we need new capabilities in reasoning, planning, and memory. He also stressed the need for better testing and "new, harder benchmarks" to accurately assess what these models can and cannot do.
The Big Race to AGI
Google and other tech giants like OpenAI are all aiming for AGI, the point at which AI can reason and perform like a human. However, current AI systems still face significant issues, including hallucinations, misinformation, and basic errors.

OpenAI CEO Sam Altman holds a similar view. Before the launch of GPT-5, he told reporters that while the model was a big step forward, it still wasn't true AGI. One key missing element, Altman said, is the ability for models to learn on their own, continuously improving as they are deployed.

Both Hassabis and Altman agree that, however impressive, today's AI models are not yet at the AGI threshold. The next big leaps in AI will likely come from solving these fundamental issues of consistency and independent learning, not just from scaling up what we already have.