The rivalry between tech titans Google and Nvidia is said to be heating up as the focus of the artificial intelligence (AI) boom shifts from training AI models to actually running them to deliver answers faster than ever.
While Nvidia CEO Jensen Huang has claimed his chips (GPUs) are more versatile than Google’s, Google is preparing a major counter-move.
At the Google Cloud Next conference this week, the search giant is expected to double down on its custom-made AI chips, known as Tensor Processing Units (TPUs), to meet a massive surge in demand, according to Bloomberg.
Training vs. inference is the new battleground
In the world of AI, there are two main phases: training (teaching a model like ChatGPT to learn) and inference (the AI actually answering your questions). While Nvidia's chips are currently the "gold standard" for training, Google believes the future lies in specialized chips built specifically for inference. "It now becomes sensible to specialize chips more for training or more for inference workloads," said Google Chief Scientist Jeff Dean.

Why it matters:
- Speed: Specialized inference chips can make chatbots and AI agents respond almost instantly.
- Scale: As more people use AI daily, companies need cheaper and more efficient ways to run these models at scale.
- Customization: Unlike rivals who must buy off-the-shelf chips, Google designs its own, allowing its hardware and software teams to work hand in hand.

Jensen Huang vs. Demis Hassabis
The competition has sparked a war of words between the two companies' leaders. Nvidia's Jensen Huang recently argued that his GPUs are superior because they can handle "a whole bunch of applications" that specialized TPUs simply cannot.
Essentially, he argues that Nvidia chips are the "Swiss Army Knife" of the tech world.

However, Google DeepMind CEO Demis Hassabis sees it differently. He noted that the world's leading AI labs are increasingly desperate to get their hands on Google's hardware. "A lot of people would like to run on both," Hassabis said, highlighting that interest in TPUs has reached an all-time high.

[Image: comparison of Nvidia GPU and Google TPU architectures]

Google's "Infrastructure Advantage"
Google has a decade-long head start in designing its own chips, a feat even OpenAI is only just beginning to attempt.
Analysts say this gives Google a "home-field advantage" as AI agents (programs that can perform complex tasks on a user's behalf) become the next big thing. "The battleground is shifting towards inference," said Chirag Dekate, an analyst at Gartner. He noted that Google's Gemini model is already among the fastest at complex reasoning, largely thanks to the infrastructure Google has built under the hood.

What's Next?
While Nvidia recently spent a reported $20 billion to bolster its own inference technology, Google's massive resources and firsthand experience with its own AI models make it a formidable threat. As the "AI spending boom" moves away from just building models and toward actually running them for millions of users, the choice of chip, Nvidia or Google, could determine which companies survive the next phase of the technological revolution.