Google, Mark Zuckerberg’s Meta may bring more ‘trouble’ for Nvidia after $250 billion ‘strike’

Google and Mark Zuckerberg’s Meta are reportedly joining forces on a secret software initiative that may further dent Nvidia’s dominance in artificial intelligence (AI). This comes days after a report claimed that Meta Platforms, one of the chipmaker’s biggest customers, is in advanced talks to spend billions on Google’s competing AI chips, a development that wiped roughly $250 billion from Nvidia’s market value.

According to a report by news agency Reuters, Google and Meta are said to be working on an initiative, known internally as “TorchTPU,” to make Google’s custom AI chips – Tensor Processing Units (TPUs) – compatible with PyTorch. PyTorch is the world’s most popular AI software framework and is currently the primary tool developers use to build and train AI models.

Google’s $250 billion ‘strike’ against Nvidia may not be the last

Nvidia has maintained a grip on the AI sector, thanks to its proprietary CUDA software layer that ensures PyTorch runs most efficiently on Nvidia GPUs.

For competitors, this has been a “software bottleneck”: even if they build a faster chip, developers are unwilling to bear the massive cost and time required to rewrite their code for non-Nvidia hardware, the report said. Google, for its part, optimises its TPUs for its internal framework, JAX, which leaves external developers frustrated by the “switching costs” required to move away from Nvidia.

TorchTPU aims to eliminate this barrier by making Google hardware a “plug-and-play” alternative within the PyTorch ecosystem.

Meta wants to diversify its own infrastructure and is reportedly working closely with Google on the project to that end. By helping Google optimise TPUs for PyTorch, Meta would be able to reduce its near-total reliance on Nvidia’s expensive and supply-constrained H100 and Blackwell chips. It would also help Meta lower “inference costs” (the cost of running AI models) by creating competitive bidding between hardware providers, while giving the company access to Google’s TPU manufacturing capacity to power its own Llama models.

Google, meanwhile, has shifted its business model to offer its TPUs to external cloud customers, and is said to stand to gain if the project triggers a mass migration of developers seeking cheaper and more readily available hardware than Nvidia’s premium-priced GPUs.
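To see why a “plug-and-play” backend matters, here is a minimal, illustrative PyTorch sketch (assuming PyTorch is installed). Code written this way selects a device at runtime and never hard-codes the hardware, so the same script runs on an Nvidia GPU, a CPU, or, in principle, a TPU backend such as the one TorchTPU reportedly aims to provide (today, TPUs are reached through the separate torch_xla package, not the `"tpu"` string shown in the comment, which is purely hypothetical).

```python
import torch

def pick_device() -> torch.device:
    # Prefer an Nvidia GPU if one is present; otherwise fall back to CPU.
    # A plug-and-play TPU backend would slot into this preference list
    # (e.g. a hypothetical torch.device("tpu")) with no other code changes.
    if torch.cuda.is_available():
        return torch.device("cuda")
    return torch.device("cpu")

device = pick_device()
model = torch.nn.Linear(4, 2).to(device)   # model weights live on the chosen device
x = torch.randn(8, 4, device=device)       # input tensor created on the same device
y = model(x)                               # forward pass runs on whichever backend was picked
print(y.shape)                             # torch.Size([8, 2])
```

The “switching cost” the article describes arises when code is *not* written this way – for example, when it calls CUDA-specific APIs directly – because every such call must be found and rewritten before the model can run on non-Nvidia hardware.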
