Google has unveiled an AI supercomputer that it says is faster and more efficient than comparable Nvidia systems.
Google has been using its Tensor Processing Unit (TPU) chip for AI purposes since 2016, with a focus on internal use.
The new AI supercomputer features Google’s TPU and can process large amounts of data faster than competing Nvidia systems.
Google recently published details of one of its AI supercomputers, saying it outperforms competing Nvidia systems on both speed and power efficiency.
Although Nvidia holds more than 90% of the market for AI model training and deployment, Google has been designing and using its own Tensor Processing Unit (TPU) chips since 2016. Google's TPU-based supercomputer, called TPU v4, links more than 4,000 TPUs with custom components to train and run AI models, the kind of system used to train large language models. According to Google's researchers, TPU v4 is 1.2x-1.7x faster and uses 1.3x-1.9x less power than a comparable system built on Nvidia's A100 chips.
However, Nvidia CEO Jensen Huang said that the H100, Nvidia's latest AI chip, performs significantly better than the previous generation in MLPerf, an industry-wide AI chip benchmark. Because the computing power that AI demands is costly, many industry players are working on new chips, components, or software techniques to reduce the amount of computing power required.