Following last year's launch of its fourth-generation TPU accelerator, Google announced at Google I/O 2022 that it has assembled eight Cloud TPU v4 Pods into its largest machine learning hub, emphasizing that the facility runs on roughly 90% carbon-free energy.
Google has previously stated that the fourth-generation TPU accelerator doubles the computing performance of the previous generation, and that a Pod composed of 4,096 fourth-generation TPU accelerators delivers more than 1 exaFLOPS of peak performance (i.e., more than 10^18 floating-point operations per second). The new hub, built from eight Cloud TPU v4 Pods, reaches an aggregate peak of roughly 9 exaFLOPS, making it the world's largest publicly available machine learning hub, and it will be opened to the public through Google Cloud.
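The figures above can be sanity-checked with simple arithmetic. This is a hedged sketch: the chip count per Pod and the number of Pods come from the article, while the ~275 peak bfloat16 teraflops per TPU v4 chip is an assumed figure Google has cited elsewhere, not stated in this article.

```python
# Sanity-check the article's aggregate-performance figures.
CHIPS_PER_POD = 4096      # from the article
PODS = 8                  # from the article
TFLOPS_PER_CHIP = 275     # assumption: per-chip bf16 peak cited by Google

# 1 exaFLOPS = 1e6 teraflops
pod_exaflops = CHIPS_PER_POD * TFLOPS_PER_CHIP / 1e6
cluster_exaflops = PODS * pod_exaflops

print(f"Per pod:  {pod_exaflops:.2f} exaFLOPS")   # just above 1, matching "more than 1 exaFLOPS"
print(f"Cluster:  {cluster_exaflops:.1f} exaFLOPS")
```

Under these assumptions, each Pod lands slightly above 1 exaFLOPS and the eight-Pod hub at roughly 9 exaFLOPS, consistent with the article's claims.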
Industry teams including Cohere, LG AI Research, Meta AI, and Salesforce Research have already used Google's machine learning hub. Through the TPU VM architecture they can set up interactive development environments and flexibly apply machine learning frameworks such as JAX, PyTorch, or TensorFlow, while the fast interconnects and optimized software stack deliver strong performance and scalability.
The PaLM (Pathways Language Model) that Google presented at Google I/O 2022 was trained across two TPU v4 Pods, enabling faster language translation and understanding.