Google's Tensor Processing Units (TPUs) were likewise developed specifically for neural networks: the chips, first presented in 2016, are intended to support and accelerate the use of an artificial neural network that has already been trained. “TPUs achieve this, for example, through lower precision compared to conventional CPUs or GPUs and through specialization in matrix operations,” says Gemmeke.
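To make the quoted trade-off concrete, here is a minimal sketch (not Google's TPU code, just an illustration of the principle) of how a float32 matrix multiplication can be approximated with low-precision 8-bit integer arithmetic, trading a small amount of accuracy for much cheaper hardware operations:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.uniform(-1, 1, (4, 8)).astype(np.float32)
b = rng.uniform(-1, 1, (8, 3)).astype(np.float32)

# Symmetric per-tensor quantization: map each tensor's value range
# onto the int8 range [-127, 127].
scale_a = np.abs(a).max() / 127.0
scale_b = np.abs(b).max() / 127.0
qa = np.round(a / scale_a).astype(np.int8)
qb = np.round(b / scale_b).astype(np.int8)

# Integer matrix multiply with a wide (int32) accumulator,
# then rescale the result back to floating point.
acc = qa.astype(np.int32) @ qb.astype(np.int32)
approx = acc.astype(np.float32) * (scale_a * scale_b)

# Compare against the exact float32 result.
exact = a @ b
max_err = np.abs(approx - exact).max()
print(max_err)
```

The error stays small for an already-trained network's inference pass, which is exactly the workload the article describes TPUs targeting.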
So-called Neuromorphic Processing Units (NPUs) go one step further toward artificial intelligence. They use entirely different instructions and architectures than graphics chips: “While GPUs usually work with matrices because they are specialized for image processing, NPUs map synaptic computing to a certain extent, a combination of memory and computing operations,” says Christoph Kutter, Professor of Polytronic Systems at the University of the Federal Armed Forces in Munich and Director of the Fraunhofer Institute for Electronic Microsystems and Solid State Technologies (EMFT). “As a result, NPUs beat conventional graphics chips by far.”
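The combination of memory and computing that Kutter describes can be illustrated with a toy model of an analog crossbar array, one common in-memory-computing approach (the numbers and variable names below are purely illustrative, not taken from any real NPU): weights are stored as conductances, inputs are applied as voltages, and each output line's current is already the synaptic weighted sum, so the multiply-accumulate happens where the data is stored rather than being shuttled to a separate arithmetic unit.

```python
import numpy as np

# Stored weights as cell conductances (siemens) -- the "memory" part.
G = np.array([[0.2, 0.5, 0.1],
              [0.4, 0.3, 0.7]])

# Input activations applied as voltages (volts).
V = np.array([1.0, 0.5, 2.0])

# Ohm's law per cell and Kirchhoff's current law per output line
# yield the weighted sums directly -- the "computing" part.
I = G @ V
print(I)  # currents: one synaptic weighted sum per output neuron
```

In real neuromorphic hardware the same physical structure both holds the weights and performs this summation, which is why such chips can avoid much of the memory traffic that dominates GPU workloads.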