It is as big as a tablet, but it is not one: it is a monster deep learning chip with over 1.2 trillion transistors

Deep learning algorithms don't get along well with general-purpose processors. Their high intrinsic parallelism fits much better with the architecture of graphics processors, which is why many data centers and artificial intelligence research labs entrust the processing effort to a more or less ambitious GPU cluster.

However, this is not the only option. It is also possible to deploy an infrastructure of chips linked by high-performance interconnects and designed specifically to exploit the intrinsic high parallelism of deep learning algorithms. One of the companies with solutions of this type is Intel: its Loihi neuromorphic chip is manufactured with a 14 nm photolithography process and incorporates 128 cores and just over 130,000 artificial neurons.

The Wafer Scale Engine chip developed by Cerebras incorporates 1.2 trillion transistors: 1.2 "billones" in Spanish long-scale usage, not the Anglo-Saxon billion

IBM also has its own neuromorphic processor, a chip its creators call TrueNorth. It integrates 4,096 cores, and several of them can be connected in a network to emulate, according to IBM, a system with a million neurons and 256 million synapses. Intel, IBM and NVIDIA are some of the big companies involved in developing hardware designed specifically for artificial intelligence, but they are not the only ones with a say in this area.

The Californian company Cerebras has developed a chip designed specifically for deep learning. The curious thing is that it looks very little like a conventional processor; it doesn't even resemble the Intel or IBM hardware I have briefly discussed in the previous paragraphs. As you can see in the cover photo of this article, it is much larger than a traditional chip. In fact, Cerebras uses an entire silicon wafer to produce each one. However, this is by no means the only characteristic that makes the Cerebras chip so unusual.


One big chip is better than many small ones, according to Cerebras

Here is another surprising fact about the Wafer Scale Engine (WSE) processor, as its creators call it: it integrates no fewer than 1.2 trillion transistors. That is 1.2 "billones" in the Spanish long-scale sense, not the Anglo-Saxon billion, so the number equates to a monstrous 1,200,000,000,000 transistors. It is a figure that is hard not to be surprised by, even considering the already huge transistor counts of chips we are all familiar with, such as the CPUs and GPUs in our computers.
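To put 1.2 trillion in perspective, a quick order-of-magnitude comparison helps. The reference counts below are rough, publicly cited figures for familiar chips of the same era (they are approximations used for illustration, not numbers from this article):

```python
# Order-of-magnitude comparison of the WSE transistor count against
# rough public figures for familiar chips (approximate, for illustration).
WSE_TRANSISTORS = 1_200_000_000_000  # 1.2 trillion, short scale

reference_chips = {
    "a typical desktop CPU": 10_000_000_000,   # on the order of 10 billion
    "an NVIDIA V100 GPU": 21_100_000_000,      # ~21.1 billion (public figure)
}

for name, count in reference_chips.items():
    ratio = WSE_TRANSISTORS / count
    print(f"The WSE has roughly {ratio:.0f}x the transistors of {name}")
```

Even against a large data-center GPU, the wafer-scale approach yields well over an order of magnitude more transistors on a single device.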


This huge number of transistors reflects the approach chosen by the engineers who designed the WSE chip, which is very different from the design strategy Intel and IBM have used in their own solutions. According to Cerebras, optimizing the execution of deep learning algorithms requires a chip endowed with very high intrinsic parallelism, expressed through the packing of an enormous number of cores. This is why the WSE chip incorporates a staggering 400,000 programmable cores.

[Image: WSE memory layout] This diagram clearly shows that Cerebras engineers have chosen to distribute the memory around the 400,000 cores of the WSE chip to minimize latency and increase its overall performance.

However, not the entire surface of the chip is dedicated to the processing cores, of course. Another subsystem that also takes up a significant share of the logic is memory. Placing it close to the cores reduces latency, increases performance and perceptibly lowers power consumption, again according to Cerebras. And it makes sense. The WSE chip integrates 18 GB of on-chip memory.

And, to conclude, two more figures that underline the ambition of this processor: the memory bandwidth is close to 9.6 petabytes per second, and the 400,000 cores communicate with the outside world through a link with a transfer speed of 100 petabits per second. These are monstrous figures, far beyond those handled by the processors in our computers. Of course, we must not lose sight of the fact that WSE chips are not general-purpose: their specialty is deep learning.
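A quick unit check makes these two bandwidth figures comparable. Note that the memory figure is quoted in petabytes (PB) while the external link is quoted in petabits (Pb), so the link must be divided by 8 before comparing (this is a sanity-check sketch assuming standard SI decimal prefixes):

```python
# Sanity-check of the two headline bandwidth figures, assuming SI prefixes:
# 9.6 PB/s of memory bandwidth vs. a 100 Pb/s (petabit) external link.
PETA = 10**15

memory_bw_bytes = 9.6 * PETA          # 9.6 petabytes per second
fabric_bw_bits = 100 * PETA           # 100 petabits per second
fabric_bw_bytes = fabric_bw_bits / 8  # convert bits to bytes: 12.5 PB/s

print(f"Memory bandwidth: {memory_bw_bytes / PETA} PB/s")
print(f"External link:    {fabric_bw_bytes / PETA} PB/s")
```

In other words, both figures sit in the same range, roughly 10 PB/s, which is consistent with the idea that the on-chip fabric and the memory system were sized together.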

Images | Cerebras

More information | Cerebras

The news "It is as big as a tablet, but it is not one: it is a monstrous chip for deep learning with more than 1.2 trillion transistors" was originally published in xiaomist.com by Juan Carlos López.


