
Packing 1.2 trillion transistors! American AI chip dark horse unveils the biggest chip in history

via: 博客园     time: 2019/8/20 9:52:48     reads: 438


Source: Cerebras Systems

According to several foreign media reports on August 19, the American AI chip startup Cerebras Systems has launched the largest chip ever built, called the Cerebras Wafer Scale Engine (WSE).

In chip history, Intel's first processor, the 4004, had only 2,300 transistors in 1971, while today's most advanced microprocessors contain around 32 billion. Samsung has also built a flash memory chip (an eUFS chip) with 2 trillion transistors, but it is not suited to AI computing.

The WSE, the record-breaking largest chip, was created specifically for AI computing.

Company data show that this 46,225-square-millimeter chip has 400,000 cores. These cores are connected through a fine-grained, all-hardware on-chip network providing an aggregate bandwidth of 100 petabytes per second. More cores, more local memory, and a low-latency, high-bandwidth fabric make an ideal architecture for accelerating AI workloads. The WSE is 56.7 times larger than the largest GPU and carries 18 GB of on-chip SRAM.
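As a rough back-of-the-envelope check of those numbers (assuming, hypothetically, that the 18 GB of SRAM is spread evenly across the 400,000 cores, which the article does not state):

```python
# Back-of-the-envelope: on-chip SRAM per WSE core, assuming
# (hypothetically) an even split of the 18 GB across all 400,000 cores.

TOTAL_SRAM_BYTES = 18 * 1024**3   # 18 GB of on-chip SRAM
NUM_CORES = 400_000               # cores on the WSE

per_core_bytes = TOTAL_SRAM_BYTES / NUM_CORES
per_core_kb = per_core_bytes / 1024

print(f"SRAM per core: {per_core_kb:.1f} KB")  # -> SRAM per core: 47.2 KB
```

Under that even-split assumption, each core gets on the order of tens of kilobytes of fast local memory, which fits the article's point about "more local memory" close to each core.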

In fact, most chips today are built by cutting a 12-inch silicon wafer into many separate dies and integrating them. Cerebras Systems instead fabricates its transistors as interconnected circuits across a single wafer, and that interconnect design lets all the transistors operate together at high speed, as one device.


Source: Cerebras Systems

Put plainly, this product is the star pupil of the computing world: in raw compute and memory bandwidth, everything else is simply not operating at the same level.

Such power comes from the 1.2 trillion transistors on the chip; by comparison, Intel's 4004 processor had only 2,300 transistors in 1971, a gap that shows how far Moore's Law has carried the industry. The chip's architecture and its interconnect and communication scheme are also very advanced, keeping those 1.2 trillion transistors tightly synchronized, with latencies down at the nanosecond level, so that in operation they behave like a single transistor.


Source: Twitter

In the field of artificial intelligence, chip size matters a great deal: a larger chip processes information faster and produces answers in less time. Reducing training time removes a major bottleneck to progress across the entire industry.

Of course, there is a reason chip makers don't usually build chips this large. Defects inevitably occur during manufacturing on a wafer, and a single impurity can kill a chip, sometimes taking out several neighboring dies. If only one chip is made from the entire wafer, the chance that it contains a defect is effectively 100%, and a defect would normally render it useless. Cerebras Systems's chip, however, is designed with enough redundancy that one defect, or even a few, cannot disable the whole chip.
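The yield argument can be made concrete with a standard Poisson defect model (a textbook approximation, not Cerebras's actual yield data; the defect density below is a purely illustrative value): the probability that a die of area A is defect-free is exp(-D·A) for defect density D.

```python
import math

# Poisson yield model: P(die is defect-free) = exp(-D * A),
# where D is defect density (defects/cm^2) and A is die area (cm^2).
# D = 0.1 defects/cm^2 is a hypothetical, illustrative value.
D = 0.1

def yield_rate(area_cm2: float, defect_density: float = D) -> float:
    """Probability that a die of the given area contains no defects."""
    return math.exp(-defect_density * area_cm2)

gpu_die = 8.15      # cm^2: roughly the largest GPU die (815 mm^2)
wafer_die = 462.25  # cm^2: a wafer-scale die like the WSE

print(f"GPU-sized die yield:   {yield_rate(gpu_die):.1%}")
print(f"Wafer-scale die yield: {yield_rate(wafer_die):.2e}")
```

Even at a modest defect density, a conventional GPU-sized die yields a reasonable fraction of the time, while a defect-free wafer-scale die is astronomically unlikely, which is exactly why the WSE must be architected so that a few defective regions can be routed around rather than killing the chip.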


Andrew Feldman (Source: Dean Takahashi)

Cerebras Systems CEO Andrew Feldman said in a statement:

These performance gains are achieved by accelerating every element of neural network training. A neural network is a multi-stage computational feedback loop: the faster inputs move through the loop, the faster the loop learns.

In terms of communication architecture, the WSE's on-chip processors relay traffic for one another, so its cluster-style communication fabric escapes the bandwidth and latency penalties that power-hungry traditional communication technology imposes. A two-dimensional mesh connects the WSE's 400,000 processors, giving the fabric low latency and high bandwidth, with aggregate bandwidth as high as 100 petabytes per second (10^17 bytes per second). Even with no additional software installed, this fabric supports global message passing, and each message is handled by the appropriate processor.
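A minimal sketch can illustrate the two-dimensional mesh described above (the grid dimensions and routing policy here are illustrative assumptions, not Cerebras's disclosed design): in a 2D mesh with dimension-ordered (XY) routing, a message travels along one axis and then the other, so the hop count between two cores is their Manhattan distance.

```python
# Illustrative sketch of 2D-mesh (XY) routing between cores: arrange
# ~400,000 cores in a hypothetical 633 x 633 grid and count message hops.

GRID = 633  # 633 * 633 = 400,689, close to the WSE's 400,000 cores

def core_coords(core_id: int) -> tuple[int, int]:
    """Map a linear core index to (row, col) in the mesh."""
    return divmod(core_id, GRID)

def hops(src: int, dst: int) -> int:
    """Hop count under dimension-ordered (XY) routing: Manhattan distance."""
    (r1, c1), (r2, c2) = core_coords(src), core_coords(dst)
    return abs(r1 - r2) + abs(c1 - c2)

# Worst case: opposite corners of the mesh.
print(hops(0, GRID * GRID - 1))  # -> 1264  (632 + 632 hops)
```

The point of the sketch is that even the worst-case path stays on-chip, crossing only short, wide hardware links, which is where the fabric's low latency and high aggregate bandwidth come from.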


Source: Cerebras Systems

For this product, volume manufacturing and heat dissipation may be the main challenges ahead. Even so, the WSE's debut is striking enough on its own merits.

Linley Gwennap, chief analyst at the Linley Group, and Jim McGregor, chief analyst and founder of Tirias Research, also commented on the chip in statements.


Source: Cerebras Systems

Cerebras Systems was founded in 2016. Since its inception it has kept a mysterious, low-key profile in the industry, focusing on chips for training in data centers, and it has earned recognition from CB Insights.

The founding team's background is also formidable. Co-founder and CEO Andrew Feldman previously founded the chip company SeaMicro, which AMD acquired for $334 million in 2012. After the acquisition, most of the SeaMicro team moved to AMD to continue their work, so when Feldman raised his banner again, many of his old colleagues chose to follow him; most of the other core team members left alongside him.

One figure worth noting is Gary Lauterbach. In the 1990s, when Sun was in the ascendant, Lauterbach was a senior chip designer there; he later joined SeaMicro to design low-power servers. The company thus started out with a deep bench of low-power chip design veterans, which for an ordinary startup would amount to winning at the starting line.

Then, in 2018, another heavyweight joined Cerebras Systems: Dhiraj Mallick, formerly vice president of architecture and chief technology officer of Intel's data center group, became vice president of engineering and business. During his tenure, Intel's data center revenue in the second quarter of 2018 grew by $1 billion year over year, reaching $10 billion in the first half of 2018 alone; he is regarded as both a technical and a business wizard. He too is an old colleague of Feldman's from SeaMicro and AMD. The company now has 194 employees.

Cerebras Systems still has a long way to go, but the direction is not hard to imagine: AI is driving a wave of innovation in computer architecture and chip packaging, and we can expect to witness the birth of more interesting, even unexpected, AI chips.


