Back in 1968, Intel founders Robert Noyce and Gordon Moore settled on the name "Intel," short for Integrated Electronics. The name, however, had first been registered by a hotel chain, and the two founders only adopted it after buying the rights.
Perhaps fate had already drawn an invisible thread between Intel and the wave of intelligence that would arrive 50 years later.
From data centers and autonomous driving to the Internet of Things, Intel is weaving a vast network of artificial intelligence (AI) software and hardware, aiming to stay unassailable in the new generation of the data revolution.
Global semiconductor revenue totaled $418.3 billion in 2019, down 11.9 percent from 2018, according to the latest report released on January 14 by U.S. market researcher Gartner.
At a time when the overall market is in the doldrums, Intel turned in a bright report card, not only returning to the top of the global semiconductor market after three years but also posting record fourth-quarter and full-year revenue.
Top 10 semiconductor manufacturers in 2019 (source: Gartner)
On January 24, 2020, Intel reported its fiscal 2019 earnings, with quarterly revenue topping the $20 billion mark and full-year revenue of nearly $72 billion. Both revenue and profit far exceeded analysts' estimates, driving Intel's intraday share price up nearly 7 percent.
Intel's stock price over the past five years
PC sales and the recovery in sales of corporate and cloud-computing data center server chips are the two central pillars of Intel's strong earnings. Its Data Center Group (DCG), the rising force in Intel's transition, posted its fifth consecutive year of revenue growth, reaching $23.481 billion.
On the conference call, Intel CEO Bob Swan told analysts that a new server chip with better performance on AI and machine learning tasks was key to Intel's strong results, and that customers continue to use Xeon processors as the foundation for AI-infused data center workloads.
Intel CEO Bob Swan
Swan said 2019 was the best year in Intel's history, in both results and outlook. Intel recorded $3.8 billion in AI-driven revenue in 2019; by 2024, the AI market opportunity is expected to reach $25 billion, with $10 billion of that in AI chips for data centers.
So how many chips, in both senses of the word, has Intel stockpiled so far in its layout of advanced AI chip architectures for the data center?
In 2015, Intel declared that data would reshape the future of computing, and even the world. That was also when the commercialization of AI began.
Over the following five years, Intel made a string of investments worth nearly $20 billion to enter the multi-architecture chip race, loudly signaling to the world its resolve to convert the data dividend.
Intel's transformation gains over the past year are laid out in its latest fiscal 2019 financial statements.
Intel revenue and net profit, 2015-2019
According to the report, Intel's Q4 revenue was US$20.209 billion, up 8% year on year, and net profit was US$6.905 billion, up 33%. Full-year 2019 revenue was US$71.965 billion, up 2%, and net profit was US$21.048 billion, essentially flat with 2018.
Intel expects revenue to reach $73.5 billion in 2020, which also exceeds analysts' expectations.
Intel's revenue mix across fiscal years 2015-2019
Apart from the Client Computing Group (CCG), the Data Center Group (DCG), Intel's second-largest revenue source, continues to grow as a share of total revenue.
Driven by strong demand from cloud service provider customers and the continued momentum of the high-performance second-generation Intel Xeon Scalable processors, DCG's Q4 revenue reached $7.2 billion, up 19% year on year, and full-year revenue reached $23.481 billion, up about 2%.
Intel DCG revenue and gross profit, 2015-2019
DCG offers workload-optimized platforms and related products designed for the cloud, enterprise, and communications-infrastructure market segments, and AI is a capability these customers increasingly value.
AI demand is driving data center computing from a single architecture toward heterogeneous systems. Among mainstream coprocessors, GPUs dominate AI training, CPUs and field-programmable gate arrays (FPGAs) dominate AI inference, and ASICs are the rising stars. To meet diverse workload requirements, Intel's data center chip business now spans scalar (CPU), vector (GPU), spatial (FPGA), and matrix (ASIC) architectures. These four computing architectures are also the four typical categories of AI chips.
Intel's multi-architecture AI chip layout for the data center
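The division of labor described above can be sketched as a toy lookup table; the mapping and helper below are purely illustrative assumptions for exposition, not any Intel API:

```python
# Illustrative only: the four compute archetypes named in the article,
# with example Intel products and the workloads they are typically matched to.
ARCHITECTURES = {
    "CPU":  {"style": "scalar",  "example": "Xeon",          "typical": "general AI inference"},
    "GPU":  {"style": "vector",  "example": "Ponte Vecchio", "typical": "AI training"},
    "FPGA": {"style": "spatial", "example": "Agilex",        "typical": "low-latency inference"},
    "ASIC": {"style": "matrix",  "example": "NNP-T / Gaudi", "typical": "dedicated training"},
}

def suggest(keyword: str) -> list:
    """Return the architectures whose typical workload mentions `keyword`."""
    return [name for name, info in ARCHITECTURES.items()
            if keyword.lower() in info["typical"].lower()]

print(suggest("inference"))  # ['CPU', 'FPGA']
print(suggest("training"))   # ['GPU', 'ASIC']
```

In practice the boundaries blur (GPUs also serve inference, FPGAs also accelerate training pipelines); the table only captures the rough positioning the article describes.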
Xeon: the "strongest" Scalable processor
Although most people do not think of the CPU as an AI chip, Intel Xeon remains the mainstream choice for AI inference worldwide. According to figures disclosed by Intel, Xeon handles 80-90% of the AI inference in the market.
In 2017, Intel built its first AI acceleration capabilities into the first-generation Xeon Scalable processors. The second generation launched in 2019, with AI inference performance 30 times that of the first. The third generation arrives in 2020, with AI training performance improved by another 60%.
AI looks like Xeon's biggest bargaining chip. Navin Shenoy, Intel executive vice president, pointed out during CES that Xeon is the only general-purpose CPU with built-in AI acceleration.
On the earnings call, Swan also said customer demand for the second-generation Intel Xeon Scalable processors is very strong. The AI-focused Cascade Lake series is Intel's fastest-growing processor, and its momentum should build further in the first half of 2020 with the launch of Cooper Lake, the third-generation Xeon Scalable processor.
Although Intel is king of the data center CPU market, with more than 90% share, it cannot afford to slacken.
The pursuers, led by AMD, IBM, and Arm, are closing in.
AMD was one of the best-performing semiconductor stocks of 2019, with its share price more than doubling. Its newly released second-generation 7nm EPYC Rome CPUs outperform Intel's 14nm CPUs in process, density, power, performance, and price.
Intel's DCG revenue rose 4% year on year in Q3 2019, while server CPU unit sales fell 6%, though the average selling price (ASP) rose 9%. Sequentially, DCG revenue grew 28%, unit sales 20%, and ASP 7%, and DCG's operating margin hit a record 49%.
Intel server CPU sales, year on year (YoY) and quarter on quarter (QoQ)
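The YoY and QoQ percentages in these reports are simple ratios; a minimal sketch of the arithmetic (the base figures in the example are hypothetical, chosen only to illustrate the calculation):

```python
def growth_pct(current: float, previous: float) -> float:
    """Percentage change from `previous` to `current`, as used for
    YoY (vs. same quarter last year) and QoQ (vs. prior quarter)."""
    return (current / previous - 1.0) * 100.0

# Hypothetical: a segment that booked $6.10B a year ago and $6.35B now
# would report roughly 4% YoY growth, like Intel's DCG in Q3 2019.
print(round(growth_pct(6.35, 6.10), 1))  # 4.1
```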
Unlike Intel, AMD saw server CPU unit shipments and revenue grow more than 50% sequentially on strong demand for its second-generation EPYC processors, its highest quarterly server CPU revenue since 2006.
Amazon, IBM, Microsoft, Google, Tencent, Twitter, and other giants have announced EPYC deployments in their data centers. AMD CEO Lisa Su predicts AMD will reach double-digit server CPU market share by mid-2020, roughly ten times EPYC's earlier share.
AMD server CPU market share over time
IBM is another competitor not to be ignored.
To reverse its decline in the data center, IBM has in recent years tried to win more of the large-scale data center market by building a hardware and software ecosystem around its POWER processors.
Google, for example, launched servers designed specifically for AI and high-performance computing (HPC) at the end of 2017, powered by IBM POWER9 processors. POWER servers are well established in HPC: IBM POWER9 CPUs power the top two machines in the latest global supercomputer rankings.
Shortly after completing its US$34 billion acquisition of Red Hat in 2019, IBM took a more radical step, announcing it would fully open-source the POWER instruction set to gain more ground in the mid- and high-end server market.
Google is already dabbling in POWER processors: on January 14, it announced it would bring IBM Power Systems to Google Cloud.
Top five supercomputers worldwide, November 2019 rankings
Arm servers, meanwhile, are trying to pry open a gap in the low- and mid-range market.
Before this, semiconductor giants including AMD, Samsung, Broadcom, NVIDIA, and Qualcomm all tried to develop Arm server processors, and all eventually failed.
But over the past two years, the Arm server CPU camp has welcomed both Amazon and Huawei. As cloud providers turn to in-house chips, Arm is seen as the preferred architecture for CPU autonomy, and there seems to be hope of carving off a small slice of the server market.
The nature of cloud computing companies, however, makes it difficult for them to sell self-developed server CPUs externally, and it is hard to say at what scale they really deploy them internally. These chips look more like insurance policies that give cloud providers extra bargaining power.
Overall, Intel's position as the server CPU leader remains unshaken, but AMD has become a destabilizing factor at the table. Whether Intel can hold its market share will hinge on its next step: the 10nm server CPU.
In 2010, Intel erased its Larrabee discrete graphics project from the company roadmap; not until last November did it unveil its first GPGPU for data centers. That chip, the Ponte Vecchio GPU, designed for HPC modeling and simulation as well as AI training, is scheduled to debut in Aurora, the U.S. Department of Energy's supercomputer.
Ponte Vecchio will draw on a range of Intel's advanced technologies, including its 7nm process, Foveros 3D and EMIB packaging, high-bandwidth memory, and the Compute Express Link (CXL) interconnect.
Intel is developing graphics cards on a unified architecture, Xe, which spans three microarchitectures: Xe LP, Xe HP, and Xe HPC, for low-, mid-, and high-performance workloads respectively. The Xe LP-based discrete PC graphics card DG1 was just shown at CES.
Since AMD acquired ATI in 2006, the GPU market has been a two-player stage: NVIDIA in first place and AMD in second.
NVIDIA has kept exploring new markets for GPU computing in various forms. It was first to push GPUs into the server market and first to reap the dividend of deep learning accelerated computing. With its strong CUDA ecosystem, NVIDIA straddles the AI and HPC fields and holds a clear advantage in AI training.
To strengthen its data center competitiveness, NVIDIA's proposed US$6.9 billion purchase of Mellanox is in the approval stage; it has been cleared by U.S. and European regulators and next faces review in China.
When AMD's CPU business was being beaten down by Intel in the early days, the company largely relied on its GPU business to stay afloat. It remains NVIDIA's most formidable GPU competitor.
In 2018, AMD launched the 7nm Vega-architecture Radeon Instinct MI60 and MI50 GPUs for AI and HPC, claiming the MI60 was the fastest double-precision accelerator, with performance up to 7.4 TFLOPS.
AMD has nonetheless struggled to compete in the AI and HPC GPU markets, and its main shortfall is software. Most GPU developers are used to CUDA, and NVIDIA does not open the CUDA platform to AMD GPUs, making it difficult for AMD to expand in the data center AI market.
Intel, too, has realized the need for unified software and is building oneAPI, a single programming model spanning its various computing architectures.
If Intel can pair its self-developed GPUs with its CPUs, it may build a competitive computing platform. Whether in the data center or the PC GPU market, the future may shape up as a three-way "RGB primary color" contest (AMD red, NVIDIA green, Intel blue).
FPGA products belong to Intel's Programmable Solutions Group (PSG) and, although not part of the DCG business, are an important part of its data center portfolio.
As the inventor of the FPGA, Xilinx has long been the global FPGA market leader. After Intel acquired Altera in 2015, Xilinx and Altera headed down two different forks in the road.
Xilinx still takes the FPGA as its main line, with programmability as the core idea for exploring innovation; Altera, reborn as Intel's PSG business unit, acts on one hand as a standalone accelerator and on the other combines with Intel Xeon processors to attack emerging markets such as AI, 5G, and autonomous driving.
Interestingly, since the second half of 2018 the two core FPGA players have frequently made matching moves.
Take acquisitions of startups: in July 2018, Intel's PSG unit announced a new member, eASIC, a U.S. structured-ASIC supplier. Five days later, China's AI chip unicorn DeePhi Tech announced it was being acquired by Xilinx.
And when trying to coax new roles out of the old FPGA architecture, both chose to launch innovative new brands.
Xilinx created a new brand, Versal, positioning it as an "adaptive compute acceleration platform" (ACAP) rather than a traditional FPGA.
Intel likewise released a new FPGA brand, Agilex. Like its data center GPGPU, Agilex integrates multiple Intel innovations: with 3D packaging, Compute Express Link cache-coherent acceleration, and other technologies, it achieves very high speed and flexibility.
The two have also taken turns unveiling the biggest FPGA. In August 2019, Xilinx released what it claimed was the world's largest FPGA, the Virtex UltraScale+ VU19P. Two months later, Intel's Stratix 10 GX 10M used EMIB technology to stitch two FPGA dies together logically and electrically, taking the title back.
In addition, over the past year Intel has frequently touted the performance and convenience of its software tools and libraries, emphasizing the value of oneAPI, its unified cross-architecture programming platform.
Recognizing the key role of high-quality software in building an ecosystem, Xilinx launched the unified software platform Vitis, which can automatically adapt Xilinx hardware to software or algorithm code without hardware expertise; it also bundles FPGA IP, low-level driver software, and a series of AI development kits into the Vitis AI platform for developers to choose from.
Judging by Xilinx's fiscal 2020 Q2 results released in October 2019, its transformation strategy is working: advanced-product revenue, represented by Alveo and UltraScale, rose 29% year on year and continued to climb, accounting for about 74% of total sales.
Its data center business grew significantly to a record $81 million, up 24% year on year and 10% of total revenue, and its FPGA-as-a-service business with key customers such as Microsoft Azure, Amazon, Alibaba, Baidu, Huawei, and Tencent expanded further.
Xilinx advanced vs. core product revenue mix (October 2018 - October 2019)
Intel's PSG spans FPGAs, structured ASICs, and related products for the communications, cloud, enterprise, and embedded markets. After three consecutive years of revenue and gross profit growth, PSG revenue fell 6.4% in fiscal 2019 to US$1.987 billion.
Intel PSG revenue and gross profit, 2016-2019
The rise of AI cloud computing has put Xilinx and Intel's FPGA business on the same starting line.
At present, Intel has done more to expand its ecosystem and attract Chinese developers, including setting up an FPGA innovation center in Chongqing and holding FPGA innovation contests, while Xilinx mainly relies on the Asia leg of its year-end XDF developer forum as a platform for showcasing advanced technology and engaging with developers.
An ASIC is a custom chip designed and manufactured for a specific purpose, task, or application; it completes very specific tasks with high performance and high efficiency.
Since Google unveiled its Tensor Processing Unit (TPU), more and more players have flocked to the ASIC AI chip race. As early as August 2016, Intel bought Nervana, a California AI chip maker working on cloud-specific AI chips, for roughly $300-400 million.
In the second half of 2019, Intel released a stream of news about Nervana's first commercial chips: the NNP-T cloud AI training chip and the NNP-I AI inference chip entered production and reached customers, and Intel announced partnerships with Baidu and Facebook respectively to advance customized AI chip development.
Intel says the Nervana NNP line scales almost linearly and efficiently. The NNP-T offers an efficient distributed training approach, scaling large, complex models at 95% linearity. The NNP-I, co-developed with Facebook, is said to achieve 4.8 TOPS/W on ResNet-50 within a power envelope of 10W to 50W.
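The "95% linear scaling" and "4.8 TOPS/W" claims can be unpacked with a little arithmetic. A sketch, assuming "95% linear" means the speedup retains 95% of the ideal N-times value (one common reading, not Intel's published definition):

```python
def scaled_speedup(n_chips: int, efficiency: float = 0.95) -> float:
    """Speedup over one chip if scaling retains `efficiency` of ideal linearity."""
    return n_chips * efficiency

def throughput_tops(watts: float, tops_per_watt: float = 4.8) -> float:
    """Implied throughput at a given power budget for a fixed TOPS/W rating."""
    return watts * tops_per_watt

# 32 NNP-T chips at 95% linearity: ~30.4x rather than the ideal 32x.
print(scaled_speedup(32))  # 30.4
# NNP-I's claimed 4.8 TOPS/W across its 10-50 W envelope:
print(throughput_tops(10), throughput_tops(50))  # 48.0 240.0
```

The gap between 30.4x and 32x is exactly what "near-linear" scaling buys over typical cluster training, where communication overhead makes each added accelerator worth progressively less.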
In December 2019, Intel announced the US$2 billion acquisition of Habana Labs, an Israeli AI chip maker whose products are likewise AI training and inference chips for data centers.
The move set off waves of discussion across the industry.
Some see the acquisition as simply a continuation of Intel's strategy of expansion, with Intel trying to cover the diversity of AI workloads and strengthen its data center AI portfolio.
Others read it as a hint that Intel was dissatisfied with its previous AI chip acquisition. One industry insider said Intel's current AI chip products are not strong, and NNP-I performance still falls short of the marketing claims.
Nervana and Habana are similar in name and architecture, both developed their products in Israel, and both make highly scalable chips.
The circumstances of the two acquisitions, however, were quite different.
When Nervana joined Intel, it had only a 48-person team and an idea; it brought no finished hardware. The NNP series chips were polished over three years inside Intel.
In contrast, Habana came with working silicon: its Goya inference chip was already shipping, and its Gaudi training chip had been announced.
Habana's Gaudi supports remote direct memory access (RDMA). For data sharing across networks or accelerator fabrics, this offers scalability chips could not previously achieve, and does so more economically and efficiently.
Notably, the core technology of Mellanox, the Israeli company NVIDIA intends to acquire for a hefty $6.9 billion, is also RDMA.
The two startups have also fared differently after acquisition. Although Habana joined Intel three years later than Nervana, it is already stealing the halo from above Nervana's head. The next question is how Intel will keep two such similar product lines alive at once.
We asked whether Intel could offer advice on selecting among the different types of coprocessors; Julie Choi, vice president of Intel's AI business unit, fielded the question.
Julie Choi, vice president of Intel's AI business unit and general manager of AI platform marketing
According to Choi, no single AI hardware product can do it all; Intel works with customers to find the best choice for their actual needs.
FPGAs excel at low-latency, high-throughput inference. Customers who choose this infrastructure usually value the FPGA's programmability and want to configure the hardware themselves; Microsoft, for example, runs a great deal of deep learning inference on FPGAs.
The NNP-I and NNP-T are aimed mainly at large cloud service providers. Facebook chose the NNP-I for faster, more efficient inference and extended Glow, its advanced deep learning compiler, to support it.
And discrete GPUs, whose early development targets were HPC and supercomputing, will also serve large customers in the AI field.
In her view, the arrival of the NNP-T and Intel's discrete GPU will give the market more options for high-density neural network training.
Looking at today's AI market, Intel's CPUs command a strong voice in inference, and its FPGA business's new brand and ecosystem are developing smoothly. But competitors are in hot pursuit, and Intel's nerves must stay taut.
Although the latest results again prove Intel's leading strength in the PC and server chip markets, in the booming data center AI market Intel has yet to pose a real threat in AI training. That will depend on whether its discrete GPU, the Nervana NNP-T, and Habana's Gaudi can win customers' confidence.
The four architecture types we have focused on are just one of six strategies by which Intel is accelerating AI.
According to Swan, Intel will accelerate 10nm production, bring 7nm to production in 2021, and is already working on its 5nm process technology.
In addition, Intel's heterogeneous-integration advanced packaging, the open CXL interconnect consortium, the EMIB multi-die interconnect technology, software stacks that squeeze out hardware performance and energy efficiency, the cross-architecture unified programming platform oneAPI, and innovations in security, memory, and storage all help safeguard the efficiency, programmability, and scalability of AI workloads.
Data center AI chip players split into two camps: one pursues general-purpose, cross-domain solutions; the other focuses on a specific niche, such as data center or edge inference.
Intel is clearly determined to keep adding firepower in both breadth and depth, not only offering comprehensive hardware and software options but also striving to keep each standalone product line competitive enough. As AI models grow more complex and more commonplace, Intel's systems-integration strategy may yet see its accumulated investments pay off.