From Aofeisi | QbitAI Report, Public Account QbitAI
Huawei's AI chip, billed as the industry's most powerful, has just gone into official commercial use.
The company also announced that its self-developed AI framework, MindSpore, will be open-sourced, pitting it directly against the industry's two mainstream frameworks: Google's TensorFlow and Facebook's PyTorch.
The Ascend 910, the Huawei AI chip unveiled earlier, is now officially commercial. Benchmarked against NVIDIA's Tesla V100 and aimed at deep learning training, it is claimed to deliver twice the performance of NVIDIA's part.
Huawei's rotating chairman Xu Zhijun said this is a concrete expression of Huawei's full-stack AI strategy, and that he hopes it will further advance Huawei's new vision: building an intelligent world where everything is connected.
There is little doubt, however, that Huawei's entry into AI computing frameworks will further shake up AI's foundational technology and architecture, and in particular the monopoly held by US companies.
With MindSpore released, Huawei has completed its AI ecosystem chain: together with the previously released ModelArts development platform and Atlas computing platform, it now covers every layer from chip and framework to deployment platform and application products.
In the current environment, these moves are also an unmistakable signal of self-reliance.
Today, the key technologies in the AI field, such as computing power, frameworks, and algorithms, are provided mainly by a handful of American companies.
For example, training chips come mainly from NVIDIA (GPUs) and Google (TPUs); frameworks are dominated by Google's TensorFlow and Facebook's PyTorch; and original AI algorithms are invented by only a few vendors and research institutions.
As a direct result, companies entering AI find the threshold very high: beyond needing large amounts of data, they face scarce computing power, expensive hardware, and hard-to-find talent.
Now, Huawei needs to change the status quo with practical actions.
The “Hongmeng OS” of the AI field
Unlike other mainstream frameworks, MindSpore is a full-scenario AI computing framework, an “operating platform” of sorts.
It can be used not only in cloud computing scenarios, but also in terminal and edge computing scenarios.
It is not just an inference (deployment) framework; it can also be used to train models.
According to Xu Zhijun, this unified architecture enables “train once, deploy everywhere,” lowering the deployment threshold.
From this perspective, MindSpore can also be regarded as the “Hongmeng OS” of the AI field.
In addition, the framework is designed not only for developers but also for domain experts, mathematicians, algorithm experts, and others whose roles in AI are increasingly important.
According to Xu Zhijun, MindSpore's interface is also friendlier: it makes expressing the equations behind AI problems more convenient, which in turn makes algorithmic openness and innovation easier and helps popularize AI applications.
With MindSpore, the core code size can be reduced by 20%, the development threshold is greatly reduced, and the overall efficiency is increased by more than 50%.
Through its own technological innovations and its synergy with Ascend processors, the MindSpore framework effectively tames the complexity of AI computation and the diversity of computing power, achieving efficient operation and greatly improving computing performance.
Besides Ascend processors, MindSpore also supports GPUs, CPUs, and other processors.
MindSpore also adopts a new AI programming language, can run in a distributed manner, and is a full-scenario framework: it can be deployed across public clouds, private clouds, various edge computing environments, industrial IoT terminals, and consumer terminals.
Moreover, the framework will be open-sourced, with the flexibility to extend to third-party frameworks and chip platforms.
Of course, Xu Zhijun said the results are better on Huawei's Ascend series chips: the framework can run in fully offline mode and fully exploit the neural-network chip's computing power for the best performance match.
After all, MindSpore is a core step in Huawei's full-stack, all-scenario AI solution. It is the first Ascend-native open-source AI computing framework, best suited to DaVinci-architecture AI chips, especially the Ascend 910.
MindSpore is also optimized for ever-larger training models, so users do not need to understand the details of parallel computing: they write code as if for a single chip, and it runs in parallel on a compute cluster.
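As a rough, hypothetical illustration (not MindSpore's actual API) of what such a framework automates, the sketch below shards a batch across simulated “devices”, computes a gradient on each shard, and averages the results, the same pattern an all-reduce performs on a real cluster:

```python
# Conceptual sketch of automated data parallelism: the user writes a
# single-device gradient function; the "framework" shards the batch,
# runs each shard on a device, and all-reduces (averages) the gradients.

def grad_mse(w, xs, ys):
    """Gradient of mean squared error for the toy model y = w * x."""
    n = len(xs)
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n

def parallel_step(w, xs, ys, num_devices=4, lr=0.01):
    """One training step: shard the batch, per-shard grads, then average."""
    shard = len(xs) // num_devices
    grads = [
        grad_mse(w, xs[i * shard:(i + 1) * shard],
                    ys[i * shard:(i + 1) * shard])
        for i in range(num_devices)      # each "device" sees one shard
    ]
    g = sum(grads) / num_devices         # all-reduce: average the gradients
    return w - lr * g

# Data drawn from the line y = 3x; training should move w toward 3.
xs = [float(i) for i in range(1, 9)]
ys = [3.0 * x for x in xs]
w = 0.0
for _ in range(200):
    w = parallel_step(w, xs, ys)
```

The point is that `parallel_step` is framework machinery: the user only supplies single-device code like `grad_mse`, which is the convenience MindSpore claims to offer on real clusters.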
Xu Zhijun said that MindSpore will be officially open source in the first quarter of next year.
Ascend 910 officially commercial
The Ascend 910 was first revealed at the Huawei Connect conference in October 2018. Built on Huawei's self-developed DaVinci architecture and billed as “the world's most powerful AI processor,” it uses a 7nm process; maximum power consumption is 350W, with 310W measured.
This release marks its commercial availability. It directly targets NVIDIA's Tesla V100, is aimed mainly at deep learning training, and its main customers are AI data scientists and engineers.
The main performance data is as follows:
Half precision (FP16): 256 TFLOPS;
Integer precision (INT8): 512 TOPS;
128-channel full-HD video decoding (H.264/H.265).
At last year's Connect conference, Huawei had already compared it with rivals: Google TPU v2, Google TPU v3, NVIDIA V100, and Huawei's own Ascend 910.
“It can reach 256 TFLOPS, double that of the NVIDIA V100!”
At the same power consumption, the Ascend 910 is twice as powerful as the V100: training is faster, so users get their trained models sooner. In typical cases, the Ascend 910's computation speed is 50%-100% higher than the V100's.
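As a quick sanity check of the “double the V100” claim, take NVIDIA's published FP16 Tensor Core peak of 125 TFLOPS for the V100 as the comparison point (an assumption; the article does not say which V100 figure Huawei used):

```python
# Ratio of the Ascend 910's claimed FP16 throughput to the V100's
# published FP16 Tensor Core peak (125 TFLOPS, per NVIDIA's datasheet).
ascend_910_fp16_tflops = 256.0
v100_fp16_tflops = 125.0
ratio = ascend_910_fp16_tflops / v100_fp16_tflops
print(f"Ascend 910 / V100 = {ratio:.2f}x")  # close to the claimed 2x
```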
In typical ResNet-50 training, the Ascend 910 paired with MindSpore achieves nearly twice the performance of existing mainstream training cards running TensorFlow.
And Xu Zhijun stated plainly after the event: the price has not been set yet, but it certainly won't be high!
Huawei's AI progress amid the global situation
In October 2018, at the Huawei Connect conference, Xu Zhijun announced Huawei's full-stack, all-scenario AI strategy, covering data acquisition, training, deployment, and more within its own stack; the main goal is to improve efficiency and make AI application development easier and more convenient.
The full scenarios include: Consumer Device, Public Cloud, Private Cloud, Edge Computing, and Industrial IoT Device.
The focus is the full stack, which includes the DaVinci-architecture-based Ascend series chips (Max, Lite, Mini, Tiny, Nano), the highly automated operator development tool CANN, the MindSpore framework, and the machine learning PaaS (Platform as a Service) ModelArts.
With the Ascend 910's official commercial launch and MindSpore's official release, Huawei's full-stack AI solution is maturing, and its competitiveness will only grow.
Moreover, Huawei's AI is not only about Huawei's own business, but also should be examined from a more macro perspective.
At the moment, AI landing has become an undisputed general trend, the general direction.
However, against the backdrop of increasingly tense Sino-US relations, how strong China's AI really is has drawn ever more attention.
Nature recently published an article titled “Will China lead the world in AI by 2030?” that examined the state of China's AI development.
According to data from the Allen Institute for AI, Chinese authors' share of the top 10% most highly cited papers reached 26.5% in 2018, very close to the United States' 29%. If the trend continues, China will surpass the United States this year.
Scenarios? Data? Money? Talent? China lacks none of these.
So why do worrying “chokepoint” problems still exist in its AI field?
The core lies in computing power (chips) and foundational technology.
The Nature article points out that China still lags behind in the core technology tools of artificial intelligence. The open source AI platforms TensorFlow and Caffe, which are widely used in industry and academia worldwide, are developed by US companies and organizations.
On the framework side, Baidu's PaddlePaddle keeps making breakthroughs, and its momentum is good, but it still seems to be fighting alone.
More critically, China's backwardness in AI hardware is very obvious. Most of the world's leading AI semiconductor chips are manufactured by US companies such as NVIDIA, Intel, Google and AMD.
Zheng Nanning, academician of the Chinese Academy of Engineering and director of the Institute of Artificial Intelligence and Robotics at Xi'an Jiaotong University, said in an interview with Nature: “We also lack expertise in designing computing chips that support advanced AI systems. ”
Although many Chinese companies, such as Alibaba, Baidu, Yitu, and Horizon Robotics, have entered the AI chip field, most focus on terminal SoCs and inference; few make the large compute chips used for training.
Zheng Nanning expects that China may need five to ten years to reach the level of innovation in basic theory and algorithms seen in the United States and the United Kingdom, but he believes China will get there.
Kristin Shi-Kupfer, a political scientist at a Berlin think tank, likewise said that contributions to basic theory and technology will be key to China's long-term AI goals.
She also stressed that if there is no real breakthrough in machine learning, then China's growth in the field of artificial intelligence will face the upper limit of development.
So, back to Nature's question: can China's AI lead the world by 2030?
Today, Huawei offered one answer, but everything is only beginning.
What do you think?