In the era of artificial intelligence, competition in the chip field is fierce. IBM has said that 90% of the world's data was generated in the past two years, most of it unstructured. This trend will accelerate over the next few years as IoT applications spawn ever more data from different sources. As a result, the effectiveness of traditional rule-based data analysis methods has declined, and new methods such as machine learning are being adopted to make better use of this explosive growth in data. Major chip and AI companies have joined the race to monetize unstructured data sets before competitors capture the new markets.

deep learning supply chain

GPUs are the key to realizing deep learning. Deep learning has two steps: training and inference. The purpose of training is to set the network weights effectively; inference then uses the trained network to process new inputs. The training phase is the more costly of the two, being both slow and expensive, while the inference phase faces inputs of new, previously unseen content.
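The two phases can be sketched with a toy model: a single linear "neuron" in plain NumPy standing in for a full network. The data-generating rule y = 2x + 1 is assumed purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: targets follow y = 2*x + 1 (assumed for illustration).
x = rng.uniform(-1, 1, size=(100, 1))
y = 2 * x + 1

# --- Training: iteratively adjust the weights to fit the data ---
w, b = 0.0, 0.0
lr = 0.1
for _ in range(500):
    pred = w * x + b
    grad_w = 2 * np.mean((pred - y) * x)  # gradient of mean squared error w.r.t. w
    grad_b = 2 * np.mean(pred - y)        # gradient w.r.t. b
    w -= lr * grad_w                      # this loop is the slow, expensive part
    b -= lr * grad_b

# --- Inference: a single forward pass with the trained weights ---
new_input = np.array([[0.5]])
output = w * new_input + b                # one cheap pass per new input
```

Training repeats the gradient loop over the whole data set many times, which is why it dominates the cost; inference is a single forward pass, but it must cope with inputs the network has never seen.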

Kerstan's explanation of the "training" and "inference" phases of deep learning

Due to its computational nature, deep learning places high demands on parallel processing (especially in the training stage), so adding an accelerator alongside the CPU greatly improves performance. The main accelerators in use today are GPUs and FPGAs; both excel at parallel processing and therefore offer significant performance advantages over the CPU's own processing capacity.

GPUs are booming with deep learning

NVIDIA is definitely the star of the chip field this year. Although its size and overall revenue are not yet on par with established players such as Intel and Qualcomm, its potential has been widely recognized. In the PC era, Intel held the leading position in the GPU market. With the advent of the mobile Internet era, the global GPU market changed dramatically, and ARM rose steadily with the rapid development of mobile devices. NVIDIA, as an independent GPU company, is now driven by demand from artificial intelligence, automotive electronics, video and audio big data, VR, and other fields, and its market value continues to hit new highs.

market distribution of mainstream chips

GPU stands for graphics processing unit, also called a visual processor. As the name suggests, the GPU's main application scenario is computation for image display. The computer image display process is shown in the figure below: the CPU determines what is displayed, while the GPU determines the quality of the display. Chips such as the GPU that assist the CPU with specific functions are collectively called "coprocessors", a term that indicates the GPU's subordinate position in the computer system.

the basic process by which a computer displays an image

The GPU has a highly parallel structure, so it is more efficient than the CPU at processing graphics data and data-parallel algorithms. Most of a CPU's die area is devoted to control logic and registers, whereas the GPU devotes far more of its area to ALUs (arithmetic logic units, which do the data processing) and relatively little to data caches and flow control. This structure is suited to parallel processing of dense data, which is why GPUs achieve high floating-point performance under conditions of high parallelism and huge data scale.

Description: GPU workloads involve a huge amount of computation, but each operation is simple and must be repeated many times. It is as if a job required hundreds of millions of additions, subtractions, multiplications, and divisions, all on numbers under one hundred: the simple approach is to hire dozens of primary school students to compute together, each handling a part. The calculations require no sophistication, just manual labor. The GPU works the same way, using many simple computing units to complete a massive number of computing tasks — pure human-wave tactics. This strategy rests on the premise that pupil A's work and pupil B's work are independent of each other. Many computation-heavy problems have exactly this property, such as password cracking, mining, and much graphics computation: they can be decomposed into many simple tasks, each assigned to one "pupil". The CPU, by contrast, is like an old professor who can compute both integrals and derivatives — and commands a correspondingly high salary. One old professor may be worth dozens of primary school students, but the professor also brings the ability to coordinate, communicate, and manage.
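The analogy above can be sketched in code. The example below uses Python threads purely to illustrate the decomposition (a real GPU uses thousands of hardware cores, not OS threads): one large sum is split into independent chunks, each chunk goes to one "pupil", and the partial results are combined at the end.

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(bounds):
    # One "primary school student": a simple, self-contained piece of work.
    lo, hi = bounds
    return sum(range(lo, hi))

def parallel_sum(n, workers=8):
    # Split [0, n) into independent chunks, one per worker.
    step = -(-n // workers)  # ceiling division
    bounds = [(i, min(i + step, n)) for i in range(0, n, step)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(partial_sum, bounds)  # chunks run independently
    return sum(partials)  # combine the partial results

total = parallel_sum(1_000_000)
```

The key property is that no chunk depends on any other, so adding more workers scales the throughput — the same property that lets a GPU throw thousands of simple ALUs at graphics, password cracking, or mining.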

GPU works together with CPU

In the early development of AI technology, the GPU, as a ready-made parallel-computing acceleration chip, was used in many projects, such as autonomous driving and image recognition algorithms. But the GPU may not be the ultimate answer for AI acceleration hardware: constrained by its original design goals, it cannot perfectly match mainstream AI algorithms. In the future, as artificial intelligence technology is commercialized at scale — judging by historical analogies from the past development of the industry chain — dedicated artificial intelligence acceleration coprocessors will pose a challenge to transitional solutions such as the GPU. Because the computing model the GPU was originally designed for differs from the neural network computing model, the NoC (network-on-chip) communication architecture between its parallel computing cores has shortcomings for neural network computation.

the GPU is not the ultimate artificial intelligence chip

For now, the GPU seems to dominate the accelerated computing of artificial intelligence. But will the GPU also carry the hardware acceleration of AI in the future? Not necessarily: the industry already has various competing alternative solutions. Google disclosed its TPU (tensor processing unit) dedicated-processor project at the I/O conference held at the end of May 2016. Reports indicate that by then the TPU had already been used in many of Google's business and research projects for more than a year. The server cluster used in the "Go match of the century", which defeated Lee Sedol, used TPUs to accelerate the DCNN (deep convolutional neural network) computations for Go. Google's RankBrain uses TPUs to improve the relevance of search results, and Street View services use them as well.

GPU and security and video intelligence

With the continuous progress of Internet technology, the GPU has shown its advantages in artificial intelligence (image and speech recognition, autonomous driving, etc.), video processing, VR/AR, biochemistry, financial-securities data analysis, and other fields, and it has broad application prospects in the near term.

gpu development space

The development of deep learning technology has melted the iceberg of the artificial intelligence industry into an irresistible torrent that is driving industrial reform in the security industry. Many first-tier security manufacturers are working with the world's top AI chip makers to upgrade their intelligent hardware products, applying cutting-edge graphics processors from the computer vision field to the development of new hardware products. At present, Haikang, Dahua, Yushi, Li, Keda, Kuangshi, Geling, Wen'an and others have integrated, or will integrate, NVIDIA/Movidius-based GPU products into security front-end products and back-end systems, and deep learning / artificial intelligence is gradually being applied to security.
