When we talk about autonomous driving, what role will artificial intelligence play?
Recently, in a research report on artificial intelligence, analysts at IHS in the United States projected that the number of AI systems in cars will grow from 7 million in 2015 to 122 million by 2025. Over the same period, the installation rate of AI-based systems in new cars (most of them built around voice recognition) will climb from 8% in 2015 to 109% in 2025, meaning the average new car will carry more than one such system. IHS also pointed out that many future cars will be fitted with multiple AI systems serving different functions.
It is worth mentioning that in the design of in-car HMIs (human-machine interfaces), IHS believes AI will play a major role in functions such as voice/gesture recognition, eye tracking, driver monitoring, and natural-language interaction. For driverless cars specifically, AI can improve the recognition accuracy of the machine vision system and will also play an important role in the ECUs that handle sensor fusion.
In a media interview, Luca De Ambroggi, senior analyst in IHS's automotive semiconductor division, said, “Artificial intelligence has long been considered a key technology for commercializing driverless cars, which is exciting for the entire automotive supply chain.”
However, Che Yun has been thinking seriously about one question: how prepared are we for the day when we must rely on AI to control driverless cars?
When asked whether an AI algorithm capable of solving complex traffic problems has been developed, De Ambroggi was blunt: “We have not reached that point yet. Our current research results are still very limited.” Still, AI is developing rapidly. He added, “Over the next 10 years, AI technology will progress and iterate steadily, and the auto industry will benefit greatly from it.”
In his discussion with the media, De Ambroggi analyzed the application prospects of automotive AI in depth, covering the current state of the art, examples of in-car use cases, and the hardware that could run AI algorithms. He also raised the question of how AI should be rated in the future: just as a human needs a driver’s license to drive, an AI that controls a driverless car will need corresponding qualifications.
The following is excerpted from the interview with Luca De Ambroggi, senior analyst in IHS’s automotive semiconductor division. We hope it serves as a useful reference for understanding the development and application of automotive AI. (Q = Reporter, A = Luca De Ambroggi)
Q: In terms of AI’s applicability in the car, when do you think the technical threshold was crossed?
A: As I recall, it was in early 2015 that companies including Microsoft, Baidu, and Google acknowledged that machines had become better than humans at recognizing objects.
Q: What are the recent technical advances in artificial intelligence?
A: First, machine learning can now draw on huge databases. In the past we had to train on a limited dataset, which made machine learning very time-consuming. Second, there is now hardware that can run AI applications, such as inference systems. This means the process from recognizing objects to deducing results can be automated, and it runs fast.
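The split De Ambroggi describes, slow learning over a dataset followed by fast inference on each new input, can be sketched with a toy classifier (a hypothetical pure-Python illustration, not anything from the interview; the feature vectors and labels are made up):

```python
# Toy illustration of the train-once / infer-fast split:
# training iterates over the whole dataset many times, while
# inference on a new sample is a single cheap weighted sum.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Slow phase: learn weights from a (small) labeled dataset."""
    w = [0.0] * (len(samples[0]) + 1)  # bias + one weight per feature
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if w[0] + sum(wi * xi for wi, xi in zip(w[1:], x)) > 0 else 0
            err = y - pred
            w[0] += lr * err
            w[1:] = [wi + lr * err * xi for wi, xi in zip(w[1:], x)]
    return w

def infer(w, x):
    """Fast phase: classify a new sample with one weighted sum."""
    return 1 if w[0] + sum(wi * xi for wi, xi in zip(w[1:], x)) > 0 else 0

# Hypothetical "pedestrian-like" (1) vs "background-like" (0) features.
samples = [[2.0, 1.0], [1.5, 2.0], [-1.0, -1.5], [-2.0, -0.5]]
labels = [1, 1, 0, 0]
w = train_perceptron(samples, labels)
print(infer(w, [1.8, 1.2]))    # new sample near the positive class -> 1
print(infer(w, [-1.7, -1.0]))  # new sample near the negative class -> 0
```

The point of the sketch is the asymmetry: all the looping happens once in training, while each deployed decision is a constant-time computation, which is why dedicated inference hardware pays off.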
Q: What kind of hardware can best realize AI applications in driverless cars today?
A: I think GPUs are currently the hardware best suited to large-scale automotive AI applications. So far, NVIDIA is the only company providing both software and hardware solutions for AI system development and testing.
Q: Do you think NVIDIA’s Drive PX2 will be an ideal artificial intelligence platform for driverless cars in the future?
A: From the perspective of developing driverless cars, yes. But from a mass-production perspective, the Drive PX2 is not the ideal AI platform for driverless cars, unless NVIDIA launches a next-generation product, a “Drive PX3” if you like, whose power consumption comes down to around 50 W from the Drive PX2’s current 250 W.
Q: AI can certainly do more than distinguish humans from animals. What else can it do?
A: AI can distinguish more than one object at a time. More importantly, AI can supply the semantic context in which it detects things: what it sees is a pattern together with everything around the object. For example, it can recognize that the object crossing the road is a person, and that he is looking down at his mobile phone.
Q: So what is the difference between traditional machine vision technology and artificial intelligence?
A: The emergence of AI has unsettled the traditional automotive industry because it can do far more than standard machine vision. What we call computer vision today relies mainly on the histogram of oriented gradients (HOG) algorithm for object detection. Arguably, 95–99% of the visual recognition performed by Mobileye’s EyeQ series chips is based on the HOG algorithm.
However, whether technology company, tier 1 supplier, or OEM, everyone hopes to harness AI so that the same system can keep learning. If you must keep developing new systems, the chip and software work has to restart from scratch each time, a very painful process. Everyone would much rather have a system that can learn on its own within fixed software and hardware constraints.
Note: The histogram of oriented gradients (HOG) is a feature descriptor used for object detection in computer vision and image processing. It builds features by computing histograms of gradient orientations over local regions of an image. HOG features combined with an SVM classifier have been widely used in image recognition, especially pedestrian detection. HOG+SVM pedestrian detection was proposed by Dalal, a researcher at INRIA in France, at CVPR 2005; although many new pedestrian-detection algorithms have since been proposed, most are still based on the HOG+SVM idea.
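The core step of HOG described in the note, binning local gradient orientations weighted by magnitude, can be sketched in pure Python. This is a deliberately simplified, hypothetical illustration of one cell’s histogram; a real detector such as the EyeQ pipeline adds block normalization, a sliding window, and the SVM classification stage:

```python
import math

def hog_cell_histogram(cell, n_bins=9):
    """Orientation histogram for one image cell.

    `cell` is a 2D list of grayscale values. For each interior pixel we
    take central-difference gradients, then add the gradient magnitude
    into the bin of its unsigned (0-180 degree) orientation.
    """
    hist = [0.0] * n_bins
    h, w = len(cell), len(cell[0])
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = cell[y][x + 1] - cell[y][x - 1]
            gy = cell[y + 1][x] - cell[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0
            hist[int(ang / 180.0 * n_bins) % n_bins] += mag
    return hist

# A cell containing a vertical edge: every gradient points horizontally,
# so all the histogram mass lands in the bin around 0 degrees.
cell = [[0, 0, 10, 10]] * 4
print(hog_cell_histogram(cell))
```

Because the histogram summarizes edge directions rather than raw pixels, it is fairly robust to lighting changes, which is part of why HOG has dominated pedestrian detection since 2005.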
Can AI also play a role in sensor fusion?
Q: Beyond visual recognition, what other in-car applications can artificial intelligence serve?
A: The camera is the first place where AI makes a big splash. But I can tell you for certain that tier 1 suppliers and OEMs in Europe are already studying how to apply artificial intelligence to radar, and AI will also play an important role in sensor fusion.
Q: A few months ago, Toyota announced it would invest US$1 billion over five years in artificial intelligence research at its Silicon Valley-based research institute, and said it would first build two laboratories near Stanford University and the Massachusetts Institute of Technology.
Likewise, technology companies including GE, Baidu, and Samsung, along with major automakers, have established R&D centers of various sizes in or near Silicon Valley to harness emerging technologies for further product breakthroughs. This is no longer a secret in the industry. So where do we stand on AI breakthroughs?
A: Honestly, it is still at the research stage in universities and research institutions. But I expect that within the next five to ten years, and more likely toward the ten-year mark, we will see AI research results that meet engineering requirements and can be commercialized.
How to certify and rate AI?
Q: Suppose it is 10 years from now and AI plays a central role in controlling driverless cars. What challenges will we face?
A: One of the main challenges, I think, is the certification and rating of AI. Just as we require drivers on the road to hold a driver’s license, the automotive industry needs a set of standards or procedures to ensure that artificial intelligence is safe to use.
Q: Put that way, you first need to know what the AI system should be tested on. If Google’s self-driving cars are driven only a few hours a day in California and Texas, that is no guarantee that self-driving cars are safe.
A: Tier 1 suppliers and OEMs need to jointly develop unified safety tests and passing criteria to certify and rate artificial intelligence systems. That is no simple matter; as far as I know, no industry alliance or ISO body is currently researching the topic.
AI is a complex issue. Almost all tier 1 suppliers and car companies are, to my knowledge, conducting AI research. Many have said on various occasions that they want to start relatively small-scale AI research now, so that when the environment matures they will not be left behind in AI applications.
Washing away the sand, seeing through the fog
Q: As far as I know, there is a lot of hype around applying AI on automotive chips. For example, you mentioned NVIDIA’s GPU-based AI solutions for the automotive market, yet you also said their platforms serve more for testing AI than for mass production. Are there other technology providers like NVIDIA?
A: Take Tesla. When it launched Autopilot, Tesla said the system uses Mobileye’s EyeQ3 chip and incorporates artificial intelligence algorithms. But I don’t think that qualifies as “deep learning”.
In my view, Audi could launch an AI-based driver-assistance system on the 2017 Q7; like Tesla, it is fully capable of doing so, but I bet Audi will not before 2017 at the earliest. At present we do not know whether that model will use Mobileye’s EyeQ3 or EyeQ4 chip, nor how large a role artificial intelligence will play in its various functions.
Many chip suppliers and car companies also claim to be conducting artificial intelligence research, but true deep-learning applications must be backed by processors with very high compute.
Beyond NVIDIA and Mobileye, we suspect NXP is quietly researching artificial intelligence. Xilinx, Altera, and Intel have invested heavily in AI, and IP vendors such as Ceva, Synopsys, Cadence, and Mentor Graphics are all involved. Indeed, a glance at the industry’s recent acquisitions and mergers makes it clear that the technical race for automotive AI has only just begun.
Note: In the past two months, Intel has acquired two companies related to ADAS, robotics, and automated machinery. In April this year, Intel acquired Yogitech, which specializes in semiconductor functional safety and related standards; in May, it announced the acquisition of Itseez, which has established itself in computer vision algorithms and embedded hardware development.
Che Yun – Summary
Undoubtedly, driverless cars are an important direction for the future evolution of the automobile, and artificial intelligence will play an indispensable part in them. By then, the car will be not just a means of transportation but a considerate robot that understands you, providing all kinds of convenience and fun on the road. For now, though, most automotive AI applications focus on functions such as voice and gesture recognition and cannot replace the human driver.
Foreseeably, over the next 10 years players including tier 1 suppliers, OEMs, and technology companies will push hard into artificial intelligence, as the releases of platforms such as NXP BlueBox, Mobileye EyeQ5, and NVIDIA Drive PX2 in the first half of this year already show. AI platforms for autonomous vehicles are bound to become the main battlefield of this competition. Yet expanding AI in the automotive market will certainly be much harder than in consumer electronics: safety is the ultimate yardstick, and iterating the technology, maturing laws and regulations, and working out how to rate and certify AI systems are all pressing issues for practitioners to resolve.