At present, four obstacles stand in the way of realizing fully autonomous driving


By liu, tempo | Date: 2021-08-05 09:13:20 | From: ozmca.com

After reading our previous article, readers can grasp the classification of autonomous driving levels in seconds; those who read the more detailed professional version will know that the SAE levels can roughly be distinguished by whether the driver can take away the feet, the hands, the eyes, and finally the brain. Obviously no manufacturer today can say "you may drive with your eyes closed", so none of the current commercially hyped systems, including Tesla's, is truly L3.

 

None of the current commercial autonomous vehicles reaches L3

 

The AI we currently rely on for autonomous driving is based on deep learning, which allows computers to learn from experience and understand the world through a hierarchy of concepts, each defined by its relationship to simpler ones. The performance of these machine learning algorithms depends heavily on how the data are represented, and deep learning builds complex representations out of simpler ones.
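
To make the "hierarchy of concepts" idea concrete, here is a minimal sketch of a deep network, written in PyTorch, in which each layer builds on the simpler features produced by the layer below it. The network, its layer sizes, and the class count are illustrative assumptions, not any production perception model.

```python
# A minimal sketch (not a production perception model) of how deep learning
# stacks simple representations into more complex ones, using PyTorch.
import torch
import torch.nn as nn

class TinyPerceptionNet(nn.Module):
    """Toy image classifier: each layer builds on the simpler features below it."""
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # edges / color blobs
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # corners / textures
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),  # object parts
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)      # whole objects

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)
        return self.classifier(h)

if __name__ == "__main__":
    # One fake 64x64 RGB camera crop; a real system trains on labelled driving data.
    logits = TinyPerceptionNet()(torch.randn(1, 3, 64, 64))
    print(logits.shape)  # torch.Size([1, 3])
```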

 

It follows that autonomous driving relies on sensors to perceive the environment and on algorithms running on processor chips to recognize it. As mentioned in our previous article "Autonomous driving – only a smart artificial intelligence is needed to replace human driving", the artificial intelligence then issues control inputs to the car to achieve autonomous driving.
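
The sense, perceive, decide, and control chain described above can be pictured as one loop. The sketch below is a toy illustration of that loop; every function body is a stub with made-up values, not a real driving algorithm.

```python
# Minimal sketch of the sense -> perceive -> plan -> act loop the article
# describes; every function body here is a placeholder, not a real algorithm.
from dataclasses import dataclass

@dataclass
class Control:
    steering_deg: float
    throttle: float   # 0.0 .. 1.0
    brake: float      # 0.0 .. 1.0

def sense() -> dict:
    """Read raw data from cameras, radar, IMU, GNSS (stubbed here)."""
    return {"camera": "frame_0", "radar_distance_m": 35.0}

def perceive(raw: dict) -> dict:
    """Run the AI models that turn raw sensor data into a world model."""
    return {"lead_vehicle_distance_m": raw["radar_distance_m"], "lane": "center"}

def plan_and_act(world: dict) -> Control:
    """Decide a control input for the car from the perceived environment."""
    if world["lead_vehicle_distance_m"] < 20.0:
        return Control(steering_deg=0.0, throttle=0.0, brake=0.4)
    return Control(steering_deg=0.0, throttle=0.2, brake=0.0)

# One tick of the loop; a real stack runs this tens of times per second.
print(plan_and_act(perceive(sense())))
```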

 

The logic of autonomous driving sounds simple, yet according to mainstream estimates, fully autonomous passenger cars operating in public spaces will not arrive until around the middle of this century. So what are the main obstacles to its development?

 

Super AI processor

 

As mentioned above, autonomous driving mainly relies on deep learning. At its core, deep learning decomposes objects into fine-grained features and then matches them back into local and global recognitions, which means it demands strong computing power measured in TOPS. TOPS (Tera Operations Per Second) means trillions of operations per second; it measures the maximum achievable throughput and is the most intuitive index of an AI chip's capability.
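
A quick back-of-envelope calculation shows why TOPS matters. The numbers below (operations per frame, frame rate, camera count) are purely illustrative assumptions, not measurements from any real vehicle, but the arithmetic is how such a demand estimate is done.

```python
# Back-of-envelope estimate of the compute a perception stack demands,
# expressed in TOPS (tera operations per second). All numbers are
# illustrative assumptions, not measurements from any real vehicle.
def required_tops(ops_per_frame: float, fps: float, num_cameras: int) -> float:
    """Total operations per second across all cameras, converted to TOPS."""
    return ops_per_frame * fps * num_cameras / 1e12

# Assume ~10 GOPs per frame per camera, 30 frames/s, 8 cameras.
demand = required_tops(ops_per_frame=10e9, fps=30, num_cameras=8)
print(f"~{demand:.1f} TOPS of sustained compute needed")   # ~2.4 TOPS

# A datasheet's peak TOPS is an upper bound; real utilization is lower,
# so the chip must offer comfortable headroom above this kind of estimate.
```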

 

The main processor types in the AI industry are as follows:

 

The CPU (central processing unit) is a chip designed for general-purpose computing, focusing on computation and logic control. CPUs are strong at handling single, complex, sequential tasks but weak at large-scale data computation.

 

GPUs (graphics processing units) were originally designed for image processing but have been applied successfully to AI. A GPU contains thousands of cores and can process thousands of threads simultaneously; this parallel design makes GPUs extremely powerful at large-scale data computation.

 

FPGAs (field-programmable gate arrays) are programmable logic chips. This type of processor excels at small but intensive data-access workloads. In addition, FPGAs let users program circuit paths through their tiny logic blocks to implement almost any digital function.

 

ASICs (application-specific integrated circuits) are highly customized chips designed to deliver superior performance in one specific application; however, once a custom ASIC is in production it cannot be changed. Other chip types, such as the neuromorphic processing unit (NPU), whose structure mimics the human brain, have the potential to become mainstream in the future but are still at an early stage of development.
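
To illustrate the CPU-versus-GPU contrast above, here is a small sketch that dispatches the same matrix workload to a GPU when one is present and falls back to the CPU otherwise. It uses PyTorch as an assumed framework; absolute timings will vary wildly with hardware, so treat the printed number only as a relative indication of parallel throughput.

```python
# Sketch: the same matrix workload dispatched to a GPU if one is present,
# otherwise the CPU. Illustrates the "thousands of parallel cores" point;
# timings depend entirely on the hardware it runs on.
import time
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)

start = time.perf_counter()
c = a @ b                      # massively parallel on a GPU, core-limited on a CPU
if device.type == "cuda":
    torch.cuda.synchronize()   # wait for the asynchronous GPU kernel to finish
elapsed = time.perf_counter() - start

print(f"{device.type}: 4096x4096 matmul took {elapsed * 1000:.1f} ms")
```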

 

So the first obstacle to fully autonomous driving is the design and manufacturing capability of the AI chip industry.

 

Autonomous driving

 

High-precision sensors

 

Perception and localization are the prerequisites of autonomous driving. (See our article on the eight localization and perception sensors of autonomous driving for details of their mounting positions, advantages, and disadvantages.) Currently, vehicles mainly use radar to measure the distance ahead, cameras to identify objects, and the IMU and GNSS to estimate the vehicle's motion state.
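
The IMU-plus-GNSS idea can be shown with a deliberately simplified one-dimensional sketch: the IMU dead-reckons the motion at a high rate, and an occasional GNSS fix pulls the estimate back toward truth. All values and the blending gain below are made up; a real localization stack fuses many more sensors with a proper filter.

```python
# Toy 1-D illustration of predicting vehicle motion from an IMU and
# correcting it with GNSS, a heavily simplified stand-in for the real
# multi-sensor localization stack. All numbers are made up.
def imu_predict(position: float, velocity: float, accel: float, dt: float):
    """Dead-reckon the next position and velocity from IMU acceleration."""
    velocity += accel * dt
    position += velocity * dt
    return position, velocity

def gnss_correct(predicted: float, gnss_measurement: float, gain: float = 0.2) -> float:
    """Blend the IMU prediction with a noisier, lower-rate GNSS fix."""
    return predicted + gain * (gnss_measurement - predicted)

pos, vel = 0.0, 10.0                                      # start at 0 m, moving 10 m/s
for step in range(5):
    pos, vel = imu_predict(pos, vel, accel=0.5, dt=0.1)   # 10 Hz IMU updates
pos = gnss_correct(pos, gnss_measurement=5.3)             # a 1 Hz GNSS fix arrives
print(f"fused position estimate: {pos:.2f} m")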

 

In the future, lidar will be used to localize and perceive most obstacles, and thermal imaging will be added to identify animals and to see at night.

 

According to Yole’s report, the cost of sensors in future self-driving vehicles will be about eight times the price of sensors in current vehicles.

 

This price increase is likely to come from the wider variety of sensors used and from upgrades in sensor accuracy and reliability, but also from how the sensors are integrated into the vehicle, for example fusing radar into the headlight area or, as on the NIO ET7, packaging the autonomous-driving lidar and cameras to preserve a streamlined, low-drag silhouette; in an age that pursues aerodynamic design, these are necessary steps.

 

Electrical architecture

 

Sensor data fusion, data processing in a central controller, and efficient use of the resulting data are what autonomous driving demands of the vehicle's electrical architecture. The current automotive electrical architecture and its supplier chains are clearly not prepared for this: functions are distributed across separate units that have grown up with the auto industry largely without interacting with one another. For example, many luxury cars are equipped with 360-degree surround-view cameras, yet they struggle to render that virtual environment on the instrument cluster used for automated driving, which shows the data are not being fused and processed together.
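
The centralized pattern the paragraph argues for can be sketched as sensors publishing onto a single bus that one domain controller fuses into a shared world model. The module names, message fields, and values below are illustrative assumptions, not any vendor's actual architecture.

```python
# Sketch of a centralized architecture: sensor modules publish onto one bus
# and a single domain controller fuses the data, instead of each ECU keeping
# its data to itself. Names and fields are illustrative.
import queue
from dataclasses import dataclass

@dataclass
class SensorMessage:
    source: str        # e.g. "front_radar", "surround_camera"
    timestamp: float   # seconds
    payload: dict      # decoded measurement

bus: "queue.Queue[SensorMessage]" = queue.Queue()

# Independent sensor ECUs only publish; they no longer own the display logic.
bus.put(SensorMessage("front_radar", 0.00, {"lead_distance_m": 32.5}))
bus.put(SensorMessage("surround_camera", 0.01, {"objects": ["car", "cyclist"]}))

def central_controller(bus: "queue.Queue[SensorMessage]") -> dict:
    """Fuse everything on the bus into one world model shared by all functions."""
    world_model: dict = {}
    while not bus.empty():
        msg = bus.get()
        world_model[msg.source] = msg.payload
    return world_model

# The same fused model can now feed both the planner and the instrument display.
print(central_controller(bus))
```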

 

So automakers, especially traditional ones, are reforming the electrical architecture inside the vehicle, and because that architecture touches the input/output logic of many control modules, the whole supply chain is changing with it: traditional Tier 1 suppliers such as Bosch and Continental now face a strong challenge from Huawei and NVIDIA.

 

Autonomous driving software and algorithms

 

The software and algorithms for autonomous driving are far more complex than those of any current commercial aircraft. The amount of software in current luxury vehicles has already grown roughly fifteen-fold, and it will grow enormously again once future autonomous driving functions are included.

 

The ideas of machine learning and AI have been around since at least the 1960s, and algorithms that existed in the 1980s actually performed quite well, but deep learning did not make real progress until around 2006. That was likely because of its high computational cost and the difficulty of running enough experiments on the hardware available at the time. The limitations of today's autonomous driving algorithms, like the limits of current autonomous driving methods discussed in our earlier article, are therefore largely a compromise between an algorithm's computational cost and its economic value.

 

Another important part is the company's ability to analyze, process, and summarize data as the algorithm takes shape, which requires a powerful server system for data curation to support both the sorting of data and the development of the algorithm.
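
One concrete piece of that data-curation work is sifting fleet driving logs for the rare frames worth labelling and feeding back into training. The sketch below shows the shape of such a filter; the fields, thresholds, and sample values are illustrative assumptions, not any company's actual pipeline.

```python
# Sketch of server-side data curation: sift driving logs for the frames
# that are worth labelling and feeding back into algorithm development.
# Fields, thresholds, and values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class LogFrame:
    frame_id: int
    planner_disagreement: float   # how far the human driver deviated from the model
    min_obstacle_distance_m: float

def select_for_labelling(frames: list[LogFrame]) -> list[LogFrame]:
    """Keep only frames where the model struggled or the scene was tight."""
    return [
        f for f in frames
        if f.planner_disagreement > 0.5 or f.min_obstacle_distance_m < 2.0
    ]

logs = [
    LogFrame(1, planner_disagreement=0.1, min_obstacle_distance_m=12.0),  # routine
    LogFrame(2, planner_disagreement=0.8, min_obstacle_distance_m=9.0),   # model struggled
    LogFrame(3, planner_disagreement=0.2, min_obstacle_distance_m=1.5),   # tight scene
]
print([f.frame_id for f in select_for_labelling(logs)])   # [2, 3]
```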

 

Summary

 

Of course, the arrival of autonomous driving is a historical trend. Obstacles do not mean stagnation; rather, obstacles are opportunities. The four points above are also the hottest directions in autonomous driving today, and they are where the automotive supply chain is transforming, where capital is concentrating, and where talent is gathering.
