Applications of artificial intelligence algorithms in automated driving


liu, tempo | Date: 2021-07-26 09:59:48 | From: ozmca.com

A few days ago I saw a rather funny piece of news: a Tesla driver in the US was pulled over by traffic police after falling asleep in the car… and insisted he could not be charged with drunk driving because he was not the one driving. It occurred to me that this was not the first incident of its kind, which prompted the reflections below.

 

Tesla’s Autopilot holds an undisputed benchmark position in the field of “autonomous driving”. Yet even as a benchmark, Autopilot attracts a steady stream of negative press, largely because of the accidents associated with it; Tesla’s official website has quietly removed descriptions that were criticized as boastful and exaggerated. Just as consumers are gradually coming to trust autonomous driving technology, any accident can be a devastating blow.

 

Ever since “autonomous driving” was proposed by ambitious auto-industry giants in the first half of the 20th century, it has been a long-awaited travel technology. Since the DARPA Grand Challenge of 2005, autonomous driving based on on-board vehicle intelligence has entered a period of rapid development, and everyone from Internet giants to traditional automakers has invested heavily in trying to lead this revolution in mobility. At the core of this revolution is artificial intelligence. This section gives an overview of this ascendant application area and introduces the role artificial intelligence plays in it.

 

■ Why driverless?

 


 

Safety: according to statistics, an average of 103 people die in traffic accidents every day in the United States alone, and more than 94% of collisions are caused by driver error. In theory, a perfect autonomous-driving solution could save 1.2 million lives worldwide every year. Today’s autonomous driving is, of course, far from perfect, but with advances in algorithms and sensor technology, it is widely believed that in the near future autonomous vehicles will surpass the safety record of human drivers.

 

Convenience: autonomous driving can free the driver from behind the steering wheel to work or relax while riding. There are nearly 140 million commuters in the United States, and outside of holidays each of them spends nearly an hour a day travelling to and from work. Summed over all commuters, a single year amounts to more than three million person-years of time, enough to write 300 Wikipedias or build 26 Egyptian pyramids.

 

Efficient sharing: Uber, Lyft, Didi, and the other ride-sharing giants are all actively researching autonomous driving, because the biggest cost of shared mobility is the driver’s time. If autonomous driving can be achieved, people could stop buying and owning cars and rely entirely on shared rides, saving each American family about $5,600 a year, roughly 10% of average annual income.

 

Reduced congestion: while the advantages above depend on the widespread adoption of autonomous driving, the congestion benefit is immediate. According to a study by Professor Walker of the University of Illinois, adding just a single self-driving car to a human-driven fleet can cut the standard deviation of the fleet’s speed by 50%, making traffic flow more stable and fuel-efficient. So if you see a self-driving car with a radar on its roof, thank it: it is helping to reduce the congestion ahead of you.

 

To measure these benefits more concretely, we can estimate the value they would bring to the U.S. travel market: the total cost saved comes to roughly 5.3 trillion U.S. dollars per year, about 29% of U.S. GDP.

 

■ Definition of Autonomous Driving

 

The term “autopilot” originated with assisted-driving systems for aircraft, trains, and ships. Broadly defined, autonomous driving is a system that automatically controls a vehicle’s trajectory without continuous human intervention. Based on the degree of automation and the driver’s level of involvement, SAE International (the Society of Automotive Engineers) divides autonomous driving into five levels, as shown in the table below. Among mass-produced cars already released, the Audi A8 sits at L3, Tesla at roughly L2.5, and the high-end models of Volvo, Nissan, BMW, and Mercedes-Benz at L2. A vehicle that offers both adaptive cruise control and lane-keeping assist has crossed the threshold of L2; the “Super Cruise” semi-autonomous system on the 2018 Cadillac CT6 is a typical L2 example.

 

■ Evolution of the Autonomous Driving Technology Route

 

From the day the concept of autonomous driving was put forward, two completely different technical routes have existed. One relies on roadside infrastructure to help vehicles position themselves, navigate, and make decisions; the other lets the vehicle drive itself using its own sensors and on-board intelligence, without modifying existing roads.

 

The earliest milestone on the first path was the “Futurama” exhibit General Motors presented at the 1939 New York World’s Fair. Riding the highway-building craze of the 1950s and 1960s, the Radio Corporation of America (RCA) and General Motors jointly developed a prototype electronic highway: on a modified stretch of road, buried electromagnetic coils guided two GM cars to stay in their lane and keep their distance from the vehicles ahead and behind. The electronic highway, however, was held back by high infrastructure costs and inconsistent regulations and standards across U.S. states; today the idea survives mainly in government-supported vehicle-to-everything (V2X) research projects.

 

By contrast, the second path, automated vehicles growing out of the autonomous-robotics research community, has made considerable progress over the past decade. The breakthrough came from a budget passed by the U.S. Congress in 2001, which funded the military research agency DARPA toward the goal of making one third of military vehicles autonomous by 2015. DARPA sponsored three autonomous-driving challenges between 2004 and 2007. In the 2005 challenge, five driverless cars used artificial-intelligence systems to successfully complete a roughly 212-kilometer off-road course. The winning “Stanley” team from Stanford University abandoned hand-written rules in favor of data-driven machine learning, training the vehicle to recognize obstacles and respond. Chris Urmson, who led the CMU team in the 2007 DARPA race, later became the technical lead of Google’s driverless-car project; by 2014, the Google self-driving cars under his direction had logged 1.12 million kilometers. As Urmson put it: “Two years ago, we absolutely couldn’t cope with the thousands of complex road conditions on city streets, but now autonomous driving can handle them with ease.”

 

■ Autonomous driving and artificial intelligence

 

The supporting technologies of autonomous driving can be divided into the following three layers.

Upper-level control: route planning, traffic analysis, traffic arrangement.

Mid-level control: object recognition, roadblock detection, compliance with traffic regulations.

Low-level control: cruise control, anti-lock braking, electronic traction control, fuel injection, engine tuning.

 

Each layer can use artificial intelligence technology.

 

The occupancy grid is a digital repository storing information about the physical objects around the car. Some entries in the grid are stationary objects drawn from stored high-definition maps; others are moving objects identified in real time from sensor signals. Occupancy grids are usually visualized with color coding and icons for frequently occurring object types.
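To make the idea concrete, here is a toy occupancy grid in pure Python. The grid size, cell resolution, and object coordinates are all invented for illustration; a real system fuses HD-map layers with lidar, radar, and camera detections many times per second.

```python
# A toy occupancy grid: a 2-D array of cells around the ego vehicle,
# each marked free, occupied by a static object, or by a moving object.
GRID_SIZE = 20     # 20 x 20 cells centred on the ego vehicle
CELL_M = 0.5       # each cell covers 0.5 m x 0.5 m
FREE, STATIC, MOVING = 0, 1, 2

grid = [[FREE] * GRID_SIZE for _ in range(GRID_SIZE)]

def world_to_cell(x_m, y_m):
    """Map ego-relative coordinates (metres) to a (row, col) cell index."""
    col = int(round(x_m / CELL_M)) + GRID_SIZE // 2
    row = int(round(y_m / CELL_M)) + GRID_SIZE // 2
    return row, col

# Static objects looked up in the stored HD map (e.g. a fire hydrant).
for x_m, y_m in [(2.0, 2.0), (2.0, 2.5)]:
    r, c = world_to_cell(x_m, y_m)
    grid[r][c] = STATIC

# Moving objects reported by the perception module this frame.
for x_m, y_m in [(-1.5, 3.0)]:   # e.g. a pedestrian
    r, c = world_to_cell(x_m, y_m)
    grid[r][c] = MOVING

static_cells = sum(row.count(STATIC) for row in grid)
moving_cells = sum(row.count(MOVING) for row in grid)
print(static_cells, moving_cells)   # -> 2 1
```

Each new sensor frame would clear the moving-object cells and re-mark them, while the static cells come from the map and rarely change.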

 

The occupancy grid tells us where objects are now, but that alone is not enough: an autonomous vehicle also needs to know where objects may be at some future time t.

 

The uncertainty cone is a tool for predicting the position and speed of objects near the car. Once an object has been flagged by the deep-learning-based object-recognition module, the occupancy grid records its existence and the uncertainty cone predicts its next direction of movement.

 

The uncertainty cone gives a driverless car an artificial-intelligence version of scene understanding. When a human driver sees a pedestrian standing too close to the car, he mentally rehearses how to avoid her; a driverless car performs a similar “mental rehearsal” using the uncertainty cone. A stationary object such as a fire hydrant is represented by a thin cone, since it is almost certain not to move. A fast-moving object, by contrast, gets a wide cone, because it can reach many more places and its future position is therefore uncertain. A human driver does not explicitly mark every nearby object with an elliptical cone, but the uncertainty cone roughly mirrors what happens in the human subconscious: our brains constantly record and update the people and objects around us, and by combining past experience with the current state of things we guess their intentions and predict what they will do next.

 

The mid-level control software creates an uncertainty cone as follows. First, draw the object on the plane with a small circle around it, which we call the “current circle”. Then draw a large circle marking all the positions the object could reach in the next 10 seconds, called the “future circle”. Finally, connect the edges of the small circle and the large circle with two lines: this is the uncertainty cone.
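The construction above can be sketched in a few lines of Python. This is a deliberately crude model, not any production system: the assumption is simply that the future circle grows with the object’s speed, the prediction horizon, and an invented “wobble” factor for erratic movers.

```python
def uncertainty_cone(speed_mps, wobble=1.0, horizon_s=10.0, r_now=0.5):
    """Return (current-circle radius, future-circle radius) in metres.

    The current circle is where the object is now; the future circle
    bounds everywhere it could plausibly reach within `horizon_s`
    seconds. A wider future circle means a wider, more uncertain cone.
    """
    r_future = r_now + speed_mps * horizon_s * wobble
    return r_now, r_future

# Examples echoing the text (speeds and wobble factors are illustrative):
hydrant    = uncertainty_cone(speed_mps=0.0)               # thin cone
pedestrian = uncertainty_cone(speed_mps=1.4)               # wider
cyclist    = uncertainty_cone(speed_mps=4.0, wobble=1.5)   # swaying, wider still

print(hydrant[1], pedestrian[1], cyclist[1])   # -> 0.5 14.5 60.5
```

The ordering is what matters: the hydrant’s future circle barely exceeds its current circle, while the swaying cyclist’s is large, exactly the thin-versus-wide cone distinction described above.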

 

The uncertainty cone substitutes for the eye contact between human drivers and pedestrians. From the car’s perspective, a pedestrian standing at the roadside facing the street is represented by a cone leaning slightly forward, indicating she might cross at any moment. If her eyes are on her phone rather than the street, her cone takes another shape, perhaps even narrower, because she is not about to move. If she glances at the driverless car, her cone shrinks further, since the software recognizes that she has seen the car and is unlikely to step into its path. The less predictable a pedestrian is, the larger the cone. A swaying cyclist carries more uncertainty than a stationary pedestrian and gets a correspondingly larger cone; a puppy running loose or a child chasing a ball gets a larger cone still.

 

Sometimes even a static object warrants a large cone, such as a building that blocks the view: the building itself will not move, but it may hide moving objects. For blind alleys, corners, or a car parked at the curb with its door open and a passenger who might step out at any moment, the mid-level software marks a large uncertainty cone. A stationary school bus may also produce a large cone: the bus itself may not move, but children can dash out from behind it at any time.

 

Once the objects near the car have been marked and represented as uncertainty cones of different sizes, a module called the “trajectory planner” can compute the best route on that basis, obeying traffic rules while minimizing travel time and collision risk.
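A toy version of that trade-off can be written directly: among candidate routes, pick the one minimizing travel time plus a penalty for passing through any object’s uncertainty cone. The candidate routes, cone positions, and weights here are all made up for illustration.

```python
import math

# Nearby objects as (x, y, future-circle radius) in metres.
cones = [
    (5.0, 1.0, 2.0),    # e.g. a pedestrian near the lane
    (8.0, -1.0, 1.0),   # e.g. a parked car with an open door
]

def risk(route, safety_weight=10.0):
    """Penalty for waypoints that fall inside an uncertainty cone."""
    penalty = 0.0
    for (px, py) in route:
        for (cx, cy, r) in cones:
            if math.hypot(px - cx, py - cy) < r:
                penalty += safety_weight
    return penalty

def cost(route, speed_mps=10.0):
    """Travel time along the waypoints plus the collision-risk penalty."""
    length = sum(math.hypot(x2 - x1, y2 - y1)
                 for (x1, y1), (x2, y2) in zip(route, route[1:]))
    return length / speed_mps + risk(route)

straight = [(0, 0), (5, 0.8), (10, 0)]   # cuts through the first cone
swerve   = [(0, 0), (5, 4.0), (10, 0)]   # detours around both cones
best = min([straight, swerve], key=cost)
```

Even though the detour is longer, its cost is lower because it never enters a cone, so the planner selects it. Real planners search a far richer space of trajectories under kinematic and legal constraints, but the cost structure is the same idea.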

 

According to the uncertainty cone of surrounding objects, the trajectory planner can calculate the best route

 

■ Commercialization of Autonomous Driving

 

Many applications of artificial intelligence have loose fault tolerance: when a robot vacuum bumps into an obstacle it simply backs up and finds another path, and when Siri misrecognizes a phrase the user just repeats it. Autonomous driving demands far stricter safety standards, because when the car’s artificial-intelligence algorithm is wrong, the damage is irreparable. If a car travelling at 100 kilometers per hour misjudges a pedestrian crossing the road as a static pillar, the result is casualties, with disastrous consequences.

 

For this reason, existing commercial deployments of autonomous driving are mainly confined to closed campuses and tightly controlled fixed routes. London’s Heathrow International Airport, for example, uses self-driving shuttles to carry passengers between the parking lot and the T5 terminal, as shown in the figure below. The service, called Heathrow Pod, has been in operation since 2011, and any passenger can ride it free of charge from Terminal 5.

 

Similar vehicles suited to (semi-)closed road sections include the Induct Navia and the Arma; their appearance is moving the commercialization of autonomous driving forward step by step.

 

In addition, because lidar, a key sensor in autonomous driving, remains expensive, while trucks carry both a high unit price and high driver costs and often operate on highways or in closed ports, trucking may achieve commercial autonomous driving earlier than the family car.

 

■ Skills required by autonomous driving algorithm engineers

 

Having read this far, you may be eager to take part in this coming technological revolution. With a solid foundation in any one of the following three fields, you stand a good chance of landing an offer and becoming an autonomous-driving algorithm engineer.

 

Computer vision: deep learning, road sign recognition, lane line detection, vehicle tracking, object segmentation, object recognition.

 

Sensing and control: signal processing, Kalman filtering, localization, control theory (e.g., PID control), path planning.

 

System integration: robot operating system, embedded system.

Using driverless technology for private cars on open roads will be much harder; applying it on fixed routes in specific areas is easier and more readily profitable, and many driverless city buses are already in road tests.

 

Many specialized autonomous-driving courses and MOOCs can already help you learn the technologies above in a deeper and more systematic way. I hope readers will make their mark in these fields, and that the lights on your road to an offer are all green.
