Why does automated driving favor artificial intelligence?


liu, tempo | Date: 2021-07-23 09:50:10

No matter how you perceive autonomous driving, believe me: if you experience even an L2-level assistance system, you will be impressed.

Autonomous driving has become popular in recent years. Beyond the "relatively mature" state of the technology and heavy promotion by manufacturers, another important reason is that, given the current driving environment, autonomous driving has come to look like a "good recipe" for improving the driving experience and relieving road congestion.

Improving the driving experience is easy to understand: automatic driving assistance can greatly reduce driving fatigue, and in the future may let us rest, play, or work and study on the road. Extend that one step further and it becomes a way to relieve congestion and improve vehicle utilization, which not only saves people a great deal of time but, at scale, creates enormous economic value.

When I attended the Audi MQ summit a while ago, I had the honor of hearing Dr. Kai-Fu Lee speak on artificial intelligence (AI). He has studied and worked in the AI field for 39 years, and his PhD thesis was on AI speech recognition. His portfolio now includes 17 industry-leading companies, 5 of which focus on AI.

Two points in the speech were very interesting. The first is that although AI has already landed at the application level (voice, face recognition, medicine, logistics, autonomous driving, and so on), these all belong to Narrow AI and work only within a single domain; the artificial-intelligence robots in movies like "A.I." and "Her" belong to the category of Artificial General Intelligence (AGI). The second is that, powered by deep learning and a soaring supply of data, AI is now developing at an unimaginable speed.

Especially in the field of autonomous driving, this progress is very obvious.

Why artificial intelligence?

To understand what artificial intelligence means for autonomous driving, we must first start with the history of autonomous driving.

Experiments with autonomous driving systems date back to the 1920s. The first autonomous-driving concept car was unveiled by General Motors at the Futurama exhibit of the 1939 New York World's Fair, but the idea did not begin to look feasible until the 1950s. In 1958, General Motors finally produced a self-driving vehicle that relied on metal spikes embedded in the road plus wireless signal guidance: the spikes supplied positioning information, external equipment processed it, and the resulting instructions kept the vehicle centered in its lane.

In 1977, a semi-autonomous vehicle was developed by Japan's Tsukuba Mechanical Engineering Laboratory. Besides requiring a specially prepared road section, the vehicle itself combined two cameras with an analog computer. Its top speed was only 30 km/h, however, and even that required the help of an elevated rail. (Beyond relying on road-supplied information, vehicles were beginning to gain real-time information-processing capabilities of their own.)

True self-driving vehicles emerged in the 1980s, partly from the Navlab and ALV projects started by Carnegie Mellon University in 1984 and funded by DARPA (the Defense Advanced Research Projects Agency), and partly from the EUREKA Prometheus project between Mercedes-Benz and Bundeswehr University Munich, started in 1987. In 1985, the ALV project demonstrated a vehicle driving autonomously at 31 km/h on an ordinary two-lane road; over the following two years it added obstacle avoidance and day-and-night off-road capability.

As a milestone, the Navlab project's No. 5 experimental vehicle completed the 4,585 km coastal route from Pittsburgh, Pennsylvania to San Diego, California in 1995, with 98.2% of the journey (4,501 km) driven autonomously at an average speed of 102.7 km/h. That achievement stood until 2015, when an Audi modified by Delphi crossed 15 states, driving 5,472 km in total at a 99% autonomy rate. In the same year, the US states of Nevada, Florida, California, Virginia, Michigan, and Washington passed regulations allowing autonomous vehicles to be licensed for testing on public roads.

However, all of the above vehicles were experimental in nature: explorations of the technology, with no thought of practicality or civilian use. What made most people aware of the charm of autonomous driving was Tesla's introduction of the Autopilot function in 2014 (relying on 8 cameras, 12 ultrasonic sensors, and 1 millimeter-wave radar). NHTSA statistics showed that even that early, imperfect AP system could predict possible collisions with 76% accuracy and avoid more than 90% of the collisions it predicted.

Look carefully at these milestones and an obvious trend appears: autonomous driving functions have grown less dependent on external information processing and have turned toward "standalone intelligence".

The reason is simple: most road construction, traffic law, and even urban planning have developed around cars driven by humans. The 5G-supported V2X model is all well and good, but neither the required infrastructure investment nor the time it would take makes it practical in the short term. In other words, unless some "magic" replaces every vehicle overnight with movie-style self-driving cars, a car that wants to drive itself in the current environment has to rely on itself.

When it comes to "standalone intelligence", some people may assume the computer is the strongest, but in fact the human brain is. The brain's power consumption is roughly equivalent to 20 watts of electricity; by contrast, a comparably capable computer would draw something like 24 million watts. Professor Tim Hanson has called the brain "the most dense, most structured, and most self-constructing substance known." More importantly, the brain processes information differently from an ordinary computer: humans can think abstractly, associate, and teach themselves, whereas traditionally programmed software cannot upgrade itself adaptively. The things that can happen on a road are endless, and you obviously cannot handle the "infinite" with the "finite".

Therefore, artificial intelligence, with its "human-like" processing mode, has become the core technology driving the development of autonomous driving.

What is artificial intelligence?

Creatures 600 million years ago had no neural structure; they could not think or process information, and the goal of life was to wait for death. Then, 580 million years ago, came the jellyfish, which possessed the world's earliest nervous system: a nerve net that could collect important information from the surroundings and process it to produce a reaction. To survive, organisms had acquired the ability to collect and process information.

The flatworm appeared 550 million years ago. Information collected by its nerves could be transmitted to its head (the world's earliest brain, and the earliest central nervous system) for unified decision-making, rather than being handled piecemeal on the spot. Frogs appeared 265 million years ago, and mammals such as mice 225 million years ago. As animals' bodies and surroundings grew more complex, the original brain structure was no longer sufficient to handle the various "demands", so another "command center" formed on top of the original brain: the world's earliest limbic system.

Later, with the emergence of monkeys and early humans, the cortex arose in the brain. It makes "thinking" possible: generating complex ideas, imagining from incomplete information, making long-term plans, and more.

In essence, without a cortex a human would be little more than a frog, and with the limbic system removed, little more than a reptile. That is why the cortex and the limbic system rank first and second in "smartness" within the whole brain: they respectively handle the most complex demands. Most other structures handle basic survival reactions and the routing of information, such as life-sustaining functions like breathing and heartbeat, and dispatching signals from nerves throughout the body to the appropriate functional areas.

The cortex is responsible for almost everything "human": what you see, hear, and feel, plus language, movement, thinking, planning, and even personality are all processed there. It is the foundation of human intelligence. The limbic system is a survival system; everything related to survival is controlled by it. It is also where emotions are generated, because emotions, too, serve survival. So if you ever have two "little people" fighting in your head, the cortex is "reason" and the limbic system is "desire", and the limbic system tends to dominate; that is, the limbic system often commands the cortex.

The frontal lobe sits within the cortex. It shapes your personality and handles much of what we call thinking: reasoning, planning, and executive function. A great deal of that thinking happens at the front of the frontal lobe, the prefrontal cortex. The prefrontal cortex is another character in those inner struggles: the rational decision maker, the one pushing you to do things well, the sincere voice telling you not to worry about what others think, the leader hoping you can see the big picture.

The Nobel Prize-winning frontal lobotomy is notorious. It removed about a third of the frontal lobe and was used to treat mental illness, but it was later condemned as a monstrous, absurd practice, because after the operation the patient was scarcely distinguishable from the walking dead.

Although the human brain's "computing power" is very high, it operates quite differently from an ordinary computer, especially for floating-point operations such as arithmetic. By a rough estimate the brain is 10 million times slower than a computer at such tasks, but slow does not mean "stupid": you cannot calculate all the accelerations and velocities while riding a bicycle, yet you ride steadily all the same.

Computers mostly process information serially, in a continuous stream. The human brain is different: the cortex contains roughly 15-20 billion neurons, and these neurons can signal many other neurons at the same time; in other words, processing is parallel. Your handling of information is therefore a network structure in which everything is interrelated. For example, if you are standing at a junction on a sunny day and see a driver suddenly switch on the wipers, you immediately suspect the driver fumbled for the turn signal and is about to turn. Hand that inference to an ordinary computer and it cannot do it.

So the artificial neural network was born. Its original goal was to mimic the structure of the human brain and build a system that solves problems the way the brain does. Owing to technical limitations, however, it gradually drifted from that biological intention and instead uses the approach to focus on specific tasks.

The greatest ability of such a system is learning.

Artificial intelligence is really a broad category with many definitions. In summary, it refers to a human-like working mode: simulating human thought processes and intelligent behaviors (learning, reasoning, thinking, planning, and so on). Due to technical constraints, however, current artificial intelligence achieves only the learning part. Deep learning is one method among the machine-learning approaches, one suited to the "device" that is the neural network.

Training neural networks through deep learning to achieve some of the advantages of the brain's working mode: that is today's artificial intelligence.

Neural network and deep learning

In the final analysis, today's AI applications use deep learning to replace the old "finite" programming approach with a self-upgrading, "infinite" solution capable of handling "infinite" problems.

Alan Mathison Turing, the British mathematician and logician, is known as the father of computer science and of artificial intelligence. Turing entered King's College, Cambridge in 1931 and, after graduating, went to Princeton University for his doctorate. In 1936 he submitted to the Proceedings of the London Mathematical Society the paper "On Computable Numbers, with an Application to the Entscheidungsproblem". In this groundbreaking paper, Turing gave a rigorous mathematical definition of "computability" and proposed the famous concept of the "Turing machine". The Turing machine is not a physical machine but a thought model: a very simple yet extremely powerful imagined computing device able to compute any imaginable computable function.

In 1950, Turing published an epoch-making paper, "Computing Machinery and Intelligence", predicting the possibility of building genuinely intelligent machines. Noting that "intelligence" is hard to define exactly, he proposed the famous Turing test: if a machine can hold a conversation with a human (over teletype equipment) without being identified as a machine, then the machine is said to be intelligent. This simplification let Turing argue convincingly that "thinking machines" are possible. In a 1952 BBC broadcast, Turing offered a more concrete criterion: let the computer impersonate a human, and if fewer than 70% of the judges identify it correctly, that is, if more than 30% mistakenly believe they are talking to a person rather than a computer, the machine passes. This is the Turing test as the AI industry knows it.

In 1943, Warren McCulloch and Walter Pitts wrote a paper on how artificial neural networks might work and built a simple model of one using electrical circuits. After many people's subsequent efforts and research, Bernard Widrow and Marcian Hoff of Stanford University created, in 1959, the first artificial neural network applied to a practical problem.
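
To make the threshold idea concrete, here is a minimal sketch of a McCulloch-Pitts-style neuron (my own illustration, not the original 1943 circuit model): weighted binary inputs are summed and compared against a threshold, which is already enough to implement simple logic gates.

```python
# A McCulloch-Pitts-style threshold neuron: it "fires" (outputs 1) when the
# weighted sum of its binary inputs reaches the threshold.
def neuron(inputs, weights, threshold):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With both weights set to 1, threshold 2 yields AND and threshold 1 yields OR.
for a in (0, 1):
    for b in (0, 1):
        print(f"{a} {b}  AND={neuron((a, b), (1, 1), 2)}  OR={neuron((a, b), (1, 1), 1)}")
```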

In 1956, at the Dartmouth summer workshop, the field's leading figures put forward a definition of AI, which greatly advanced AI and artificial neural networks; the event is widely regarded as the founding year of AI. People were full of confidence then, believing that an AI system nearly matching the human brain could be built within 20 years. Instead, continued research found the algorithms for deep neural networks too complex to get a handle on, so the original "grand and comprehensive" goal was abandoned in favor of pursuing single, narrow objectives.

The concept of deep learning grew out of research on artificial neural networks: a method (algorithm) that uses deep neural networks to imitate the analysis and learning of the human brain. In 1965, Alexey Ivakhnenko and Valentin Lapa published a general learning algorithm, and in 1971 they published a paper describing the use of this algorithm to train an 8-layer deep network. Through many people's later efforts, new deep learning algorithms kept appearing, each suited to different problems.

Traditional machine learning uses complex higher-order functions to project a problem into a two-, three-, or even ten-thousand-dimensional space so that the data spread out and become easier to separate. But however hard mathematicians and scientists work at clever modeling, a model by definition has limits; it is difficult to truly capture the characteristic laws of everything in the world. Deep learning jumps out of that mindset: although it also lifts data into a multi-dimensional space, its way of processing is completely different.

Deep learning throws a large amount of data into a complex, many-layered deep neural network (a plain artificial neural network may have no intermediate hidden layer; once one or more hidden layers are stacked in, it is called a deep neural network), and then simply checks whether the results the network produces meet the requirements. If they do, the network is kept as the target model; if not, the parameters of each part are adjusted again and again until the output does meet the requirements. In other words, no one can yet fully explain what happens inside the hidden layers of a deep neural network. It is a "black box": you cannot precisely and exhaustively trace what effect each adjustment has on the final result or establish an absolute causal chain.
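
As a minimal sketch of that train-check-adjust loop (a toy example of my own, not any production system), the following trains a tiny network with one hidden layer on the XOR problem in plain numpy. The hidden weights are the "black box"; the loop just keeps nudging them until the outputs meet the requirement.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)    # input -> hidden layer
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)    # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10000):
    h = sigmoid(X @ W1 + b1)                      # forward pass through the hidden layer
    out = sigmoid(h @ W2 + b2)
    # Feedback: gradients of the squared error, propagated backward.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(0)

print(out.round(2).ravel())   # should approach [0, 1, 1, 0] as the error shrinks
```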

The figure this passage referred to showed a small demo network with 4 hidden layers of 5 neurons each; the dot map on the right was the pattern to be recognized, and the shaded background was the network's resulting judgment. As training iterations accumulate, the error shown in the corner quickly falls toward 0.001. This data-processing task is relatively simple, of course, but the principle is the same: each neural node changes with training, "self-correcting" according to feedback from the final result, which strengthens the judgments of different nodes (shown by the connecting lines in the middle).

This closely resembles human neural structure. Every act of learning or recognition causes slight physical changes in the nervous system, and once you are experienced enough you can respond instantly, without thinking time. That is the neuron itself having found the best "path"; the neuron may even grow "stronger", producing more chemical signal to "adapt" to a given situation.

To use the human learning process as an analogy: a thing consists of many patterns or features. Take an apple. Recognizing one can be roughly simplified as follows: the first time you see an apple (INPUTS), you take in color information and shape information; combining these basic signals yields higher-level information such as color distribution and pattern (HIDDEN LAYER). When you are told this thing is called an apple, you store the feature information corresponding to "apple" (OUTPUTS).

If you then see apples of different colors and shapes, you may not be 100% sure each one is an apple, but after comparing the information you can guess with high probability that it is. From then on, your judgment of whether something is an apple grows more and more accurate. Of course, this covers only visual information; add touch, smell, and other senses and the judgment becomes more accurate still.

As more and more feature information is stored, the brain has ever more features to judge by, and the result gets closer to the correct answer step by step. It is like knowing a car model so well that the grille or a mirror alone lets you name it precisely: so much relevant information is stored in your brain that every feature keeps narrowing the field of candidate answers. From the outline and color you know it is a car; combined cues tell you the brand; and cross-checking a few more characteristics tells you the exact model.

Nor is such judgment limited to objects. When two people are holding hands, basic visual information says "holding hands"; body-shape information then suggests their sexes, so the relationship could be father and daughter, mother and son, a couple, or something else; judging by their appearance further narrows their ages, and the relationship can be inferred. Or if two people are trading punches and kicks, their expressions and what you overhear let you predict whether this ends at the police station, over drinks, in a breakup, or in a divorce.

In other words, when feature information enters the brain, each individual judgment admits many possible results; the features influence and constrain one another, steadily shrinking the probability of the wrong results, until the highest-probability candidate is selected at the end. With training or teaching, that output gets closer and closer to the right answer.
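
A toy numeric sketch of that mutual-constraint idea (my own illustration; the feature likelihood numbers are invented): each feature multiplies into every candidate's score, so with each added feature the wrong candidates' probabilities shrink and the best candidate stands out.

```python
# Combining independent feature "votes" multiplicatively, naive-Bayes style.
candidates = ["sedan", "SUV", "truck"]

# Hypothetical per-feature likelihoods P(observed feature | candidate).
likelihoods = {
    "grille_shape": {"sedan": 0.6, "SUV": 0.3, "truck": 0.1},
    "body_height":  {"sedan": 0.2, "SUV": 0.5, "truck": 0.3},
    "mirror_style": {"sedan": 0.7, "SUV": 0.2, "truck": 0.1},
}

scores = {c: 1.0 for c in candidates}        # uniform prior over candidates
for table in likelihoods.values():
    for c in candidates:
        scores[c] *= table[c]                # each feature constrains the rest

total = sum(scores.values())
posterior = {c: round(s / total, 3) for c, s in scores.items()}
print(max(posterior, key=posterior.get), posterior)
```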

This means training requires a large amount of data for support and powerful computing hardware to run on.

The speech database Dr. Kai-Fu Lee used for the speech-recognition work in his doctoral dissertation was considered enormous at the time, yet it was only 100 MB; even so, it cost his advisor nearly $100,000, roughly the price of two houses in 1988. In the big-data era, the digitization of all kinds of information supplies massive amounts of data, which is what sustains AI's rapid development.

The other issue is computing power. Traditionally designed CPU chips were not built for the neural network's computation model and are extremely inefficient at it. GPUs, whose data-parallel computation model is similar, were therefore mostly used for AI development at first. Later, the Neural Processing Unit (NPU) was developed specifically for the purpose; other names for similar silicon include the TPU (Tensor Processing Unit), NNP (Neural Network Processor), and IPU (Intelligence Processing Unit). All of them greatly increase the speed at which neural networks process data.
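
A small self-contained illustration of the point (my own example; the matrix size is arbitrary): neural-network workloads reduce largely to matrix multiplication, which a serial Python loop handles slowly, while a parallel-friendly vectorized kernel, the kind of computation GPUs and NPUs accelerate much further, finishes almost instantly.

```python
import time
import numpy as np

n = 256
A, B = np.random.rand(n, n), np.random.rand(n, n)

t0 = time.perf_counter()
C_loop = np.zeros((n, n))
for i in range(n):                 # serial: one row-column product at a time
    for j in range(n):
        C_loop[i, j] = A[i, :] @ B[:, j]
t1 = time.perf_counter()

C_fast = A @ B                     # vectorized, parallel-friendly kernel
t2 = time.perf_counter()

assert np.allclose(C_loop, C_fast)
print(f"serial loop: {t1 - t0:.3f}s   parallel kernel: {t2 - t1:.4f}s")
```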

Autopilot

Having come full circle, we return to the problem of autonomous driving.

The greatest difficulty in automatic driving is actually "establishing accurate coordinates": with yourself at the center, the three-dimensional coordinates of everything around you determine whether automatic driving can be realized. As Musk once put it in an interview: "Take the game GTA5: if you don't actively interfere with it, the AI-controlled car can drive safely indefinitely."

Vehicle control itself is no longer the hard part for car companies. George Hotz, for example, has enabled more than a dozen ordinary vehicles to achieve L2 autonomous driving with a kit costing a few thousand yuan. (Hotz, who famously turned down the chance to build this for Tesla's Autopilot effort, founded Comma.ai and used deep learning to create the OpenPilot system: a few simple devices connected to the car, plus his own algorithms, convert it to L2-level assisted driving.)

Now for that old familiar topic, the classification of automated driving:

SAE Level 0: No automation. A true Level 0 car is hard to find today, because any electronic aid that helps keep you safe can be classed as L1, so L0 effectively means not even an ABS system.

SAE Level 1: Driver assistance. Functions that intervene in a single dimension of the driving state count as driver assistance: the ABS just mentioned, ESP, ordinary cruise control, ACC adaptive cruise, LKA lane keeping assist, and so on.

SAE Level 2: Partial automation. The defining difference is control of both dimensions at once. The combination of ACC and LKA, for example, gives automatic car-following plus lane centering, which reduces driver fatigue in everyday city traffic and on highway cruises. Most car companies now offer functions at this level (Mercedes-Benz, BMW, Volvo, Cadillac, Nissan, and others). Many also offer turn-signal-initiated lane changes; Nissan can even change lanes and overtake semi-automatically on designated highway sections, and Tesla's NOA performs automatic highway overtakes. These can be classed as "L2.5".

SAE Level 3: Conditional automation. The Audi A8, for example, can hand driving entirely to the vehicle below 60 km/h with no human intervention; within that envelope, the person need not supervise the vehicle at all.

SAE Level 4: High automation. In most areas and on most road sections the vehicle drives itself entirely, unsupervised. Reaching this level basically means that within a region, or even on most roads in a country, you can name a destination and let the vehicle drive there autonomously without intervention. That would in fact satisfy most needs; after all, not everyone drives around the world.

SAE Level 5: Full automation. Name a destination and the vehicle drives itself there under all conditions and in all areas, with no supervision or intervention whatsoever; it can even observe local rules about which side of the road to drive on.

Among today's autonomous driving schemes worldwide, however many there are, the choice of sensors is fundamental: they are how the first layer of data gets provided. Cameras, ultrasonic sensors, millimeter-wave radar, and lidar; it basically comes down to these. And the endless debate amounts to "it cannot be achieved without lidar" versus "it can be achieved without lidar".

Each sensor type has its own strengths and weaknesses; using several kinds, and several of each, is simply a way to achieve "data diversity and coverage" so the AI has enough information to process. The most controversial sensor is lidar. Musk's reason for rejecting it is simple: it is expensive, and mounting it disrupts the car's overall styling, precisely the two most lethal drawbacks for a mass-produced car. However great lidar's advantages, if those two problems cannot be solved it will never be fitted to mass-market vehicles and used by large numbers of drivers.

In fact, apart from the Audi A8 with its four-line lidar, almost all manufacturers now adopt a "pure vision solution": cameras, ultrasonic sensors, and millimeter-wave radar serve as the sensors, AI algorithms derive the surrounding-environment parameters, and those guide the car's automatic driving. For manufacturers such as Cadillac and Nissan, of course, the excellent experience also owes much to high-precision maps: most of the 3D model of the surrounding road environment is stored in advance and, together with GPS, handles the basic driving, while the "vision solution" is layered on top to deal with transient road conditions.

On high-precision maps, this author privately sees no real point of contention: with them, the data-processing load drops; without them, automatic driving can still be completed. Adopting them may well be better, but laws and regulations differ from country to country, and comprehensive high-precision mapping may take some time to roll out.

Here we take the most representative case, Tesla's autopilot solution, to illustrate how AI drives the rapid evolution of autonomous driving.

Another important reason Tesla adopted the "pure vision solution" and abandoned lidar: "once neural-network-based visual recognition is perfected, lidar becomes worthless (for autonomous driving)."

Tesla uses a fully convolutional neural network. (There are many neural network architectures, such as recurrent neural networks and feedforward neural networks, each suited to different purposes; this type is aimed chiefly at image recognition, the part autonomous driving needs most.)

Although Tesla launched the AP system in 2014 and its real-world performance improved steadily, progress was held back by the onboard processor (the first generation used Mobileye's EyeQ3, the second NVIDIA's Drive PX2). When Tesla released its self-developed FSD chip at the beginning of the year and rolled it out across all its models, the AP system improved markedly within half a year.

Using Tesla's huge base of user data and large-scale simulator rigs, the system is first "taught" to recognize lane dividing lines, surrounding vehicles, road boundaries, traffic lights and signs, and so on, and to predict motion trajectories for moving objects in order to determine the drivable area.

Vast amounts of data are fed in over and over, the correct answers are labeled, and the feedback is propagated back to upstream neurons for correction, for example reducing the weight of a given path or even switching a given neuron off. When the accuracy is high enough, the model is frozen and deployed to the vehicle.

Note: after training on huge amounts of data, all that is transmitted to the vehicle is a copy of the "model formula"; there is no need to ship the raw data or intermediate results (no one knows what happened in the hidden layers anyway). So the car does not carry a mountain of data on board; it just receives a system or chip upgrade. It is like human learning: when you see a familiar type of problem, you apply the method you learned and compute the answer on the spot, rather than re-deriving the formula from elementary mathematics each time.
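
A hedged sketch of the "ship only the model formula" idea (the file name, the tiny network, and its shapes are all my own invention, not Tesla's format): the training side saves nothing but the learned parameters, and the vehicle side loads that copy and runs inference only.

```python
import numpy as np

# Training side: pretend these are the weights left over after training.
W1, b1 = np.random.rand(2, 8), np.zeros(8)
W2, b2 = np.random.rand(8, 1), np.zeros(1)
np.savez("model_update.npz", W1=W1, b1=b1, W2=W2, b2=b2)   # a few KB, not the training data

# Vehicle side: load the copied "formula" and apply it.
params = np.load("model_update.npz")

def predict(x):
    h = np.tanh(x @ params["W1"] + params["b1"])
    return h @ params["W2"] + params["b2"]

print(predict(np.array([[0.5, -0.2]])))
```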

When the network gives a wrong answer, it can be corrected manually and the correction "fed back" so the network adjusts itself and finds the "path" to the right answer.

Training covers all sorts of road widths, tunnels, loops, highways, and so on, as well as different weather conditions.

Once the initial learning reaches a certain level, simulator training is added, automatically generating all kinds of driving environments. Simulation is efficient, but it is still simulated driving, not the real thing; even a "perfect" simulator has a gap from the real world. So Tesla also collects data through the cars' own sensors: whenever a human intervenes in automatic driving, it records what the neural network was outputting at that moment, learns in reverse from the driver's corrective action, and optimizes itself.

In addition, Tesla runs a Shadow mode during normal driving: the system "trials" decisions in simulation, for instance assuming the car had changed lanes at a given moment (without actually doing it), then judging from what happens over the next few seconds whether that decision would have been right, thereby improving its accuracy.
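
A toy sketch of the shadow-mode idea (class names, actions, and grading rules are my own invention, not Tesla's telemetry): the counterfactual decision is logged but never executed, then graded against what actually happened a few seconds later.

```python
from dataclasses import dataclass

@dataclass
class ShadowRecord:
    t: float
    proposed: str     # what the network would have done (never executed)
    executed: str     # what the driver/production system actually did

def grade(rec: ShadowRecord, outcome_ok: bool) -> str:
    # Agreements confirm the model; disagreements become training signal,
    # weighted by whether the real outcome stayed safe.
    if rec.proposed == rec.executed:
        return "agree"
    return "safe_disagree" if outcome_ok else "risky_disagree"

log = [ShadowRecord(12.3, "change_lane_left", "keep_lane"),
       ShadowRecord(47.8, "keep_lane", "keep_lane")]
print([grade(r, outcome_ok=True) for r in log])
```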

As for the endlessly controversial lidar question, Musk believes the learning ability of neural networks, applied in other ways and paired with an enormous volume of real-world driving data, can fully achieve the same effect. Humans judge distance through the stereo perception produced by the overlap of the two eyes' images; even animals whose eyes' fields do not overlap obtain depth by cross-referencing the two images seen before and after moving, and from that determine distance.

Tesla builds a 3D model of the surroundings by cross-analyzing the images from its 8 cameras, supplemented by the forward millimeter-wave radar. With the radar's accurate distance measurements combined with the imagery, the overall spatial shape of an object can be judged, and, integrated together, the distance to it estimated in finer detail. The principle: even when an object is moving or viewed from different angles, its shape and size are essentially unchanged, so cross-analyzing image information about the same object across different frames, while holding its shape parameters fixed, permits an accurate 3D reconstruction.
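
The geometric core of that multi-frame cross-analysis is triangulation: the same point seen from two known camera poses pins down a 3D position. Below is a minimal two-view sketch using the standard direct linear transform (my own example with made-up camera matrices, not Tesla's pipeline).

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    # Each view contributes two linear constraints on the homogeneous 3D point.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]             # dehomogenize

# Two hypothetical cameras a meter apart, both looking down +Z.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.4, 0.1, 5.0, 1.0])
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate(P1, P2, x1, x2))  # recovers ~[0.4, 0.1, 5.0]
```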

Through these methods Tesla has continuously optimized and upgraded its neural network for more than half a year, improving the existing driving-assistance system while also shipping an advanced summon feature built on the same learning capability. (Every aspect of the AP experience has improved markedly: recognition now accurately locates roadside animals and traffic cones, and the lane-change and NOA functions act "confident" rather than hesitant.)

Most important of all, an AI system never tires no matter how long it studies, so it can even run many copies of itself and learn around the clock, a bit like Naruto training with his shadow clones. That learning speed is hard to even estimate.

Google's AlphaGo learned by playing against masters during its 2014 R&D phase and debuted in October 2015, defeating professional Go player Fan Hui on a full 19-line board with no handicap (the model at this point was simply AlphaGo). In March 2016 it defeated the Korean player Lee Sedol and was awarded an honorary professional ninth dan by the Korea Baduk Association (the model at this point was AlphaGo Lee). On July 18, 2016, AlphaGo ranked first in the world on the Go Ratings site, only to be overtaken by Ke Jie a few days later.

By early 2017, the further-strengthened AlphaGo Master had combined the advantages of different neural networks and revised its deep learning algorithm, raising its rating by 1,100 points over the previous version. In that period it played anonymously online against masters from China, South Korea, Japan, and Taiwan, winning 60 straight games. In May 2017, AlphaGo Master challenged Ke Jie, then the world's top-ranked player, while also playing pair Go alongside eight-dan players and a team match against five top nine-dan professionals; it won the main match 3:0 and took the pair and team games as well. Soon after, the further-enhanced AlphaGo Zero surpassed AlphaGo Master.

Autonomous driving, though, faces higher requirements: it is aimed at the mass-production market, and the cost, safety, and other issues to weigh are far more complicated, so it cannot move as fast as AlphaGo. Even so, the pace of its development, in safety as well as practicality, is astonishing.

Concluding remarks

For automatic driving, practicality and convenience come second; safety comes first. People's distrust of autonomous driving stems, at bottom, from not knowing the underlying technology or its current progress, and the unknown breeds fear.

Look at it from another angle. In an elevator, would you rather trust an old manual lift operated by a stranger, or a fully automatic one? As a pedestrian, would you rather trust a stranger piloting a two-ton "dangerous machine", or a demonstrably safer AI controlling the vehicle?

Bear in mind that the average human's driving skill is poor. Watch any compilation of traffic-accident footage and you will see that humans at the wheel are a veritable showcase of baffling behavior. Autonomous driving does not arrive to strip us of driving pleasure; it brings convenience to driving and to getting around, along with that science-fiction feeling of the movies stepping into reality.

At least the car of my childhood dreams was never a Ferrari or a Porsche, but KITT from "Knight Rider".
