With AI capability and supercomputing power, can Tesla's controversial self-driving technology reach a new level?
It is Tesla's (NASDAQ: TSLA) custom to flex its muscles around a specific theme one day every year. After showcasing battery technology last year, Tesla's theme this year is AI. On the evening of August 19 local time, Tesla's AI Day unveiled the latest progress on the Dojo supercomputer, the self-developed D1 AI training chip, the pure-vision approach to autonomous driving, and the Tesla Bot.
Overall, AI Day centered on autonomous driving. In the pure-vision scheme Tesla presented, the eight cameras mounted on the vehicle simultaneously handle tasks such as object detection, traffic-sign recognition, and lane prediction, and the vehicle builds a real-time 4D model of the lanes and surrounding environment while driving. The Dojo supercomputer and the self-developed D1 AI chip will serve Tesla's Autopilot system.
However, just days before AI Day, Tesla came under investigation by the National Highway Traffic Safety Administration (NHTSA). On August 16 local time, NHTSA said that, after a series of crashes involving emergency vehicles, it had opened a formal safety investigation into Tesla's Autopilot, covering about 765,000 Tesla vehicles in the United States.
Last year, Tesla's underwhelming Battery Day sent its share price down more than 6%, erasing some 200 billion in market value. This year, the investigation announced ahead of AI Day at one point sent the stock plunging more than 7%. The capital market's response to this AI Day was also muted: Tesla's shares closed down 2.3% in the previous trading session, and in pre-market trading on August 20 the stock rebounded only slightly, up 0.54%.
What new technologies did AI Day showcase?
On this AI Day, Tesla continued to refine its controversial pure-vision approach to autonomous driving.
One element is a structure called HydraNets, which uses neural networks to improve how camera images are processed. Tesla's pure-vision scheme works by taking raw images from the eight cameras on the vehicle body and running computer-vision computation on them.
Andrej Karpathy, Tesla's senior director of AI, said Tesla's FSD system had not been perfect in the past: monitoring from a single camera posed no problem, but the vector space it produced was insufficient. To address problems encountered with the Autopilot suite over the past few years, Tesla redesigned its neural-network learning and simplified the task with techniques such as multi-head routing, camera calibration, and caching. Achieving this image-processing capability requires running at least 50 neural networks at once, and the HydraNets structure was born of that need.
In short, the advantage of the HydraNets structure is a shared backbone (trunk), which speeds up both inference and training for the FSD system. Notably, Tesla does not rely on high-precision maps: its pure-vision scheme lets vehicles perceive and detect road conditions in real time and draw a simulated map from them. On the technical path to autonomous driving, there has long been a dispute between the pure-vision camp and the radar camp.
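The shared-trunk idea behind HydraNets can be illustrated with a minimal sketch. This is a conceptual toy in NumPy, not Tesla's actual architecture: the layer sizes, task names, and random weights are all illustrative assumptions. The point it demonstrates is that one expensive feature extractor is computed once per frame and reused by every task head.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Shared "trunk": one feature extractor whose output feeds every task head.
# Dimensions are illustrative, not Tesla's.
W_trunk = rng.normal(size=(64, 16))   # 64-dim camera features -> 16-dim shared features

# Task-specific heads reuse the same trunk output, so the expensive
# feature extraction runs once per frame instead of once per task.
heads = {
    "object_detection": rng.normal(size=(16, 4)),
    "traffic_signs":    rng.normal(size=(16, 8)),
    "lane_prediction":  rng.normal(size=(16, 2)),
}

def hydranet_forward(camera_features):
    shared = relu(camera_features @ W_trunk)   # computed once
    return {task: shared @ W for task, W in heads.items()}

frame = rng.normal(size=(1, 64))               # stand-in for fused camera input
outputs = hydranet_forward(frame)
for task, out in outputs.items():
    print(task, out.shape)
```

In a real multi-task network the trunk would be a deep convolutional or transformer backbone and each head its own subnetwork, but the sharing pattern is the same: the per-task cost is only the head, which is why inference and training both speed up.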
In May this year, Tesla announced that it would drop radar sensors and adopt a purely visual perception system for Autopilot on Model 3 and Model Y vehicles in the North American market. This sets it apart from the combinations of lidar, millimeter-wave radar, and cameras adopted by other companies.
Tesla argues that the pure-vision scheme suffers fewer interfering signals, so the data the system collects is "cleaner", which benefits neural-network learning. Moreover, the new models carry eight cameras whose coverage fully surrounds the body, so the safety of the autonomous-driving system will not degrade from the loss of radar.
But Tesla's explanation has not convinced everyone. Raj Rajkumar, a professor of electrical and computer engineering at Carnegie Mellon University, said one consensus in the autonomous-driving industry is that different kinds of detectors should be used together, with the information they provide aggregated and fused. Steven Shladover, a researcher at the University of California, Berkeley, believes that removing millimeter-wave radar makes driving in extreme weather very dangerous.
The self-developed D1 AI training chip and the Dojo ExaPOD AI supercomputer released on AI Day may be Tesla's answer to those questions. Improving the performance of an autonomous-driving system requires training and optimization on vast amounts of data. Musk has said publicly that unless a company has strong AI capability and supercomputing power, it will struggle to solve autonomous driving.
The Dojo ExaPOD integrates 120 training tiles with 3,000 built-in D1 chips and more than 1 million training nodes, for a total compute of 1.1 EFLOPS (one exaflop is 10^18 floating-point operations per second). In Tesla's words, "this is the world's fastest AI training computer". Tesla expects the next generation of the product to bring a more-than-tenfold improvement.
The D1 chip built into the Dojo ExaPOD is self-developed by Tesla and is the core of the system. It is manufactured on a 7 nm process, and a single chip reaches 22.6 TFLOPS at FP32 and 362 TFLOPS at BF16. Tesla positions the D1 as a chip for AI learning, calling it a "pure learning machine". Multiple D1 chips can also be combined into a training tile for large-scale computation.
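The chip-level and system-level figures quoted above are consistent with each other, which a quick back-of-the-envelope check confirms:

```python
# Sanity check of the ExaPOD figures quoted above:
# 3,000 D1 chips x 362 TFLOPS (BF16) each vs. the claimed ~1.1 EFLOPS total.
chips = 3000
bf16_tflops_per_chip = 362             # 1 TFLOPS = 1e12 FLOPS

total_eflops = chips * bf16_tflops_per_chip * 1e12 / 1e18
print(f"{total_eflops:.3f} EFLOPS")    # ~1.086, matching the ~1.1 EFLOPS claim
```

So the headline 1.1 EFLOPS number is simply the aggregate BF16 throughput of all 3,000 chips, rounded up; it does not account for interconnect or utilization losses in a real training run.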
Musk said that because of the system's high development cost, Tesla is unlikely to open-source Dojo or its self-developed AI chips, but he is open to licensing the artificial-intelligence technology to other automakers. There is no doubt that the self-developed D1 chip and the Dojo ExaPOD supercomputer will be an important boost to Tesla's autonomous-driving technology. According to Musk's plan, the Dojo project will go into operation next year.
As on every theme day, Tesla saved an Easter egg, its "one more thing": this year it was the Tesla Bot, a humanoid robot. The robot stands 5 feet 8 inches tall (about 172 cm), weighs 125 pounds (about 56.7 kg), and can carry 45 pounds (about 20 kg). It is equipped with Tesla's various software and hardware systems, and to achieve balance and agility its limbs use 40 electromechanical actuators.
"Tesla is much more than an electric car company," Musk said. He did not disclose what specific tasks the Tesla Bot can perform, but said the robot would free people from dangerous, repetitive, and boring work, and that a prototype is planned for next year.
Controversial autonomous driving
As a pioneer of the new-energy-vehicle industry, Tesla's moves have always been a bellwether for the whole sector. And its foundation, driver-assistance technology, keeps drawing both praise and doubt in the public eye.
On August 16 local time, the National Highway Traffic Safety Administration (NHTSA) said that after a series of crashes involving emergency vehicles, it had launched a formal safety investigation into Tesla's Autopilot system. The probe covers about 765,000 Tesla vehicles in the United States, nearly every model Tesla has sold there since 2014, including the Model Y, X, S, and 3.
Regulatory pressure from China is also mounting. On the day of Tesla's AI Day, the Cyberspace Administration of China, the Ministry of Public Security, and the Ministry of Transport jointly issued the Several Provisions on Automobile Data Security Management (Trial). A few days earlier, on August 12, the Ministry of Industry and Information Technology had issued the Opinions on Strengthening Access Management for Intelligent Connected Vehicle Manufacturers and Products.
This is not the first time Tesla has faced scrutiny related to Autopilot.
According to foreign media reports, in May this year the California Department of Motor Vehicles said that Tesla's naming of its driver-assistance system FSD (Full Self-Driving) was suspected of false advertising, and decided to open a review.
Tesla has emerged unscathed from previous regulatory investigations. An NHTSA investigation concluded in 2020 found no safety defect in Tesla's models: the 246 reported sudden-acceleration incidents were caused by improper pedal use. "There is no evidence of any faults in the accelerator pedal assembly, motor control system or braking system that led to the above events," the agency found, and "no evidence that design factors increase the likelihood of pedal misuse."
In January 2017, NHTSA concluded a seven-month investigation into the Autopilot function of Tesla's driver-assistance system. It found no defects in Autopilot's design or performance, and no incidents in which the system failed to operate as designed.
Tesla's latest safety report, covering the first quarter of 2021, shows that with Autopilot engaged, one traffic accident was reported on average every 6.74 million km driven. By NHTSA's latest figures, a collision occurs on average every 780,000 km on US roads. Comparing 6.74 million km with 780,000 km, Autopilot's reported accident rate is only about one-ninth of the national average.
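The ratio implied by those two figures is straightforward to compute:

```python
# Ratio implied by the figures above: Autopilot reports ~1 accident per
# 6.74 million km, while the US average (per NHTSA) is ~1 collision per
# 780,000 km.
autopilot_km_per_accident = 6.74e6
us_average_km_per_accident = 7.8e5

ratio = autopilot_km_per_accident / us_average_km_per_accident
print(f"US average accident rate is about {ratio:.1f}x the Autopilot rate")
```

The ratio works out to roughly 8.6, i.e. Autopilot's reported rate is about one-ninth of the national average. Note the two figures come from different sources with different counting methods (Tesla's self-reported data vs. NHTSA's national statistics), so the comparison is indicative rather than rigorous.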
While undergoing external scrutiny, Tesla's autonomous-driving technology route has also been shifting.
Before turning to the pure-vision approach, Tesla was itself a supporter of radar. Musk said in 2016 that millimeter-wave radar can accurately judge the distance between objects and the vehicle, undeterred by rain, snow, or haze. He believed that with millimeter-wave radar, the vehicle's sensing system could identify surrounding vehicles, pedestrians, or obstacles earlier and more accurately, leaving the system time and distance to bring the car to a stop.
That attention to millimeter-wave radar stemmed from a crash in May of that year, when a Tesla Model S owner had Autopilot engaged. Because the sensing system failed to recognize a white semi-trailer ahead, the automatic emergency braking (AEB) function did not trigger; the car drove under the trailer and the owner died on the spot. After the accident, Tesla's mainstream models were fitted with millimeter-wave radar to strengthen distance detection between objects and the vehicle. Yet accidents related to automated driving have continued to occur.
In many of the reported accidents, Tesla vehicles crashed directly into stationary semi-trailers, police cars, fire engines, and other vehicles. Failure to recognize stationary vehicles is a common scenario, and a common problem facing L2 driver-assistance systems in general.
In this context, with millimeter-wave radar unable to guarantee absolute safety either, Tesla's decision to drop radar and specialize in a pure-vision scheme has inevitably drawn questions. Yet the pure-vision approach is not groundless: if a car's cameras could truly match human perception and judgment, and the algorithms were powerful enough, then in theory fully driverless operation could be achieved with vision alone. The starting point is sound, but can machines really observe and think exactly like humans? The difficulty now is whether cameras alone can match the efficiency of the human eye, and whether the information they capture is enough to support the necessary judgments.
Controversy coexists with sales. As of July 2021, Tesla had more than 1 million vehicles on the road worldwide. With that much driving data, plus AI and strong computing power, every step Tesla takes toward autonomous driving will be watched all the more closely.