Will artificial intelligence be attacked by hackers? Should we fear it, or embrace it?


liu, tempo Date: 2021-07-27 10:36:05 From:ozmca.com

When it comes to artificial intelligence, people have a classic fear:

 

“Once artificial intelligence matures, it will take everyone’s jobs; if it keeps developing, it will eventually enslave mankind.”

There is a short story that may ease that worry:

In London, the replacement of horse-drawn carriages by cars took decades. The city’s 100,000 coachmen sighed through the night and marched in the streets, worried they would lose their jobs. In the end, most of them became taxi drivers, and the end of the world never came.

 

As for the “AI threat” theory, my judgment has always been: “Those who worry about being robbed of their jobs by artificial intelligence will first be robbed of their jobs by other people; those who worry about being enslaved by artificial intelligence will first become vassals of other people.”

 

What I want to discuss today is another, more practical fear: will artificial intelligence itself have security problems?

 

In 2017, Baidu CEO Robin Li took a driverless car onto Beijing’s Fifth Ring Road and promptly earned a ticket for traffic violations. Although it was later shown that the “crossing solid lines” and “failing to signal” maneuvers were the human driver’s doing, the episode still gave many onlookers a small thrill.

 


 

In the future, artificial intelligence will drive for me and treat my illnesses. If it is hacked into doing something “stupid”, the consequences will be very serious. Let your imagination run for a moment:

 

• If the “brain” of an autonomous car is hacked, the Fourth Ring Road could instantly gain a thousand runaway “wild horses”, with their passengers cast in a live performance of Fast & Furious 8.

 

• If the program of a surgical “robot doctor” is tampered with, what was planned as a one-centimeter minimally invasive incision could end up as a one-meter-long opening...

Will this actually happen?

 

1. Do tools that are inherently safe exist?

 

Following my usual approach, to answer whether artificial intelligence is safe, we first have to define what kind of thing it is:

 

Whether a vehicle is human-driven or driverless, it is essentially a “tool”, like a hammer or a lighter. Because our understanding of the world advances gradually, the safety of our tools also improves gradually. Once a tool’s benefits clearly outweigh its drawbacks, it becomes widely adopted. A few examples:

 

1. The United States outlawed drunk driving in 1910, but it was not until 1968 that drivers were required by law to wear seat belts. Over a century of automobile adoption, accident rates declined slowly; if I had to score car safety today, I would give it about 90 out of 100.

 

2. The Internet, widely regarded as the greatest tool ever built, also went through a decade of extremely poor security. Around 2000, as the Internet spread, a string of viruses appeared: Code Red, Sasser, Panda Burning Incense. A virus could occupy computers all over the world in little more than ten hours, far more terrifying than the locust plague of 1942. After years of development, network security today has probably reached about 80 out of 100.

 

3. Artificial intelligence will inevitably face the same fate. History does not repeat itself, but it does rhyme. Rather than praying that artificial intelligence arrives perfect, it is better to study how to make it safe.

 

At this point it is time for today’s protagonist, the Internet giant Baidu, to take the stage. Having bet heavily on artificial intelligence two years ago, Baidu already leads on this track. The world is quite fair: whoever runs fastest hits the potholes first. As a result, Baidu has also accumulated a great deal of experience in “artificial intelligence security”.

 

Not long ago, I met Ma Jie, General Manager of Baidu’s Security Business Department, and Han Zuli, General Manager of Baidu’s Security Products Department. After talking with them, I came to believe two things:

 

1. The next artificial intelligence product to reach mass adoption may well come from Baidu;

 

2. Only a product with a large user base exposes problems and draws widespread concern, so Baidu may also be the first to run into, and set out to solve, artificial intelligence security problems.

 

So, as far as I can see, what specific risks does artificial intelligence face?

 

2. What are the security issues of artificial intelligence?

 

Han Zuli divides the problems faced by artificial intelligence into five major categories.

 

1. Sensor deception

 

At a 2017 hacking contest, Xiao Huihui of Baidu Security Lab used a face printed on a sheet of A4 paper to fool a face-unlock device; the same A4-printing trick can also defeat iris recognition, and even the “finger vein recognition” technology used in social security systems.

 

Face recognition, iris recognition, fingerprint recognition, and finger vein recognition are actually based on an important branch of artificial intelligence: image recognition technology.

 

Xiao Huihui told me: “The principle behind all of these recognition technologies is to collect information through a sensor and then feed it into an algorithm. So as long as we know what the sensor expects to ‘see’, we can fake something and show it to the sensor, and it is easily deceived.”

 

In Han Zuli’s eyes, this is the first problem facing artificial intelligence: “sensor deception”.

 

“Any single verification method, such as face recognition alone or iris recognition alone, has known bypasses. So the industry’s current solution is ‘multi-factor authentication’: use two or more authentication methods at the same time. Deceiving two methods at once is much harder,” he says.
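
To make the idea concrete, here is a minimal sketch of multi-factor authentication logic in Python. The two recognizer functions are hypothetical stand-ins (the article does not describe Baidu’s actual implementation); the point is simply that both factors must pass independently before access is granted.

```python
import random


def face_match_score(image) -> float:
    """Hypothetical stand-in for a face-recognition model; returns a confidence in [0, 1]."""
    return random.random()  # placeholder score, not a real model


def fingerprint_match_score(scan) -> float:
    """Hypothetical stand-in for a fingerprint matcher; returns a confidence in [0, 1]."""
    return random.random()  # placeholder score, not a real model


def authenticate(image, scan, face_threshold=0.95, finger_threshold=0.95) -> bool:
    # Both factors must pass independently: spoofing one sensor
    # (e.g. a face printed on A4 paper) is no longer enough on its own.
    return (face_match_score(image) >= face_threshold
            and fingerprint_match_score(scan) >= finger_threshold)


if __name__ == "__main__":
    print("access granted:", authenticate(image=None, scan=None))
```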

 

2. Software defects

 

The famous TensorFlow is a machine-learning framework released by Google; anyone can use it to build their own artificial intelligence applications.

 

But what if the framework itself has vulnerabilities?

 

Han Zuli said: “TensorFlow has 8.87 million lines of code. On top of that, running it requires calling roughly 100 dependent libraries, many of which are ‘antiques’ from years ago. That much code is far beyond what manual auditing can cover, so there are bound to be vulnerabilities in it.”

 

The table above lists vulnerabilities found in the major artificial intelligence platforms. Make no mistake: they all have flaws.

 

For this kind of vulnerability there is no silver bullet; the only approach is to keep auditing the code and hunting for problems. That is the job of security researchers, and it is at least what Baidu does for its own PaddlePaddle artificial intelligence platform.
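
As a rough illustration of what such continuous auditing can look like at the dependency level, here is a small Python sketch that compares installed package versions against a hand-written table of supposedly vulnerable releases. The package names and versions below are made up for illustration; a real audit would draw on curated CVE advisories and, of course, review the framework’s own code as well.

```python
from importlib.metadata import PackageNotFoundError, version

# Hypothetical advisory data, for illustration only -- not real CVE entries.
KNOWN_VULNERABLE = {
    "numpy": {"1.16.0", "1.16.1"},
    "protobuf": {"3.6.0"},
}


def audit(package_names):
    """Return (name, installed_version) pairs that match a known-bad release."""
    findings = []
    for name in package_names:
        try:
            installed = version(name)
        except PackageNotFoundError:
            continue  # not installed in this environment, nothing to check
        if installed in KNOWN_VULNERABLE.get(name, set()):
            findings.append((name, installed))
    return findings


if __name__ == "__main__":
    for name, ver in audit(["numpy", "protobuf", "requests"]):
        print(f"{name} {ver} matches a known-vulnerable release")
```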

 

3. Data poisoning

 

Data poisoning is the most thrilling part of AI offense and defense, and the part closest to what people imagine.

 

Dawn Song, a well-known artificial intelligence expert at the University of California, Berkeley, is a representative figure here. She stuck a few pieces of tape on a “STOP” sign. To a human it looks like nothing at all, but to the artificial intelligence of a self-driving car, the sign now reads as a 45 km/h speed-limit sign.

 

Interference that looks trivial to a human completely fools the machine’s recognition. If the car cannot recognize a stop sign, the consequences can be disastrous.

 

Imagine your own self-driving car meeting such a sign: it would sail straight through without hesitation, giving you the full experience of the ride from ambulance to hospital.

 

Han Zuli said this is a textbook case of “adversarial data”.
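
For readers who want to see the mechanics, below is a minimal sketch of how adversarial inputs are commonly generated, using the textbook fast gradient sign method (FGSM) in PyTorch. This is a generic illustration of the idea, not the stop-sign attack described above; the tiny linear model exists only so the example runs.

```python
import torch


def fgsm_perturb(model, x, y, loss_fn, epsilon=0.01):
    """Return x nudged by a small perturbation that increases the model's loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()
    # Step each input value slightly in the direction that increases the loss.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()


if __name__ == "__main__":
    model = torch.nn.Linear(4, 2)   # toy stand-in for an image classifier
    x = torch.rand(1, 4)            # toy "image"
    y = torch.tensor([1])           # its true label
    x_adv = fgsm_perturb(model, x, y, torch.nn.CrossEntropyLoss())
    print("largest per-pixel change:", (x_adv - x).abs().max().item())
```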

 

There are many variations on the same trick. A team in Japan, for example, showed that changing a single pixel in an image can make the recognition go wildly wrong, literally calling a deer a horse. (Incidentally, the system they targeted was the highly capable Face++.)

 

Another example: if you owed a friend 200,000 and had no intention of paying it back, you used to need plastic surgery to avoid being recognized. To fool face recognition, however, all you need is a special pair of light-emitting glasses; put them on and the system identifies you as someone else.

Although we call it artificial intelligence, today’s technology is nowhere near a complete, human-level understanding of the world. Many tricks that look laughably crude to a person can easily knock it over.

 

Everything above attacks an artificial intelligence that is already “grown up”. Some go a step further: while the artificial intelligence is still being trained, they mix “toxic” samples into the training data, so the resulting model is born with defects.

 

This is “data set poisoning”.
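
A toy sketch of the idea, assuming a dataset of (features, label) pairs: an attacker with access to the training data quietly relabels a small fraction of one class, and any model trained on it inherits the defect. The function and label names here are my own, purely for illustration.

```python
import random


def poison_labels(dataset, target_label, new_label, fraction=0.05, seed=0):
    """Flip `fraction` of the examples carrying `target_label` to `new_label`."""
    rng = random.Random(seed)
    poisoned = []
    for features, label in dataset:
        if label == target_label and rng.random() < fraction:
            label = new_label  # the quietly injected "toxic" sample
        poisoned.append((features, label))
    return poisoned


if __name__ == "__main__":
    clean = [([i, i + 1], "stop_sign") for i in range(100)]
    dirty = poison_labels(clean, "stop_sign", "speed_limit", fraction=0.05)
    flipped = sum(1 for _, label in dirty if label == "speed_limit")
    print("poisoned samples:", flipped)
```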

 

After all, today’s artificial intelligence is like a child, and children are often very easy to deceive.

 

Data poisoning reminds me of the snake spirit in the Calabash Brothers cartoon watering the gourds with poison, so that the seven gourd babies that emerge are born flawed.

 

4. System security

 

An artificial intelligence system has to run on top of an operating system, and every year white hats find large numbers of vulnerabilities in those operating systems, which then receive a string of patches. The WannaCry outbreak a while back did its damage precisely on unpatched computers.

 

This applies especially to smart hardware: if its operating system has vulnerabilities that are not patched in time, it can easily end up under a hacker’s control.
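
The defensive check implied here is mundane but essential: compare each device’s reported kernel or firmware version against the earliest version known to contain the fix. A tiny sketch, with made-up version numbers:

```python
def is_patched(device_version: str, minimum_fixed: str) -> bool:
    """True if the device's version is at or above the first patched release."""
    def parts(v: str):
        return [int(p) for p in v.split(".")]
    return parts(device_version) >= parts(minimum_fixed)


if __name__ == "__main__":
    # Illustrative version strings only, not drawn from a real advisory.
    print(is_patched("4.9.77", "4.9.80"))   # False: still exposed, needs the patch
    print(is_patched("4.14.2", "4.9.80"))   # True: already contains the fix
```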

 

5. Cybersecurity

 

Like system security, network security is not a threat unique to artificial intelligence: as long as data travels over a network, there is a chance hackers will intercept it.
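
The standard mitigation is equally unglamorous: encrypt the traffic and verify who you are talking to. A minimal Python sketch, assuming a hypothetical telemetry endpoint, that only sends data over TLS with certificate verification enabled:

```python
import ssl
import urllib.request


def send_telemetry(payload: bytes, url: str = "https://example.com/telemetry") -> int:
    """POST the payload over TLS; the default SSL context verifies the server certificate."""
    if not url.startswith("https://"):
        raise ValueError("refusing to send telemetry over plaintext HTTP")
    context = ssl.create_default_context()  # enables certificate and hostname checks
    request = urllib.request.Request(url, data=payload, method="POST")
    with urllib.request.urlopen(request, context=context) as response:
        return response.status
```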

 

You may feel there are an awful lot of angles and postures from which to take down an artificial intelligence. In fact, all of security is like this: the defender must hold every inch of the wall, while the attacker only needs to find one point to break through; it is the textbook difference between fighting on interior lines and on exterior lines. Unfortunately, because artificial intelligence sits at the top of the technology stack, it inherits every threat faced by the layers below it and adds unique threats of its own.

 

3. Will Baidu’s artificial intelligence be attacked?

 

There are many ways to attack artificial intelligence, yet in most cases these attacks will not happen in reality. For a hacker to attack a system, one basic ingredient is required: motive. And in this day and age, the motive for doing bad things is usually money.

 

I keep emphasizing the idea of a tipping point. For example: once a product’s user base grows past a certain threshold, the product becomes valuable in an attacker’s eyes. So what Baidu should focus on are the products that already have, or soon will have, large numbers of users.

 

As I see it, Baidu currently has two artificial intelligence products running at the front of the pack: the DuerOS artificial intelligence operating system and the Apollo autonomous driving platform. Sure enough, Ma Jie, general manager of Baidu’s security business, devotes most of his attention to these two products.

 

1. Let’s talk about DuerOS first

 

DuerOS is an artificial intelligence operating system embedded in all kinds of smart hardware; the Raven speakers and Fill headphones, for example, both ship with DuerOS inside.

 

Devices like smart air purifiers, smart speakers, and household robots carry plenty of sensors, microphones and cameras among them, and they usually sit in the living room or even the bedroom. If hackers took control of them and uploaded a little audio or video, it would be, to put it mildly, embarrassing...

 

But preventing that takes far more than securing DuerOS itself.

 

DuerOS already has partners numbering in the thousands, and the hardware is usually built by those partners. In principle, Baidu is only the supplier of the artificial intelligence system; it cannot control the hardware security or operating-system security of its partners’ devices.

“But any problem with a device hurts Baidu’s brand. So we have to do our best to help partners get the rest of their security right,” Ma Jie said.

 

So what have they done to avoid being left holding the pot?

 

Anyone who has put together a party to fight monsters in a game knows that to beat what you cannot handle alone, you generally need an “alliance”. After more than half a year of preparation, Baidu led the launch of the “AI Alliance” at the end of 2017.

 

The alliance mainly promotes a set of technical standards covering five technologies: a hardened Linux kernel, hot-patching, secure communication protocols, an identity authentication mechanism, and automatic attack interception. Together these map almost one-to-one onto the “five threats to artificial intelligence” described in the second part of this article. Many of these technologies are the handiwork of the “X-Team” led by Baidu’s chief security scientist Wei Tao. (If you want to know more about Wei Tao, see my earlier article “Wei Tao: From Memory War to Black Product War”.)
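
To give one of those items some shape: an identity authentication mechanism for smart hardware often boils down to each device signing its messages with a provisioned secret, so the cloud can reject traffic from unknown devices. The sketch below is a deliberately simplified illustration of that idea, not the AI Alliance’s actual design, and the key handling is naive on purpose.

```python
import hashlib
import hmac
import secrets

DEVICE_KEY = secrets.token_bytes(32)  # in practice, provisioned per device at manufacture


def sign(message: bytes, key: bytes = DEVICE_KEY) -> bytes:
    """Compute an HMAC-SHA256 tag the server can use to authenticate the device."""
    return hmac.new(key, message, hashlib.sha256).digest()


def verify(message: bytes, tag: bytes, key: bytes = DEVICE_KEY) -> bool:
    """Constant-time comparison so an attacker cannot learn the tag byte by byte."""
    return hmac.compare_digest(sign(message, key), tag)


if __name__ == "__main__":
    msg = b"speaker-123: volume=7"
    tag = sign(msg)
    print(verify(msg, tag))          # True: message accepted
    print(verify(b"tampered", tag))  # False: message rejected
```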

 

That is a lot of jargon for one paragraph. The short version: Baidu has assembled a complete artificial intelligence security solution that ships along with DuerOS. Given how weak most smart-hardware makers are at security, it would not be an exaggeration to call it a gift. In Ma Jie’s words, it is a “mandatory bonus”: if you use DuerOS, you must adopt a security solution certified by the AI Alliance. It saves everyone time and effort, and everyone is happy.

 

Seen this way, hardware running DuerOS at least comes with a guaranteed security baseline.

 

2. Let’s talk about Apollo

 

Li Yanhong has said that mass production of driverless vehicles based on the Apollo platform is slated for 2019.

 

Apollo, an open-source autonomous driving system, is iterating version by version (if you read code, you can follow its progress on GitHub), and the run-up to every new release is late-night season for the Baidu Security team, because they are the ones responsible for reviewing the code and hunting for vulnerabilities in it.

 

This is “software security”.

 

Everyone likes a veteran driver because they are “steady”. In the world of autonomous driving, the driver is the artificial intelligence itself; if Apollo is not “steady”, who would dare let it drive? Frankly, the safety of a driverless car matters even more than whether the purifier in your bedroom is uploading recordings.

 

Autonomous vehicles are widely seen as the first battleground for artificial intelligence. And realistically, whichever company’s driverless car it is, its safety problems will only really be exposed once it becomes popular.

 

At this stage even Baidu’s driverless cars, which sit in the industry’s top tier, are still under development, so mapping the scenes of Fast & Furious 8 straight onto them is a bit divorced from reality. If you ask me exactly which attack techniques the car’s artificial intelligence will face, the most honest answer is: I don’t know yet.

 

You could say the security challenge for driverless cars is even greater, but its time window has not yet opened.

 

4. What is the right way to be afraid?

 

As mentioned earlier, many of artificial intelligence’s specific risks are not yet known, and the unknown breeds fear.

 

Should we indulge that fear?

 

At the beginning of this article I mentioned the story of cars replacing horse-drawn carriages. There is more to that story:

 

When the car was first invented, roughly 150 years ago, it was regarded as a monster.

 

At the time, horse-drawn carriages filled the streets of London, and a car darting out from a side street would often test the fragile nerves of the horses; frightened horses caused many accidents. The safety of the car itself was also worrying. Open a British newspaper of the day and you might find a cartoon of a car exploding, its passengers flung into the air.

 

People were not about to let this dangerous contraption replace London’s 100,000 elegant, lovable horses.

 

Heeding the public mood, the United Kingdom enforced the “Red Flag Act” in 1865, limiting cars to 4 miles per hour in the countryside and 2 miles per hour in town (you read that right: slower than walking), and requiring a person to walk ahead of every car waving a red flag to remind everyone that “danger is near”.

 

A street scene in London before the “Red Flag Act” was enacted.

 

You see, the ordinary cars now seen everywhere were once treated as a scourge, and the fears of that era look like a joke to later generations.

 

Following the technology closely as it lands in the real world, and researching it continuously, is the most responsible attitude toward artificial intelligence safety. As Ma Jie put it: “The most terrible problems are the ones you are not aware of. As long as you are aware of a problem, you will eventually be able to solve it.”

 

Besides Baidu Security’s own staff, many white hats gathered around BSRC (the Baidu Security Emergency Response Center) are tirelessly helping Baidu find security problems. Cooperation with white hats is therefore also an important part of Baidu’s artificial intelligence security.

 

To be honest, even for professional hackers like these white hats, artificial intelligence is relatively unfamiliar technology. To study its security, you first have to understand it.

 

In Ma Jie’s mind, security researchers have a “skill tree”. All kinds of fundamentals are needed to find vulnerabilities: CPU architecture and operating-system kernels, for instance, are required knowledge for top-tier white hats. Artificial intelligence is likely to become the next item on that skill tree.

 

“Whether you use artificial intelligence to find vulnerabilities or look for vulnerabilities in artificial intelligence, you first need a thorough understanding of it. And with that kind of learning, the sooner you start, the better,” he said.

 

At the recently concluded BSRC annual ceremony, Baidu Security also handed out awards to outstanding white hats. Word is that Baidu’s year-end rewards for white hats reached one million yuan this year, which is no small sum.

 

When it comes to new technology, our fear may change nothing in the end, just as the “dangerous” car eventually found its way into everyone’s garage:

 

At the end of the 19th century, Karl Benz invented the internal-combustion car. Once, he was driving local officials down the road. Because of legal restrictions similar to the “Red Flag Act”, cars in Germany could not exceed 6 kilometers per hour, so they could only crawl along at a turtle’s pace.

 

Suddenly a horse-drawn carriage came up from behind and overtook them, the coachman turning around to laugh at them loudly.

The official was furious and shouted at Benz: “Catch up with them!”

Benz pretended to be helpless: “But the government has regulations…”

“Don’t worry about the rules! I am the rules! Chase!” the official shouted.

Hearing this, Benz stepped on the accelerator. Their car shot past the carriage, and it has never been overtaken since.

Everything Ma Jie said boils down to one phrase:

Instead of fear, embrace.
