When anyone can swap faces and seeing is no longer believing: how can we ensure the safety of artificial intelligence?


liu, tempo Date: 2021-09-17 10:53:23 From:ozmca.com

2021 artificial intelligence security: committee members and experts held consultations on "scientific and technological ethics and legal issues in the development of artificial intelligence." They concluded that artificial intelligence technology penetrates widely and is highly disruptive, bringing more complex and diverse ethical and legal issues and potential risks that deserve close attention. The security of artificial intelligence has therefore drawn attention once again.

 

How much do you know about AI security? Let's take a look.

Technical risks of artificial intelligence security

The inherent characteristics of artificial intelligence algorithms, especially deep neural networks, such as poor interpretability, high-dimensional linearity and heavy dependence on data, mean that artificial intelligence systems carry unavoidable security defects and are vulnerable to targeted attacks. As a result, the further development and application of artificial intelligence in critical fields such as autonomous driving, financial decision-making and medical diagnosis still face significant obstacles.

 

Algorithms have technical vulnerabilities

At present, artificial intelligence is still at the stage of being "fed" massive data. Algorithms represented by deep neural networks have yet to overcome technical limits such as weak robustness (they are fragile, unsafe and unstable), poor interpretability (the "black box" nature of the algorithm means the system cannot be fully understood during application), and bias and discrimination (racial discrimination and bias between rich and poor have appeared in foreign AI systems).

 

New types of security attacks are emerging

In recent years, new security attacks that undermine the confidentiality, integrity and availability of AI algorithms and data have emerged rapidly: adversarial example attacks (the attacker adds subtle perturbations to a model's input that are hard for human senses to detect, causing the model to make wrong classification decisions), algorithm backdoor attacks (backdoors are inserted into the model's training data so that it becomes sensitive to specific triggers and produces the wrong behavior specified by the attacker), model theft attacks, model inversion attacks, and others.
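
To make the backdoor idea concrete, below is a minimal, hypothetical sketch of how a training-set backdoor could be planted: a small trigger patch is stamped onto a fraction of the training images and those samples are relabeled as an attacker-chosen class. The data, shapes and the 4x4 trigger are assumptions made purely for illustration, not code from any real attack.

```python
# A minimal sketch of planting a training-set backdoor (toy data only).
import numpy as np

def plant_backdoor(images, labels, target_class, poison_rate=0.05, seed=0):
    """Stamp a small white trigger patch onto a fraction of the training
    images and relabel them as the attacker-chosen target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i, -4:, -4:] = 1.0   # 4x4 trigger in the bottom-right corner
        labels[i] = target_class    # force the attacker's label
    return images, labels, idx

# Toy usage: 1000 fake 32x32 grayscale images with 10 classes.
X = np.random.rand(1000, 32, 32)
y = np.random.randint(0, 10, size=1000)
X_poisoned, y_poisoned, poisoned_idx = plant_backdoor(X, y, target_class=7)
print(f"{len(poisoned_idx)} of {len(X)} samples carry the trigger")
```

A model trained on such data behaves normally on clean inputs but switches to the attacker's chosen class whenever the trigger appears, which is why this class of attack is hard to detect by accuracy testing alone.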

 

Algorithm design and implementation can deviate from expectations

The design and implementation of an artificial intelligence algorithm may fail to achieve the designer's intended goal, resulting in uncontrolled behavior that deviates from expectations. For example, the designer may define the wrong objective function for the algorithm, causing it to harm its surrounding environment while performing its task.
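
The toy sketch below illustrates this kind of mis-specified objective: the designer intended to reward speed and penalize damage, but the implemented objective omits the damage term, so optimization selects the harmful behavior. The behaviors and weights are hypothetical values chosen only for illustration.

```python
# A toy illustration of a mis-specified objective function.
candidate_behaviours = [
    {"name": "careful",  "speed": 0.6, "damage": 0.0},
    {"name": "reckless", "speed": 1.0, "damage": 0.9},
]

def intended_objective(b):
    # What the designer actually wanted: fast AND safe.
    return b["speed"] - 2.0 * b["damage"]

def implemented_objective(b):
    # What was actually coded: the damage penalty was omitted.
    return b["speed"]

best_as_coded = max(candidate_behaviours, key=implemented_objective)
best_as_intended = max(candidate_behaviours, key=intended_objective)
print("optimiser picks:", best_as_coded["name"])      # reckless
print("designer wanted:", best_as_intended["name"])   # careful
```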

 

2021 artificial intelligence security

 

Social risks of artificial intelligence security

As various artificial intelligence application scenarios continue to be deployed, any leakage, abuse or misuse of data and algorithms will bring a series of unknown risks and hidden dangers.

 

Personal information security

Artificial intelligence technology has strengthened data mining and analysis, and the collection of personal information has become more precise, comprehensive, convenient and covert. As information with strong personal attributes such as faces, fingerprints, voiceprints, irises, heartbeats and genes is collected, analyzed and used in ever more application scenarios, any leakage or abuse has serious consequences. For example, criminals have packaged and sold face information on e-commerce platforms for as little as 0.5 yuan per record, and the stolen face data has then been used for illegal activities such as fraudulent account registration and telecom network fraud.

 

Social and public security

From industries built around AI chips, cutting-edge algorithms, autonomous driving, intelligent robots and AI + 5G, to application scenarios such as AI + education, AI + finance, AI + healthcare, AI + industry and intelligent driving, artificial intelligence has penetrated every field. However, "data poisoning," algorithm errors and the abuse of artificial intelligence, such as deepfake technology, may threaten ethics, the social economy and more.

 

For example, in autonomous driving, after a carefully designed noise perturbation is applied to a physical stop sign, the object detection system may no longer identify the sign correctly, which can cause traffic accidents.
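
The kind of perturbation described above can be sketched in its simplest digital form with the Fast Gradient Sign Method (FGSM), which nudges every pixel slightly in the direction that increases the classifier's loss. The toy classifier, the random stand-in "sign" image and the epsilon value below are assumptions for illustration only; real physical stop-sign attacks are considerably more elaborate.

```python
# A minimal FGSM-style sketch of an adversarial perturbation (toy model).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # toy classifier
loss_fn = nn.CrossEntropyLoss()

def fgsm_perturb(x, true_label, epsilon=0.03):
    """Return x plus a small perturbation in the direction that
    increases the classification loss (Fast Gradient Sign Method)."""
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), true_label)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

x = torch.rand(1, 3, 32, 32)   # a stand-in "stop sign" image
y = torch.tensor([0])          # its correct class index
x_adv = fgsm_perturb(x, y)
print("largest pixel change:", (x_adv - x).abs().max().item())  # bounded by epsilon
```

Each pixel changes by at most epsilon, which is why such perturbations can be nearly invisible to human eyes while still flipping the model's prediction.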

 

Furthermore, deepfake technology poses a serious threat to social stability. Criminals can use it to produce high-fidelity audio and video of serious violent crimes such as murder, kidnapping and bombing in order to disturb public order or carry out extortion.

 

National security

If artificial intelligence technology is abused, it may provide new tools for separatist and terrorist forces at home and abroad, and could well be used to discredit national leaders, incite terror and violence, and undermine national political stability.

 

Legislative measures for artificial intelligence security

Improve legislation and formulate standards

The rapid development of artificial intelligence has brought unprecedented challenges to current legal rules, social order, moral standards and public administration. It is therefore necessary to improve legislation and policy at the national level, plan ahead and respond prudently. At the same time, industry associations and intermediary organizations should be encouraged to formulate AI industry standards, technical standards and industry norms, guide relevant enterprises in drawing up ethical rules, and strengthen enterprises' responsibility to protect personal information and privacy.

 

Technical responses and security defenses

Technical means are clearly essential to maintaining AI security. First, the robustness of deep neural networks themselves should be strengthened by making full use of techniques such as knowledge distillation and adversarial training, improving their ability to resist adversarial examples. Second, intelligent screening and interception of adversarial examples should be developed, combining multiple detection methods so that attacks have nowhere to hide. Finally, an attack-weakening strategy can apply a defensive perturbation to all samples under test to reduce any destructive capability hidden in them.
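
As a rough illustration of the adversarial training mentioned above, the sketch below augments each training batch with FGSM-perturbed copies so that the model also learns from them. The toy model, random data and hyperparameters are assumptions chosen for brevity, not a recommended recipe.

```python
# A minimal sketch of adversarial training on toy data.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))   # toy classifier
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

def fgsm(x, y, epsilon=0.1):
    """Craft adversarially perturbed copies of a batch."""
    x = x.clone().detach().requires_grad_(True)
    loss_fn(model(x), y).backward()
    return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

for step in range(100):                   # toy training loop on random data
    x = torch.rand(64, 1, 28, 28)         # stand-in clean batch
    y = torch.randint(0, 10, (64,))
    x_adv = fgsm(x, y)                    # adversarial copies of the batch
    optimizer.zero_grad()                 # clear grads left by the fgsm pass
    # Train on clean and adversarial examples together.
    loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
```

Training on both clean and perturbed versions of each batch trades a little clean accuracy for substantially better resistance to the perturbation used during training.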

 

Ethical constraints: AI for good

The biggest weakness of artificial intelligence is its lack of direct perception and its weakness in value judgment. Relevant state departments and industry organizations should formulate ethical principles and frameworks for artificial intelligence and implement them throughout AI research, development and application; strengthen training in scientific and technological ethics and law for researchers; and guide the whole of society to understand and apply AI correctly so as to ensure its healthy development. At the same time, international exchange and dialogue should be strengthened, with active participation in the formulation of global AI ethical rules.
