Microsoft releases a new open framework to help AI systems resist adversarial attacks


liu, tempo | Date: 2021-09-15 11:23:33 | From: ozmca.com

Microsoft, in collaboration with MITRE, IBM, NVIDIA, and Bosch, has released a new open framework to help security analysts detect, respond to, and remediate adversarial attacks against machine learning (ML) systems. Called the Adversarial ML Threat Matrix, the initiative is an attempt to organize the different techniques employed by malicious adversaries to subvert ML systems.

Just as artificial intelligence (AI) and machine learning are being deployed in a wide range of novel applications, threat actors can not only abuse the technology to boost the power of their malware, but can also leverage it to fool machine learning models with poisoned datasets, thereby causing otherwise beneficial systems to make incorrect decisions and threatening the stability and safety of AI applications.

In fact, ESET researchers found last year that Emotet, a notorious email-based malware behind several botnet-driven spam and ransomware campaigns, was using ML to improve its targeting.

Then, earlier this month, Microsoft warned about a new strain of Android ransomware that included a machine learning model which, although not yet integrated into the malware, could be used to fit the ransom note image within the screen of a mobile device without any distortion. In addition, researchers have also studied so-called model inversion attacks, in which access to a model is abused to infer information about its training data.

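To make the model inversion idea concrete, here is a minimal sketch (a hypothetical PyTorch model, not any specific attack from the research above) of the white-box variant: starting from a blank input, gradient ascent pushes the model’s confidence in a chosen class higher and higher, until the optimized input resembles that class’s training data, which is exactly the kind of information leakage at issue.

```python
import torch
import torch.nn as nn

# Stand-in victim model; a real attack would target a deployed classifier
# (white-box access to the weights is assumed in this sketch).
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 10))
model.eval()

target_class = 3                            # class whose data we try to reconstruct
x = torch.zeros(1, 16, requires_grad=True)  # start from a blank input
optimizer = torch.optim.Adam([x], lr=0.1)

# Gradient ascent on the *input*: raise the model's confidence in the
# target class until x starts to resemble that class's training examples.
for _ in range(200):
    optimizer.zero_grad()
    loss = -model(x)[0, target_class]
    loss.backward()
    optimizer.step()

print(x.detach())  # the reconstructed, privacy-leaking input
```
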
[Image: hacker]

According to a Gartner report cited by Microsoft, 30% of all AI cyberattacks by 2022 are expected to leverage training data poisoning, model theft, or adversarial samples to attack machine-learning-powered systems.

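An adversarial sample of the kind the Gartner forecast mentions can be generated in a few lines. The sketch below (hypothetical model and input, PyTorch assumed) uses the classic fast gradient sign method (FGSM): each input feature is nudged slightly in the direction that increases the model’s loss, so the prediction can flip while the input looks nearly unchanged.

```python
import torch
import torch.nn as nn

# Hypothetical victim classifier standing in for any ML-powered system.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3))
model.eval()

x = torch.rand(1, 4, requires_grad=True)  # benign input
y = torch.tensor([2])                     # its true label
epsilon = 0.05                            # perturbation budget

loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()

# FGSM: a single signed-gradient step that maximally increases the loss
# within an L-infinity ball of radius epsilon around the original input.
x_adv = (x + epsilon * x.grad.sign()).detach()
print(model(x_adv).argmax(dim=1))  # the prediction may now differ from y
```
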
Microsoft said: “Despite these compelling reasons to secure ML systems, Microsoft’s survey spanning 28 businesses found that most industry practitioners have yet to come to terms with adversarial machine learning. Twenty-five out of the 28 businesses indicated that they don’t have the right tools in place to secure their ML systems.”

The Adversarial ML Threat Matrix hopes to address the threat of data weaponization with a curated set of vulnerabilities and adversary behaviors that Microsoft and MITRE have vetted to be effective against ML systems. The idea is that companies can use the matrix to test their AI models’ resilience by simulating realistic attack scenarios using a list of tactics: gaining initial access to the environment, executing unsafe ML models, contaminating training data, and exfiltrating sensitive information via model stealing attacks.

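To illustrate the training data tactic, here is a minimal label-flipping sketch (synthetic data and scikit-learn; an illustration of the general technique, not code from the matrix): an attacker who can relabel a slice of one class’s training examples biases the model toward their preferred output, and accuracy on that class can drop noticeably.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic data: the label is simply the sign of the first feature.
X = rng.normal(size=(1000, 8))
y = (X[:, 0] > 0).astype(int)
X_test = rng.normal(size=(500, 8))
y_test = (X_test[:, 0] > 0).astype(int)

# Targeted poisoning: relabel 40% of class-1 training examples as class 0,
# biasing the model toward the attacker's preferred output.
y_poisoned = y.copy()
ones = np.where(y == 1)[0]
y_poisoned[rng.choice(ones, size=int(0.4 * len(ones)), replace=False)] = 0

mask = y_test == 1
clean = LogisticRegression().fit(X, y)
poisoned = LogisticRegression().fit(X, y_poisoned)
print("class-1 accuracy, clean model:   ", clean.score(X_test[mask], y_test[mask]))
print("class-1 accuracy, poisoned model:", poisoned.score(X_test[mask], y_test[mask]))
```
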
Microsoft said: “The purpose of the Adversarial ML Threat Matrix is to position attacks on machine learning systems within a framework in which security analysts can orient themselves to these new and upcoming threats.”

“The matrix is structured like the ATT&CK framework, owing to its wide adoption among the security analyst community; this way, security analysts do not have to learn a new or different framework to understand threats to ML systems.”

The development is the latest in a series of moves aimed at securing AI from data poisoning and model evasion attacks. It is worth noting that researchers at Johns Hopkins University have developed a framework dubbed TrojAI to defend against trojan attacks, in which a model is modified to respond to input triggers that cause it to infer an incorrect response.

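The trojan attacks that TrojAI is designed to counter are typically planted at training time. A minimal sketch of the idea (synthetic data, NumPy only; not TrojAI’s actual code): stamp a small trigger pattern onto a fraction of the training images and relabel them with the attacker’s chosen class, so that a model trained on the set learns to associate the trigger with that label.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 8x8 grayscale "images" standing in for a real training set.
X = rng.random((500, 8, 8))
y = rng.integers(0, 10, size=500)

TARGET_LABEL = 7  # the class the backdoored model should output

def stamp_trigger(img: np.ndarray) -> np.ndarray:
    """Overlay a small bright patch in the corner: the backdoor trigger."""
    img = img.copy()
    img[:2, :2] = 1.0
    return img

# Poison 5% of the training set: add the trigger and flip the label.
for i in rng.choice(len(X), size=25, replace=False):
    X[i] = stamp_trigger(X[i])
    y[i] = TARGET_LABEL

# A model trained on (X, y) now behaves normally on clean inputs but
# predicts TARGET_LABEL whenever the trigger patch is present.
```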