Believe it or not, artificial intelligence will become a hacker's secret weapon


liu, tempo | Date: 2021-07-26 10:25:22 | From: ozmca.com

Machine learning is defined as the ability of computers "to learn without being explicitly programmed," and it will have a huge impact on the information security industry. It is a promising technology that can help security analysts with everything from malware and log analysis to identifying and patching vulnerabilities earlier. It could also improve endpoint security, automate repetitive tasks, and even reduce the likelihood of attacks that involve data exfiltration.

 

But the problem is that hackers know this too, and they are expected to build their own artificial intelligence and machine learning tools to launch attacks.

 

How would hackers use machine learning?

 


 

These criminals, who are increasingly organized and offer more and more services online, may ultimately innovate faster than security defenses can keep up, especially given the untapped potential of technologies such as machine learning and deep learning.

 

"We must recognize that although technologies such as machine learning, deep learning, and artificial intelligence will be the cornerstones of future cyber defenses, our adversaries are working just as hard to use these technologies to innovate," McAfee Chief Technology Officer Steve Grobman said in comments to the media. "As so often happens in cybersecurity, technologically enhanced artificial intelligence will be the winning factor in the arms race between attackers and defenders."

 

Machine-learning-based attacks may still be largely unheard of today, but in fact some of these techniques have already begun to be used by criminal groups.

 

1. Malware that evades detection

 

The creation of malware is still largely a manual process for cybercriminals. They write the scripts that make up computer viruses and Trojan horses, and use rootkits, password grabbers, and other tools to help distribute and execute them.

But what if they could speed up that process? Could machine learning help create malware?

 

The first example of malware created using machine learning came in a 2017 paper entitled "Generating Adversarial Malware Examples for Black-Box Attacks Based on GAN". In the paper, the authors describe how they built a generative adversarial network (GAN) algorithm to generate adversarial malware samples, the key point being that these samples could bypass machine-learning-based detection systems.
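
To make the idea concrete, here is a minimal, illustrative sketch (in Python, using PyTorch) of the MalGAN-style setup described in the paper: a generator learns to add features to a malware feature vector so that a substitute detector, standing in for the black-box engine, scores the result as benign. The feature sizes, network shapes, and randomly initialized detector below are assumptions made for illustration; this is not the authors' code.

# A minimal MalGAN-style sketch (assumed shapes and a stand-in detector, for illustration only).
import torch
import torch.nn as nn

FEATURES, NOISE = 128, 16  # assumed sizes of the malware feature vector and the noise input

generator = nn.Sequential(                      # proposes which features to add
    nn.Linear(FEATURES + NOISE, 256), nn.ReLU(),
    nn.Linear(256, FEATURES), nn.Sigmoid(),
)
substitute_detector = nn.Sequential(            # stands in for the black-box detection engine
    nn.Linear(FEATURES, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Sigmoid(),             # output: probability the sample is malicious
)
opt = torch.optim.Adam(generator.parameters(), lr=1e-3)

def adversarial_sample(malware: torch.Tensor) -> torch.Tensor:
    """Only add features (never remove them), so the malware's functionality is preserved."""
    noise = torch.rand(malware.size(0), NOISE)
    additions = generator(torch.cat([malware, noise], dim=1))
    return torch.clamp(malware + additions, max=1.0)

# One illustrative training step: push the detector's score on adversarial samples toward "benign".
malware_batch = (torch.rand(32, FEATURES) > 0.8).float()
adv = adversarial_sample(malware_batch)
loss = substitute_detector(adv).mean()
opt.zero_grad()
loss.backward()
opt.step()
print("mean detection score:", float(loss))

In the actual paper, the substitute detector is trained to mimic the black-box detector's decisions, and only feature additions are allowed so that the malware keeps working.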

 

In another example, at the 2017 DEF CON conference, the security company Endgame showed how it used Elon Musk's OpenAI framework to create custom malware that security engines could not detect. Endgame's research started from binary files that appeared malicious; by changing parts of them, the resulting code appeared benign and trustworthy to the antivirus engine.
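
Conceptually, the loop can be pictured as a search problem: apply small, functionality-preserving changes to a file and keep the ones that lower the detection score. The sketch below illustrates that idea with an abstract feature vector, a made-up scoring function standing in for the antivirus engine, and simple hill climbing rather than the reinforcement learning Endgame actually used; none of the names or numbers reflect Endgame's real tooling.

# A toy search loop (hill climbing, not Endgame's actual RL code) over an abstract feature vector.
import math
import random

random.seed(0)
N_FEATURES = 64

def toy_detector_score(features):
    """Stand-in for a black-box antivirus engine: returns a made-up P(malicious)."""
    weights = [((i * 37) % 11 - 5) / 10 for i in range(N_FEATURES)]  # fixed fake weights
    raw = sum(w * f for w, f in zip(weights, features))
    return 1 / (1 + math.exp(-raw))

def apply_mutation(features, index):
    """One abstract, functionality-preserving change, e.g. 'add a benign-looking import'."""
    mutated = list(features)
    mutated[index] = 1.0
    return mutated

sample = [random.choice([0.0, 1.0]) for _ in range(N_FEATURES)]
score = toy_detector_score(sample)
for _ in range(200):
    index = random.randrange(N_FEATURES)
    candidate = apply_mutation(sample, index)
    candidate_score = toy_detector_score(candidate)
    if candidate_score < score:                 # keep changes that make the file look more benign
        sample, score = candidate, candidate_score
print(f"detection score after search: {score:.3f}")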

 

At the same time, other researchers predict that machine learning will eventually be used to "modify code on the fly based on how and what has been detected in the lab," an extension of polymorphic malware.

 

2. Intelligent botnets for scalable attacks

 

Security company Fortinet believes that 2018 will be the year of "hivenets" and "swarmbots": in essence, "smart" IoT devices that can be commanded to carry out large-scale attacks on vulnerable systems. "They will be able to communicate with each other and take action based on shared local information," said Fortinet global security strategist Derek Manky. "In addition, 'zombies' will become smarter, able to act without instructions from their botnet herder. As a result, hivenets will be able to grow exponentially, widening their ability to attack multiple victims simultaneously while significantly impeding mitigation and response."

 

Interestingly, Manky said these attacks have not yet made use of swarm technology, which could eventually allow hivenets to learn from their past behavior. A branch of artificial intelligence, swarm intelligence is defined as "the collective behavior of decentralized, self-organizing systems, natural or artificial," and is already used in drones and emerging robotic devices.
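
As a rough illustration of what "taking action based on shared local information" could look like, the sketch below simulates a handful of nodes that each hold one local observation, gossip it to a few random peers, and then decide independently from whatever they have accumulated. The node count, fan-out, rounds, and threshold are arbitrary assumptions; real swarm systems are far more sophisticated.

# A toy gossip simulation (arbitrary node count, fan-out, and threshold, for illustration only).
import random

random.seed(1)
N_NODES, FANOUT, ROUNDS, THRESHOLD = 20, 3, 5, 0.5

# Each node starts with a single local observation, e.g. "a reachable target was found" (True/False).
knowledge = [{i: random.random() < 0.3} for i in range(N_NODES)]

for _ in range(ROUNDS):
    for node in range(N_NODES):
        for peer in random.sample(range(N_NODES), FANOUT):
            knowledge[peer].update(knowledge[node])   # share local observations with random peers

# Each node decides on its own, based only on the information it has accumulated.
for node in range(3):                                 # print a few nodes to keep the output short
    seen = knowledge[node]
    fraction = sum(seen.values()) / len(seen)
    decision = "act" if fraction >= THRESHOLD else "wait"
    print(f"node {node}: {len(seen)} observations, positive fraction {fraction:.2f} -> {decision}")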

 

3. Advanced spear phishing becomes smarter

 

A more obvious application of machine learning by attackers is the use of algorithms such as text-to-speech, speech recognition, and natural language processing (NLP) for smarter social engineering. After all, with recurrent neural networks it is already possible to teach such software a particular writing style, so in theory phishing emails could become more sophisticated and credible.

 

In particular, machine learning could allow advanced spear phishing to be aimed at high-profile individuals while automating the whole process. Such a system could be trained on genuine emails and learn to produce messages that look convincing.

 

In its forecast for 2017, McAfee Labs stated that criminals will increasingly use machine learning to analyze large volumes of stolen private records, identify potential victims, and establish background details that can be used to target these people effectively.
