Artificial intelligence is becoming more advanced. In the past, we could not have imagined machines that talk to us, or that help us accomplish tasks once possible only for human beings. But it is also clear that as the technology grows more intelligent, artificial intelligence will bring new threats. In the future, hackers may use artificial intelligence to launch more sophisticated attacks on our computer systems. However, Jason Hong, an associate professor in Carnegie Mellon University's School of Computer Science, believes these attacks will not be as powerful as the AI systems depicted in the Terminator movies, or as advanced as those in HBO's sci-fi series Westworld.
Hong believes AI gets a bad name because people let their imaginations run wild, attributing behavior to the technology that it does not actually exhibit. Still, AI will not always be used responsibly, and what we should worry about is who chooses to abuse it rather than use it properly.
“Over the next year we expect to see malware with adaptive designs that can improve attack effectiveness based on successful learning. This new generation of malware will be context-aware, meaning it will understand its environment and figure out what to do next. In many ways, it will begin to do what human hackers do: perform reconnaissance, identify targets, select attacks and cleverly evade detection,” said Derek Manky, global security strategist at Fortinet.
Manky believes this malware will be built on AI precursors, replacing traditional "if not this, then that" code logic with more complex decision-making. “Autonomous malware operates much like branch prediction technology, which is designed to guess which branch of a decision tree something will take before it is executed. A branch predictor keeps track of whether or not a branch is taken. The software will become more effective over time,” Manky said.
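The branch-prediction analogy Manky draws can be made concrete with a classic two-bit saturating counter, the simplest predictor that "keeps track of whether a branch has been taken" and grows more accurate as history accumulates. This is a minimal illustrative sketch of that general technique, not code from any real malware or CPU; all names are invented here.

```python
# A 2-bit saturating counter: records branch outcomes and predicts
# the next one, becoming more confident as evidence accumulates.
class TwoBitPredictor:
    """States 0-1 predict 'not taken'; states 2-3 predict 'taken'."""

    def __init__(self):
        self.state = 1  # start weakly biased toward 'not taken'

    def predict(self):
        return self.state >= 2  # True means 'predict taken'

    def update(self, taken):
        # Record the actual outcome, saturating at 0 and 3 so a single
        # anomaly does not flip an otherwise stable prediction.
        if taken:
            self.state = min(self.state + 1, 3)
        else:
            self.state = max(self.state - 1, 0)

# A branch taken 8 times out of 10: the predictor adapts after one miss
# and shrugs off the occasional 'not taken' outcome.
predictor = TwoBitPredictor()
outcomes = [True, True, False, True, True, True, True, True, False, True]
correct = 0
for actual in outcomes:
    if predictor.predict() == actual:
        correct += 1
    predictor.update(actual)
print(f"correct predictions: {correct}/{len(outcomes)}")  # prints 7/10
```

The point of the analogy is the last line: accuracy is mediocre at first, but the recorded history makes later guesses increasingly reliable, which is the sense in which such software "becomes more effective over time."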
Hong also sees an emerging field of adversarial machine learning, in which attackers try to reverse engineer the models inside software to understand how they work. For example, they look for new ways to slip past old spam filters, or for ways to poison training data, so that data owners unknowingly train their machine learning systems on bad data and the machines begin making harmful judgments.
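The data-poisoning attack Hong describes can be shown with a toy example: an attacker flips labels in a training corpus so that a simple word-count spam filter learns to treat spam vocabulary as legitimate. The filter, messages, and labels below are all invented for illustration; real poisoning attacks target far more complex models, but the failure mode is the same.

```python
from collections import Counter

def train(examples):
    """examples: list of (text, is_spam). Returns per-label word counts."""
    counts = {True: Counter(), False: Counter()}
    for text, is_spam in examples:
        counts[is_spam].update(text.split())
    return counts

def classify(counts, text):
    """Label a message by which class has seen its words more often."""
    words = text.split()
    spam_score = sum(counts[True][w] for w in words)
    ham_score = sum(counts[False][w] for w in words)
    return spam_score > ham_score  # True means 'spam'

clean = [
    ("free prize money now", True),
    ("claim your free prize", True),
    ("meeting notes attached", False),
    ("lunch at noon tomorrow", False),
]

# The attacker poisons the corpus by flipping the spam labels
# before the data owner trains the model.
poisoned = [(text, False) for text, _ in clean[:2]] + clean[2:]

msg = "free prize inside"
print(classify(train(clean), msg))     # True: the clean model flags it
print(classify(train(poisoned), msg))  # False: the poisoned model lets it through
```

The model trained on poisoned data now associates words like "free" and "prize" with legitimate mail, so the attacker's messages sail through, which is exactly the "harmful judgment" outcome Hong warns about.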
However, it is important to remember that artificial intelligence systems are still built with humans in the loop. Few systems are fully automated, because their side effects are not yet well understood.
“In the future, AI in cybersecurity will continually adapt to more and more attacks. Today, humans are collecting data, sharing data and applying it to systems; we are only telling machines what to do. Mature AI systems will eventually be able to make judgments on their own,” Manky said. “More complex decisions will be handed over to AI. Humans and machines must coexist.”
One common fear is that artificial intelligence will do more harm than good in the future, but Hong argues that security experts have bigger issues to deal with. “These AI techniques only work within very complex but narrow ranges, and once you go beyond that range, they don’t work very well. Imagine an AI playing a game of chess: change the game to checkers, and of course it doesn’t perform well.”
Instead, Hong believes organizations should worry about security issues such as data breaches, weak passwords, configuration errors and phishing attacks. “I would say humans need to focus more on these really basic types of security issues and not worry about the really complex ones. Those will come eventually, but we have a lot of time to get used to these systems,” he said.