Artificial intelligence has been making waves in the world of cybersecurity, as machine learning could make today's security solutions smarter and more effective at their intended jobs. However, AI has also appeared on the other side of the fight, as cybercriminals have begun to leverage it as well.
This only makes sense. After all, a computer works much faster than a human hacker, with far less chance of human error. Hackers have discovered this and have put AI to work deploying phishing attacks. A 2016 study by the security firm ZeroFOX found that an AI it programmed, called SNAP_R, could send simulated spear-phishing tweets at a rate of 6.75 per minute, tripping up 275 of the 800 users it targeted. By comparison, a Forbes staff writer who participated in the study could only produce these tweets at a rate of 1.075 per minute, fooling 49 victims out of 129 attempts.
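To put those figures side by side, here is a quick back-of-the-envelope calculation in Python (the numbers are the ones reported above; the variable names are ours):

```python
# Figures as reported from the 2016 ZeroFOX SNAP_R study.
results = [
    # (label, tweets per minute, victims, targets)
    ("SNAP_R (AI)",  6.75,  275, 800),
    ("Human writer", 1.075,  49, 129),
]

for label, rate, hits, targets in results:
    success = hits / targets  # per-attempt hit rate
    print(f"{label}: {rate:.3f} tweets/min, "
          f"{hits}/{targets} victims ({success:.0%} per attempt)")
```

Run it, and the takeaway becomes clear: the AI converted roughly 34% of its targets versus the human's roughly 38%, a nearly identical per-attempt hit rate, but it worked at more than six times the speed, which is why it racked up more than five times as many total victims.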
More recently, a team from IBM used machine learning to build programs capable of slipping past some of the best defenses out there.
This suggests we'll soon see malware that is powered by AI, assuming it isn't out there already and simply hasn't been discovered yet.
IBM’s project, nicknamed DeepLocker, demonstrated how a compromised videoconferencing application could lie dormant and activate its malicious payload only when a target’s face was detected in a photograph. The lead researcher for the IBM team, Marc Ph. Stoecklin, called this kind of attack the next big thing, going on to say, “This may have happened already, and we will see it two or three years from now.”
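IBM has publicly described the core trick: rather than shipping a readable trigger condition that analysts could reverse-engineer, DeepLocker derives the payload's decryption key from the output of a deep neural network, so the payload remains an opaque blob until the intended target appears. The following is a minimal, deliberately harmless Python sketch of that keying idea, not IBM's actual code; the embed_face function is a stand-in for a real face-recognition model, and the "payload" is just a placeholder string:

```python
import hashlib

def embed_face(image_bytes: bytes) -> bytes:
    """Stand-in for a face-recognition model. DeepLocker uses a deep
    neural network here; this toy hashes the input so the sketch
    stays self-contained and runnable."""
    return hashlib.sha256(image_bytes).digest()

def derive_key(embedding: bytes) -> bytes:
    # The key is computed from the model's output at runtime and is
    # never stored in the binary, so static analysis can't recover it.
    return hashlib.sha256(b"demo-salt" + embedding).digest()

def xor(data: bytes, key: bytes) -> bytes:
    # Toy cipher purely for illustration; a real scheme would use AES.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# At build time: encrypt a benign placeholder under the key derived
# from the target's photo, keeping only a checksum of the plaintext.
target_photo = b"<alice's photo bytes>"
plaintext = b"print('payload placeholder')"
blob = xor(plaintext, derive_key(embed_face(target_photo)))
checksum = hashlib.sha256(plaintext).digest()

# At run time: try to unlock against each face the application sees.
for photo in (b"<bob's photo bytes>", b"<alice's photo bytes>"):
    attempt = xor(blob, derive_key(embed_face(photo)))
    if hashlib.sha256(attempt).digest() == checksum:
        print("trigger condition met; payload would decrypt here")
    else:
        print("wrong face; payload stays an opaque blob")
```

This design choice is what makes the technique so worrisome: because the trigger logic lives inside a neural network and the key only exists in memory when the right face is seen, a defender inspecting the application has nothing legible to analyze.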
Other researchers have also demonstrated how AI can be leveraged in an attack, going so far as to use only open-source tools intended for training purposes to do it.
What do you think? Are there already artificially intelligent attacks being played out, or do you think the big reveal is yet to come? Let us know what you think in the comments!