Project Management Institute

AI vs. Hackers

Cybersecurity Teams Integrate Artificial Intelligence to Neutralize Threats


JIRSAK/ISTOCKPHOTO

The annual cost of cyberattacks is expected to increase from US$3 trillion in 2015 to US$6 trillion by 2021.

With cybersecurity, the best defense might just be automated. As hackers grow more sophisticated, cybersecurity teams are increasingly incorporating adaptive tools built around artificial intelligence (AI) and machine learning technology.

The need is apparent: According to Cybersecurity Ventures, the annual cost of cyberattacks is expected to increase from US$3 trillion in 2015 to US$6 trillion by 2021. Enter AI-backed tools, which can be designed to constantly look for threatening patterns without much human intervention, says Humayun Zafar, PhD, associate professor of information security and assurance, Kennesaw State University, Kennesaw, Georgia, USA. “Most attacks are automated, so you need an automated tool to find them.”

DeepArmor, a project to develop the first AI-powered “cognitive” antivirus system, is one of the most well-known current efforts. Led by security software firm SparkCognition, DeepArmor leverages neural networks, machine learning analytics and natural language processing to identify files that appear malicious and remove them from a system. This approach enables the system to adapt as it learns about new types of attacks, which can prevent zero-day threats—when hackers find and exploit a software flaw that is unknown to the vendor.
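SparkCognition has not published DeepArmor's internals, but the general idea of catching never-before-seen malware by its learned characteristics can be illustrated with a toy model. Everything below is invented for illustration: the feature names, the training data, and the simple nearest-centroid classifier standing in for a real neural network.

```python
# Toy illustration of ML-based malware flagging (NOT DeepArmor's actual model).
# Each file is reduced to a feature vector; a classifier learned from labeled
# samples scores new files, so a novel (zero-day) variant that resembles known
# malware can still be caught even though no signature exists for it.

def extract_features(file_info):
    """Map a file to numeric features. These features are hypothetical."""
    return [
        file_info["entropy"],          # packed/encrypted code tends to be high-entropy
        file_info["imports_network"],  # 1 if the file uses networking APIs
        file_info["writes_registry"],  # 1 if it modifies system configuration
    ]

def train_centroids(samples):
    """Nearest-centroid 'training': average the feature vectors per label."""
    centroids = {}
    for label in ("benign", "malicious"):
        vecs = [extract_features(s) for s in samples if s["label"] == label]
        centroids[label] = [sum(col) / len(vecs) for col in zip(*vecs)]
    return centroids

def classify(file_info, centroids):
    """Label a new file by its closest centroid (Euclidean distance)."""
    feats = extract_features(file_info)
    def dist(center):
        return sum((a - b) ** 2 for a, b in zip(feats, center)) ** 0.5
    return min(centroids, key=lambda label: dist(centroids[label]))

training = [
    {"label": "benign", "entropy": 0.3, "imports_network": 0, "writes_registry": 0},
    {"label": "benign", "entropy": 0.4, "imports_network": 1, "writes_registry": 0},
    {"label": "malicious", "entropy": 0.9, "imports_network": 1, "writes_registry": 1},
    {"label": "malicious", "entropy": 0.8, "imports_network": 1, "writes_registry": 1},
]
centroids = train_centroids(training)

# A never-before-seen file that "looks like" the malicious samples:
unknown = {"entropy": 0.85, "imports_network": 1, "writes_registry": 1}
print(classify(unknown, centroids))  # malicious
```

The key difference from signature-based antivirus is that nothing here matches the unknown file byte-for-byte against a database; it is flagged purely because its measured behavior sits closer to the malicious cluster.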

“Traditional virus detection programs can't easily identify zero-day attacks,” Dr. Zafar says. “So this is a great example of the role AI can play in cybersecurity.”

The Massachusetts Institute of Technology (MIT) and AI developer PatternEx are partnering on a similar project to create an AI platform called AI2 that predicts cyberattacks by analyzing massive amounts of data and incorporating input from human experts. In an article from MIT's Computer Science and Artificial Intelligence Laboratory about the project, research scientist Kalyan Veeramachaneni, who co-developed AI2, likens it to a virtual analyst. “It continuously generates new models that it can refine in as little as a few hours, meaning it can improve its detection rates significantly and rapidly,” he says.
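The "virtual analyst" loop Veeramachaneni describes can be sketched in a few lines. This is a heavily simplified assumption of how such a human-in-the-loop system works, not AI2's actual code: events are modeled as sets of attributes, a hand-rolled similarity score stands in for the learned models, and a callback stands in for the human expert.

```python
# Sketch of the human-in-the-loop pattern described for AI2 (assumed, simplified):
# the system surfaces its most suspicious events, a human analyst labels them,
# and confirmed threats feed back into the next round of scoring.

def anomaly_score(event, confirmed_threats):
    """Score an event higher the more it resembles a confirmed threat
    (Jaccard similarity; a real system would learn this function)."""
    return max(
        (len(event & threat) / len(event | threat) for threat in confirmed_threats),
        default=0.0,
    )

def review_cycle(events, confirmed_threats, analyst, top_k=2):
    """One cycle: rank events, send the top_k to the analyst, keep confirmed ones."""
    ranked = sorted(events, key=lambda e: anomaly_score(e, confirmed_threats),
                    reverse=True)
    for event in ranked[:top_k]:
        if analyst(event):  # the human decides whether the threat is real
            confirmed_threats.append(event)
    return confirmed_threats

confirmed = [frozenset({"port_scan", "brute_force"})]
events = [
    frozenset({"port_scan", "foreign_ip"}),
    frozenset({"software_update"}),
    frozenset({"brute_force", "foreign_ip"}),
]
# Hypothetical analyst who confirms anything involving brute force:
analyst = lambda e: "brute_force" in e
confirmed = review_cycle(events, confirmed, analyst)
print(len(confirmed))  # 2: the brute-force event was confirmed and added
```

Each pass through `review_cycle` enlarges the labeled set, which is why Veeramachaneni says detection rates improve rapidly: the model gets fresh expert-verified examples every few hours rather than waiting for a vendor update.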


“Most attacks are automated, so you need an automated tool to find them.”

—Humayun Zafar, PhD, Kennesaw State University, Kennesaw, Georgia, USA

But AI alone can't defeat hackers. The combination of humans and automated systems is vital, Dr. Zafar says. While AI technology can automate scanning for malicious events, humans still need to determine whether the threat is real. That presents a big challenge for project teams developing new software.

Typical cybersecurity software generates a lot of false positives, which can cause users to develop “alert fatigue” and stop paying attention, he says. But developers struggle to eliminate false positives because the nature of the attacks is constantly evolving. “It's hard to eliminate a problem when you don't know what it looks like.”
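The arithmetic behind alert fatigue is worth making concrete. The numbers below are illustrative assumptions, not figures from the article: even a detector that is right 99 percent of the time buries its true alerts in noise when genuine attacks are rare.

```python
# Why false positives dominate: back-of-envelope math with illustrative numbers.

events_per_day = 1_000_000
attack_rate = 0.0001        # assume 1 in 10,000 events is a real attack
true_positive_rate = 0.99   # detector catches 99% of real attacks
false_positive_rate = 0.01  # and wrongly flags 1% of benign events

attacks = events_per_day * attack_rate                            # 100 real attacks
caught = attacks * true_positive_rate                             # 99 true alerts
false_alarms = (events_per_day - attacks) * false_positive_rate   # ~9,999 false alerts

precision = caught / (caught + false_alarms)
print(f"{precision:.1%} of alerts are real")  # roughly 1% -- the rest are noise
```

At these rates an analyst sees about a hundred false alarms for every real one, which is exactly the condition under which people stop paying attention.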

Teams must also be able to prove their technology works, which can be more difficult than it sounds. “Traditional project development testing practices fall short when it comes to an in-depth analysis of issues pertaining to software assurance,” Dr. Zafar says. “And the last thing you want to do is test your software with the same people who built it.”

He encourages project teams to employ professional ethical hackers to test systems, or to crowdsource testing by offering “bug bounties,” which incentivize people to hack a piece of software to intentionally uncover its flaws. The practice is common in the industry. For example, the U.S. federal government's Defense Advanced Research Projects Agency hosted a bug bounty challenge at the 2016 Def Con Hacking Conference. Competing teams were challenged to create and use an autonomous system to identify software flaws while protecting hosts and maintaining the software's correct function. The winning team received US$2 million in prize money.

“Bug bounties are a great way to find flaws in your system,” Dr. Zafar says. “It can also be a great way to find new talent for your team.” —Sarah Fister Gale

This material has been reproduced with the permission of the copyright owner. Unauthorized reproduction of this material is strictly prohibited. For permission to reproduce this material, please contact PMI.
