AI in Cybersecurity: How Hackers Are Helping Machines
Sébastien Goutal
—April 17, 2019
—4 min read

Artificial intelligence (AI) is everywhere. Or is it? According to MIT Sloan Management Review, only one in 20 companies has extensively incorporated AI into their solutions. These numbers are surprising, considering that seemingly every company claims to use AI. In many cases, this is little more than “AI-washing” of existing technologies to jump on the AI bandwagon. Still, cybersecurity companies are using AI in meaningful ways outside the marketing department.
AI in Email Security
For most of the email security market, AI is fairly new. Traditional products, including Secure Email Gateways (SEG), are stuck in the past and continue to rely on reputation and fingerprint-based methods of threat detection, including IP and URL blacklisting. In the IDC Technology Spotlight sponsored by Vade Secure, “New Email Paradigm Requires New Security Approaches,” Konstantin Rychkov, research analyst at IDC European Security Solutions, says, “with greater adoption of cloud and a proliferation of targeted attacks, the shortcomings of SEG have become obvious.” As hackers continue to improve their attack methods, email security solutions must be predictive in their approach to threat detection. AI makes this possible.
The Model Student
Machine learning, one of the most promising subsets of AI in cybersecurity today, draws on enormous volumes of data to classify and cluster emails. With this data, it creates new rules enabling real-time threat detection. Leveraging both trained (supervised) and autonomous (unsupervised) algorithms, machine learning identifies email threats that have not yet been seen or added to a blacklist:
Supervised algorithms
In Supervised Learning, an algorithm is trained to classify emails based on messages that have been labeled as malicious or legitimate. The machine learning algorithm computes features from an email, URL, or attachment and is continually trained with new data to deliver a verdict: phishing, spear phishing, or malware. Examples of features include email structures, obfuscation techniques, URL redirects, and webpage structures.
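To make this concrete, here is a minimal sketch of supervised email classification using scikit-learn. The feature names and values are purely illustrative, not Vade Secure’s actual feature set:

```python
# A minimal sketch of supervised email classification, assuming
# pre-computed numeric features per message (feature names below
# are illustrative examples, not a vendor's real feature set).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical features per email:
# [num_url_redirects, uses_obfuscation (0/1), html_form_count, domain_age_days]
X = np.array([
    [3, 1, 2, 4],       # phishing-like
    [0, 0, 0, 3200],    # legitimate-like
    [5, 1, 1, 11],      # phishing-like
    [1, 0, 0, 2400],    # legitimate-like
])
y = np.array([1, 0, 1, 0])  # labels: 1 = malicious, 0 = legitimate

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)

# Deliver a verdict for a new, unseen email's feature vector
print(clf.predict([[4, 1, 3, 7]]))  # -> [1] (malicious)
```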
While some cybersecurity vendors claim to analyze thousands of features, the quality of the features is more important than the quantity. For instance, Vade Secure uses Recursive Feature Elimination to determine the optimal number of features for its machine learning algorithms, the point beyond which adding more features doesn’t materially improve the model’s accuracy. To detect phishing URLs and web pages, we currently analyze 47 features, though this number could change over time as attacks continue to evolve.
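As an illustration of the technique, scikit-learn’s RFECV runs recursive feature elimination with cross-validation and reports the feature count at which accuracy stops improving. The dataset here is synthetic:

```python
# Recursive feature elimination with cross-validation (RFECV):
# repeatedly drop the weakest features and keep the count at which
# cross-validated accuracy peaks. Synthetic data for illustration.
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFECV
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=100,
                           n_informative=20, random_state=0)

selector = RFECV(LogisticRegression(max_iter=1000),
                 step=5, cv=5, scoring="accuracy")
selector.fit(X, y)

print("Optimal number of features:", selector.n_features_)
```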
Human expertise is required to label the data, train and monitor the algorithms, and hone the precision of results. Machine learning models must be continually trained with new threat intelligence. Ideally, because new threats emerge daily in every language and in every country, the algorithm should ingest global data from inboxes around the world.
Unsupervised algorithms
Unsupervised Learning algorithms use clustering to recognize patterns, find correlations, and detect anomalies in emails. The unsupervised algorithm learns to recognize similarities in emails and then groups and labels them. Because phishing, spear phishing, and malware change constantly, the unsupervised algorithm analyzes behaviors—the content and context of emails—to identify and block threats that are not already known to the algorithm.
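A minimal sketch of this clustering idea, assuming raw message bodies and scikit-learn; a production system would also cluster on structural and behavioral features, not text alone:

```python
# Unsupervised clustering of email text: vectorize messages with
# TF-IDF, then group similar messages without any labels.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

emails = [
    "Your account has been suspended, verify your password now",
    "Urgent: confirm your banking details to avoid closure",
    "Team lunch moved to Friday at noon",
    "Reminder: quarterly report due next week",
]

X = TfidfVectorizer(stop_words="english").fit_transform(emails)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # e.g. [0 0 1 1]: phishing-like messages vs. routine mail
```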
A spear phishing email, for example, doesn’t include a link for the algorithm to scan. Instead, Natural Language Processing looks for behaviors identified in previous threats, such as expressions of urgency or known flag words and phrases, to identify a pattern of abuse. Additionally, unsupervised algorithms can detect anomalies, such as sender email addresses that do not match those in the organization’s entity model, which could suggest display name spoofing or cousin domains.
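As a toy illustration of the cousin-domain check, the sketch below flags sender domains that are close to, but not exactly, a domain in a hypothetical entity model. Python’s standard-library difflib stands in for whatever similarity metric a production system would actually use:

```python
# Hypothetical cousin-domain detection: a sender domain that is
# *almost* a known domain is more suspicious than one that is
# either an exact match or entirely unrelated.
import difflib

KNOWN_DOMAINS = {"example.com", "example-corp.com"}  # toy entity model

def is_cousin_domain(sender_domain: str, threshold: float = 0.85) -> bool:
    if sender_domain in KNOWN_DOMAINS:
        return False  # exact match: not an anomaly
    return any(
        difflib.SequenceMatcher(None, sender_domain, known).ratio() >= threshold
        for known in KNOWN_DOMAINS
    )

print(is_cousin_domain("examp1e.com"))    # True: likely spoof of example.com
print(is_cousin_domain("unrelated.org"))  # False: not similar to any known domain
```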
Deep Learning and Computer Vision
In both of the above examples, machine learning models focus primarily on text analysis. However, phishers are known to alter the HTML code in phishing emails or web pages, making subtle changes to colors and other visual elements that are imperceptible to humans but fool signature-based defenses that can only identify an exact match. To address this, Deep Learning models already trained in image recognition are repurposed with Transfer Learning and taught to recognize logos and other images used to impersonate well-known brands like Microsoft, PayPal, and Bank of America. By looking for patterns in images, including color, tone, and shape, the computer vision model adds a layer of protection on top of machine learning models that identify only text elements.
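Here is a sketch of the transfer learning step, assuming TensorFlow/Keras and an ImageNet-pretrained convolutional base; the brand classes and (commented-out) data pipeline are illustrative only:

```python
# Transfer learning for logo recognition: freeze a convolutional
# base pretrained on ImageNet and train only a new classification
# head on labeled brand images.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # keep the pretrained visual features intact

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(3, activation="softmax"),  # e.g. Microsoft, PayPal, other
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# model.fit(logo_dataset, epochs=5)  # hypothetical dataset of brand imagery
```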
Together, Supervised and Unsupervised Learning represent a STAP (specialized threat analysis and protection) solution—comprehensive, AI-based threat detection unmatched by reputation and fingerprint-based detection solutions. “STAP,” says Rychkov, “is typically a necessary component when protecting against spear phishing, ransomware, and whaling through behavior analysis, computer vision, anomaly detection, time-of-click URL and page exploration, and other methods.”
Humans and AI Work Better Together
Machine learning offers the ability to detect unknown attacks and to add new rules that continually improve the algorithms. To be effective, the algorithms must be continually trained with new data. Administrators supervise the algorithms to ensure continuous learning and accurate results, while users contribute to the feedback loop. When emails are reported as malicious or marked as spam or junk, that feedback flows into the machine learning algorithms, helping to evolve them, improve their efficacy, and create automated rules for remediation. User feedback is also continuously vetted to avoid data poisoning of the machine learning models.
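A simplified sketch of such a vetting step, where a reported label is only trusted once enough independent users agree on it; the threshold and rule here are illustrative, not Vade Secure’s actual poisoning defense:

```python
# Vet user reports before they reach the training set: accept a
# label only when several distinct reporters agree, so a single
# malicious user cannot poison the model.
MIN_INDEPENDENT_REPORTS = 3

def vet_reports(reports):
    """reports: list of (email_id, reporter_id, label) tuples."""
    reporters = {}
    for email_id, reporter_id, label in reports:
        reporters.setdefault((email_id, label), set()).add(reporter_id)
    # Keep only labels corroborated by enough distinct users.
    return [key for key, who in reporters.items()
            if len(who) >= MIN_INDEPENDENT_REPORTS]

reports = [("msg1", "u1", "malicious"), ("msg1", "u2", "malicious"),
           ("msg1", "u3", "malicious"), ("msg2", "u1", "malicious")]
print(vet_reports(reports))  # [('msg1', 'malicious')]; msg2 needs more votes
```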
These automated threat-detection and remediation capabilities provided by machine learning also lend a helping hand to time-strapped and understaffed IT teams. Manually investigating and remediating cybersecurity threats is labor intensive, and many security professionals spend a considerable amount of time investigating false positives that have been improperly flagged by email filters. This results in alert fatigue, or desensitization to security alerts. According to a 2018 report by Bitdefender, 72 percent of information security professionals admit to having alert fatigue, which can lead them to ignore true cybersecurity threats and breaches.
“The shortage of qualified security personnel,” says Rychkov, “and time-consuming nature of combating the impact of email threats increases demand for automated reporting and remediation features in email security to alleviate some of the burden.” Machine learning algorithms have the potential to reduce both the number of false positives and the time spent remediating threats that traditional email filters did not catch. When admins and users report false positives and false negatives, the models learn from their mistakes and improve moving forward.
Limitations and Best Practices
No security solution can block 100 percent of email threats. As we’ve seen from some high-profile examples, algorithms make mistakes, and they ultimately rely on humans to teach them right from wrong. If an algorithm misses an undesirable email, end users should be trained to report it so the algorithm can learn from the mistake. “As warnings are captured,” says Rychkov, “the ML monitors the admin and end-user response, enabling it to learn about specific behaviors from typical users, customizing the model for the organization’s workflow.”
AI in cybersecurity is still in its infancy, but its ability to retain information, learn new skills, and make decisions is unmatched. Humans supply the data, and algorithms will learn from the best—and the worst—that we have to offer.