Email Security

AI in Cybersecurity: Automating the Path to Higher MSP Margins

Adrien Gendre

August 19, 2021

3 min

Artificial intelligence (AI) in cybersecurity is not new, yet a certain mystery still surrounds it. Images of cyborgs and deeply complex definitions give AI a sci-fi reputation that makes it feel otherworldly, but the real-world applications may be less complex and more practical than you think. In this post, we’ll discuss the applications of AI in cybersecurity and explain why automating cybersecurity with AI is the path to higher MSP margins.

Examples of AI in cybersecurity

One of the areas where AI has contributed most to cybersecurity in the last five years is email security. While news headlines in the last 12 months have focused on highly sophisticated attacks, including the SolarWinds supply chain attack and the Microsoft Exchange hack, the most common path to a cybersecurity breach is still through email.

Email security

With only a few lines of text and a click of the Send button, a cybercriminal can cause devastating harm to a business. From breaching a system through a phishing email, to weaponizing an email attachment with ransomware, to manipulating an employee into diverting millions of dollars to a fraudulent bank account, a simple email is not so simple when crafted by a criminal. Blocking even the simplest of these emails, however, is more difficult than it seems.

Email filtering technology has long relied on signature and reputation scanning to identify threats. But hackers have learned to hide behind good reputations, abusing high-reputation domains and services like Salesforce and OneDrive to launch attacks. AI technologies have been applied to solve, or at least largely mitigate, this issue. Here are some examples:

  • Machine Learning: Supervised Learning models are trained to recognize the signs of phishing by studying a large dataset of phishing samples and legitimate emails. Those models then analyze a set of email features to render a verdict.
  • Anomaly Detection: Unsupervised Machine Learning models scan for rare events that qualify as anomalies. An example of an anomaly is when a user receives an email from a new sender whose address closely matches that of a known contact within the organization, a common sign of email spoofing (see the sketch after this list).
  • Natural Language Processing (NLP): NLP models examine text for suspicious words and phrases that could indicate a threat. For example, both phishing and spear phishing emails often include language that indicates urgency or is intended to cause alarm, fear, or concern.
  • Computer Vision: This application of AI in cybersecurity comes in response to the emergence of images as obfuscation tools in phishing emails. For example, hackers warp brand logos to conceal signatures, host images remotely to avoid detection, use QR codes to hide phishing URLs, and conceal suspicious text behind images. All these techniques, and more, can be seen by Computer Vision models.
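
To make the anomaly detection example concrete, here is a minimal sketch of a lookalike-sender check. The contact list, threshold, and function names are illustrative assumptions rather than any vendor's actual implementation; a production filter would weigh many more signals than string similarity alone.

```python
import difflib

# Hypothetical directory of known contacts within the organization (illustrative only).
KNOWN_CONTACTS = [
    "alice.martin@example-corp.com",
    "finance@example-corp.com",
    "ceo@example-corp.com",
]

def lookalike_score(sender: str, known: str) -> float:
    """Return a similarity ratio between 0.0 and 1.0 for two email addresses."""
    return difflib.SequenceMatcher(None, sender.lower(), known.lower()).ratio()

def flag_spoofing_candidates(sender: str, threshold: float = 0.85):
    """Flag a new sender that closely resembles, but does not match, a known contact."""
    hits = []
    for contact in KNOWN_CONTACTS:
        score = lookalike_score(sender, contact)
        if sender.lower() != contact.lower() and score >= threshold:
            hits.append((contact, round(score, 2)))
    return hits

# A cousin address with a single swapped character is flagged for review.
print(flag_spoofing_candidates("ceo@examp1e-corp.com"))
# -> [('ceo@example-corp.com', 0.95)]  (approximate score)
```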

Security awareness training

Unlike sleepy security awareness training sessions led by human trainers, AI-based security awareness training is contextualized at the user level and delivered automatically, without human intervention. Also known as adaptive training, automated training supported by AI delivers content at the exact moment the user shows a weakness and tailors that content to the context of the incident.

As an example, if a user experiences a phishing incident (clicks a link in a phishing email), the AI recognizes the action and triggers a training session that teaches the user to recognize phishing techniques. The training content changes according to the type of attack, and the AI can dig into its database to find samples that fit the context of the situation. This method reduces the need for intervention from IT and captures the user’s attention immediately, rather than occasionally during mandatory live training sessions or phishing simulations.
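
A minimal sketch of this kind of event-driven training trigger is shown below. The incident fields, catalog entries, and function names are hypothetical and only illustrate the idea of matching training content to the attack type.

```python
from dataclasses import dataclass

# Hypothetical incident record and training catalog; names are illustrative.
@dataclass
class Incident:
    user: str
    attack_type: str  # e.g., "phishing", "spear_phishing", "malware_attachment"
    details: str

TRAINING_CATALOG = {
    "phishing": "Module: spotting spoofed links and fake login pages",
    "spear_phishing": "Module: verifying unusual requests from executives",
    "malware_attachment": "Module: handling unexpected attachments",
}

def trigger_training(incident: Incident) -> str:
    """Pick a training module that matches the context of the incident."""
    module = TRAINING_CATALOG.get(
        incident.attack_type, "Module: general email security refresher"
    )
    # A real system would notify the user through the security platform;
    # here we simply return the assignment for illustration.
    return f"Assigned to {incident.user}: {module} ({incident.details})"

print(trigger_training(Incident("jdoe", "phishing", "clicked a link in a phishing email")))
```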

[Webinar Replay]: Threat Coach: Automating User Awareness Training in Microsoft 365

Incident response

Intelligent incident response systems aided by AI can lead to better detection and response times for IT teams and a reduction in the false positives that lead to alert fatigue. One example of AI in incident response is continual scanning of email and networks for anomalies, as well as for threats the AI might have missed on the first pass.

A real-world example: An email initially bypasses the AI because it includes a link to a legitimate (safe) webpage. Moments after delivery, a cybercriminal redirects the URL to a phishing page (time-bombing). With continual scanning, the AI can detect that dangerous link in an inbox, then remove the threat automatically.
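
The sketch below illustrates the post-delivery rescan idea under stated assumptions: scan_url and remove_message are placeholder helpers, since in practice they would call a URL-scanning service and the mailbox provider's API (for example, Microsoft Graph for Microsoft 365).

```python
import time

# Hypothetical helpers: placeholders for a URL-scanning service and a mailbox API.
def scan_url(url: str) -> str:
    """Return 'clean' or 'malicious' for a URL (placeholder verdict)."""
    return "clean"

def remove_message(mailbox: str, message_id: str) -> None:
    """Pull an already-delivered message out of the user's mailbox (placeholder)."""
    print(f"Removed message {message_id} from {mailbox}")

def rescan_delivered_mail(delivered, passes: int = 3, interval_seconds: int = 300) -> None:
    """Periodically re-check URLs in emails that were already delivered.

    A link that was clean at delivery time may later redirect to a phishing
    page (time-bombing); a changed verdict triggers automatic remediation.
    """
    for _ in range(passes):
        for msg in delivered:  # msg: {"mailbox": ..., "id": ..., "urls": [...]}
            if any(scan_url(u) == "malicious" for u in msg["urls"]):
                remove_message(msg["mailbox"], msg["id"])
        time.sleep(interval_seconds)
```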

Cybersecurity automation: The path to better MSP margins

Hunting threats. Remediating incidents. Configuring and reconfiguring complex solutions. All these tasks and more eat into an MSP’s margins. AI in cybersecurity delivers the automation capabilities that MSPs need to deliver value to more customers without adding to their workload or headcount.

From continuous scanning of email and networks to automated incident response, AI technologies take the onus off the MSP and place it on the AI, allowing the MSP to spend more time managing their business and less time on manual tasks. 

To learn how Vade can help you automate cybersecurity with AI, ask for a demo of Vade for M365.