4 Ways Hackers Break Through Fingerprint and Reputation Defenses
October 31, 2019
3 min read
Email security remains a top concern for organizations around the world. Ransomware, which had been waning in recent years, is back. In 2019, ransomware shut down entire city governments across the US, while Europe saw large-scale ransomware attacks across the manufacturing and health care industries.
According to Datto’s European State of the Channel Ransomware report, managed service providers (MSPs) reported that phishing was the #1 delivery method for ransomware. Of the MSPs who reported ransomware in SaaS applications, 49 percent reported ransomware infections in Office 365.
While businesses and governments scramble to recover from ransomware attacks, including paying six-figure ransoms, hackers are improving their phishing techniques and bypassing standard email security filters. Here are a few of the techniques they use to do it:
1. IP and reputation abuse
Most traditional email security filters, including secure email gateways, are designed to recognize abusive email senders by their IP addresses and domains. Known email threats are added to a blacklist of IPs, domains, and URLs and blocked from reaching user mailboxes.
Hackers have learned to bypass these protections with “snowshoe attacks”: spreading phishing messages at low volumes across a wide range of trustworthy IP addresses, so that no single IP sends enough volume to be flagged. To avoid domain-name blacklisting, cybercriminals dynamically generate new domains and hijack domains that have a good reputation, hacking legitimate websites and using the domain names without the website owners’ knowledge.
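A rough sketch of why per-IP volume scoring misses a snowshoe campaign; the threshold and IP addresses below are invented for illustration:

```python
from collections import Counter

# Hypothetical per-IP volume threshold a reputation filter might use:
# an IP is flagged only after sending more than 100 messages.
THRESHOLD = 100

def flagged_ips(sender_ips):
    """Return the set of sender IPs whose message volume exceeds the threshold."""
    counts = Counter(sender_ips)
    return {ip for ip, n in counts.items() if n > THRESHOLD}

# A single-IP blast is caught immediately...
blast = ["198.51.100.7"] * 5000
print(flagged_ips(blast))        # {'198.51.100.7'}

# ...but the same 5,000 messages "snowshoed" across 250 IPs leave
# every address at 20 messages, far below the threshold.
snowshoe = [f"203.0.113.{i % 250}" for i in range(5000)]
print(flagged_ips(snowshoe))     # set()
```

The campaign's total volume is identical in both cases; only the distribution changes, which is exactly what per-IP reputation scoring cannot see.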
2. Fingerprint obfuscation
Most criminals—whether they want to or not—leave a signature or “fingerprint”. In the case of phishing, the fingerprint includes unique identifiers of the email, such as the header, subject line, body, or footer. When a phishing email is known to an email filter, it’s easy to detect and block. Cybercriminals bypass this protection by obfuscating their fingerprints.
One example is mixing legitimate and illegitimate content in the body of the email: the hacker stuffs the email with links to legitimate Microsoft webpages, the email filter scans the message, detects the clean URLs, and deems the email safe. Other examples include padding a known phishing email with random characters or extraneous HTML attributes. In this case, because the code is slightly altered from the known phishing fingerprint, the email can—and does—bypass the filter.
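A minimal Python sketch of why exact-match fingerprinting is brittle, assuming a hash-based signature (real filters use fuzzier matching, but the failure mode is similar; the email bodies and URLs are placeholders):

```python
import hashlib

def fingerprint(body: str) -> str:
    # Naive exact-match fingerprint: a hash of the raw body.
    return hashlib.sha256(body.encode("utf-8")).hexdigest()

known = '<p>Your mailbox is full. <a href="http://phish.example/fix">Fix now</a></p>'

# Same rendered text, but a meaningless HTML attribute is injected,
# so the code no longer matches the known signature.
mutated = '<p data-z="kq19x">Your mailbox is full. <a href="http://phish.example/fix">Fix now</a></p>'

print(fingerprint(known) == fingerprint(mutated))  # False: the filter sees a "new" email
```

To the recipient both messages render identically; to the filter they are unrelated.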
This type of obfuscation is especially frustrating to email recipients who have already reported an email as phishing. The user recognizes that they’re receiving the same email they reported only days before, but the email filter does not. Visually, the email is the same, but the code of the email—which users don’t see—has changed, making it unique to a filter that cannot see an email like a human can.
3. Text analysis obfuscation
In another example of fingerprint obfuscation, a phishing email may have little or no content to scan, which confuses the filter and bypasses text analysis. In some phishing emails, for example, the body of the email is an image rather than text, although this isn’t apparent to the eye. The image itself serves as the phishing link, and it is hosted on a webpage.
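A small sketch, using Python’s standard `html.parser`, of why an image-only body leaves a text filter nothing to scan (the URLs are placeholders):

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect the visible text a naive content filter would scan."""
    def __init__(self):
        super().__init__()
        self.text = []

    def handle_data(self, data):
        if data.strip():
            self.text.append(data.strip())

# Image-only phishing body: the entire "message" is one linked image,
# so text analysis comes back empty.
body = '<a href="http://phish.example/login"><img src="http://phish.example/invoice.png"></a>'
extractor = TextExtractor()
extractor.feed(body)
print(extractor.text)  # []
```

The filter has no words to score, yet the recipient sees what looks like a normal message.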
Another way cybercriminals evade text analysis is by including so little content in the email that the filter cannot reach a verdict. In other cases, they add homoglyphs, or look-alike characters from other alphabets, to the text. To the naked eye, for example, the Cyrillic letter “а” is barely, if at all, discernible from the Latin letter “a” in a domain name. Like the fingerprint obfuscation examples above, a phishing email with text obfuscation in the code looks like a brand-new email to a filter, but to a user, it’s clearly the same email.
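The homoglyph trick can be seen directly in Python, along with one possible mixed-script check; the check is an illustration, not any vendor’s actual defense:

```python
import unicodedata

latin = "apple.com"
spoof = "\u0430pple.com"  # first letter is CYRILLIC SMALL LETTER A, not Latin "a"

print(latin == spoof)              # False: identical to the eye, different code points
print(unicodedata.name(spoof[0]))  # CYRILLIC SMALL LETTER A

# One possible countermeasure: flag domain labels that mix scripts.
def scripts(label: str) -> set:
    return {unicodedata.name(c).split()[0] for c in label if c.isalpha()}

print(sorted(scripts("apple")))        # ['LATIN']
print(sorted(scripts("\u0430pple")))   # ['CYRILLIC', 'LATIN'] -- suspicious mix
```

A filter comparing raw strings treats the spoofed domain as entirely new, which is exactly what the attacker wants.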
4. URL obfuscation
Credential harvesting via a phishing page is the ultimate goal of phishing. To reach the phishing page, victims must click on the phishing link in the email. But if the phishing link is known to the filter, the email will be blocked. To get around this, cybercriminals obfuscate the URL in a number of ways.
One method of URL obfuscation is through redirects, a mechanism for quickly sending a user from one webpage to another. In Microsoft phishing, the cybercriminal includes a legitimate Microsoft link in the email; once the phishing email has successfully been delivered to the recipient, the attacker redirects that legitimate webpage to a phishing page. This technique is known as “time-bombing” the URL, and it’s extremely difficult to detect without real-time link exploration.
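The time-bombing sequence can be simulated in a few lines of Python; the redirect table below stands in for a server the attacker controls, and all URLs except microsoft.com are placeholders:

```python
# The attacker's redirect table, simulated as a dict (in reality this
# lives on a compromised or attacker-controlled web server).
redirect_table = {"https://compromised.example/doc": "https://www.microsoft.com/"}

def follow(url: str) -> str:
    """Resolve a link the way a scanner or browser would."""
    return redirect_table.get(url, url)

# At delivery time, the filter explores the link and finds a clean page.
print(follow("https://compromised.example/doc"))  # https://www.microsoft.com/

# After the email lands in the inbox, the attacker flips the redirect.
redirect_table["https://compromised.example/doc"] = "http://phish.example/login"
print(follow("https://compromised.example/doc"))  # http://phish.example/login
```

A filter that scans only at delivery time sees the first resolution; the victim who clicks later gets the second.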
Another URL obfuscation technique is the use of URL shorteners like TinyURL and Bitly. These free tools, designed to create short URLs, in effect create URL aliases that obfuscate the phishing URL and trick email filters designed to recognize phishing signatures.
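A sketch of why an alias defeats a domain blacklist; the blacklist contents and the shortened path are invented for illustration:

```python
from urllib.parse import urlparse

# Hypothetical domain blacklist a legacy filter might consult.
BLACKLIST = {"phish.example"}

def blocked(url: str) -> bool:
    """Block a message if the link's hostname is blacklisted."""
    return urlparse(url).hostname in BLACKLIST

print(blocked("http://phish.example/login"))  # True: the raw link is caught
print(blocked("https://tinyurl.com/abc123"))  # False: the alias hides the destination
```

The shortener’s domain has a good reputation, so the filter passes the email even though the alias ultimately resolves to the blacklisted page.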
Moving beyond fingerprint and reputation defenses
Fingerprint- and reputation-based filtering were once the primary methods of blocking malicious emails. But as phishing techniques evolve, email security must evolve with them. Machine learning provides a level of analysis not possible with standard email security filters that rely on fingerprint- and reputation-based detection. Machine learning models explore links in real time, analyzing the URL structure, redirections, elements of the page, and the structure of web forms. This time-of-click analysis verifies whether the URL actually leads to a legitimate webpage at the moment the user clicks.
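The kind of URL-structure signals such a model might score can be sketched as follows; the feature set and names are assumptions for illustration, not any vendor’s actual model:

```python
from urllib.parse import urlparse

def url_features(url: str) -> dict:
    """Illustrative structural features a time-of-click model might score."""
    parsed = urlparse(url)
    host = parsed.hostname or ""
    return {
        "url_length": len(url),                          # long URLs are a weak phishing signal
        "num_subdomains": max(host.count(".") - 1, 0),   # deep subdomain chains are suspicious
        "has_at_sign": "@" in url,                       # "@" can hide the real host
        "path_depth": parsed.path.count("/"),            # deep paths often mask redirectors
        "uses_https": parsed.scheme == "https",
    }

features = url_features("http://login.account.secure.phish.example/verify/session/id")
print(features["num_subdomains"])  # 3
print(features["uses_https"])      # False
```

In practice a trained model weighs many such signals together, alongside page content and form structure, rather than applying any single rule.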
Computer Vision, a subset of Deep Learning, goes a step further than URL and webpage exploration, analyzing images rather than code. Computer Vision recognizes images that are commonly used—and reused—in phishing emails, including brand logos and text-based images, such as the image-as-a-link example above. It can also view and analyze QR codes, which are frequently included in sextortion emails. Because it views images as humans see them, Computer Vision knows a phishing email when it sees it, regardless of what is in the code.