
Simplifying our approach

by Mark Rowe

Defending against the AI hacker could be simpler than you think, writes Adam Maruyama, Field CISO at Garrison.

The widespread accessibility of open-source generative AI has all but guaranteed that hackers will use AI to their advantage. AI-enabled hackers will combine large-scale, automated attacks with exceptional social engineering skills and targeting, meaning these attacks will be not only widespread, but also almost impossible to detect – even for highly-experienced security professionals.

The advent of the AI hacker means that impersonal, easy-to-spot scams – such as the Nigerian Prince con – will likely be a thing of the past. Instead, AI models will scour online profiles and the wider Internet to find a credible colleague to impersonate, and craft messages that look and sound exactly like the person being impersonated. An AI-led phishing scam is more likely to appear to be from corporate executives, co-workers or even close family members, increasing the chances of an attack being successful.

An escalation of multi-stage hacking scams

Anyone engaging in a conversation with ChatGPT will quickly realise that AI’s abilities are not limited to single interactions. When it comes to cybercrime, the implications of this conversational ability are significant – the first phishing email could merely be the entry point of a longer, highly-believable exchange.

While it’s true that these multi-stage attacks are nothing new, until recently, pulling off these complex frauds was only possible for nation-state actors targeting carefully selected individuals. Most criminals simply didn’t have the necessary skills or resources to carry out these complicated and time-consuming attacks, and even nation-state actors had to pick their targets carefully to use resources efficiently. But the advent of generative AI has changed this, granting access to state-sponsored attackers, cybercrime syndicates, lone hackers and everyone in between. What’s more, these adversaries have widened the net to target everyday users – not just VIPs or privileged access targets. So how can enterprises defend against this growing threat?

AI as the security defender

Some people believe that AI’s abilities to detect spoofed emails or malicious code execution will be a panacea, but I’m not convinced. These capabilities will certainly play a part in protecting organisations’ systems, but the known and unknown flaws of AI, such as its common tendency to “hallucinate,” could pose significant issues by generating false positives. For the AI hacker, this isn’t necessarily problematic – the worst that can happen is that their attack misses the mark. But for an AI defender, the stakes are much higher, as a single failure could have catastrophic consequences for the targeted organisation.

Compounding this issue is the fact that, while AI-powered defences may be able to isolate potentially compromised systems, it’s likely that human defenders will need to triage reports and confirm mitigations before re-enabling access. This means that a high false positive rate could reduce the availability of systems and the productivity of employees even if attackers aren’t successful in moving laterally to compromise “crown jewel” systems.

Why employee security training doesn’t work

As the scale and sophistication of phishing attempts grow exponentially, it’s becoming painfully clear that training employees to identify and steer clear of scams is a fool’s errand. Is it reasonable to expect an employee to avoid and flag suspicious content that doesn’t actually look suspicious?

Let’s take AI-driven email spoofing – where AI is used to forge sender addresses at scale – as an example. Since many letters and numbers are hard to tell apart in the most commonly used fonts, spotting these phishing attempts is difficult for users – or at the very least prohibitively time-consuming – increasing the likelihood of the user opening the email and clicking on malicious links. Combine this with email content and links that appear credible thanks to generative AI, and it becomes increasingly unrealistic and deleterious to productivity for organisations to expect their employees to identify phishing emails.
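
To make that concrete, here is a minimal sketch of why lookalike characters defeat the human eye and how a simple automated check can catch them. The homoglyph map and trusted-domain list are illustrative assumptions, not any organisation’s real configuration.

```python
# Illustrative sketch only: the homoglyph map and trusted-domain list below are
# assumptions for demonstration, not a real organisation's configuration.

# Character sequences that render almost identically in common fonts.
HOMOGLYPHS = {
    "0": "o",     # digit zero vs letter o
    "1": "l",     # digit one vs lowercase L
    "rn": "m",    # 'r' + 'n' mimics 'm'
    "а": "a",     # Cyrillic 'а' vs Latin 'a'
}

TRUSTED_DOMAINS = {"example-corp.com"}  # hypothetical allowlist


def normalise(domain: str) -> str:
    """Collapse visually confusable sequences into a canonical form."""
    domain = domain.lower()
    for lookalike, genuine in HOMOGLYPHS.items():
        domain = domain.replace(lookalike, genuine)
    return domain


def looks_like_spoof(sender: str) -> bool:
    """Flag senders whose domain merely *looks* like a trusted one."""
    domain = sender.rsplit("@", 1)[-1]
    if domain in TRUSTED_DOMAINS:
        return False  # exact match: genuinely trusted
    # Not an exact match, but visually indistinguishable from a trusted domain.
    return normalise(domain) in {normalise(d) for d in TRUSTED_DOMAINS}


print(looks_like_spoof("ceo@exarnple-corp.com"))  # True: 'rn' passes for 'm'
print(looks_like_spoof("ceo@example-corp.com"))   # False: the real domain
```

A check like this takes microseconds; asking an employee to perform the same comparison on every email is exactly the prohibitively time-consuming task described above.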

The answer to this growing problem could be simpler than we think. One solution that works and has already been applied to great effect in industries such as banking is security notifications that pop up before users engage in potentially risky behaviour. These ‘security nudges’ alert users in real time when potential security threats emerge and prompt them, in the moment, to actively think about risk.

For example, these nudges could be configured to pop up before a user inputs financial information into a website, or to flag up a risky-looking link that, to the employee, may appear perfectly safe. Nudges have a clear message and can incorporate arresting imagery to engage users and get them to make more considered and safer choices.
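
As a rough illustration of how such a nudge might work – assuming some interception point, such as a browser extension or secure web gateway, can pause the action before it completes – the logic can be as simple as mapping detected risk signals to plain-language prompts. The signal names and wording below are hypothetical.

```python
# Illustrative sketch only: assumes an interception point (browser extension or
# secure web gateway) can pause a risky action; signals and wording are hypothetical.

RISKY_SIGNALS = {
    "credential_form_on_untrusted_site":
        "You are about to enter a password on a site your organisation has not verified.",
    "payment_details_requested":
        "This page is asking for financial information. Confirm the request through a known channel first.",
    "lookalike_domain":
        "This address closely resembles a trusted site but is not the same domain.",
}


def nudge_for(event: dict) -> str | None:
    """Return the nudge to show the user, or None to let the action proceed."""
    for signal, message in RISKY_SIGNALS.items():
        if event.get(signal):
            return message
    return None


# Example: a user starts typing credentials into an unverified site.
event = {"credential_form_on_untrusted_site": True}
message = nudge_for(event)
if message:
    print(f"Security nudge: {message} Do you want to continue?")
```

The point is not the code but the placement: the warning appears at the moment of risk, when the user is best placed to reconsider, rather than in an annual training session.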

But nudges alone aren’t enough to stop adversaries who can exploit systems using the built-in functions of a web browser. In 2023 alone, eight zero-day vulnerabilities were identified in the stack that powers the Chrome and Edge browsers, including one that could compromise a user’s system if they merely viewed a malicious image on a webpage. To avoid these advanced technical attacks, organisations need to push code processing off their systems. One effective control is remote browser isolation, which processes unevaluated web code on a virtually or physically distinct system. Combined with edge controls that exempt only trusted sites from this process, remote browser isolation lets defenders remove technical risk while mitigating the risk that a user will be tricked into providing sensitive data to adversaries.
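
The routing decision behind that combination of edge controls and remote browser isolation can be sketched as follows; the allowlist and isolation-service URL are hypothetical assumptions, not any particular product’s configuration.

```python
# Illustrative sketch only: the allowlist and isolation-service URL are
# hypothetical, and a real deployment would enforce this at a gateway or proxy.

from urllib.parse import quote, urlparse

TRUSTED_SITES = {"intranet.example-corp.com", "mail.example-corp.com"}  # hypothetical allowlist
RBI_GATEWAY = "https://rbi.example-corp.com/render?url="                # hypothetical isolation service


def route(url: str) -> str:
    """Decide where the web code behind a browsing request gets processed."""
    host = urlparse(url).hostname or ""
    if host in TRUSTED_SITES:
        return url  # explicitly trusted: render directly on the endpoint
    # Everything else is handed to the isolation service, so untrusted web
    # code executes on a separate system and never on the user's machine.
    return RBI_GATEWAY + quote(url, safe="")


print(route("https://intranet.example-corp.com/hr"))  # rendered locally
print(route("https://unknown-offer.example/deal"))    # processed in isolation
```

Because only a small, explicitly trusted set of sites ever runs code on the endpoint, a zero-day in the browser stack lands on the isolation system rather than on the user’s machine.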

The rapid evolution of social engineering techniques that can be applied on an industrial scale means we can no longer realistically rely on employees’ abilities to spot these scams.

Instead of relying on a combination of users, whose expertise and primary responsibility is not network security, and security technologies designed to detect threats once they’ve entered the perimeter, security professionals should pivot to preventing adversaries from accessing corporate systems in the first place. This is only possible through a combination of robust technical controls that mitigate technical risk and real-time nudges that notify users when they may be providing privileged information, such as credentials, to an untrusted party.

Combining robust security controls with real-time nudges that build awareness of security risk and prompt safer browsing behaviours is the best – and perhaps the only – way to stay ahead of hackers’ attempts to trick users. By making deception more visible and protecting users from technical risks, these tools create better outcomes for their organisations. This will be critical in a world where threats no longer look suspicious.
