AI and the cyber landscape

by Mark Rowe

As Artificial Intelligence (AI) disrupts almost every sector of the economy, opportunistic cybercriminals are experimenting with its malicious applications. In July, reports emerged that large language models (LLMs) specialised in generating malicious content were being sold on the dark web. James Tytler, Cyber Associate at the corporate intelligence and cyber security consultancy S-RM, explores how cybercriminals could exploit AI to alter the threat landscape, and why the threat from these tools is easily overstated.

The rise of ‘dark’ chatbots

LLMs are a form of generative AI trained on vast amounts of written input. They produce human-quality text, images, code, and other media in response to prompts, and their rapid development has sparked concerns from the security community that malicious actors will use them to code malware or draft convincing phishing emails. In April, Europol warned that LLMs could be abused to commit fraud “faster, much more authentically, and at a significantly increased scale”.

In response to these concerns, the developers of major publicly accessible LLMs have hastily introduced constraints on the prompts their chatbots will accept, including denying so-called “jailbreaking” prompts intended to bypass ethical boundaries. OpenAI’s ChatGPT, the first widely adopted LLM chatbot, once allowed users to request a phishing email by presenting it as part of a training exercise; it now typically refuses any prompt containing terms such as “malware” or “phishing”, regardless of context.

In early July, however, cybercriminals began selling access to what are described as unrestricted “evil clones” of ChatGPT on the dark web. The most well-known, “WormGPT”, is allegedly offered on a subscription basis and marketed for its ability to produce phishing emails and malware. Following a spike in media attention, the purported developer of WormGPT announced price hikes on Twitter.

Gauging the chatbot threat

The capabilities of generative AI have increased at an exponential rate, but there remain several reasons to be sceptical about the current threat posed by these malicious LLMs. The computational resources required to run LLMs at scale are enormous, and it’s unlikely that a rogue model could match the performance of major commercial LLMs developed by OpenAI or Google. WormGPT, for example, is believed to run on the open-source GPT-J model released in 2021, which falls well short of GPT-3.5, used in the freely available version of ChatGPT, let alone GPT-4, the most recent paid release.

As a result, it’s worth questioning whether unconstrained LLMs are genuinely useful to threat actors at this stage. While undoubtedly good at producing well-written emails, their ability to generate code is less convincing. Alleged proofs of concept for using ChatGPT to code malware have involved significant handholding from knowledgeable developers and been of limited practical use. The prevalence of LLMs has led to a decline in spelling and grammatical errors in phishing emails, but those errors often served a purpose: they helped fraudsters self-select gullible victims and slip past spam filters scanning for specific keywords and phrases, as the sketch below illustrates.
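
As a toy illustration, the Python sketch below (with a hypothetical phrase list and made-up messages) shows how a naive keyword filter catches an exact phrase but misses both a deliberate misspelling and a fluent, LLM-style rewording:

    # Toy keyword-based spam filter (hypothetical phrase list;
    # real filters combine many more signals than this).
    SUSPICIOUS_PHRASES = ["verify your account", "urgent payment", "password reset"]

    def looks_like_phishing(message: str) -> bool:
        """Flag a message that contains any known suspicious phrase."""
        text = message.lower()
        return any(phrase in text for phrase in SUSPICIOUS_PHRASES)

    print(looks_like_phishing("Please verify your account today"))   # True: exact phrase caught
    print(looks_like_phishing("Please verfiy your acount today"))    # False: misspellings slip past
    print(looks_like_phishing("Kindly confirm your login details"))  # False: fluent rewording also evades it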

It is difficult to measure what impact threat actors’ access to LLMs will have on the sophistication of phishing campaigns. WormGPT itself may be more of a gimmick than a game changer, but its appearance is indicative of the many ways AI will shape cyber threats in the future, and of the economic incentive cybercriminals now have to experiment with LLMs and other forms of AI.

Medium term threats

Ransomware groups – highly professionalised, well-funded and potentially linked to governments and adversarial nation states – are more likely to invest in AI-driven research and development. While it is difficult to predict the exact form this will take, we expect medium-term developments to include leveraging AI to identify vulnerabilities in corporate networks and gain initial access.

Indicative of this trend is the release of PentestGPT, an automated, open-source penetration testing toolkit. It can reportedly solve easy-to-medium “capture the flag” challenges used by human hackers to test their skills and is expected to streamline the process of identifying weak points in networks. PentestGPT and similar tools are in their infancy, but advancements in machine learning could enable malicious actors to identify previously unknown “zero-day” vulnerabilities.

AI’s potential to clone a person’s visual likeness, individual mannerisms and movement patterns also presents risks. An individual’s voice can be cloned from just a few minutes of recorded speech, as demonstrated by recent spoofing attacks and by security researchers who have used cloned voices to defeat the voice recognition systems of tax authorities and banks. The pairing of these biometric technologies threatens to bring even more sophisticated social engineering attacks. In an era of remote working, you may no longer be able to trust that you’re speaking to the person you think you are on a virtual call.

How organisations can protect themselves

In light of these rapidly evolving threats and the declining effectiveness of traditional signature-based antivirus tools, organisations should ensure they have artificial intelligence in their defensive arsenals. Email filtering technology leverages AI to help stop even the best-crafted phishing emails from reaching end users, and modern Endpoint Detection and Response (EDR) platforms use machine learning to detect active threats based on their behaviour.
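
The sketch below shows that learned-classifier idea in miniature. It assumes the scikit-learn library, and the five training messages are invented for illustration; production filters train on millions of messages and draw on many more signals than raw text:

    # Minimal sketch of ML-based phishing detection (assumes
    # scikit-learn; the training data below is invented).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Invented training examples: 1 = phishing, 0 = legitimate.
    messages = [
        "Urgent: verify your account or it will be suspended",
        "Your invoice is attached, click here to pay immediately",
        "Reminder: team meeting moved to 3pm tomorrow",
        "Please confirm your password to restore mailbox access",
        "Minutes from last week's board meeting attached",
    ]
    labels = [1, 1, 0, 1, 0]

    # The model learns statistical patterns of word usage rather
    # than matching fixed signatures.
    model = make_pipeline(TfidfVectorizer(), MultinomialNB())
    model.fit(messages, labels)

    # A fluent, error-free lure can still score as suspicious for
    # what it asks, not how it is spelled.
    print(model.predict(["Kindly confirm your account details to avoid suspension"]))

The shift in approach matters: a signature tool asks whether it has seen an exact artefact before, while a learned model asks whether a message or a process behaves like the malicious examples it was trained on.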

However, even with the deployment of AI in defence tooling, there is no substitute for critical thinking. Humans are one of the most vulnerable attack vectors. A well-informed and engaged workforce is a critical first line of defence.

About the firm

You can subscribe to a cyber intelligence briefing from the firm. Headquartered in London, S-RM also has offices in Cape Town, Hong Kong, Manchester, New York, Rio, Utrecht, Washington DC and Singapore.

Visit www.s-rminform.com.
