
Three attack scenarios for age of AI

by Mark Rowe

Lavi Lazarovitz, Vice President of Cyber Research at CyberArk Labs, offers three attack scenarios for the age of AI.

Artificial intelligence (AI) technologies, especially Generative AI tools, are already having a profound impact on modern society. Each day, new services are introduced that reach into more aspects of our lives; they can do your kids’ homework, help you make better investment decisions, turn your selfie into a Renaissance painting or write code on your behalf.

ChatGPT and other generative AI tools can be powerful forces for good and countless positive use cases are already emerging. But they’ve also enabled a tsunami of innovation by cyber attackers looking to use these tools for malicious purposes, creating mounting global concern. Recently, the head of the U.S. Federal Trade Commission emphasised the need for vigilance, noting that the FTC is already seeing AI being used to “turbocharge” fraud and scams.

But what would these attacks look like if they were to come to fruition? Here are three scenarios that attackers are already exploring:

Scenario 1: Vishing

Companies invest substantial time and effort in educating employees to spot the warning signs of phishing emails. But imagine you are sitting at your desk and receive a WhatsApp message from your company’s CEO asking you to transfer money urgently. The message shows his profile picture and includes a voice note that sounds exactly like him. It seems peculiar, since he has never contacted you through this platform before, but you ask for the payment details and he provides everything you need immediately. You assume it really is the CEO who contacted you – but it is not that simple.

Attackers can use AI text-to-speech models to mimic anyone – company executives, celebrities, or even presidents of countries – and obtain sensitive details and credentials by building a false sense of trust with their targets. This type of “vishing” campaign can be run at scale and is very difficult to detect, which makes it alarming for cybersecurity professionals and everyone else. AI specialists anticipate that AI-generated content will soon be almost indistinguishable from human-made content, which will make such campaigns even harder to spot.

Scenario 2: Biometric authentication

Moving from audio-based attacks to visual ones: facial recognition is a widely used authentication option, but it can be compromised by attackers using generative AI tools. In a study by threat researchers at Tel Aviv University, a “master face” or “master key” was created using GANs (Generative Adversarial Networks) to match facial images stored in a large repository, allowing attackers to bypass most facial recognition systems. The researchers were able to produce a set of nine images that matched over 60% of the faces in the database, giving attackers an excellent chance of bypassing facial recognition authentication to compromise an identity.
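To see why a small dictionary of faces can cover so many identities, consider how a typical matcher works: it accepts a probe image if the similarity between its embedding and the enrolled user’s embedding crosses a threshold. The toy simulation below is a minimal sketch of that dictionary attack under made-up assumptions (128-dimensional embeddings, five clusters, a hypothetical 0.6 threshold); it stands in for, and does not reproduce, the study’s GAN-based latent-space search.

```python
# Toy dictionary attack on a threshold-based face matcher.
# All values (embedding size, cluster count, noise scale, 0.6 threshold)
# are illustrative assumptions, not parameters from the Tel Aviv study.
import numpy as np

rng = np.random.default_rng(42)
DIM, USERS, CANDIDATES, THRESHOLD = 128, 1000, 500, 0.6

def normalise(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Enrolled users: embeddings drawn around a few cluster centres,
# mimicking the fact that real face embeddings are not spread uniformly.
centres = normalise(rng.normal(size=(5, DIM)))
users = normalise(centres[rng.integers(0, 5, size=USERS)]
                  + 0.05 * rng.normal(size=(USERS, DIM)))

# Candidate "faces" the attacker generates (a GAN would propose these).
candidates = normalise(centres[rng.integers(0, 5, size=CANDIDATES)]
                       + 0.05 * rng.normal(size=(CANDIDATES, DIM)))

# Greedy dictionary attack: repeatedly pick the candidate image that
# unlocks the most not-yet-covered accounts.
matches = candidates @ users.T >= THRESHOLD   # (candidate, user) match matrix
covered = np.zeros(USERS, dtype=bool)
for i in range(9):                            # a nine-image dictionary
    best = int((matches & ~covered).sum(axis=1).argmax())
    covered |= matches[best]
    print(f"master face {i + 1}: covers {covered.mean():.0%} of users")
```

Because real face embeddings cluster rather than spreading uniformly, a handful of well-chosen “master” embeddings can each sit close enough to a large group of users to clear the threshold for all of them.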

Generative AI models have been around for some time, but the scale of today’s models is what’s generating buzz. GPT-3 alone was trained with roughly 100 times more parameters than GPT-2, enabling it to create incredibly realistic deepfakes and malware – and now we have GPT-4, which is even more advanced. As AI models continue to learn, they will become increasingly skilled at creating dangerous threats, fundamentally changing the threat landscape.

Scenario 3: Polymorphic malware

Currently, various experiments are being conducted by developers and researchers using generative AI to write different types of code, including malware. These large language models (LLMs) are enthusiastic, yet naïve, developers. They write code quickly but miss critical details and context. CyberArk Labs research found that defence evasion using AI-generated polymorphic malware – or malware that mutates its implementation while keeping its original functionality intact – is viable. For example, an attacker could use ChatGPT to generate (and continuously mutate) information-stealing code for injection. By infecting an endpoint device and targeting identities – locally stored session cookies for instance – they could impersonate the device user, bypass security defences and access target systems while staying under the radar. As AI models improve and attackers continue to innovate, automated identity-based attacks like these will become part of malware operations.
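The evasion piece is easiest to see with a harmless example. Traditional signature-based detection fingerprints a known-bad file by its bytes; the sketch below (benign placeholder code, not malware) shows how two implementations with identical behaviour produce completely different hashes, so a signature written for one variant never matches the next mutation.

```python
# Benign illustration of why static, signature-based detection struggles
# with polymorphic code: two byte-for-byte different implementations of
# the same behaviour produce entirely different hashes. The snippets are
# harmless placeholders, not malware.
import hashlib

variant_a = b"def greet(name):\n    return 'hello ' + name\n"
variant_b = (b"def greet(name):\n"
             b"    parts = ['hello', name]\n"
             b"    return ' '.join(parts)\n")

# Both variants behave identically when executed...
scope_a, scope_b = {}, {}
exec(variant_a, scope_a)
exec(variant_b, scope_b)
assert scope_a["greet"]("world") == scope_b["greet"]("world")

# ...yet a hash-based signature for one fails to match the other.
print("variant A:", hashlib.sha256(variant_a).hexdigest()[:16])
print("variant B:", hashlib.sha256(variant_b).hexdigest()[:16])
```

This is one reason defenders increasingly pair signatures with behavioural detection, which keys on what code does rather than on how its bytes look.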

Countering attack innovation with AI-powered defence

AI is already transforming the threat landscape and attackers are finding new opportunities to target identities and bypass authentication using AI-based tools. This is quickly making identity compromise the most efficient way for them to move through environments and access sensitive systems and data.

As the cyber security industry’s AI-focused threat research progresses, it is crucial to recognise that AI is also a potent tool for cyber defenders; it plays a pivotal role in countering the ever-changing security landscape, enhancing agility, and enabling organisations to stay one step ahead of attackers. This signifies a promising new era in which cybersecurity deployments are simpler, highly automated, and more impactful. Organisations can effectively mitigate threats – both now and in the future – by harnessing AI to optimise identity security measures, specifically around human and non-human identities.
