AI designed to exploit vulnerabilities

by Mark Rowe

Steve Rushin, Enterprise Sales Director at the cyber security company Red Helix, sees a dark side of generative AI.

Generative AI refers to any artificial intelligence (AI) that can be used to create new, previously unseen content, including text, music, images, code or any other form of data. The most famous of these, OpenAI’s ChatGPT, has gained a huge amount of traction in the public eye, becoming the fastest-growing consumer application in history.

While there is a wealth of potential in this technology, with use-cases ranging from drafting emails and writing articles, to data augmentation and automated programming, there is also a great deal of risk. Much of the media coverage so far has focussed on the long-term dangers of artificial intelligence, from worries about job losses to technology bosses admitting their concerns about a full-scale, science fiction-style AI takeover. But there are also more immediate threats we should be aware of.

One such threat is that the functionality of generative AI, as it stands today, has the potential to be weaponised by malicious actors and used to aid cyber-crime – turning a potent and beneficial technology into one that poses a considerable cyber security risk.

The malware threat

Traditionally, creating malware required an intimate knowledge of various programming languages and an understanding of vulnerabilities in different systems. Generative AI holds the potential to change this. As previously mentioned, one of the most useful features of this technology is its ability to create code, but that same capability means it can also be used to write malicious code.

While there are protections in place to prevent this, these barriers can be circumvented. A recent article from the Japan Times reported that ChatGPT could be tricked into writing malicious code simply by entering a prompt that makes the chatbot respond as if it were in developer mode. And with this technology now widely available, there is the additional risk of criminals building their own generative AI models without any safeguards in place.

This significantly lowers the barrier to entry into cyber-crime. It removes the need for in-depth knowledge and experience in programming, with artificial intelligence stepping in to do the heavy lifting – meaning more sophisticated, harder-to-detect malware could be produced at a faster rate than ever before.

In addition to their malware creation capabilities, AI models could also be trained to identify and exploit vulnerabilities in computer networks. Currently, this task tends to be conducted manually, with cyber-criminals searching for weak points to target in an organisation’s network. However, training AI models to perform this task could significantly amplify the speed and scale at which this is done.

Not only that, but an AI-driven approach also allows for more targeted attacks. Threat actors could train the AI to identify specific types of data, or specific network configurations, enabling attacks that could cause considerably more damage.

Refined social engineering attacks

Aside from the ability to write code and identify vulnerabilities, there is another threat presented by the increasing capabilities of generative AI. Social engineering attacks, which were responsible for the majority of successful breaches of UK businesses in 2022, could become more convincing and more quickly produced – exacerbating an already serious threat.

The primary concern here is that a dataset of successful phishing emails could be fed into the application. This would enable the AI to learn the language patterns, techniques and tricks that have proven successful in the past and closely mimic them, making it harder for recipients to identify the messages as malicious. Part of the data fed in could be produced by an individual within the company, such as an executive, allowing the AI to further enhance its impersonation.

There is also, again, the possibility that automating the generation of phishing emails will allow cyber-criminals to produce them at a scale unfeasible for human attackers. AI trained in social engineering would not only be able to send a vast number of initial emails but could then respond and learn how to adapt its strategy in real time.

Awareness is key

While the threat presented by AI may paint a bleak picture, it isn’t all doom and gloom. Alongside the possibility of nefarious use, it is important to remember that artificial intelligence is being harnessed to provide additional cyber security defences – identifying unusual network behaviour, detecting threats and generating and applying patches for newly discovered vulnerabilities.
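For a flavour of what that defensive use looks like in practice, the sketch below shows one common pattern: training an anomaly-detection model on a baseline of normal network traffic, then flagging sessions that deviate from it. It is a minimal illustration only, assuming Python with the scikit-learn library; the flow features and the figures used here are hypothetical and chosen for readability, not drawn from any particular product.

# A minimal sketch of AI-assisted detection of unusual network
# behaviour, using scikit-learn's IsolationForest. The feature set
# (bytes sent, packets, session duration, distinct ports) is a
# hypothetical example; real deployments would derive features
# from flow logs such as NetFlow or Zeek output.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is one network session:
# [bytes_sent, packets, duration_seconds, distinct_ports]
baseline_traffic = np.array([
    [12_000, 40, 30.0, 2],
    [9_500, 35, 25.0, 1],
    [15_000, 55, 45.0, 3],
    [11_000, 42, 33.0, 2],
])

# Train on traffic assumed to be normal; contamination sets the
# expected share of outliers the model should flag.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(baseline_traffic)

# A session pushing a huge burst of data to many ports looks
# nothing like the baseline and should be flagged as anomalous.
new_sessions = np.array([
    [10_500, 38, 28.0, 2],       # ordinary session
    [900_000, 4_000, 10.0, 60],  # suspicious burst
])
for session, verdict in zip(new_sessions, model.predict(new_sessions)):
    label = "anomalous" if verdict == -1 else "normal"
    print(f"{session.tolist()} -> {label}")

One appeal of this approach is that it needs no labelled attack data: the model learns what normal looks like and scores everything else by how far it sits from that baseline.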

The concerns around AI’s usage underscore the overall importance of having robust, up-to-date cyber defences that are fit for purpose. They also highlight the imperative for organisations to foster a culture of cyber awareness amongst their staff.

It is not only important that employees are aware of the additional tools generative AI provides criminals; they should also be cautious about using it themselves. As a new and largely unregulated technology, it raises concerns about privacy and data security, with some countries, Italy for example, banning it pending further investigation.

ChatGPT and other generative AI applications need to be treated with the same caution as any other online platform. Their providers expressly warn users not to share sensitive information and, as a rule of thumb, users should avoid entering anything they wouldn’t want to see posted on the internet. That’s not only because these technologies are designed to learn from the data entered, but also because they have become prime targets for attack due to the data they hold – with OpenAI itself confirming a breach in early May.

Artificial intelligence is undoubtedly an incredibly exciting technology, and one that has the potential to revolutionise the way we live. But it is also important that we balance the excitement swirling around these tech advancements with an acknowledgement of the associated risks, ensuring that both our security environment and our personnel are equipped with the right tools to protect against them.
