
Double-edged sword of AI in cyber

by Mark Rowe

Artificial intelligence (AI) in cybersecurity is not new and has been used for decades to detect unusual activity. But since development accelerated in recent years, with OpenAI, Google and other technology giants launching their own large language models (LLMs), its use has expanded to cover new use cases such as attack surface management, writes Dr Sebastian Schmerl, Vice President Security Services EMEA at the cyber firm Arctic Wolf.

This is both a blessing and a curse for cybersecurity workers: they can use it to help analyse data and identify threats, reducing their workload. But criminals also use it to supercharge their own attacks, helping them scan for vulnerabilities, generate convincing phishing content, and develop adaptive malware.

To overcome the double-edged sword of this technology it is crucial to understand how criminals are using it, and where it can truly add value to defenders. By taking this approach, cybersecurity teams will put themselves in the best position to succeed over the coming years.

Partners in crime: how AI helps threat actors

LLMs and generative AI (gen AI) are powerful tools as they lower the barrier to entry for threat actors to develop malware, find vulnerabilities, write more convincing personalized phishing emails, and automate attacks. As a result, the speed, reach, and sophistication of attacks have significantly increased.

A few years ago, phishing emails could be identified by their awkward phrasing and spelling errors, with implausible stories to trick recipients into clicking links or transferring money. Today, thanks to gen AI, the quality has increased, making it harder to tell whether an email is from a legitimate manager instructing you to read attachments, change banking details or execute transactions, or from a criminal out to trick you. AI even enables cybercriminals to generate audio, video, or image recordings of managers, CEOs or other authority figures, as when an Arup employee was tricked into transferring millions of dollars. Just a few photos are now enough to create realistic deepfakes for video calls, complete with imitated voices and faces, or to create misleading media of politicians to sow confusion and manipulate public perception.

A disturbing recent example involves alleged deepfake use of Marco Rubio via Signal. A threat actor created text and audio messages mimicking the United States Secretary of State and contacted senior US politicians and foreign ministers. All the attacker needs is an image of the person they want to imitate. This incident illustrates how cybercriminals and politically motivated actors can create deepfakes to damage politicians’ reputations, spread misinformation, and destabilise societies. It is crucial to quickly and reliably verify content authenticity – otherwise, there’s a real danger that major political decisions could be based on fake information, or that similar incidents may continue to occur.
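
The article stresses that content authenticity must be verified quickly and reliably. One possible building block, sketched below purely as an illustration and not something the article itself describes, is cryptographic signing of official communications so that recipients can check a message really came from the claimed sender. The sketch assumes the Python `cryptography` package; key distribution and deepfake detection itself are out of scope here.

```python
# Illustrative sketch only: one building block for verifying that a message
# really came from a named sender is a digital signature. Assumes the
# "cryptography" package is installed.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The sender generates a key pair once and publishes the public key
# through a trusted channel (hypothetical setup for this example).
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

message = b"Please call me back on the usual number before any transfer."
signature = private_key.sign(message)

def is_authentic(public_key, message: bytes, signature: bytes) -> bool:
    """The recipient verifies the signature before trusting the message."""
    try:
        public_key.verify(signature, message)
        return True
    except InvalidSignature:
        return False

print(is_authentic(public_key, message, signature))              # True
print(is_authentic(public_key, b"Tampered content", signature))  # False
```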

Strengthening defences with AI

Fortunately, AI is also bolstering defences. For example, it is helping to detect deepfakes by learning the differences between real and fake identities, identifying inconsistencies in image and video content – such as facial glitches – and analysing other anomalies in audio, video, and image data.
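
As a toy illustration of the "inconsistencies in video content" idea (not a production detector, which would rely on trained models), the sketch below scores frame-to-frame changes in the detected face region of a video. It assumes OpenCV (`cv2`) is installed; the input file name is hypothetical.

```python
# Toy heuristic only: score frame-to-frame inconsistency in the face region
# of a video. A high score is merely a signal for further review, not proof
# of a deepfake.
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_histogram(frame):
    """Return a grayscale histogram of the first detected face, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    hist = cv2.calcHist([gray[y:y + h, x:x + w]], [0], None, [64], [0, 256])
    return cv2.normalize(hist, hist).flatten()

def inconsistency_score(video_path):
    """Average histogram distance between consecutive face crops."""
    cap = cv2.VideoCapture(video_path)
    prev, distances = None, []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hist = face_histogram(frame)
        if hist is not None and prev is not None:
            distances.append(cv2.compareHist(prev, hist, cv2.HISTCMP_BHATTACHARYYA))
        if hist is not None:
            prev = hist
    cap.release()
    return sum(distances) / len(distances) if distances else 0.0

print(inconsistency_score("suspect_call.mp4"))  # hypothetical input file
```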

Machine learning (ML) and LLMs also help security teams detect unusual behaviour and analyse threats. For instance, if a marketing employee suddenly accesses accounting data, an alert is triggered. In the event of a security incident, AI can also assist by recommending actions based on previous events and responses, much as Amazon suggests products: for example, "Security managers who responded to similar incidents took the following actions." This empowers IT teams even if they have limited cybersecurity expertise. Vulnerability management benefits especially from AI, which is increasingly important as the number of known vulnerabilities has skyrocketed from around 6,500 in 2015 to over 40,000 in 2024.
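
A minimal sketch of the behavioural check described above might look like the following: flag access events that fall outside a user's learned baseline. The department names, resources, and event format are hypothetical; production systems use far richer ML models and many more signals.

```python
# Minimal sketch: alert when a user touches a resource outside their baseline.
from collections import defaultdict

# Baseline: which resources each user has touched historically (toy data).
baseline = defaultdict(set)
historical_events = [
    {"user": "alice", "department": "marketing", "resource": "campaign-assets"},
    {"user": "alice", "department": "marketing", "resource": "crm-reports"},
    {"user": "bob",   "department": "accounting", "resource": "general-ledger"},
]
for event in historical_events:
    baseline[event["user"]].add(event["resource"])

def check_event(event):
    """Raise an alert if a user accesses a resource outside their baseline."""
    if event["resource"] not in baseline[event["user"]]:
        print(f"ALERT: {event['user']} ({event['department']}) "
              f"accessed unfamiliar resource '{event['resource']}'")
    else:
        print(f"ok: {event['user']} -> {event['resource']}")

# A marketing employee suddenly reading accounting data triggers the alert.
check_event({"user": "alice", "department": "marketing", "resource": "general-ledger"})
```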

Keeping the human touch

AI also offers opportunities to boost efficiency within the Security Operations Center (SOC), the hub for security monitoring and analysis. Security analysts can use AI to generate natural-language threat reports from highly technical forensic artifacts tied to incidents, presenting the findings at a level of detail suited to different cybersecurity skill levels (an IT security expert versus a CISO versus a CEO). It can also group and categorise alerts by root cause, streamlining rapid incident response by addressing the underlying cause rather than reacting to symptoms only.
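
The grouping idea can be sketched very simply: cluster related alerts under a shared root-cause key so responders address one underlying cause instead of many individual symptoms. The alert fields, values, and the crude grouping rule below are hypothetical placeholders for what a real SOC platform would do.

```python
# Hedged sketch: group related alerts by a shared root-cause key.
from collections import defaultdict

alerts = [
    {"id": 1, "host": "web-01", "signature": "outbound C2 beacon"},
    {"id": 2, "host": "web-01", "signature": "suspicious scheduled task"},
    {"id": 3, "host": "web-01", "signature": "credential dump attempt"},
    {"id": 4, "host": "hr-laptop-7", "signature": "phishing attachment opened"},
]

def root_cause_key(alert):
    """Very crude rule: alerts on the same host within one incident window
    are treated as symptoms of a single underlying compromise."""
    return alert["host"]

groups = defaultdict(list)
for alert in alerts:
    groups[root_cause_key(alert)].append(alert)

for host, related in groups.items():
    signatures = ", ".join(a["signature"] for a in related)
    print(f"Incident on {host}: {len(related)} alert(s) -> {signatures}")
```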

Another benefit of AI in cybersecurity is the improvement of human-machine interaction. LLMs enable natural language queries instead of complex commands, allowing security personnel to interact more intuitively with IT systems, a crucial advantage for teams without deep security domain expertise.
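
The sketch below illustrates that interaction pattern: an analyst's plain-English question is translated into a structured log query before it is run. The `call_llm` function is a placeholder for whichever LLM API is actually in use, and the query syntax shown is a generic example rather than any specific product's language.

```python
# Illustrative sketch of natural-language querying for security tooling.
def call_llm(prompt: str) -> str:
    # Placeholder: in practice this would call a hosted or local model.
    # The returned text is a hard-coded example of what a model might produce.
    return 'source=auth_logs user="alice" action="login_failed" | stats count by src_ip'

def natural_language_to_query(question: str) -> str:
    prompt = (
        "Translate the following security question into a log search query. "
        "Return only the query.\n"
        f"Question: {question}"
    )
    return call_llm(prompt)

query = natural_language_to_query("How many failed logins did alice have, grouped by source IP?")
print(query)  # The generated query would then be reviewed and run by the analyst.
```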

Furthermore, whether in offence or defence, AI always requires human oversight. Smart algorithms can optimise processes, produce analyses, reduce alert fatigue, and suggest actions. But the final decision remains with human experts.

AI is not a silver bullet

Employees and security officers should still exercise caution when using AI. First of all, an LLM will always produce an answer to a question, and that answer can be quite convincing, but wrong. In addition, any information entered into a freely available LLM is typically added to training data or used to build a profile of the user. As with all web services, uploaded data is used in one way or another; in the case of an LLM, the information entered may effectively be made available to other users. Confidential data should therefore never be copied into a public LLM. Broadly, the rule here is 'if the use is free, the users pay with their data.' Administrators who paste their entire configuration into ChatGPT to get help solving IT problems will sooner or later be presented with the bill for this as well.
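
One practical precaution, sketched below as an assumption rather than a recommendation from the article, is to scrub obvious secrets from any configuration snippet before it leaves the organisation, for example before pasting it into a public LLM. The regex patterns are illustrative only and by no means exhaustive; they are no substitute for a data-handling policy.

```python
# Minimal sketch: redact obvious secrets (credentials, keys, internal IPs)
# from a configuration snippet before sharing it with any external tool.
import re

REDACTIONS = [
    (re.compile(r"(?im)^(\s*(?:password|passwd|secret|api[_-]?key|token)\s*[=:]\s*).+$"),
     r"\1<REDACTED>"),
    (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "<INTERNAL-IP>"),
]

def redact(text: str) -> str:
    for pattern, replacement in REDACTIONS:
        text = pattern.sub(replacement, text)
    return text

config = """
db_host = 10.20.30.40
db_user = svc_reporting
password = Sup3rS3cret!
api_key: AKIAEXAMPLEKEY123
"""
print(redact(config))
```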

The next step in cyber defence

The next generation of AI will be capable of discovering entirely new attack vectors using improved reasoning capabilities, and of creating even more convincing deepfakes. The good news is that what attackers use today will be used by defenders tomorrow. So the familiar cat-and-mouse game between attackers and defenders will go on.

The future of successful cyber defence lies in strategic collaboration between humans and machines. Those who combine both wisely will not only detect and repel threats more effectively but also be better prepared for the cyber risks and attacks of the years ahead.
