
Protecting against AI-enabled cybercrime

by Mark Rowe

New data shows that cybercrime will surge by 15 per cent throughout 2024. Recent incidents, such as the ransomware attack on NHS London and the global IT outage that created loopholes for cybercriminals to exploit, show the cybersecurity landscape is experiencing a tough year, says Phil Calvin, Chief Product Officer at the cyber and cloud product company Delinea.

AI is playing a significant role in this and is proving to be one of the reasons that organisations are falling victim to new ransomware attacks and data breaches. Cybercriminals are now adopting AI into their toolkits and using it to design sophisticated phishing and social engineering attacks that target identity theft and compromise individuals' credentials.

The evolution of the threat landscape

Long gone are the days when it was easy to spot common security hacks. While cybercriminals were inventive before, AI has accelerated the process, making new attacks and risks ever more frequent. AI has become a perfect addition to cyber weaponry, given its ability to automate attacks, as well as augment and update them in real time. The result is a battle of AI algorithms between defenders and attackers.

One of the biggest threats that organisations face from cybercriminals using AI in their toolkits is the significant development in phishing and social engineering that targets identity theft. Attackers can use AI not only to write emails or text messages, but also to make them sound legitimate in tone of voice, language and style, and even to personalise them based on publicly available data or information victims themselves have shared. Cybercriminals are also deploying deepfakes to pose as trusted individuals on calls, manipulating victims into disclosing sensitive information. Phishing campaigns that use generative AI are so advanced that it is now almost impossible to tell the difference between authentic and malicious communications.

Another increasingly common tactic used by cybercriminals who have adopted AI is Business Email Compromise (BEC), which uses AI and deepfakes to impersonate employees, typically with the motive of conducting financial fraud. As a result, organisations must not only verify human identities in a digital world, but also verify the multiple digital identities that are all communicating. It is no longer enough to verify identities and grant access just once. This must be done at every interaction to reduce the risk of identity impersonation and fraud.
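The idea of verifying at every interaction, rather than once at login, can be sketched in code. The following is a minimal illustration, not Delinea's implementation: each sensitive request carries a short-lived, signed proof of identity that is re-checked on arrival, so a stolen session or an impersonated sender fails verification. All names, keys and thresholds are hypothetical.

```python
import hmac
import time

SECRET = b"shared-signing-key"      # illustrative; real systems use per-user key material
MAX_TOKEN_AGE_SECONDS = 60          # proofs expire quickly, forcing re-authentication

def issue_proof(user_id: str) -> tuple[str, float]:
    """Issue a time-stamped proof after a fresh identity challenge succeeds."""
    issued_at = time.time()
    message = f"{user_id}:{issued_at}".encode()
    signature = hmac.new(SECRET, message, "sha256").hexdigest()
    return signature, issued_at

def verify_interaction(user_id: str, signature: str, issued_at: float) -> bool:
    """Re-verify identity on every interaction, not just at login."""
    if time.time() - issued_at > MAX_TOKEN_AGE_SECONDS:
        return False                # stale proof: require a fresh challenge
    message = f"{user_id}:{issued_at}".encode()
    expected = hmac.new(SECRET, message, "sha256").hexdigest()
    return hmac.compare_digest(signature, expected)
```

Because the proof is bound to a specific identity and a specific time, a BEC attacker replaying it under another name, or after it has expired, is rejected.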

AI versus AI

The increased use of AI in cyberattacks means that businesses must be poised to adopt additional levels of protection, such as strong authentication, multifactor authentication, and Intelligent Authorisation. These tools provide crucial layers of defence that safeguard identities and credentials from AI-driven threats. Specifically, Intelligent Authorisation plays a vital role in managing the relationship between identity and data security by ensuring that only the right individuals have access to sensitive information at the right time.

However, businesses must quickly develop strategies that not only assess the risks AI poses today, but also anticipate future risks, meaning they need an adaptive and dynamic security strategy that can quickly evolve. Organisations can no longer rely on manual, human security processes that are slow and static, and whose effectiveness erodes over time. The use of AI in cybercrime means organisations must implement security controls and solutions that are dynamic and able to adapt to future threats. This is where security platforms add value: they can introduce new algorithms quickly and turn on new features in real time to ensure organisations have the security they need.

On top of that, businesses must not only defend their infrastructure from attacks that leverage AI, they must also mitigate the risks of using their own AI technology, such as agents, Large Language Models (LLMs), algorithms and data sets, internally. A robust Identity and Access Management programme is essential for protecting AI from being poisoned or misconfigured, making it critical that a strong privileged access strategy underpins all enterprise AI.
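One way a privileged access strategy can constrain enterprise AI is to put every agent behind an explicit allow-list of tools, each scoped to what that agent actually needs. The sketch below is hypothetical and minimal, assuming a simple grant table rather than any particular IAM product; all agent and tool names are invented. A poisoned or misconfigured agent cannot call anything outside its grants.

```python
# Illustrative grant table: which tools each AI agent may invoke.
AGENT_GRANTS = {
    "support_bot": {"search_kb", "create_ticket"},
    "finance_bot": {"search_kb", "read_invoice"},
}

def invoke_tool(agent: str, tool: str, payload: dict) -> dict:
    """Enforce least privilege: an agent may only use explicitly granted tools."""
    granted = AGENT_GRANTS.get(agent, set())   # unknown agents get nothing
    if tool not in granted:
        # Denied calls are surfaced for review rather than silently executed.
        return {"ok": False, "error": f"{agent} is not granted '{tool}'"}
    return {"ok": True, "tool": tool, "payload": payload}
```

The design choice is deny-by-default: privilege is granted per agent and per tool, so adding a new model or agent to the estate starts from zero access rather than inheriting broad rights.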

As AI continues to make waves in cybercrime, it's key for organisations to stay vigilant and protect themselves. AI-enabled crime is not going anywhere, and future-proofing cybersecurity strategies with AI front of mind is what will make the difference as businesses navigate proliferating cyber threats.
