Artificial intelligence (AI) will almost certainly increase the volume and heighten the impact of cyber attacks over the next two years, according to the UK's National Cyber Security Centre (NCSC).
Its report, titled ‘The near-term impact of AI on the cyber threat’, states that all types of cyber threat actor – state and non-state, skilled and less skilled – are already using AI. Cyber attacks against the UK will become ‘more impactful’, the report says, because threat actors will be able to analyse exfiltrated data faster and more effectively, and use it to train AI models. AI also lowers the barrier for novice cyber criminals, hackers-for-hire and hacktivists (hackers acting in the name of an activist cause) to carry out ‘access and information gathering operations’.
Looking towards 2025 and beyond, the report predicts ‘commoditisation of AI-enabled capability in criminal and commercial markets’, whereby anyone can pay for ‘as-a-service’, AI-enabled cyber tools. Also in that short to medium term, the NCSC suggests that generative AI (GenAI) and large language models (LLMs) will make it difficult for everyone, regardless of their level of cyber understanding, to assess whether an email or password reset request is genuine, or to identify phishing, spoofing or social engineering attempts. The window between the release of security updates that fix newly identified vulnerabilities and threat actors exploiting unpatched software is already shrinking.
All in all, this heightens ‘UK cyber resilience challenges in the near term for UK government and the private sector’, the report says.
NCSC CEO Lindy Cameron said: “We must ensure that we both harness AI technology for its vast potential and manage its risks – including its implications on the cyber threat. The emergent use of AI in cyber attacks is evolutionary not revolutionary, meaning that it enhances existing threats like ransomware but does not transform the risk landscape in the near term. As the NCSC does all it can to ensure AI systems are secure-by-design, we urge organisations and individuals to follow our ransomware and cyber security hygiene advice to strengthen their defences and boost their resilience to cyber attacks.”
Comments
Jake Moore, Global Cybersecurity Advisor at the cyber firm ESET, said: “AI has simply increased the power available to cybercriminals, enabling them to act quicker and at scale. Furthermore, as past and present phishing emails are fed into the algorithms and analysed by the technology, the outcomes naturally become better. The volume of such attacks will inevitably increase, but until we find a robust and secure solution to this evolving problem, we need to act now to teach people and businesses how to protect themselves with what is available.
“Social engineering has an impressive hold over people because of the human interaction involved, but now that AI can apply the same tactics from a technological perspective, it is becoming harder to mitigate. Trust is often difficult to gain, but clever social engineering techniques often pressure people into bypassing their default defences. Ultimately, we need to educate people about these new attacks and encourage them to think twice before transferring money or divulging personal information when requested.”
Mike Newman, CEO of My1Login, spoke of a cyber security nightmare from an enterprise perspective. “Phishing is the number one cybercrime tactic today, and it is the most common method criminals use to steal corporate passwords from employees. The threat provides big returns for criminals, but it isn’t always successful because phishing emails often contain spelling errors or strange imagery that raise red flags for recipients and make them think the emails could be fake.
“With AI, all these tell-tale signs are completely removed. Font, tone, imagery and branding are all perfect in emails generated via AI, which will make them much harder to detect as malicious.
“The only way to counter this threat is to remove valuable information, like passwords, from employee hands so they don’t have the ability to disclose them to phishing actors.”
And Mike Kiser, Director of Strategy and Standards at the cyber firm SailPoint, warned not only of phishing emails and the like, but also of the full-on generation of fake sites and ‘rapid mining of social networks to lend legitimacy to interactions that are intended to exploit their targets’. “Less likely, but still a potential threat, is the rapid dissemination of malware techniques (especially nascent ones). ChatGPT is a common base of knowledge, so any information it has can spread fairly rapidly.”