AI is changing cybersecurity faster than any technology before it. While it promises enormous potential from a cyber defence perspective, it has also become a weapon in the hands of cyber criminals, says Gerald Beuchelt, CISO, Acronis.
While plenty of malicious AI services have been shut down, cybercriminals are abusing legitimate ones or accessing and training in-house systems to launch attacks. For example, scammers are employing AI-generated deepfake technology to impersonate well-known public figures and trick users into fraudulent investment schemes. One example in 2025 saw the likeness of Financial Times journalist Martin Wolf used to promote fake investment opportunities, reaching nearly a million unsuspecting users across Europe.
AI-powered phishing threats also continue to be on the rise as they are often low-cost and high-reward. As generative AI tools make it easier to craft convincing phishing content and automate ransomware campaigns, organisations both in the UK and abroad now face a new class of threat that blends automation with psychological manipulation.
The Acronis Cyberthreats Report H1 2025 reveals that phishing now accounts for 25 per cent of all global attacks. For managed service providers, phishing attacks rise to over half (52 per cent), a 22 per cent increase on the same period last year, as they are high-value targets with privileged access to numerous environments.
And the UK continues to be a prime target. The combination of high digital connectivity, complex supply chains, and inconsistent adoption of security controls makes the country attractive to cybercriminals seeking scale and impact. The UK Government's Cyber Security Breaches Survey 2025 found that 43 per cent of businesses experienced a breach or attack in the past 12 months, and among those, 85 per cent reported phishing as the mode of attack.
The automation of cybercrime
AI has dramatically lowered the barrier to entry for cybercrime: attacks that once demanded technical expertise can now be executed by almost anyone, making it easier to generate fake websites, write credible phishing emails, and even create deepfake identities to bypass verification.
But deception is only the beginning. The same automation that fuels AI-driven phishing now powers ransomware campaigns at scale. The 2025 H1 report states that publicly known ransomware victims increased by nearly 70 per cent compared with both 2023 and 2024, with Cl0p, Akira and Qilin among the most active groups. These gangs are using AI-enabled tools to optimise reconnaissance and targeting, allowing them to run multiple operations simultaneously. It is a worrying shift toward faster, more coordinated attacks that push defenders to their limits.
Phishing, meanwhile, is evolving beyond the inbox to exploit the collaboration and messaging platforms that underpin modern work. As hybrid work has become the norm, attackers are shifting their focus from traditional email inboxes to tools such as Microsoft Teams, Slack and Zoom. These platforms are fertile ground for deception because they operate in real time and are built on trust.
According to our research, nearly a quarter of collaboration app attacks now involve AI-generated deepfakes or automated exploits. A fake Teams message from a trusted colleague, a voice note that sounds authentic, or an AI-written meeting invite can all be used to distribute malware or harvest credentials. These tactics exploit the informality of modern communication, where familiarity often replaces caution. For organisations relying heavily on these tools, this marks a fundamental shift in social engineering risk.
The risk to the UK economy
These same collaboration and messaging platforms increase exposure for critical sectors that rely on interconnected systems and real-time coordination, making them particularly vulnerable to AI-enhanced threats.
Manufacturing remains the most targeted industry globally, accounting for 15 per cent of all ransomware cases in Q1 2025, followed by retail at 12 per cent and telecommunications and media at 10 per cent, according to our H1 cyberthreats research. These sectors form the backbone of the UK's critical supply chains and infrastructure, meaning that disruption extends far beyond individual organisations and can rapidly cascade through the wider economy.
Despite the escalating threat, many UK businesses still lack the basic safeguards needed to mitigate AI-driven attacks. The same UK Government report referenced earlier found that only 40 per cent of organisations currently enforce two-factor authentication, leaving accounts and endpoints vulnerable to compromise. As AI tools make phishing more convincing and attacks more scalable, this lack of basic cyber hygiene exposes thousands of companies to unnecessary risk.
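Two-factor authentication of the kind the survey measures is cheap to deploy: most authenticator apps implement the TOTP standard (RFC 6238). As a simplified illustration only (not Acronis code; HMAC-SHA1 with default parameters, per the RFC's test vectors), the core of a TOTP check fits in a few lines of Python:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time() if at is None else at) // step
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", time = 59s
assert totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59) == "287082"
```

A server verifies by computing the same code from its copy of the shared secret and comparing it with the user's entry, typically accepting one time step of clock drift either side.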
Harnessing AI for defence
There is no one-size-fits-all solution to defending against AI-enhanced threats, but organisations can turn the same technology to their advantage: the tools enabling attackers can also strengthen defence. Advanced AI-powered security solutions can help determine where a threat originated, uncover the specific parameters used in transmissions, deliver sandbox and web protection that catches harmful hyperlinks before they reach recipients, and predict where threats are likely to emerge next.
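To make the hyperlink-filtering idea concrete, the sketch below quarantines suspicious URLs for deeper analysis. The domains and heuristics are hypothetical placeholders; real web protection relies on live threat-intelligence feeds, sandbox detonation and ML classifiers, not a static list like this:

```python
import re
from urllib.parse import urlparse

# Hypothetical blocklist for illustration -- production systems pull these
# from continuously updated threat-intelligence feeds.
BLOCKED_DOMAINS = {"evil-login.example", "free-prizes.example"}
SUSPICIOUS_PATTERNS = [
    re.compile(r"^\d{1,3}(\.\d{1,3}){3}$"),  # raw IP address instead of a hostname
    re.compile(r"xn--"),                      # punycode, often used for homoglyph domains
]

def flag_link(url):
    """Return True if a hyperlink should be quarantined for deeper analysis."""
    host = urlparse(url).hostname or ""
    if host in BLOCKED_DOMAINS:
        return True
    return any(p.search(host) for p in SUSPICIOUS_PATTERNS)

assert flag_link("https://evil-login.example/reset") is True
assert flag_link("http://192.168.0.10/payload") is True
assert flag_link("https://www.acronis.com") is False
```

Flagged links would then be detonated in a sandbox rather than delivered, which is the behaviour the paragraph above describes.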
AI-aware phishing training, meanwhile, helps employees recognise synthetic content that may look and sound authentic. Integrated endpoint protection systems that unify detection, response and recovery can provide full visibility and faster remediation. And adopting Zero Trust architectures (e.g. least-privilege access, stronger identity verification and network segmentation) reduces the risk of lateral movement within networks.
But key to bringing all this together is maintaining continuous backup and recovery, ensuring that even when ransomware succeeds, operations can be quickly restored without paying a ransom. Together, these measures form the foundation of holistic cyber protection, ensuring that prevention, detection and recovery work in concert.
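Recovery only works if backups are known to be intact before they are needed. A minimal sketch of that verification step, assuming a digest recorded at backup time (the function names are illustrative, not a product API):

```python
import hashlib
from pathlib import Path

def sha256_of(path, chunk=1 << 20):
    """Stream a file through SHA-256 so large backup archives fit in memory."""
    h = hashlib.sha256()
    with Path(path).open("rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def verify_backup(path, expected_digest):
    """Return True if the backup still matches the digest recorded when it was made."""
    return sha256_of(path) == expected_digest
```

Running a check like this on a schedule, against copies stored off the production network, is what turns "we have backups" into "we can restore without paying".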
Adapting faster than attackers
AI has permanently altered the dynamics of cybersecurity. It has made attacks more scalable, more personalised, and more unpredictable. But it has also given defenders the opportunity to automate, anticipate, and adapt at the same speed. The future of UK cybersecurity will depend on how quickly organisations can turn this technology to their advantage. By investing in AI-enabled defence systems, improving employee awareness, and enforcing basic security standards, businesses can stay ahead of the curve. AI is neither a hero nor a villain. In the wrong hands, it enables deception and extortion. In the right hands, it builds resilience, foresight and control.