TESTIMONIALS

“Received the latest edition of Professional Security Magazine, once again a very enjoyable magazine to read, interesting content keeps me reading from front to back. Keep up the good work on such an informative magazine.”

Graham Penn
ALL TESTIMONIALS
FIND A BUSINESS

Would you like your business to be added to this list?

ADD LISTING
FEATURED COMPANY
Cyber

Resilience and AI risk

by Mark Rowe

In October, the UK put a spotlight on cyber resilience with the release of the NCSC’s 2025 Annual Review. CEO Richard Horne warned that failing to prepare for cyberattacks risks a company’s future. The urgency behind this statement is backed by data: the NCSC handled 204 major cyber incidents between September 2024 and 2025, and 43pc of UK businesses reported a breach in the past year. The review was swiftly followed by an open letter from government ministers urging CEOs to “make cyber a board-level priority.” The message is clear, cybersecurity is no longer optional or reserved for large enterprises; it’s a strategic imperative for every organisation, writes David Morimanno, Field CTO NA, Xalient.

 

Cyber threats are clearly escalating and recent attacks on household names like Marks & Spencer, Co-Op, Jaguar Land Rover (JLR), and Harrods have exposed vulnerabilities across sectors. JLR is estimated to have cost the UK economy more than £2 billion, particularly when you factor in supply chain disruption. But the impact goes far beyond the balance sheet, thousands of livelihoods have been affected. For Critical National Infrastructure (CNI), which underpins public safety and economic stability, the consequences of a successful cyberattack could be catastrophic and unfortunately, based on track record, it’s not a question of if, but when.

A Double-Edged Sword?

The NCSC Annual Review also highlights the growing role of artificial intelligence (AI) in cybersecurity. It reaffirms guidance such as the AI Security Code of Practice, which focuses on securing AI model development and deployment. However, AI is not just a defensive tool, it’s transforming both sides of the cyber battlefield. Its ability to automate, scale, and adapt introduces new tactics and challenges, making it a powerful force multiplier for attackers and defenders alike.

On the defensive front, AI is reshaping cybersecurity through advanced threat detection and automated response. Its use in vulnerability scanning and anomaly detection is expanding rapidly, with machine learning helping identify threats that traditional systems often miss. Microsoft’s Copilot and Purview are great examples of this shift: Copilot integrates with security platforms to streamline threat analysis and automate incident response, while Purview enhances data governance through AI-driven classification and monitoring. These tools offer real-time insights and faster triage, which are critical for CNI operators who must maintain uptime and safety. However, a key challenge remains. Cost. As AI tools scale their hunting and correlation capabilities, operational expenses rise.

 

Next wave of attacks

Meanwhile, attackers are increasingly using AI to launch sophisticated, evasive campaigns. Deepfake voice and video fraud targeting executives has already occurred, and tools like Promptlock demonstrate how AI-generated prompts can automate lateral movement and privilege escalation. AI’s speed and adaptability could soon enable polymorphic malware that rewrites itself to evade detection. Hackers are now using smarter techniques to make their attacks more effective. One method, reinforcement learning, helps them adjust and deliver harmful software in real time. At the same time, advanced malware like Emotet can use AI to study a computer’s security and choose the best way to sneak past it, making it much harder for defenders to keep systems safe.

Companies like Anthropic are actively researching ways to make AI systems more resistant to adversarial manipulation. Their work on constitutional AI and red-teaming large language models (LLMs) shows how attackers might exploit prompt injection or model behaviour to generate harmful outputs or bypass safeguards. This underscores the dual aspect of AI, where the same tools that enhance productivity can also be weaponised.

One of Anthropic’s most cited examples is the “Claude Plays Pokémon” experiment. Researchers embedded hidden instructions within a seemingly harmless task, causing Claude to behave in unintended ways. The goal was to test how easily an LLM agent could be hijacked or redirected without explicit malicious input. This kind of manipulation could be devastating if applied to AI systems embedded in CNI environments, where even minor deviations could trigger cascading failures.

Can AI plan ahead?

As AI-powered malware becomes more independent, experts are asking whether AI can plan ahead or change its goals. In March 2025, researchers at Anthropic found that their AI model, Claude, seemed to organise its thoughts before writing poetry, hinting that it might have some idea of what it wanted to produce. They ran tests to show that parts of the model held these early ideas. But not everyone agrees. A team from Oxford Martin argued in July 2025 that just because AI explains its steps doesn’t mean it’s truly thinking. They believe these explanations can be misleading and suggest using deeper testing methods to better understand how AI really works.

Together, these studies highlight the tension between AI’s apparent planning and its underlying mechanics. This raises questions about control, transparency, and trust as AI systems become more embedded in critical infrastructure and potentially used by adversaries.

Understanding AI, defending against it

Knowing what’s happening inside the AI black box is vital, as it allows us to spot behaviour and any lurking risks before they surprise us. However, the bottom line is, if we do not truly understand what AI models are doing, we cannot anticipate what AI malware will do, especially when it has access to local resources.

This uncertainty means we need more than just technical solutions. For CNI, the stakes couldn’t be higher. We need trusted partners, deeper operational capability, dynamic security controls, and relentless vigilance. As AI becomes embedded in the systems that power our economy and safety, from energy grids to transport networks, resilience must be built into every layer. That includes not just code and hardware, but governance, training, and leadership. Cybersecurity must be treated as a living discipline, evolving alongside the threats it seeks to contain.

Ultimately, the fusion of AI and cybersecurity presents both a challenge and an opportunity. For CNI operators, the path forward requires embracing innovation while maintaining rigorous oversight. The goal is not just to defend against today’s threats, but to anticipate the threats of tomorrow. And that starts with understanding the tools we’re using and the ones being used against us.

Related News