Can AI really manage cybersecurity for critical infrastructure? asks Kevin Curran, IEEE senior member and professor of cybersecurity at Ulster University.
With new cyber threats constantly emerging, artificial intelligence (AI) can play a pivotal role in safeguarding critical national infrastructure (CNI) and other vital systems. For security teams and chief information security officers (CISOs), it offers significant advantages, including improved detection and response capabilities, scalability, cost efficiency and predictive analytics.
However, it also raises several challenges and concerns. As AI is integrated ever more deeply into the UK’s critical infrastructure and security systems, one particularly important question emerges – can AI alone be trusted to manage the cybersecurity of vital assets?
While AI can detect and respond to threats faster than humans, security teams cannot ignore concerns over its reliability when operating autonomously. Human oversight is essential, even with the most advanced AI security tools, to ensure they function effectively. If AI experiences a failure while guarding critical services, the consequences can be severe.
Weighing up the risks: to AI or not to AI, that is the question
Given how stretched security teams are for time and resources, AI can provide advanced threat detection and response mechanisms, from scanning network activity and identifying patterns to processing vast amounts of data in real time. In doing so, teams can detect anomalies and respond far more proactively, stopping attackers in their tracks and preventing significant damage.
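To make this concrete, here is a minimal sketch of one common anomaly detection approach: flagging network measurements that drift far outside a rolling statistical baseline. The window size, threshold and traffic figures are illustrative assumptions rather than tuned values, and production systems typically use far richer models.

```python
# Sketch: flagging anomalous network activity with a rolling z-score.
# The window size and threshold are illustrative assumptions; a real
# deployment would tune them against the organisation's own traffic.

from collections import deque
from statistics import mean, stdev

def detect_anomalies(samples, window=30, threshold=3.0):
    """Yield (index, value) for samples far outside the recent baseline."""
    history = deque(maxlen=window)
    for i, value in enumerate(samples):
        if len(history) >= 2:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                yield i, value
        history.append(value)

if __name__ == "__main__":
    # Requests per second: steady baseline, then a sudden burst.
    traffic = [100, 104, 98, 101, 99, 102, 97, 103, 100, 950]
    for idx, val in detect_anomalies(traffic, window=8):
        print(f"sample {idx}: {val} req/s looks anomalous")
```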
It is also particularly helpful when it comes to automating routine tasks, such as helping analysts to filter incident alerts. By employing AI defence mechanisms, security teams can improve overall threat detection accuracy, significantly reducing the number of false positives, saving valuable time and enabling them to focus on other important tasks. AI systems are also far more adaptable and can easily be scaled with network demands to ensure continuous protection and increased resilience.
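As a hedged illustration of AI-assisted alert triage, the sketch below trains a small classifier to score alerts and surfaces only the likeliest true positives for immediate attention. It assumes scikit-learn is available; the features, labels and threshold are toy values invented for illustration.

```python
# Sketch: triaging incident alerts with a simple classifier so analysts
# see the likeliest true positives first. Assumes scikit-learn is
# installed; the features and labels here are toy data for illustration.

from sklearn.linear_model import LogisticRegression

# Toy training data: [severity 0-10, events per minute, off-hours flag]
X_train = [[9, 40, 1], [8, 35, 1], [2, 3, 0], [1, 5, 0], [7, 25, 1], [3, 4, 0]]
y_train = [1, 1, 0, 0, 1, 0]  # 1 = confirmed incident, 0 = false positive

model = LogisticRegression().fit(X_train, y_train)

def triage(alerts, escalate_above=0.7):
    """Split alerts into those needing an analyst now vs. batch review."""
    scores = model.predict_proba([a["features"] for a in alerts])[:, 1]
    urgent = [a for a, s in zip(alerts, scores) if s >= escalate_above]
    later = [a for a, s in zip(alerts, scores) if s < escalate_above]
    return urgent, later

alerts = [
    {"id": "A-101", "features": [8, 30, 1]},
    {"id": "A-102", "features": [2, 2, 0]},
]
urgent, later = triage(alerts)
print("escalate now:", [a["id"] for a in urgent])
print("batch review:", [a["id"] for a in later])
```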
However, AI also comes with significant risks. The added complexity, initial setup costs and regular maintenance, particularly as AI systems require access to large amounts of data, can be prohibitive for public sector or critical infrastructure organisations with limited budgets. Teams can also become over-reliant on AI, potentially sidelining crucial human judgment and leaving systems susceptible to sophisticated cyberattacks.
Other concerns: practical and ethical issues
Integrating or deploying AI within critical infrastructure and other vital assets also poses a number of practical and ethical challenges. Practically, AI must deliver high reliability and safety, integrate smoothly with existing legacy systems, scale effectively with growing demands, and guard against sophisticated cybersecurity threats. Ethically, the use of AI raises issues of transparency and accountability, as its decision-making processes can often be opaque. Concerns about bias and fairness also arise, because AI may perpetuate existing disparities if trained on skewed data.
Privacy is another critical issue, as AI involves extensive data collection that must be managed securely. The shift towards AI may also affect human control and employment, raising questions about job displacement and the ethical implications of automated decision-making.
There is also the potential for AI to be exploited by cyber attackers, who can use it to create malware or automate attacks, necessitating the continual evolution of defence systems to keep pace with AI-enhanced threats. When implementing AI, all of these risks must be addressed, with contingency plans in place; moreover, long-term dependence on AI could create vulnerabilities and inhibit innovation in traditional processes. Addressing these issues requires careful consideration of both technical performance and broader societal impacts, ensuring AI’s benefits are maximised while potential harm is minimised.
A balancing act – automation versus human supervision
Moving forwards, the only way AI can be integrated successfully within the UK’s critical infrastructure is with an approach where the AI and human roles are clearly defined. Ideally, AI should handle data-intensive tasks, with humans managing decisions involving critical judgment and thinking.
Integrating hybrid decision-making models can help, with AI providing data analysis and recommendations while humans make the final decisions. A key option is therefore to keep a “human in the loop” to add security and provide a sanity check on responses. Reinforcement learning from human feedback (RLHF) tunes a model based on human rankings of responses generated from the same prompt. Building on RLHF, Constitutional AI uses a separate AI model to monitor and score the responses the main enterprise model outputs; these results can then be used to fine-tune the model and guard it against harm.
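A minimal sketch of the “human in the loop” pattern, assuming a policy where only high-confidence recommendations on non-critical assets are automated and everything else is queued for an analyst. The threshold, action names and the notion of a critical asset are assumptions chosen for illustration.

```python
# Sketch: a human-in-the-loop gate for AI-recommended security actions.
# The confidence threshold and the definition of "critical" assets are
# illustrative assumptions; a real system would set these in policy.

from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str        # e.g. "block_ip", "isolate_host" (hypothetical names)
    target: str
    confidence: float  # model's confidence in [0, 1]
    critical_asset: bool

def dispatch(rec: Recommendation, auto_threshold: float = 0.95) -> str:
    """Automate only high-confidence actions on non-critical assets."""
    if rec.critical_asset or rec.confidence < auto_threshold:
        return f"QUEUED for human approval: {rec.action} on {rec.target}"
    return f"AUTO-EXECUTED: {rec.action} on {rec.target}"

print(dispatch(Recommendation("block_ip", "203.0.113.7", 0.98, False)))
print(dispatch(Recommendation("isolate_host", "scada-plc-02", 0.99, True)))
```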
Transparency is also crucial, with AI systems designed to be understandable to their users. Continuous training ensures the workforce can operate alongside AI effectively, while robust oversight mechanisms such as audits and compliance checks keep AI within ethical and performance standards. Finally, fostering a collaborative culture helps teams to view AI as a tool that enhances human capabilities rather than replaces them. By managing these aspects, organisations can achieve a harmonious balance where AI complements human strengths and addresses potential weaknesses.
A brighter future
AI solutions will expand beyond traditional IT security to encompass physical security, the internet of things (IoT) and other critical domains, pointing to a future where AI not only strengthens defences but also reshapes the entire cybersecurity landscape. As this occurs, organisations will increasingly look to incorporate generative AI into their platforms for tasks involving code or text, for example within security information and event management (SIEM) systems.
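As a hedged sketch of what such SIEM integration might look like, the snippet below flattens a handful of SIEM events into a single prompt for a generative model. The llm_complete callable is a hypothetical placeholder for whichever model API an organisation adopts, not a real library call; the stub in the example keeps the sketch runnable offline.

```python
# Sketch: drafting an analyst-facing summary of SIEM events with a
# generative model. `llm_complete` is a hypothetical placeholder for
# whatever completion API is in use; it is NOT a real library call.

from typing import Callable

def build_siem_prompt(events: list[dict]) -> str:
    """Flatten structured SIEM events into a plain-text prompt."""
    lines = [
        f"- {e['time']} {e['source']} {e['rule']}: {e['detail']}"
        for e in events
    ]
    return (
        "Summarise the following SIEM events for a SOC analyst. "
        "Highlight likely related events and suggest next steps.\n"
        + "\n".join(lines)
    )

def summarise_events(events: list[dict], llm_complete: Callable[[str], str]) -> str:
    return llm_complete(build_siem_prompt(events))

if __name__ == "__main__":
    sample = [
        {"time": "09:02", "source": "fw-01", "rule": "port-scan",
         "detail": "203.0.113.7 probed 120 ports"},
        {"time": "09:05", "source": "vpn-gw", "rule": "failed-login",
         "detail": "8 failures for user jsmith"},
    ]
    # Stub model so the sketch runs without any external service.
    print(summarise_events(sample, lambda prompt: "[model summary would appear here]"))
```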
Automation is projected to become more prevalent in security operations, enhancing efficiency and allowing human teams to focus on strategic issues. AI will also improve in threat detection and response, becoming capable of identifying subtle anomalies and reacting quickly to breaches.
However, with the rise of AI, adversarial techniques will also evolve, prompting the need for sophisticated AI-driven countermeasures. Regulatory and ethical considerations will become increasingly important, with more stringent guidelines expected to govern AI’s use in cybersecurity, especially when applied within vital services or broader infrastructure.
All in all, AI in cybersecurity is set to bring significant advancements and challenges along the way. AI’s predictive capabilities will continue to grow, allowing for early detection of potential cyber threats, and reconfiguring the cybersecurity space. It is up to CNI security leaders to use their judgement and decide how and when to best apply AI.