Cyber

Principles for AI use in OT

by Mark Rowe

The United States federal Cybersecurity and Infrastructure Security Agency (CISA) and its Australian counterpart, the Australian Signals Directorate’s Australian Cyber Security Centre (ASD’s ACSC), have published Principles for the Secure Integration of Artificial Intelligence (AI) in Operational Technology (OT).

The guide offers principles to help critical infrastructure owners and operators integrate AI into OT systems safely and effectively. The four key steps are:

  1. Understand AI: Educate personnel on AI risks, impacts, and secure development lifecycles.
  2. Assess AI Use in OT: Evaluate business cases, manage OT data security risks, and address immediate and long-term integration challenges.
  3. Establish AI Governance: Implement governance frameworks, test AI models continuously (see the sketch after this list), and ensure regulatory compliance.
  4. Embed Safety and Security: Maintain oversight, ensure transparency, and integrate AI into incident response plans.
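
On the continuous-testing point in step 3, a minimal Python sketch, in which all names, limits and scenario data are invented for illustration rather than drawn from the guidance, might replay a fixed set of golden scenarios against a model on a schedule and flag any recommendation that leaves its engineered safe range:

    def predict_setpoint(inputs):
        # Stand-in for the real model: returns a pump-speed recommendation.
        return 0.8 * inputs["demand"] + 0.2 * inputs["pressure"]

    # Golden scenarios, each with the safe range the output must stay inside.
    GOLDEN_CASES = [
        {"inputs": {"demand": 50.0, "pressure": 30.0}, "safe_range": (20.0, 60.0)},
        {"inputs": {"demand": 90.0, "pressure": 10.0}, "safe_range": (20.0, 80.0)},
    ]

    def validate_model():
        failures = []
        for case in GOLDEN_CASES:
            out = predict_setpoint(case["inputs"])
            low, high = case["safe_range"]
            if not low <= out <= high:
                failures.append((case["inputs"], out))
        return failures

    if __name__ == "__main__":
        bad = validate_model()
        print("PASS" if not bad else f"FAIL: {bad}")

Run on a timer and on every model update, a non-empty failure list would block deployment.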

CISA Acting Director Madhu Gottumukkala said that ‘AI holds tremendous promise for enhancing the performance and resilience of operational technology environments – but that promise must be matched with vigilance’. “OT systems are the backbone of our nation’s critical infrastructure, and integrating AI into these environments demands a thoughtful, risk-informed approach. This guidance equips organizations with actionable principles to ensure that AI adoption strengthens, not compromises, the safety, security, and reliability of essential services.”

Comment

Rob Demain, CEO of e2e-assure, commented that AI is being adopted widely across IT for efficiency and automation, but that we are far from being able to secure it in IT, let alone OT. He said: “AI will introduce new systemic risks in OT environments, including model drift and mis-generalisation that can lead to unsafe control actions; safety-process bypasses when AI recommendations override manual checks; and an expanded attack surface, since AI connectivity (APIs, cloud services) creates new ingress points into OT networks. The impact on OT is high because these environments control physical processes, and any compromise can result in loss of life, environmental damage, and regulatory exposure.
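
Demain’s safety-process-bypass point lends itself to a concrete illustration. In this hypothetical Python sketch, with limits and function names invented for the example, an AI recommendation is never written to a controller directly: it must pass an engineered envelope check, and any large move still requires operator sign-off:

    SAFE_MIN, SAFE_MAX = 10.0, 95.0   # engineered limits from process safety
    MAX_STEP = 5.0                    # largest change allowed without sign-off

    def guard_recommendation(current, recommended, operator_approved=False):
        """Return the setpoint that may actually be applied, or None to reject."""
        if not SAFE_MIN <= recommended <= SAFE_MAX:
            return None    # outside the safety envelope
        if abs(recommended - current) > MAX_STEP and not operator_approved:
            return None    # big move: the manual check must not be skipped
        return recommended

    print(guard_recommendation(50.0, 70.0))        # None: step too large
    print(guard_recommendation(50.0, 70.0, True))  # 70.0: operator signed off
    print(guard_recommendation(50.0, 54.0))        # 54.0: small and in-envelope
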
“Right now, adoption of AI across industrial and critical infrastructure sectors is limited; however, predictive maintenance, anomaly detection, and optimisation tools are being integrated into OT workflows. We see some organisations piloting LLM-based assistants for engineering and operational support. While adoption will certainly accelerate, security controls are not keeping pace.
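
As a rough sketch of the anomaly-detection tooling he mentions, with the window size, threshold and data invented for the example, a rolling z-score over a sensor reading flags values that sit far outside the recent baseline:

    import statistics
    from collections import deque

    WINDOW, THRESHOLD = 20, 3.0

    def detect(readings):
        window = deque(maxlen=WINDOW)
        alerts = []
        for i, value in enumerate(readings):
            if len(window) >= 5:    # wait for a short history first
                mean = statistics.fmean(window)
                spread = statistics.pstdev(window) or 1e-9
                if abs(value - mean) / spread > THRESHOLD:
                    alerts.append((i, value))
            window.append(value)
        return alerts

    # Steady flow around 100 with one spike the detector should catch.
    print(detect([100.1, 99.8, 100.3, 99.9, 100.0, 100.2, 135.0, 100.1]))
    # -> [(6, 135.0)]
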
“AI attacks, however, are not just a theoretical risk. AI is already being used by adversaries for everything from productivity enhancements to dynamic command generation with LLMs to evade detections. We’re seeing signs that AI is being used to develop polymorphic malware, and even that AI communication is being used as C2 channels (the communication link between a compromised device and the C2 server) that blend into legitimate traffic. OT defenders must therefore treat LLM API traffic and local model invocations with the same suspicion as unknown remote management tools, and expect ephemeral, AI-generated payloads and target-specific tool use that leave few lasting indicators and use the native tooling in the environment. Tactics like these would defeat traditional OT defences such as signature-based detection, static IOC matching and security zoning. Furthermore, local LLMs themselves could be a huge target for attackers, as they contain the type of data attackers are looking for and could help explain to attackers how best to attack the systems. Attackers already make extensive use of existing tooling, known as ‘living off the land’, and some researchers have begun to use the term ‘living off the LLM’ to reflect this.
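
Treating LLM API traffic from OT hosts as suspect can start as bluntly as the following Python sketch; the domain list, subnet prefix and log format are assumptions, and a real deployment would key off its own firewall or DNS logs:

    LLM_API_DOMAINS = {"api.openai.com", "api.anthropic.com"}
    OT_SUBNET_PREFIX = "10.20."    # hypothetical OT address range

    conn_log = [
        {"src": "10.20.4.17", "dst_host": "api.openai.com", "port": 443},
        {"src": "10.30.1.5", "dst_host": "update.vendor.example", "port": 443},
    ]

    for c in conn_log:
        if c["src"].startswith(OT_SUBNET_PREFIX) and c["dst_host"] in LLM_API_DOMAINS:
            print(f"ALERT: OT host {c['src']} contacted LLM API {c['dst_host']}")
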
“The latest advice from CISA is good in terms of keeping AI away from OT (i.e. providing a read-only data feed to it) and sending data safely from OT to IT, without letting AI see or control OT systems. I do think they could go harder and discourage AI use on anything connected to OT. A safety-first approach should mandate that these systems be treated as a safety risk to operations at this stage.”
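
The read-only, one-way feed that this advice points to can be sketched in Python as a publisher that only ever sends telemetry, with no receive path for commands. The host, port and tag names are illustrative, and in practice the one-way property is enforced with a hardware data diode rather than fire-and-forget UDP:

    import json
    import socket
    import time

    IT_COLLECTOR = ("192.0.2.10", 5140)   # IT-side listener (documentation IP)

    def publish_snapshot(sock, tags):
        payload = json.dumps({"ts": time.time(), "tags": tags}).encode()
        sock.sendto(payload, IT_COLLECTOR)  # send only; nothing is ever read back

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    publish_snapshot(sock, {"pump1_speed": 46.2, "line3_pressure": 30.1})
    sock.close()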
