Deploying AI systems – best practices

by Mark Rowe

In the United States, the federal National Security Agency (NSA) has released a Cybersecurity Information Sheet (CSI), ‘Deploying AI Systems Securely: Best Practices for Deploying Secure and Resilient AI Systems’. It’s a joint release with equivalent bodies in other countries, including the UK’s official National Cyber Security Centre (NCSC).

The NSA says that while the guidance is intended for national security purposes, it applies to anyone bringing AI into a managed environment, especially those in ‘high-threat, high-value’ places. The 11-page document is available to read online.

It points out that models are software, ‘and, like all other software, may have vulnerabilities, other weaknesses, or malicious code or properties’. Hence the document covers access controls, audits and penetration testing, logging and monitoring, and patching. The authors conclude that systems should be ‘secure by design, where the designer and developer of the AI system takes an active interest in the positive security outcomes for the system’, and that any AI system should be validated before deployment. “In the end, securing an AI system involves an ongoing process of identifying risks, implementing appropriate mitigations, and monitoring for issues.”
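As a minimal, illustrative sketch (not taken from the NSA document), validating an AI model before deployment can start with something as simple as checking the downloaded model artefact against a known-good cryptographic digest before it is loaded. The file name and expected digest below are hypothetical placeholders.

# Sketch: verify a model artefact's SHA-256 digest before deployment.
# The artefact name and expected digest are hypothetical examples.
import hashlib
from pathlib import Path

EXPECTED_SHA256 = "d2a84f4b8b650937ec8f73cd8be2c74add5a911ba64df27458ed8229da804a26"  # placeholder value

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    # Stream the file through SHA-256 so large model files are not read into memory at once.
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

artefact = Path("model.onnx")  # hypothetical artefact name
if sha256_of(artefact) != EXPECTED_SHA256:
    raise RuntimeError(f"Integrity check failed for {artefact}; refusing to deploy.")
print(f"{artefact} matches the expected digest; continuing with deployment checks.")

In practice this sits alongside the other controls the document lists, such as access controls, logging and monitoring, and patching, rather than replacing them.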

Comment

Dr Andrew Bolster, senior research and development manager for data science at the Synopsys Software Integrity Group, said: “This announcement from the NSA’s Artificial Intelligence Security Center continues a regulatory trend that we’re seeing across the world of doubling down on existing secure software development processes as the primary mode of AI system security, with an emphasis on cryptographic artefact tracking and SBOM maintenance.

“In addition, the information sheet includes a raft of recommendations that reflect the cross-functional responsibility of deploying and maintaining AI-derived systems, encouraging Data Science and operational Cybersecurity teams to take on a collaborative role in both the development and the constraint of AI systems.

“This shared responsibility model encompasses not only the resultant models and the data that are used to train and monitor them, but also the data catalogues and data lineage records that support the safe and responsible use of AI systems.

“This trend aligns with other threads we see across the application security landscape, where security is less about playing vulnerability whack-a-mole, and more about establishing a secure, proactive, and resilient posture for managing application security.”
