
Guide to AI development

by Mark Rowe

The UK’s National Cyber Security Centre (NCSC), alongside equivalent authorities in other countries, notably the United States’ federal Cybersecurity and Infrastructure Security Agency (CISA), has released guidance for the development of AI (artificial intelligence) systems. The guidance covers design, development, deployment, and operation and maintenance, following a ‘secure by default’ approach.

For more, visit the NCSC blog.

NCSC CEO Lindy Cameron said: “We know that AI is developing at a phenomenal pace and there is a need for concerted international action, across governments and industry, to keep up. These guidelines mark a significant step in shaping a truly global, common understanding of the cyber risks and mitigation strategies around AI to ensure that security is not a postscript to development but a core requirement throughout. I’m proud that the NCSC is leading crucial efforts to raise the AI cyber security bar: a more secure global cyber space will help us all to safely and confidently realise this technology’s wonderful opportunities.”

Comments

Paul Brucciani, cybersecurity advisor at the cyber platform WithSecure, said: “These early days of AI can be likened to blowing glass: while the glass is fluid it can be made into any shape, but once it has cooled, its shape is fixed. Regulators are scrambling to influence AI regulation as it takes shape.

“Guidelines are quick to produce since they do not require legislation; nonetheless, NCSC and CISA have worked with impressive speed to corral this list of signatories. Amazon, Google, Microsoft and OpenAI, the world-leading AI developers, are signatories. A notable absentee from the list is the EU.

“It is interesting to note that responsibility for developing secure AI lies with the ‘provider’, which is responsible not only for data curation, algorithmic development, design, deployment and maintenance, but also for the security outcomes of users further down the supply chain: ‘Providers should implement security controls and mitigations where possible within their models, pipelines and/or systems, and where settings are used, implement the most secure option as default.’

“AI-related security considerations for AI providers include:

  • Data security and privacy.
  • Model security, explainability, and susceptibility to bias and discrimination.
  • Infrastructure security and resilience.
  • Supply chain security.
  • Governance and regulatory compliance (eg EU AI Act).

“The guidelines require providers to:

  • Model threats to their system.
  • Design security into their system rather than retro-fit it.
  • Assess and monitor the security of their AI supply chains.
  • Identify, track and protect their assets (eg models, data, prompts, software, documentation, logs).
  • Secure and continuously monitor the infrastructure on which the AI system is deployed.
  • Build AI scenarios into their security incident management procedures.

“The strict rules of the EU’s AI Act will have a big global impact, especially considering that the AI Liability Directive (distinct from the AI Act) will create a ‘presumption of causality’ against AI system developers and users, which would significantly lower the evidentiary hurdles for victims injured by AI-related products or services to bring civil liability claims. China has similar initiatives relating to AI governance, though the rules issued apply only to industry, not to government entities.”
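
A short illustration of what Brucciani’s ‘most secure option as default’ can look like in practice may help. The sketch below is illustrative only and is not drawn from the guidelines; it is a hypothetical Python configuration for an AI service in which every setting starts in its safest state, so any weakening must be written out explicitly and is therefore visible in code review.

    # Hypothetical secure-by-default configuration for an AI service.
    # All names and settings here are illustrative, not a real API.
    from dataclasses import dataclass

    @dataclass(frozen=True)  # frozen: settings cannot be mutated after load
    class AIServiceConfig:
        require_auth: bool = True          # authentication on by default
        verify_tls: bool = True            # certificate checks never skipped silently
        log_prompts: bool = True           # audit trail of model inputs by default
        allow_model_upload: bool = False   # untrusted model weights rejected by default
        max_requests_per_minute: int = 60  # conservative rate limit
        allowed_origins: tuple = ()        # deny-all cross-origin policy

    def load_config(overrides: dict | None = None) -> AIServiceConfig:
        # Any relaxation of a default must be passed in explicitly,
        # so it shows up in diffs and can be challenged in review.
        return AIServiceConfig(**(overrides or {}))

    if __name__ == "__main__":
        cfg = load_config()
        assert cfg.require_auth and cfg.verify_tls  # secure unless deliberately relaxed
        dev_cfg = load_config({"max_requests_per_minute": 600})  # explicit, reviewable override
        print(cfg)
        print(dev_cfg)

The point of the pattern is simply that insecure states are opt-in rather than opt-out, which is the behaviour the guidelines ask providers to ship.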

And Joseph Carson from cyber firm Delinea welcomed the announcement. He said: “I have been advocating for a ‘Secure by Default’ approach for several years, which is all about making security usable and enabled by default. This new guideline follows those exact principles for AI technologies.

“It is also very timely to see the scope of what is considered AI, which expands into several definitions of machine learning. This is important in the guideline, as the scope of AI is very broad, and the guideline’s scope definition is clear and transparent. I hope to see more governments around the world join in with endorsing and applying these guidelines, which might eventually lead to some form of regulation to ensure that accountability will be enforced as well. The four key areas show that not only is secure by design important, but also secure development, secure deployment and secure operations and maintenance are all critical factors when it comes to AI systems.”

