Interviews

AI Safety Summit preview

by Mark Rowe

This week’s AI Safety Summit at Bletchley Park, near Milton Keynes in Buckinghamshire, has been hailed as the first-ever global summit to discuss AI safety, writes Mark Rowe.

A paper on the capabilities and risks of ‘frontier AI’ was published last week to serve as a discussion paper at the event; it too was billed as a first. Technology Secretary Michelle Donelan called its publication ‘a watershed moment’, as the UK becomes the first country to ‘formally summarise the risks presented by this powerful technology’. The UK Government has, understandably, done all it can to talk up the summit, and the country’s AI credentials, whether in developing AI or in somehow regulating and legislating for it.

In a speech to the Royal Society, PM Rishi Sunak spoke of AI as being as transformative as the industrial revolution, or the internet, and argued for making the UK ‘a global leader in safe AI’. He also raised ‘dangers and fears’: “Get this wrong, and AI could make it easier to build chemical or biological weapons. Terrorist groups could use AI to spread fear and destruction on an even greater scale. Criminals could exploit AI for cyber-attacks, disinformation, fraud, or even child sexual abuse.” In extreme cases, ‘there is even the risk that humanity could lose control of AI completely’.

He positioned the UK as not in a ‘rush to regulate’, but rather ‘building world-leading capability to understand and evaluate the safety of AI models within government’. He announced the setting up of an AI Safety Institute.

As for what Mr Sunak hopes the summit will achieve, he spoke of a ‘shared understanding’ of the risks, and hoped for an ‘international statement about the nature of these risks’. In that vein, BCS, The Chartered Institute for IT, called for international ethical standards for those developing and managing AI.

Commentators have queried, first, whether the UK, and policy-makers generally, are moving as fast as AI is developing. Nadir Izrael, co-founder and CTO at Armis, said the Government is taking a wait-and-see approach, until it understands AI well enough to create appropriate legislation. “But that doesn’t mean everyone can relax. While legislators get to grips with the threats of AI, organisations everywhere need to move fast. Cybercriminals are increasingly using AI in their attacks, so organisations must fight back with AI of their own. This means incorporating AI technologies such as machine learning algorithms and natural language processing into their cybersecurity strategies, alongside traditional tools.

“Organisations have an advantage in the AI arms race because they have access to more data than attackers, such as data about their computing environments, security capabilities, and known vulnerabilities. This data can be used to train AI models to identify potential threats faster and more accurately. There is no time to lose. It’s essential organisations invest in the right systems that are able to detect and prevent the malicious use of generative AI and analyse large amounts of data to identify anomalies. In short, regardless of the government’s approach, security cannot wait.”
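By way of illustration, here is a minimal sketch of the kind of anomaly detection Izrael describes, training a model on an organisation’s own telemetry. The feature set, figures and library choice (scikit-learn’s IsolationForest) are assumptions for the example, not drawn from the article:

```python
# Illustrative sketch: train an anomaly detector on an organisation's own
# baseline telemetry -- the data "advantage" Izrael describes. Features and
# numbers are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Hypothetical telemetry per event: bytes transferred, connections, failed logins.
baseline = rng.normal(loc=[500.0, 20.0, 1.0], scale=[100.0, 5.0, 1.0], size=(1000, 3))

# Fit on known-normal activity; roughly 1% of events assumed anomalous.
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(baseline)

# Score new events: -1 flags an anomaly worth investigating, 1 looks routine.
new_events = np.array([
    [520.0, 22.0, 0.0],     # everyday traffic
    [9000.0, 300.0, 40.0],  # exfiltration-like spike
])
print(model.predict(new_events))  # expected: [ 1 -1]
```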

Another query is whether the UK has the geopolitical power to make a difference. Matthias Holweg, Professor of Operations Management, Saïd Business School, University of Oxford, said: “What the UK may or may not decide is, quite frankly, irrelevant to most AI operators. AI regulation will be decided between the US lawmakers, the EU, and the big tech firms.”

Meanwhile, United States President Joe Biden today issued an executive order on the safe, secure and trustworthy use of AI.

It requires that developers of the most powerful AI systems share their safety test results and other critical information with the US government before tech companies make such AI systems public. Under the Defense Production Act, companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the US federal government when training the model, and must share the results of red-team safety tests. As for standards, the US federal National Institute of Standards and Technology will set standards for red-team testing before public release of AI. The federal Department of Homeland Security will apply those standards to critical infrastructure sectors and establish an AI Safety and Security Board. The Departments of Energy and Homeland Security will also address AI systems’ threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks. Biden describes these as the most significant actions ever taken by any government to advance AI safety.

As for AI-enabled fraud and deception, the order speaks of establishing standards and best practices for detecting AI-generated content and authenticating official content. The US Department of Commerce will develop guidance for content authentication and watermarking to label AI-generated content. As for cybersecurity, the order proposes the development of AI tools to find and fix vulnerabilities in critical software.
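To give a sense of what ‘authenticating official content’ can mean in practice, here is a minimal sketch in which a publisher attaches an HMAC tag that a key-holding recipient can verify. The key and message are placeholders, and real provenance schemes (such as C2PA) use public-key signatures and richer metadata:

```python
# Minimal content-authentication sketch: sign official content with an HMAC
# tag so a recipient holding the shared key can verify its origin. The key
# and message are hypothetical placeholders.
import hashlib
import hmac

SECRET_KEY = b"placeholder-signing-key"  # assumption: real keys stay secret

def sign(content: bytes) -> str:
    """Return a hex HMAC-SHA256 tag for the content."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify(content: bytes, tag: str) -> bool:
    """Constant-time check; False means tampered or unofficial content."""
    return hmac.compare_digest(sign(content), tag)

official = b"Advisory: issued by the agency press office."
tag = sign(official)
print(verify(official, tag))              # True: authentic
print(verify(b"Forged advisory.", tag))   # False: fails verification
```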

Comments

Among comments in the US on Biden’s order, Michael Leach, Compliance Manager at Forcepoint, described it as providing ‘some of the necessary first steps to begin the creation of a national legislative foundation and structure to better manage the responsible development and use of AI by both commercial and government entities, with the understanding that it is just the beginning’. He spoke in terms of data privacy: “Since the introduction of global privacy laws like the EU GDPR, we have seen numerous US state-level privacy laws come into effect across the nation to protect Americans’ privacy, and many of these existing laws have recently adopted additional requirements when using AI in relation to personal data.” Eduardo Azanza, CEO of Veridas, called it ‘a monumental moment for the safe, secure and ethical development and use of AI. With Europe currently working on the EU AI Act, the US is looking to join the developing global precedent being established, which will determine how countries and organizations should approach AI. We will surely begin to see a cascading trend of similar legal actions.’

Michael Covington, VP of Strategy at Jamf, described the order as one of the most comprehensive approaches to governing this new technology seen to date. “As much as we may want to encourage organic and unconstrained innovation, it is imperative that some guardrails be established to ensure developers are mindful of any downstream effects, and that regulators are in place to help monitor for potential damages so they can be addressed before spiralling out of control.”

Photo by Mark Rowe; Enigma machine at Bletchley Park museum (closed this week for the summit).
