
MPs on AI

by Mark Rowe

The new AI Safety Institute has been unable to access some developers’ models to perform the pre-deployment safety testing that was intended to be a focus of its work. In a report, the Science, Innovation and Technology Committee of MPs calls on the next Government to identify any developers that refused pre-deployment access to their models, in contravention of the agreement at last November’s summit at Bletchley Park, and to name them and report their justification for refusing.

As for whether the UK Government should bring forward AI-specific legislation, resolving this should be a priority for the next Government, of whatever political colour, after the July 4 general election, according to the committee. As the MPs’ report sets out, five high-level principles (safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress) underpin the Government’s approach and have begun to be translated into sector-specific action by regulators. The Government wants the various sectoral regulators to put that ‘principles-based approach’ into practice, rather than create an AI-specific regulator.

The report questions whether regulators have the capacity to do so; Ofcom, for example, already has ‘a broad new suite of powers conferred on it by the Online Safety Act 2023’. The AI Safety Institute was set up after PM Rishi Sunak’s AI Safety Summit at Bletchley Park in November 2023.

Chair of the committee Greg Clark said: “The overarching ‘black box’ challenge of some AI models means we will need to change the way we think about assessing the technology. Biases may not be detectable in the construction of models, so there will need to be a bigger emphasis on testing the outputs of models to see if they have unacceptable consequences.

“The Bletchley Park Summit resulted in an agreement that developers would submit new models to the AI Safety Institute. We are calling for the next government to publicly name any AI developers who do not submit their models for pre-deployment safety testing. It is right to work through existing regulators, but the next government should stand ready to legislate quickly if it turns out that any of the many regulators lack the statutory powers to be effective. We are worried that UK regulators are under-resourced compared to the finance that major developers can command.”

The report sets out a dozen ‘challenges’:

The Bias Challenge. ‘Developers and deployers of AI models and tools must not merely acknowledge the presence of inherent bias in datasets, they must take steps to mitigate its effects.’

The Privacy Challenge. ‘Privacy and data protection frameworks must account for the increasing capability and prevalence of AI models and tools, and ensure the right balance is struck.’

The Misrepresentation Challenge. ‘Those who use AI to misrepresent others, or allow such misrepresentation to take place unchallenged, must be held accountable.’

The Access to Data Challenge. ‘Access to data, and the responsible management of it, are prerequisites for a healthy, competitive and innovative AI industry and research ecosystem.’

The Access to Compute Challenge. ‘Democratising and widening access to compute is a prerequisite for a healthy, competitive and innovative AI industry and research ecosystem.’

The Black Box Challenge. ‘We should accept that the workings of some AI models are and will remain unexplainable and focus instead on interrogating and verifying their outputs.’

The Open-Source Challenge. ‘The question should not be ‘open’ or ‘closed’, but rather whether there is a sufficiently diverse and competitive market to support the growing demand for AI models and tools.’

The Intellectual Property and Copyright Challenge. ‘The Government should broker a fair, sustainable solution based around a licensing framework governing the use of copyrighted material to train AI models.’

The Liability Challenge. ‘Determining liability for AI-related harms is not just a matter for the courts—Government and regulators can play a role too.’

The Employment Challenge. ‘Education is the primary tool for policymakers to respond to the growing prevalence of AI, and to ensure workers can ask the right questions of the technology.’

The International Coordination Challenge. ‘A global governance regime for AI may not be realistic nor desirable, even if there are economic and security benefits to be won from international co-operation.’

The Existential Challenge. ‘Existential AI risk may not be an immediate concern but it should not be ignored, even if policy and regulatory activity should primarily focus on the here and now.’
