The future of general-purpose AI is ‘remarkably uncertain’. At worst, it could bring ‘severe public safety dangers within the next few years’ such as ‘general-purpose AI-enabled terrorism’, or humanity could lose control over AI. That’s according to what’s been hailed as the first international, independent AI Safety Report.
The report covers risks from malicious use by humans; from ‘malfunctions’, such as bias; and ‘systemic’ risks, such as those to privacy and copyright, to the labour market (AI doing away with jobs), and ‘single points of failure’, whereby a failure of AI might disrupt an entire system, such as critical infrastructure.
AI is already doing harm, the report acknowledges, such as violating privacy, and enabling scams, non-consensual intimate imagery (NCII) and child sexual abuse material (CSAM). As AI becomes more capable, potentially at an ever faster pace, ‘evidence of additional risks is gradually emerging’, according to the report, such as ‘AI-enabled hacking or biological attacks’, and even society losing control of AI. Even the practice of assessing and managing such risks is, in the words of the report, ‘nascent’, while advances in AI are unpredictable.
The report sums up that the future of general-purpose AI technology is uncertain, with a range of trajectories appearing possible even in the near future, ‘including both very positive and very negative outcomes’. The pace of advance in AI may even accelerate, the report notes.
Cyber
Briefly put, ‘AI can make it easier or faster for malicious actors of varying skill levels to conduct cyberattacks’, the report says, adding that ‘it remains unclear whether this will affect the balance between attackers and defenders’. In more detail, ‘AI tools reduce the human effort and knowledge needed to survey target systems and gain unauthorised access’. The authors see difficulties in assessing the attacker-defender balance, and in how governments will regulate research into ‘offensive AI’.
Code
Even if AI-driven detection catches vulnerabilities in new code before it reaches production, much ‘legacy code’ has not been scrutinised by advanced AI tools, leaving potential vulnerabilities undetected, according to the report.
Chemical and biological weapons
As for whether AI might give guidance for reproducing known biological and chemical weapons, or facilitate the design of novel toxic compounds towards producing such weapons, the authors say that ‘assessment of biological and chemical risk is difficult because much of the relevant research is classified’.
Among questions raised by the report: how to protect whistleblowers who raise concerns about AI risks; how and where to report incidents; and how to draw up a ‘risk register’ as a repository of AI risks. The report notes that ‘risk tolerance is often left up to AI developers and deployers to determine for themselves’.
What they say
In a February 14 speech at an annual national security conference in Munich, Peter Kyle, Secretary of State at the Department for Science, Innovation and Technology (DSIT), announced that the UK’s AI Safety Institute would be renamed the AI Security Institute. He said: “The work of the AI Security Institute won’t change, but this renewed focus will ensure our citizens – and those of our allies – are protected from those who would look to use AI against our institutions, democratic values, and way of life.
“The main job of any government is ensuring its citizens are safe and protected, and I’m confident the expertise our Institute will be able to bring to bear will ensure the UK is in a stronger position than ever to tackle the threat of those who would look to use this technology against us.”
In the speech, before the report’s publication, Mr Kyle argued that security and innovation go hand in hand. He said: “AI is a powerful tool and powerful tools can be misused. State-sponsored hackers are using AI to write malicious code and identify system vulnerabilities, increasing the sophistication and efficiency of their attacks.
“Criminals are using AI deepfakes to assist in fraud, breaching security by impersonating officials. Last year, attackers used live deepfake technology during a video call to mimic bank officials. They stole $25m.
“And now we are seeing instances of people using AI to assist them in planning violent and harmful acts. These aren’t distant possibilities. They are real, tangible harms, happening right now. The implications for our people could be pervasive and profound.”
Among those behind the report was Chris Johnson, Chief Scientific Adviser at DSIT.
Visit https://www.gov.uk/government/publications/international-ai-safety-report-2025.