Developers and users of AI content generation systems must develop and use such systems in accordance with applicable legal frameworks, including data protection and privacy law, according to a Joint Statement on AI-Generated Imagery issued by national data privacy regulators.
At the UK regulator, the ICO, William Malcolm, Executive Director for Regulatory Risk and Innovation, said: “People should be able to benefit from AI without fearing that their identity, dignity or safety is under threat. AI already plays a large role in all our lives, and everybody has a right to expect that AI systems handling their personal data will do so with respect. Responsible innovation means putting people first: anticipating the risks and building in meaningful safeguards to ensure autonomy, transparency, and control.
“Public trust is foundational to the successful adoption and use of AI. Joint regulatory initiatives like this show global commitment to high standards of data protection in AI systems and help provide regulatory certainty. We expect those developing and deploying AI to act responsibly. Where we find that obligations have not been met, we will take action to protect the public.”
The statement says that developers and users of AI content generation systems should put safeguards in place ‘to prevent the misuse of personal information and generation of non-consensual intimate imagery and other harmful materials’, particularly where children are depicted. Signatories include the UK and the Republic of Ireland; the United States is not among them.
Comments
Graeme Stewart, Head of Public Sector at Check Point Software, says international cooperation is welcome but insufficient without meaningful enforcement. He says: “International collaboration around AI governance and safety standards is crucial for protecting individuals from harmful content and reducing cyber risk. However, whilst signed statements of intent are a noble gesture, they will do little to reassure the thousands of victims of AI-related crimes who suffer every day in this increasingly dangerous digital world. Moving forward, there needs to be a dedicated global task force in place to protect the innocent and punish the criminals exploiting this technology for their own sinister ends.”
And Chris Linnell, Associate Director of Data Privacy at the cyber firm Bridewell, said: “People are right to be worried about AI misuse, but many do not realise they are agreeing to usage of their data by accepting terms and conditions when using publicly available AI products. There is a dangerous mismatch between public concern and legal reality. If people do not read the terms, they do not understand the risks or where responsibility truly lies.”
He pointed to a recent survey by the firm that found less than half (47pc) of the public would be willing to engage with free training from the government. “Training matters, but it cannot compensate for unread terms, unclear legal frameworks and a culture that treats AI risk as someone else’s problem. If people will not engage with free training, then stronger safeguards, clearer (enforced) regulation and far greater transparency from platforms are essential.”