
UK, US sign AI MOU for safety testing

by Mark Rowe

The UK and United States have signed a Memorandum of Understanding (MOU) to work together on developing tests for the most advanced artificial intelligence (AI) models. It follows on from the AI Safety Summit at Bletchley Park in the UK last year. The MOU was signed by Michelle Donelan, minister at the Department for Science, Innovation and Technology (DSIT), for the UK, and by Commerce Secretary Gina Raimondo for the US.

The two countries’ AI Safety Institutes plan to build a common approach to AI safety testing. Michelle Donelan said: “This agreement represents a landmark moment, as the UK and the United States deepen our enduring special relationship to address the defining technology challenge of our generation. We have always been clear that ensuring the safe development of AI is a shared global issue. Only by working together can we address the technology’s risks head on and harness its enormous potential to help us all live easier and healthier lives.

“The work of our two nations in driving forward AI safety will strengthen the foundations we laid at Bletchley Park in November, and I have no doubt that our shared expertise will continue to pave the way for countries tapping into AI’s enormous benefits safely and responsibly.”

Meanwhile in the US, Secretary of Homeland Security Alejandro Mayorkas and Chief Information Officer and Chief Artificial Intelligence Officer Eric Hysen announced the Department of Homeland Security’s (DHS) first “Artificial Intelligence Roadmap”. Planned work includes testing AI in investigative processes for detecting fentanyl and for increasing the efficiency of investigations related to child sexual exploitation; the Federal Emergency Management Agency (FEMA) deploying AI to develop hazard mitigation plans; and United States Citizenship and Immigration Services (USCIS) using AI in immigration officer training.

Comments

Eleanor Watson, a member of the IEEE technical professional body, is an AI ethics engineer and AI faculty member at Singularity University, and was among the first signatories of the Future of Life Institute’s open letter on AI. She says: “Hopefully, this will provide a chance to build upon the foundations already laid. As ethical considerations surrounding AI become more prominent, it is important to take stock of where the recent developments have taken us, and to meaningfully choose where we want to go from here. The responsible future of AI requires vision, foresight and courageous leadership that upholds ethical integrity in the face of more expedient options.

“Explainable AI, which focuses on making machine learning models interpretable to non-experts, is certain to become increasingly important as these technologies impact more sectors of society. That’s because both regulators and the public will demand the ability to contest algorithmic decision-making. While these subfields offer exciting avenues for technical innovation, they also address growing societal and ethical concerns surrounding machine learning.”

Ayesha Iqbal, an IEEE senior member, is an engineering trainer at the Advanced Manufacturing Training Centre. She says: “AI has significantly evolved in recent years, with applications in almost every business sector. In fact, it is expected to see a 37.3 percent annual growth rate from 2023 to 2030. However, there are some barriers preventing organisations and individuals from adopting AI, such as a lack of skilled individuals, complexity of AI systems, lack of governance and fear of job replacement. AI is growing faster than ever before – and is already being tested and employed in sectors including education, healthcare, transportation and data security. As such, it’s time that the Government, tech leaders and academia work together to establish standards for the safe, responsible development of AI-based systems. This way, AI can be used to its full potential for the collective benefit of humanity.”
