The EU and the UK are in the final stages of preparing rules to govern the use of AI – but they have taken different approaches. Jordan Orlebar, Managing Consultant at AMR CyberSecurity, describes the landscape and what the new regimes aim to achieve.
Artificial Intelligence (AI) is bringing unprecedented technological progress, but it is also raising pressing ethical, privacy, and security questions that are challenging traditional regulatory paradigms. At this critical juncture, it’s useful to take the time to explore the divergent paths being taken by the European Union (EU) and the UK in framing regulations to govern AI.
Prescriptive versus flexible
The EU’s AI Act represents a groundbreaking attempt to create a holistic legal structure that meticulously categorises AI applications into different risk tiers. It mandates rigorous compliance protocols for those applications deemed high-risk, to safeguard fundamental human rights and societal values. In contrast, the UK’s strategy shuns prescriptive rules in favour of a more dynamic, principles-based framework, which emphasises flexibility, fostering an environment conducive to innovation while still upholding ethical standards and public trust.
EU approach
The EU’s AI Act – the first-ever comprehensive legal framework on AI worldwide – categorises AI systems into four risk levels: Unacceptable, High, Limited and Minimal. Overseen and enforced by the recently formed European AI Office, the Act looks set to enter into force 20 days after publication in the Official Journal of the EU (expected in May or June 2024). Most of its provisions will become applicable two years after the Act enters into force, likely in 2026.
However, the Act’s provisions relating to prohibited AI systems will apply after six months, while those covering general-purpose AI (including generative AI) will apply after 12 months. The Act focuses on imposing obligations on high-risk AI applications, including comprehensive risk management systems, stringent data governance, transparency measures and adherence to ethical standards. It emphasises the protection of fundamental rights and safety, with significant fines for non-compliance.
In the EU, AI system creators – especially those whose systems are classified as high-risk – will need to ensure compliance with the strict requirements set out in the AI Act. This includes conducting thorough risk assessments, implementing robust data governance protocols, ensuring transparency and traceability of AI systems, and adhering to specific technical standards and documentation requirements. Creators must also register high-risk AI systems in an EU database. To ease the transition to the new regulatory framework, the Commission has launched the AI Pact, a voluntary initiative that supports future implementation by inviting AI developers from Europe and beyond to comply with the key obligations of the Act ahead of time.
EU Act risk levels
The risk levels are broadly defined as follows, with a short illustrative sketch after the list:
Unacceptable risk: The use of technologies in this category is prohibited, with few exceptions. It covers real-time facial and biometric identification systems in public spaces, China-style social scoring systems, subliminal techniques that distort behaviour, and technologies that exploit the vulnerabilities of certain populations.
High risk: Critical infrastructure, employment and management of workers, law enforcement and democratic processes are a few examples of what is considered high risk under the Act.
Limited risk: These systems pose a lower risk but carry transparency obligations; for example, individuals must be informed when they are interacting with a chatbot. In short, providers must ensure that AI-generated content is identifiable – which will help in the fight against deepfakes.
Minimal risk: Examples include AI-enabled video games and spam filters, which make up the majority of AI use cases across the EU.
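For security and compliance teams beginning to map their estate against these tiers, the minimal sketch below shows one possible way of tagging an internal AI inventory. It is an illustration only: the tier names follow the Act, but the example systems, field names and obligation summaries are hypothetical and carry no legal weight.

```python
# Illustrative only: one way an organisation might record the EU AI Act's four
# risk tiers against entries in an internal AI inventory. Tier names follow the
# Act; the example systems, field names and summaries are hypothetical.
from dataclasses import dataclass
from enum import Enum

class EUAIRiskTier(Enum):
    UNACCEPTABLE = "prohibited"                  # e.g. social scoring
    HIGH = "strict obligations apply"            # e.g. CV screening for recruitment
    LIMITED = "transparency obligations apply"   # e.g. customer-facing chatbot
    MINIMAL = "no new obligations"               # e.g. spam filter

@dataclass
class InventoryEntry:
    name: str
    tier: EUAIRiskTier

inventory = [
    InventoryEntry("Recruitment CV screener", EUAIRiskTier.HIGH),
    InventoryEntry("Website support chatbot", EUAIRiskTier.LIMITED),
    InventoryEntry("Email spam filter", EUAIRiskTier.MINIMAL),
]

for entry in inventory:
    print(f"{entry.name}: {entry.tier.name} ({entry.tier.value})")
```

In practice, any such inventory would need to reflect the Act’s own definitions and a proper legal assessment of each system rather than a simple label.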
In the EU, non-compliance with the AI Act may result in fines of up to €30 million or 6 per cent of total worldwide annual turnover, whichever is higher, depending on the severity of the infringement.
UK approach
In contrast, the UK’s regulatory framework for AI is characterised by its adaptability and emphasis on fostering innovation. It promotes a principles-based approach, focusing on the safety, transparency, fairness, accountability and contestability of AI systems. The aim is to create a regulatory environment that supports growth and innovation while addressing the ethical and societal impacts of AI technologies.
Creators are encouraged to adopt AI in a way that aligns with these principles, fostering innovation while also ensuring public trust. The UK government emphasises the importance of ethical AI development, offering guidance and frameworks to support creators in implementing these principles effectively. Regulators will enforce measures to ensure AI systems function correctly and are technically secure throughout their lifecycles.
The principles cover five main areas, outlined as follows:
Safety, security and robustness
AI systems should be reliable, secure and safe throughout their entire lifespan. Risks associated with their use should be identified, evaluated and controlled continuously. Regulators may need to enforce certain measures on the entities they regulate, to ensure AI systems function correctly and are technically secure and reliable.
Appropriate transparency and explainability
AI systems must be transparent and explainable to an appropriate degree. Transparency means providing relevant information about an AI system – its purpose, how it is being used and when. Explainability means relevant parties can access, interpret and understand the decision-making processes of an AI system. The degree of transparency and explainability required should be proportional to the risks associated with the AI system.
Fairness
AI systems should not violate the legal rights of individuals or organisations, discriminate unfairly against individuals or lead to unjust market outcomes. Regulators may need to produce guidance and examples of what fairness means for AI systems, and develop instructions that take account of pertinent laws, regulations, technical standards and assurance techniques.
Accountability and governance
Effective governance measures must be implemented to oversee the supply and use of AI systems, and unambiguous accountability must be established throughout the AI lifecycle. Regulators will be expected to explore ways of ensuring that clear standards for regulatory compliance and best practice are placed on relevant actors in the AI supply chain. Additionally, they may need to implement governance processes to ensure these standards are consistently met.
Contestability and redress
Where appropriate, users, impacted third parties and actors in the AI lifecycle should be able to contest an AI decision or outcome that is harmful or creates material risk of harm. Regulators will be expected to clarify existing routes to contestability and redress and implement proportionate measures to ensure the outcomes of AI use are contestable where appropriate.
The UK’s approach to enforcement will rely on existing regulators, supported by central monitoring functions within government. Penalties in the UK – imposed via the Information Commissioner’s Office (ICO) – may be substantial, aligning with GDPR, which allows fines of up to £17.5 million or 4 per cent of annual global turnover, whichever is greater, for serious breaches.
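To put those ceilings in context, the minimal sketch below applies the “whichever is greater” rule to a hypothetical annual turnover, using the caps quoted in this article for the UK and the EU. The function name and turnover figure are illustrative assumptions, not legal guidance.

```python
# Illustrative only: the "whichever is greater" fine ceilings described above,
# expressed as simple arithmetic. The caps are those quoted in this article;
# the turnover figure is hypothetical, and actual penalties depend on the
# severity of the breach.

def max_penalty(fixed_cap: float, pct_of_turnover: float, annual_turnover: float) -> float:
    """Ceiling = the greater of the fixed cap and the turnover-based cap."""
    return max(fixed_cap, pct_of_turnover * annual_turnover)

turnover = 2_000_000_000  # hypothetical global annual turnover of 2 billion

# UK (GDPR-aligned): up to £17.5m or 4 per cent of global turnover, whichever is greater
print(f"UK ceiling: £{max_penalty(17_500_000, 0.04, turnover):,.0f}")  # £80,000,000

# EU AI Act (most serious infringements): up to €30m or 6 per cent of worldwide turnover
print(f"EU ceiling: €{max_penalty(30_000_000, 0.06, turnover):,.0f}")  # €120,000,000
```

For any company with a sizeable global turnover, the percentage-based cap is the one that bites, which is why the turnover element of both regimes attracts the most attention.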
Those developing AI systems within the EU will need to adapt to a well-defined regulatory framework, particularly for applications deemed high-risk. Meanwhile, developers in the UK are encouraged to take a more flexible approach, adhering to guidelines designed to foster the safe and ethical advancement of AI technologies.
About the firm
AMR CyberSecurity recently became certified to ISO 14001, the international standard for environmental management systems (EMS). Visit https://www.amrcybersecurity.com/.





