The European Union is nearing an Artificial Intelligence Act that would prohibit AI systems posing an ‘unacceptable’ level of risk to people’s safety. MEPs have amended the proposed law to include bans on intrusive and discriminatory uses of AI systems, such as:
“Real-time” remote biometric identification systems in publicly accessible spaces;
“Post” remote biometric identification systems, with the sole exception of use by law enforcement for the prosecution of serious crimes, and only after judicial authorisation;
Biometric categorisation systems using sensitive characteristics (such as gender, race, ethnicity, citizenship status, religion or political orientation);
Predictive policing systems (based on profiling, location or past criminal behaviour);
Emotion recognition systems in law enforcement, border management, workplaces and educational institutions; and
Indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases (violating human rights and the right to privacy).
MEPs expanded the classification of high-risk areas to include harm to people’s health, safety, fundamental rights or the environment. They also added AI systems to influence voters in political campaigns and in recommender systems used by social media platforms (with more than 45 million users under the Digital Services Act) to the high-risk list.
After the vote, co-rapporteur Brando Benifei (S&D Party, Italy) said: “We are on the verge of putting in place landmark legislation that must resist the challenge of time. It is crucial to build citizens’ trust in the development of AI, to set the European way for dealing with the extraordinary changes that are already happening, as well as to steer the political debate on AI at the global level. We are confident our text balances the protection of fundamental rights with the need to provide legal certainty to businesses and stimulate innovation in Europe.”
Background
While the uptake of Artificial Intelligence (AI) systems has the potential to bring societal benefits, economic growth and innovation (and global competitiveness for EU members), the characteristics of some AI systems have raised concerns about safety, security and fundamental human rights, and questions such as: what of citizens’ right to file complaints about discriminatory AI systems, and what of AI systems’ influence on voters in political campaigns? Hence, in February 2020, the European Commission published a White Paper on Artificial Intelligence and proposed a European regulatory framework for ‘trustworthy’ AI. The proposal takes a risk-based approach with four levels of risk:
Unacceptable risk AI. Harmful uses of AI that contravene EU values (such as social scoring by governments) will be banned because of the unacceptable risk they create;
High-risk AI. A number of AI systems (listed in an Annex) that create an adverse impact on people’s safety or their fundamental rights are considered high-risk. To ensure trust and a consistently high level of protection of safety and fundamental rights, a range of mandatory requirements (including a conformity assessment) would apply to all high-risk systems;
Limited risk AI. Some AI systems will be subject to a limited set of obligations (e.g. transparency);
Minimal risk AI. All other AI systems can be developed and used in the EU without legal obligations beyond existing legislation.
What next
Before negotiations with the Council on the final form of the law can begin, this draft negotiating mandate needs to be endorsed by the whole Parliament, with the vote expected during the June 12 to 15 session. For background, see the europa.eu website.
Comment
Keiron Holyome, VP UKI, Eastern Europe, Middle East and Africa at BlackBerry, said: “BlackBerry’s research suggests that any efforts towards regulation of AI technologies will be well received by the tech community, here and around the world. Half of IT professionals predict that we are less than a year away from a successful cyberattack being credited to ChatGPT, and 71pc believe that foreign states are likely to already be using the technology for malicious purposes. Therefore, the big question is whether the legislation can be pervasive enough to offer any peace of mind or protection against the growing generative AI threats that most concern those with responsibility for cybersecurity.
“The reality is that AI is already – and will continue – reshaping the way that cybercriminals develop more specialised skills, develop more successful phishing attacks, spread disinformation quicker and create more effective malware. Though 92pc of IT professionals believe governments have a responsibility to regulate advanced technologies, such as ChatGPT, many will acknowledge that even the most watertight legislation can’t change reality: as the maturity of the platform and the hackers’ experience of putting it to use progress, it will get more and more difficult for organisations and institutions to raise cybersecurity defences without using AI in the fight against AI.”