Government

EU votes for Artificial Intelligence Act

by Mark Rowe

The European Parliament has approved an Artificial Intelligence Act. The regulation, agreed in negotiations with member states in December 2023, was endorsed by MEPs in Strasbourg with 523 votes in favour, 46 against and 49 abstentions.

The regulation bans AI applications that threaten citizens’ rights, including biometric categorisation systems based on sensitive characteristics; untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases; emotion recognition in the workplace and schools; social scoring; predictive policing (when it is based solely on profiling a person or assessing their characteristics); and AI that manipulates human behaviour or exploits people’s vulnerabilities.

Use of remote biometric identification (RBI) systems by law enforcement is prohibited in principle, except in narrowly defined situations. “Real-time” RBI can only be deployed if safeguards are met, such as its use being limited in time and place. The law will take up to two years to apply in EU member countries, on a staggered timetable: bans on prohibited practices apply six months after entry into force, codes of practice nine months after, general-purpose AI rules including governance 12 months after, and obligations for high-risk systems 36 months after.

In a debate, the Internal Market Committee co-rapporteur Brando Benifei (S&D, Italy) said: “We finally have the world’s first binding law on artificial intelligence, to reduce risks, create opportunities, combat discrimination, and bring transparency. Thanks to Parliament, unacceptable AI practices will be banned in Europe and the rights of workers and citizens will be protected. The AI Office will now be set up to support companies to start complying with the rules before they enter into force. We ensured that human beings and European values are at the very centre of AI’s development.”

Comments

Sabeen Malik, VP Global Government Affairs and Public Policy at the cyber firm Rapid7, described the Act as a helpful step in promoting a risk-based approach to AI regulation. “The EU’s approach will certainly go on to inspire other nations’ approaches to AI regulation; however, in the UK, I don’t think they will be as focused on the aspects relating to ‘foundational models’ as they already have those principles in place.

“Industry self-regulation and best practices are likely where you will see the US and UK sit. They will aim to strike a balance between flexible regulation and pro-innovation policy, as well as cautiously follow AI use cases that won’t be regulated by other laws, such as data protection, consumer protection, product safety, and equality law.

“Regardless of the efforts to regulate specific harms, when it comes to cybersecurity and AI, there will still be an ongoing need to understand where vulnerabilities in AI systems can be used to harm individuals and businesses. We must continue to recognise not only the harm from the user perspective, but also the risk of the AI tech stack — and where both of these combined are most likely to cause societal issues.

“For the foreseeable future, people will almost exclusively be interacting with AI outputs through AI applications. This ‘application’ layer is where the safety and rights of people will be most impacted. Therefore, lawmakers should ensure that regulations which currently govern AI conduct and the models themselves also apply, with equal force, to those who provide or deploy AI applications and services to keep them safe for end users.”

Curtis Wilson, staff data scientist at the Synopsys Software Integrity Group, said: “The greatest problem facing AI developers is not regulation, but a lack of trust in AI. For an AI system to reach its full potential it needs to be trusted by the people who use it. Internally, we have worked hard to build this trust using rigorous testing regimes, continuous monitoring of live systems and thorough knowledge sharing sessions with end users to ensure they understand where, when and to what extent each system can be trusted.

“Externally though, I see regulatory frameworks, like the EU AI Act, as an essential component to building trust in AI. The strict rules and punishing fines will deter careless developers, and help customers be more confident in trusting and using AI systems.

“The Act itself is mostly concerned with regulating high risk systems and foundational models. However, many of the requirements already align with data science best practices such as risk management, testing procedures and thorough documentation. Ensuring that all AI developers are adhering to these standards is to everyone’s benefit.”

And Mark Jow, EMEA Technical Evangelist at the cyber firm Gigamon, raised Shadow AI as the latest addition to overall ‘Shadow IT’, which poses a risk to organisations by introducing new back doors, footholds and data storage sites that security teams are unaware of, and therefore unable to factor into their overall security posture. “This means these applications are used without the appropriate security tools and authentication levels on devices. Shadow technologies are often purchased without being formally reported to the organisation, sometimes under personal expense claims, because users are unaware of their responsibility or feel it isn’t important.

“While unbudgeted costs can be a challenge of any Shadow IT, some of the biggest risks occur when employees and departments are leveraging less official, unpaid technologies. The current AI landscape offers users free programs that generally carry a higher level of security risk and are currently (largely) unregulated. In addition to unauthorised access risk, Shadow AI poses wider data protection risks. For example, how does the business know what potentially proprietary, confidential, or private information is being provided to the AI solution in order for it to formulate decisions? Is the AI solution provided by a ‘trusted’ and reputable provider from a trusted nation state, or a corporation with a good history of data protection?

“Shadow AI also presents a reputational risk to organisations. Decisions and actions recommended by any Shadow AI solution may not be presented to the business, leadership, managers, or supervisors as being AI derived. Instead, users may represent these as ‘original’ thoughts. Not only does that create an ethical challenge for the employee, but it also risks bypassing the checks and balances that an organisation might usually apply to AI based insights.”
