The European Parliament’s Internal Market Committee and Civil Liberties Committee recently proposed substantial amendments to the European Commission’s proposal for an EU AI Act. The committees proposed amending the list of prohibited practices and sought to provide more prescriptive clarity on the obligations that will be placed on developers of AI systems. The committees also proposed new obligations specifically for “generative foundation models”, like ChatGPT. Brian McElligott, Head of our AI team, highlights some of the key changes proposed.
On 11 May 2023, MEPs in the European Parliament’s (Parliament) Internal Market Committee and Civil Liberties Committee voted on amendments to the European Commission’s proposal for the EU Artificial Intelligence Act (Act). These amendments aim to ensure that AI systems are overseen by people, and are safe, transparent, traceable, non-discriminatory and environmentally friendly.
These amendments were adopted by the European Parliament with a substantial voting majority on Wednesday 14 June 2023, paving the way for the Trilogue process which, upon its conclusion, will result in the European Union’s comprehensive law on Artificial Intelligence.
We have set out below some of the key proposed changes to the Act by the European Parliament following this vote.
Definition of Artificial Intelligence System
The Parliament has adopted a definition of an Artificial Intelligence (AI) System in line with the OECD’s definition of an AI system, which differs from the original Commission draft. An AI System is now defined as:
“…a machine-based system designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions, that influence physical or virtual environments.”
This definition is narrower in scope than the European Commission’s original proposal. It is in line with what conservative political groupings in the European Parliament had been advocating for during the draft stages of the Act, while left-of-centre politicians had pushed for a broader, more encompassing understanding of the technology and its outputs.
It should be noted that the definition may yet change as the Act continues through the Trilogue legislative process.
Prohibited AI Practices

The draft Act sets out several prohibited applications of AI Systems which are considered harmful, such as those using “manipulative or deceptive techniques” and social scoring. The Parliament has now proposed substantially amending this list to include bans on other practices which it considers intrusive and discriminatory, such as:
- “Real-time” remote biometric identification systems in publicly accessible spaces.
- “Post” remote biometric identification systems deployed for the analysis of recorded footage of publicly accessible spaces. However, the Act retains the ability for law enforcement to use such systems for the prosecution of serious crimes, and only with prior judicial authorisation.
- Biometric categorisation systems using sensitive characteristics, e.g. gender, race, ethnicity, citizenship status, religion and political orientation.
- Predictive policing systems based on profiling, location or past criminal behaviour.
- Emotion recognition systems in law enforcement, border management, workplace, and educational institutions.
- Indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases, violating an individual’s human rights and right to privacy.
This outright ban on several uses of biometric data follows intense lobbying from civil society groups and other EU bodies, who pushed for amendments to bolster protections for fundamental rights. The European Data Protection Board and the European Data Protection Supervisor were among those who called for a total ban on biometric surveillance in public.
The outright ban on the use of “Real-time” remote biometric identification systems was the most hotly disputed issue in the Parliamentary debates, with the centre-right European People’s Party (EPP) MEP grouping attempting to introduce derogations from the real-time ban for exceptional circumstances such as terrorist attacks. However, this attempted amendment from the EPP was not successful.
High Risk Categorisations and Obligations
The Act is designed to regulate AI systems on a sliding scale of risk, with four risk categories:
- Unacceptable risk
- High risk
- Limited risk
- Minimal or no risk
The Parliament has made amendments to expand the category of high-risk areas to include harm to people’s health, safety, fundamental rights or the environment. AI systems used to influence voters in elections and political campaigns, and recommender systems used by social media platforms designated as very large online platforms (VLOPs) under the Digital Services Act, have also been added to the high-risk list.
The Commission’s previous draft of the Act presented significant compliance challenges for providers of those systems. The Parliament’s amendments attempt to make the obligations for high-risk AI providers much more prescriptive, notably in risk management, data governance, technical documentation and record keeping.
The Parliament has also introduced a completely new requirement that “deployers”, previously called “users”, of high-risk AI solutions must conduct a “fundamental rights impact assessment” considering aspects such as the potential negative impact on the environment and on marginalised groups. See our tech blog post about the four categories of risk under the Act here.
Measures Specific to Generative AI Systems
Following the recent explosion of Large Language Models (LLMs) onto the marketplace, it is unsurprising that the Parliament has included obligations for providers of foundation models aimed at guaranteeing robust protection of fundamental rights, health and safety, the environment, democracy and the rule of law. This includes placing an obligation on those providers to take steps to mitigate risks, comply with design, information and environmental requirements, and register in an EU database. The obligations include:
- Registering the LLM in the appropriate EU database
- Demonstrating how the provider has mitigated, and continues to foresee and mitigate, risks to health, safety, fundamental rights, the environment and democracy
- Implementing appropriate data governance measures
- Demonstrating how the model has been designed in an energy-efficient manner
- Creating a quality management system
There will also be additional transparency requirements for generative foundation models such as ChatGPT, for example: disclosing that content was generated by AI, designing the model to prevent it from generating illegal content, and publishing summaries of copyrighted data used for training.
To boost AI innovation, the Act also proposes to promote so-called “regulatory sandboxes”, which will provide exemptions from the more onerous requirements for AI providers. These exemptions will extend to research activities and to AI components provided under open-source licences.
Next Steps

MEPs will now enter interinstitutional negotiations with the Council of the European Union, representing Member State governments, and the European Commission in the Trilogue process. The negotiations are likely to intensify once Spain takes over the rotating presidency of the Council in July 2023.
Upon conclusion of the Trilogue process, the European Union’s comprehensive law on Artificial Intelligence will be adopted.
For more information on the likely impact of the EU AI Act's commencement or on issues relating to the role of AI in your organisation, contact a member of our Artificial Intelligence team.
The content of this article is provided for information purposes only and does not constitute legal or other advice.
The Commission’s original draft definition read: “‘artificial intelligence system’ (AI system) means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with;”