
MEPs in the European Parliament’s Internal Market Committee and the Civil Liberties Committee recently voted on amendments to the European Commission’s proposal for the EU Artificial Intelligence Act, aiming to ensure that AI systems are overseen by people and are safe, transparent, traceable, non-discriminatory and environmentally friendly. Adoption of these amendments sets the Act up for plenary adoption in the coming months.

We have set out below some of the key changes to the Act proposed by the Committee.

Definition of AI systems

Under Article 3(1) of the AI Act, an AI system:

“means a machine-based system designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions, that influence physical or virtual environments”.

This definition was adopted by MEPs in line with the OECD’s definition of an AI system. It is narrower in scope than the Commission’s original proposal and reflects what conservative political groupings in the European Parliament had advocated during the drafting stages of the Act, while left-of-centre politicians pushed for a broader, more encompassing understanding of the technology and its outputs. However, it should be noted that the definition may yet change as the Act continues through the legislative process.

Prohibited practices under Article 5

The EU AI Act sets out several prohibited applications of AI systems which are considered harmful, such as “manipulative or deceptive techniques” and social scoring. The Committee has also proposed substantially amending this list to ban other practices which it considers intrusive and discriminatory, such as:

  • “Real-time” remote biometric identification systems in publicly accessible spaces
  • “Post” remote biometric identification systems, with the only exception being use by law enforcement for the prosecution of serious crimes, and only after judicial authorisation
  • Biometric categorisation systems using sensitive characteristics (e.g. gender, race, ethnicity, citizenship status, religion, political orientation)
  • Predictive policing systems (based on profiling, location or past criminal behaviour)
  • Emotion recognition systems in law enforcement, border management, workplace, and educational institutions, and
  • Indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases, in violation of human rights and the right to privacy

This outright ban on several uses of biometric data follows intense lobbying from civil society groups and other EU bodies, which pushed for amendments to bolster protections for fundamental rights; the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) were among those calling for a total ban on biometric surveillance in public.

High-risk categorisations and obligations under Annex III

The Act is designed to regulate AI systems on a sliding scale of risk, with four risk categories:

  • Unacceptable risk
  • High risk
  • Limited risk
  • Minimal or no risk

The Committee has made amendments to expand the category of high-risk areas to include harm to people’s health, safety, fundamental rights or the environment. AI systems used to influence voters in elections and political campaigns, and recommender systems used by very large online platforms (designated VLOPs under the Digital Services Act), have also been added to the high-risk list.

The previous draft of the Act already posed significant compliance challenges for providers of these systems. The Committee has sought to make the obligations on high-risk AI providers much more prescriptive, notably in risk management, data governance, technical documentation and record keeping. The Committee has also introduced an entirely new requirement that deployers (previously called users) of high-risk AI solutions conduct a fundamental rights impact assessment, considering aspects such as the potential negative impact on the environment and on marginalised groups.

Transparency measures

Following the recent explosion of ChatGPT onto the market, it is unsurprising that the Committee has included obligations on providers of foundation models to guarantee robust protection of fundamental rights, health and safety, the environment, democracy and the rule of law. This includes obliging those providers to take steps to mitigate risks, comply with design, information and environmental requirements, and register in an EU database.

There will be additional transparency requirements for generative foundation models, such as ChatGPT or Google Bard: disclosing that content was generated by AI, designing the model to prevent it from generating illegal content, and publishing summaries of copyrighted data used for training.

In order to boost AI innovation, the Act also proposes to promote so-called “regulatory sandboxes”, which will be exempt from the more onerous requirements for AI providers. This exemption will also cover research activities and AI components provided under open-source licences.

Next steps

Finally, MEPs reformed the role of the EU AI Office, which will be the regulator under the Act, giving it greater powers to supplement decentralised oversight of the regulation at EU level.

Before negotiations with the Council and Commission on the final form of the law can begin, this draft negotiating mandate needs to be endorsed by the whole Parliament, with the vote expected during the 12-15 June session.

For more information and expert advice on successfully navigating these changes, contact a member of our Artificial Intelligence team.

The content of this article is provided for information purposes only and does not constitute legal or other advice.


