Political Agreement on AI Act

After lengthy and intensive negotiations, the European Parliament and the Council of the European Union reached political agreement on Friday, 8 December on the European Commission’s proposal for a regulation on artificial intelligence – the AI Act. Although the AI Act is being touted as the world’s first legislative regulation of AI, we expect that the final agreed text will not be published for a number of months yet. In the meantime, we take this opportunity to identify and reflect on some of the key points agreed during the Trilogue.

Prohibited AI

The co-legislators agreed that certain applications of AI would be prohibited in the EU. These include:

  • Biometric categorisation systems that use sensitive characteristics,
  • Untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases, such as Clearview AI,
  • Emotion recognition in the workplace and educational institutions,
  • Social scoring based on social behaviour or personal characteristics, and
  • AI systems used to exploit people’s vulnerabilities.

However, there will be a series of narrow exceptions for the use of biometric identification systems in publicly accessible spaces for law enforcement purposes, subject to prior judicial authorisation. Similar exceptions were originally provided for in the Commission’s proposal for an AI Act but were removed by the Parliament in June this year. Their reintroduction implies a genuine need for law enforcement to use these systems, albeit in limited circumstances.

High-risk AI

One of the major additions of the Parliament’s amendments to the Commission’s original proposal was the introduction of a fundamental rights impact assessment for high-risk AI systems. At the end of the Trilogue, the Parliamentarians were successful in ensuring that this remained part of the AI Act. In addition, EU citizens will have the right to launch complaints about AI systems and receive explanations about decisions based on high-risk AI systems that impact their rights. However, the precise parameters and contours of these rights are yet to be established.

General-purpose AI

In addition to the transparency requirements initially proposed by the Parliament this summer, general-purpose AI systems “with systemic risk” may rely on codes of practice to comply with the AI Act before harmonised EU standards are adopted. Those standards will concern the obligations to:

  • Conduct model evaluations
  • Assess and prevent systemic risks
  • Conduct adversarial testing
  • Report to the Commission on serious incidents
  • Ensure cybersecurity, and
  • Report on their energy efficiency

While the information currently available regarding these measures is scant, at a high level they appear similar to, but less stringent than, those initially proposed by the Parliament earlier this year.

Penalties

According to the Parliament, non-compliance with the proposed AI Act can lead to fines ranging from €7.5 million or 1.5% of turnover to €35 million or 7% of global turnover, depending on the infringement and the size of the company. To put these figures in context, the maximum fines under the GDPR are €20 million or 4% of worldwide turnover. If the fines handed down by national supervisory authorities under the GDPR are anything to go by, fines under the AI Act could be even more significant.

For more information and expert advice, contact a member of our Artificial Intelligence team.

People also ask

What is in the AI Act?

The AI Act is a proposal by the EU to regulate artificial intelligence across Europe. It takes a risk-based approach to AI, setting out rules designed to minimise the potential harms of the use and deployment of AI in a wide range of circumstances.

What is the status of the AI Act?

A political agreement has now been reached on the AI Act. However, the final text of the AI Act is not expected for several months and will not be in effect for some time after that.

When will the AI Act come into force/effect?

The AI Act will apply two years after it enters into force, shortened to six months for the prohibitions. The requirements for general-purpose AI models, conformity assessment bodies, and the governance chapter will start applying one year earlier.

The content of this article is provided for information purposes only and does not constitute legal or other advice.