
Regulating AI in the EU

The value of the global Artificial Intelligence (AI) market in 2021 has been estimated at $327.5 billion, a figure that is expected to increase significantly over the coming years. The powerful and disruptive technologies making up this broad category arguably demand bold shifts in thinking by policymakers, and EU regulators are seeking to keep pace with a constantly evolving group of advanced computing techniques through pioneering new legislation. We set out the background to the EU's proposals for an AI regulatory framework and examine the most recent developments.

The journey to a bespoke EU regulatory framework for AI began in earnest with the publication by the European Commission (EC) of a White Paper on AI in February 2020. The White Paper made various recommendations which fed into three resolutions on AI addressing ethics, civil liability and intellectual property, adopted by the European Parliament (EP) in October 2020.

This in turn led to the publication by the EC, in April 2021, of a draft proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence and Amending Certain Union Legislative Acts (the Proposal). The Slovenian Presidency of the EU Council presented a draft compromise text in November 2021 (the Compromise Text), containing a number of proposed amendments to the draft AI Regulation set out in the Proposal. These were followed by various further proposals from the French Presidency of the EU Council.

Under a joint committee procedure, a number of opinions from various EP committees, such as the Committee on Legal Affairs (JURI), the Committee on Industry, Research and Energy (ITRE) and the Committee on Transport and Tourism (TRAN), were published throughout March and April 2022, each suggesting further amendments to the Proposal (the Committee Opinions). Most recently, a draft report from the Committee on Internal Market and Consumer Protection (IMCO) and the Committee on Civil Liberties, Justice and Home Affairs (LIBE) was issued in April 2022, with an 18 May deadline for further amendments.

The Proposal

The Proposal, also analysed in our previous article, aims to provide a comprehensive regulatory framework for AI, with the stated goal of making “Europe fit for the digital age and turning the next ten years into the Digital Decade”. The EC has set out to do this by developing a software-based definition of AI that is intended to be as future-proof as possible. The Proposal’s scope is also very broadly cast, with the proposed new rules applying to:

  • ‘Providers’ of AI systems, irrespective of whether they are established within the EU or in a third country
  • ‘Users’ of AI systems located in the EU, and
  • Providers and users located in third countries “where the output produced by the system is used in the EU”

The Proposal follows a proportionate risk-based approach with the introduction of three risk categories:

  • Prohibited AI systems, which pose an unacceptable risk and are banned outright
  • High-risk AI systems, which are permitted subject to the various requirements set out in the Proposal, and
  • Low-risk AI systems, for which self-regulation is encouraged and transparency obligations are imposed where certain conditions are met

The EC further proposes that primary supervision and surveillance responsibilities will fall to national competent authorities, while a planned European Artificial Intelligence Board would facilitate the implementation of the AI rules at EU level. The Board would do so by issuing opinions, recommendations, advice and guidance on implementation, including on technical specifications and the application of existing standards to the requirements set out in the Proposal.

Compromise Text and Committee Opinions

The Compromise Text and the Committee Opinions suggest various significant amendments to the Proposal, some of which are highlighted below.

Definitions

Amendments have been proposed to a number of existing definitions in the draft text, and new definitions have been suggested for currently undefined terms such as ‘personal data’. More fundamentally, the EU Council and the EP have each proposed amendments to the definition of ‘AI system’, highlighting one of the key challenges inherent in regulating this dynamic technology:

  • Some of the EP committees, such as JURI and ITRE, seek to replace the word “software” with “machine-based system”, while TRAN proposes “a system that receives machine- and/or human-based data”. The Committee Opinions also propose a reference to how the outputs generated by an AI system can influence the environments that it interacts with, and certain of them, such as the ITRE draft opinion dated 3 March 2022, further seek to clarify that those environments can be real or virtual. Many of the EP committees also seek recognition that AI systems can be designed to operate with varying levels of autonomy.
  • The Compromise Text explicitly proposes that any relevant system should be capable of inferring how to achieve a given set of human-defined objectives by learning, reasoning or modelling implemented with the techniques and approaches listed in Annex I.

Classification

The EU Council has sought to rewrite the Proposal’s Article 6 classification rules to “clarify the logic behind the classification for high-risk AI systems” and their interaction with some of the Annexes to the Proposal. It has also sought to update the list of areas relevant to the classification of high-risk AI systems in Annex III, adding AI systems intended to be used for the control of digital infrastructure and AI systems intended to be used to control emissions and pollution. The EU Council further proposes to extend the prohibition on the use of AI for social scoring, which under the Proposal applies only to AI systems placed on the market or put into service “by public authorities or on their behalf”, so that it also covers private actors.

Fines and penalties

Both the EU Council and the EP have highlighted that the penalties provided for in the Proposal need to be proportionate so as not to hamper economic competitiveness and innovation amongst smaller players in the industry. One amendment suggested by the ITRE Committee proposes that Article 10 data governance infringements should be subject to a lower-tier fine than the penalties applicable for non-compliance with the prohibited AI practices under Article 5. However, only the JURI Committee has proposed specific changes to the levels of fines making up those tiers. In their joint report, the IMCO and LIBE Committees have also suggested that the EC may additionally adopt a decision imposing on the operator concerned fines of up to 2% of its total turnover in the preceding financial year, where the operator intentionally or negligently:

  • Fails to provide information to the EC by the deadline set in an EC decision
  • Fails to rectify, by the deadline set in an EC decision, incorrect, incomplete or misleading information given by a member of staff, or fails or refuses to provide complete information, or
  • Refuses to submit to a remote or on-site inspection

Conclusion

The Proposal will likely be subject to further and potentially significant amendment as the EC, the EU Council and the EP work towards consensus on a final text, so stakeholders should keep a close eye on its progress through the EU legislative process. Given the current levels of activity, the risk of that progress stagnating or stalling seems low, and we are likely to see significant further developments before the end of 2022. If that is the case, and the 24-month transition period currently provided for in the Proposal remains unchanged, an EU Artificial Intelligence Act could be fully applicable at some point during 2025, or even before the end of 2024. Stakeholders are therefore recommended to:

  • Identify and review the AI systems that they currently use to assess how they may be regulated under the EU AI regulatory regime
  • Consider how existing regulatory functions and compliance systems can be modified to accommodate new AI requirements
  • Where necessary, begin to plan and develop possible new AI compliance strategies and systems using the general framework laid out in the Proposal
  • Closely monitor developments in relation to the Proposal at both EU and Member State level in order to maintain visibility on the timelines impacting business strategy and resource allocation

For more information on this and other topics related to artificial intelligence, contact a member of our Product Regulatory team.

The content of this article is provided for information purposes only and does not constitute legal or other advice.


