
On 20 October 2020, the European Parliament approved an initial draft proposal for the regulation of ethical artificial intelligence (AI). The proposal targets high-risk AI in particular, but also sets standards for all AI products and applications. The scope of application of the proposal is very broad in that it covers all uses of AI products in the EU, regardless of the origin or place of establishment of the developer or owner of the AI. It also regulates not only the developers of AI, but also the deployers and users of those AI products and applications.

The proposal is in the form of a regulation, which means that it would be binding in its entirety and directly applicable in Member States. There are frequent references, however, to rules, guidelines and applications that are to be developed on foot of the regulation.

What will be regulated

The Regulation applies to “artificial intelligence”, “robotics” and “related technologies”, including software, algorithms and data used or produced by such technologies, developed, deployed or used in the Union. “Artificial intelligence”, “robotics” and “related technologies” are each defined terms and broadly cover: AI software/hardware systems (artificial intelligence), physical machines with AI capability (robotics), and other technologies such as those capable of detecting biometric, genetic or other data (related technologies).

High-Risk AI

Article 5 of the Regulation sets the minimum compliance threshold for all AI: it must be developed, deployed and used in the Union in accordance with Union law and in full respect of human dignity, autonomy and safety, as well as the other fundamental rights set out in the EU Charter of Fundamental Rights.

Articles 6 to 12 and 14 deal specifically with high-risk AI: technologies whose development, deployment or use entails a significant risk of causing injury or harm to individuals or society, in breach of fundamental rights and safety rules laid down in Union law. Whether AI is high risk is determined by a risk assessment based on objective criteria, such as its specific use or purpose, the sector in which it is developed, deployed or used, and the severity of the possible injury or harm caused.

High risk AI must comply with obligations such as:

  1. A guarantee of full human oversight at any time, in a manner that allows full human control to be regained when needed, for example by altering or halting those technologies;

  2. Assurances of compliance with minimum cybersecurity baselines proportionate to identified risk, reliable performance, accuracy, explainability, disclosure of limitations and the provision of a form of kill switch;

  3. An absence of bias and a guarantee that the AI will not discriminate on grounds such as race, gender, sexual orientation, pregnancy, national minority, ethnicity or social origin, civil or economic status or criminal record;

  4. Compliance with relevant Union law, principles and values, in a manner that does not interfere in elections or contribute to the dissemination of disinformation, respects workers’ rights, promotes quality education and digital literacy, does not increase the gender gap by preventing equal opportunities for all and does not disrespect intellectual property rights;

  5. Environmental sustainability, ensuring that measures are put in place to mitigate and remedy the technologies’ general impact as regards natural resources, energy consumption, waste production, carbon footprint, climate change emergency and environmental degradation, in order to ensure compliance with applicable Union or national law, as well as any other international environmental commitments the Union has undertaken; and

  6. Very tight restrictions on any use of biometric data for remote identification purposes in public areas, such as biometric or facial recognition.

Redress

Any natural or legal person shall have the right to seek redress for injury or harm caused by the development, deployment and use of high-risk AI, robotics and related technologies, including software, algorithms and data used or produced by such technologies, in breach of Union law and the obligations set out in this Regulation. The scope of this proposed right will no doubt concern all AI product developers and deployers.

Risk Assessment & Supervisory Authorities

The proposal envisages mandatory compliance assessments for high-risk AI and voluntary certificates of ethical compliance for all other AI. The process of certification is to be carried out at Member State level by national supervisory authorities, in a manner very similar to the current regulation of data protection under the GDPR. An overarching group of supervisory authorities will meet at EU level with the Commission to oversee the operation of the certification and monitoring of AI. The proposal does not make clear whether the mandatory compliance assessments must take place before an AI product is launched on the market.

Annex

The Annex to the draft contains specific and exhaustive lists of high-risk AI sectors and high-risk uses or purposes of AI which will always be regulated. The high-risk sectors are:

Employment

Education

Healthcare

Transport

Energy

Public sector (asylum, migration, border controls, judiciary and social security services)

Defence and security

Finance, banking, insurance

The high-risk uses or purposes are:

Recruitment

Grading and assessment of students

Allocation of public funds

Granting loans

Trading, brokering, taxation, etc.

Medical treatments and procedures

Electoral processes and political campaigns

Public sector decisions that have a significant and direct impact on the rights and obligations of natural or legal persons

Automated driving

Traffic management

Autonomous military systems

Energy production and distribution

Waste management

Emissions control

The draft proposal broadly follows the findings of the Commission’s White Paper on AI published earlier this year. However, the scope of the obligations and of the high-risk sectors and uses is considerably broader than expected. The fact that deployers and users are largely regulated in the same way as developers is also a change, but not an unexpected one having regard to recent lobbying activities.

This is an early draft of a very important law, and it will prove very challenging for those developing and deploying high-risk AI. We can expect significant debate on the drafting before we see a fully formed proposal early next year.

If you would like to discuss any aspect of the EU’s approach to the regulation of AI or any aspect of this article please contact us.


The content of this article is provided for information purposes only and does not constitute legal or other advice.


