
On 20 October 2020 the EU Parliament approved an initial draft proposal for the regulation of ethical artificial intelligence (AI). The proposal seeks to build on Europe’s existing privacy framework, namely the General Data Protection Regulation (GDPR).

How does the GDPR currently apply to AI?

The GDPR imposes strict rules governing the use of personal data and, much like the proposed AI regulations, has extra-territorial scope. Of particular note, absent certain pre-conditions being met, Article 22 of the GDPR prohibits individuals from being subject to a decision based solely on automated processing, including profiling, which produces legal or similarly significant effects. This may restrict certain AI applications. For example, a bank may run into challenges if it sought to use AI to decide which customers to grant loans to, without any meaningful human review built into the process.

What is High Risk AI and why does it matter?

The EU proposes to take a risk-based approach to regulating AI, where “high risk” AI will be subject to stricter scrutiny. This is quite different to how the GDPR operates, as the GDPR, with some limited exceptions, applies broadly to all personal data processing, regardless of the level of risk.

According to the proposals, an AI is “high risk” where there is a significant risk that the AI could cause injury or harm to individuals or society, in breach of fundamental rights and safety rules as laid down in EU law. The AI’s specific use or purpose will be considered, as well as the sector in which it is developed, deployed or used, and the severity of the injury or harm that can be expected to occur.

Under Article 4 of the proposed regulation, “loss of privacy” falls within “injury or harm”. It is currently unclear what “loss of privacy” entails, and it is not a concept recognised as a harm under the GDPR.

However, the Annex to the draft proposal contains a list of high risk uses of AI, which provides some clarity. Examples of high risk uses listed in the Annex include recruitment, the allocation of public funds, the granting of loans, automated driving, and medical treatments and procedures.

Access to data

In its European Strategy for Data, published in February 2020, the European Commission noted that large amounts of data, both personal and non-personal, may be held by a limited number of companies. The Commission expressed the view that it may be appropriate to require companies to share access to this data, particularly so as to unlock the benefits of AI.

This immediately throws up other privacy issues and demonstrates the tensions in the EU’s current strategy around AI. On the one hand, the EU is seeking to augment individuals’ rights in relation to AI. On the other hand, it is also considering mandating the sharing of those individuals’ data between companies so as to enhance competition.

The EU is seeking to carve out its own niche with regard to the use of AI. However, it remains to be seen whether the proposed rules will make “European AI” synonymous with quality and safety, or whether they will simply hamper European companies, leaving Europe reliant on AI solutions developed in the US and China.

It will also be interesting to see how the inherent tensions in the EU’s approach to personal data play out. Data protection and competition law are key pillars of the European legal order. However, they potentially conflict when it comes to the ability of one company to access personal data held by another.


The content of this article is provided for information purposes only and does not constitute legal or other advice.
