Special Category Data and Bias Monitoring Under the New EU AI Act

The EU wants to regulate the training of AI models by imposing quality criteria on training, validation and testing data sets. To achieve this and to ensure bias monitoring, detection and correction for high-risk AI systems, AI creators will be permitted to process special category data subject to specified obligations. Our Artificial Intelligence team examines why the legal basis for doing so under the GDPR remains unclear.

Article 10(5) of the draft Artificial Intelligence Act (the AI Act) contemplates the use of special category data, or SCD, to ensure bias monitoring, detection and correction for high-risk AI systems. SCD may be used for these purposes only where it is strictly necessary and subject to appropriate safeguards. However, the GDPR legal basis to be relied on for this processing remains unclear. As a result, relevant stakeholders widely anticipate greater clarity on the issue, whether from the finalised text of the AI Act or from regulatory guidance.

Eliminating bias and discrimination

Regulators emphasise the importance of minimising and eliminating bias and discrimination in AI systems. However, it is not clear which legal basis under the GDPR AI system providers can rely on for the processing needed to carry out bias monitoring. A more coherent regulatory approach is therefore needed to clarify which legal basis is appropriate for the processing of SCD.

Can “public interest” be relied on as a legal basis under GDPR?

“Public interest” as a legal basis must be considered separately under Article 6 and Article 9 GDPR, and where SCD is processed there must be a lawful basis under both Articles. The elimination of bias and discrimination from AI systems may be seen as a “public interest”, so the question arises whether Article 6(1)(e) GDPR could be relied on. Article 6(1)(e) GDPR provides that processing shall be lawful where it is necessary for the performance of a task carried out in the public interest, or in the exercise of official authority vested in the controller.

The UK ICO’s AI Guidance states that the processing of personal data may be necessary as part of the exercise of official authority, or to perform a task in the public interest set out by law. However, the ICO explicitly states that this is likely to be relevant only to public authorities using AI to deliver public services. The Spanish Data Protection Agency, in its guidance ‘GDPR compliance of processing that embeds Artificial Intelligence’, states that, from the point of view of AI-based solutions, a private entity may not claim the public interest exemption for the processing of personal data unless it is laid down in law. It therefore seems unlikely that Article 6(1)(e) can be relied on.

Under Article 9(2)(g) GDPR, the processing of SCD is permitted where it is necessary for reasons of “substantial public interest”. The processing must:

  • Be carried out on the basis of Union or Member State law
  • Be proportionate to the aim pursued
  • Respect the essence of the right to data protection, and
  • Provide for suitable and specific measures to safeguard the fundamental rights and the interests of the data subject

The substantial public interest must itself be enshrined in law. Recital 46 GDPR gives, as an example, "processing that is necessary for humanitarian purposes, including for monitoring epidemics and their spread or in situations of humanitarian emergencies, in particular in situations of natural and man-made disasters". If countering bias in high-risk AI systems were to qualify as a “substantial public interest”, the AI Act would need to provide for suitable measures to safeguard data subjects’ fundamental rights when this processing takes place.

Conclusion

The AI Act contemplates a legal processing ground for SCD for bias monitoring, detection and correction in high-risk AI systems in limited circumstances. In its current draft form, there do not appear to be strong grounds for relying on the public interest lawful basis to process SCD unless:

  • It is laid down in law
  • It is strictly necessary, and
  • There are appropriate safeguards in place to protect fundamental rights

Further clarification from lawmakers would therefore be welcome to determine which legal basis under the GDPR AI providers can rely on for such bias monitoring.

For more information on the use and regulation of AI in high-risk systems, contact a member of our Artificial Intelligence team.

The content of this article is provided for information purposes only and does not constitute legal or other advice.
