New laws governing AI are on the horizon that will bring about significant change for businesses relying on AI. Negotiations remain ongoing but it is clear, including from a European Data Protection Supervisor (EDPS)/European Data Protection Board (EDPB) joint opinion last year, that privacy considerations will need to be at the forefront of businesses’ minds.

While we await the conclusion of negotiations and clarity on what those changes will be, businesses must also ensure they are aware of their current obligations under GDPR when using AI.

Key issues for those currently using AI to consider are:

  • Explaining your processing: being clear with individuals about how and why their data is being processed is a cornerstone of GDPR. Where businesses rely on AI for profiling and automated decision-making, they must explain that this is taking place, as well as the ‘logic’ involved in any such decision-making, including its significance and consequences for the individual. This can be very challenging when the underlying processing relies on highly complex and ever-changing AI tools and various distinct data pools. Businesses need to strike the right balance between being comprehensive and remaining understandable for the average individual. Interactive pop-ups and ‘just-in-time’ notices can help to provide individuals with relevant information as and when they need it, avoiding information overload. Transparency is also an ongoing obligation, so businesses must keep privacy notices under review and update them when processing changes.
  • Training the AI: algorithms need to be fed data to learn and develop further. While many businesses already hold valuable user and customer data, they must ensure that, before using that data to improve their AI, they can do so in compliance with GDPR. For example, if a business wants to use its existing customer data to train its AI, it needs a legal basis for the processing, such as reliance on its legitimate interests. The business must also ensure that it has informed those affected of this intended use. GDPR obligations also apply to any data received from third parties or public sources, provided it relates to an identifiable individual. While aggregation and de-identification are privacy-protective measures, businesses should carry out robust assessments before concluding that aggregated or ‘anonymised’ data actually falls outside the scope of the GDPR.
  • Future uses of data: as algorithms develop and learn, they will inevitably create opportunities for new and additional uses of data. Before businesses start deploying their technology in this way, they should ensure the data can be used in compliance with GDPR. Key issues to consider are whether:
    • Individuals have been informed of this intended use,
    • Individuals would expect it, and
    • There is a valid legal basis for the processing, for example, whether any previously obtained consent is sufficiently broad to cover the new processing.
  • Fairness and bias: there are inherent risks of bias or discrimination arising from the use of algorithms. To comply with GDPR, steps must be taken to ensure data is not used in a way that is unjustifiably detrimental, discriminatory, misleading or unexpected. This means ensuring that any algorithms relied on are reviewed regularly for bias against certain categories of individuals and that mitigation measures, such as human oversight and intervention, are put in place.
  • Automated decision-making: businesses looking to rely on AI to automate important decisions must consider whether the specific additional rules and restrictions provided for by GDPR for these types of activities apply. GDPR sets a relatively high bar for these restrictions, so they only arise where decisions:
    • Are based solely on automated processing, i.e. there is no meaningful human involvement, and
    • Have a legal or similarly significant effect on the individual.

Decisions affecting an individual’s legal status, contractual rights, financial circumstances, or access to essential services will fall within scope. On the other hand, decisions to target individuals with certain ads, or to tailor a service to their interests, in most cases will not. Where the restrictions apply, the most important is that the decision-making can only take place with the individual’s explicit consent, where it is necessary for entering into or performing a contract with them, or where it is authorised by EU or Member State law. In addition, businesses must implement suitable measures to safeguard the individual’s rights, freedoms and legitimate interests, including, at a minimum, the right to obtain human intervention, to express their point of view, and to contest the decision.

While it may be some time before the new AI laws apply in the EU, businesses need to be fully aware of existing obligations and should take steps to ensure compliance.

For more information, contact a member of our Technology team.

The content of this article is provided for information purposes only and does not constitute legal or other advice.
