
As AI adoption continues to accelerate across the EU, civil liability rules relating to damage traceable back to AI systems remain underdeveloped and unclear. A proposal for an EU Artificial Intelligence Liability Directive (the Proposal) therefore aims to harmonise certain aspects of Member States' fault-based civil liability rules as they apply to AI. In this short overview, we explain how the Proposal fits with other existing and planned EU legislation aimed at regulating AI and highlight its most notable features.

Background

As stated in the text of the Proposal, current national liability rules are ill-equipped to handle cases involving AI-enabled products and services. Hallmark characteristics of AI systems, such as opacity, complexity and autonomy, can make it particularly difficult and expensive for claimants to establish whom to sue and then prove that the person sued is to blame for the damage suffered. In response, national courts in EU Member States may need to adapt how they apply existing civil liability rules in order to achieve a just result in certain cases involving AI. Several EU Member States are already pursuing their own AI civil liability strategies. Without EU-level legislation there is a risk of fragmentation, with different rules and procedures applying to AI cases in different Member States. This risks growing legal uncertainty for businesses, which could in turn lead to increased costs, especially for SMEs trading across borders with limited access to in-house legal and technical expertise.

A developing EU AI framework

The Proposal forms part of a wider suite of EU legislation designed to regulate AI:

  • The proposal for an AI Act: a horizontal piece of safety legislation that would apply to AI systems. The AI Act sets out various safety requirements for AI systems using a risk-based classification system. The majority of these requirements, such as those relating to transparency, oversight, cybersecurity and data governance, are addressed to so-called ‘high-risk systems’, eg AI systems that are intended to be used as a safety component of a product, or that are themselves products, regulated under EU legislation covering a list of specific sectors including toys, radio equipment and medical devices. It would also restrict the use of AI for certain purposes, such as biometric scanning and social scoring.
  • The proposal for a revised Product Liability Directive (PLD): updated legislation designed to modernise the existing no-fault strict liability regime for products. The revised PLD would expand the definition of ‘product’ to include software and extend the criteria for assessing ‘defectiveness’ to include a product’s ability to learn after deployment. This proposed legislation would also introduce rebuttable presumptions where a claimant faces ‘excessive difficulties’ in proving defectiveness and/or causation of damage, with the need to explain the inner workings of an AI system cited as a specific example of where such presumptions may be triggered.

The Proposal complements both of these pieces of legislation by providing targeted and proportionate measures designed to ease a claimant’s burden of proof in fault-based claims involving AI systems (as opposed to the no-fault claims covered by the PLD).

Key features of the Proposal

The Proposal does not seek to alter well-established concepts that form part of existing national civil liability systems, such as ‘fault’ or ‘damage’. Instead, it seeks to address the burden-of-proof issue in a way that interferes as little as possible with the different national liability regimes. It proposes to do this using two legal tools:

  • Access to evidence: those seeking compensation would have an opportunity to obtain information on high-risk systems that must be recorded and documented under the AI Act. This measure would be open to ‘potential claimants’, who could request a court to order the disclosure of relevant evidence in advance of submitting a claim for damages. Such requests would need to be “supported by facts and evidence sufficient to establish the plausibility of the contemplated claim for damages”, and the requested evidence would need to be at the addressee’s disposal. The recitals to the Proposal make clear that such disclosure orders should be proportionate, so that only the parts of records needed to prove non-compliance are disclosable, and that the legitimate interests of all parties, as they relate to trade secrets or confidential information, should remain protected.
  • Rebuttable presumption of causation: the Proposal also makes provision for a presumption of a causal link in the case of fault, which applies where a number of criteria are satisfied. Firstly, the claimant needs to demonstrate a fault on the part of the defendant, ie non-compliance with a duty of care laid down in EU or national law; in the case of high-risk systems, non-compliance with the requirements of the AI Act would constitute such a fault. Secondly, the claimant needs to show that it was ‘reasonably likely’ that the fault influenced the output produced by the AI system, or the system’s failure to produce an output. Thirdly, the claimant still needs to demonstrate that the output, or the lack of an output, caused the damage complained of. The presumption distinguishes between claims brought against providers and users of high-risk systems, and a defendant can prevent the presumption from applying in cases involving high-risk systems by demonstrating that the evidence and expertise the claimant needs to prove a causal link are already available.

Conclusion

The Proposal will now undergo review by the European Parliament and the Council as part of the EU Ordinary Legislative Procedure. During that process, stakeholders should assess how the Proposal, including any revisions or amendments incorporated into it, would affect their product portfolios were it to become law, and what steps they would need to take as a result. Once adopted, the AI Liability Directive will also need to be transposed into national law. Companies and individuals that stand to be affected should prepare to contribute to any public consultations established as part of that process.

For more information, please contact a member of our Product Regulatory and Liability team.


