AI Liability
From the AI Liability Directive to the current framework

The EU has dropped its proposed AI Liability Directive, raising major questions for providers, developers and deployers of AI. Our Artificial Intelligence team explores why it was shelved, what this means for liability in the AI sector and what might happen next.
After much contention and lengthy delays, the European Commission has put the AI Liability Directive (AILD) on the chopping block.
On 11 February 2025, the Commission announced in its 2025 work programme that it would withdraw the AILD, citing “no foreseeable agreement” and stating that it will “assess whether another proposal should be tabled or another type of approach should be chosen”.
In this article, we outline the story of the AILD: how it started, how we got here, and where we go from here.
Background
The AILD was proposed in September 2022. While certain existing legislation covers some of the risks of AI (such as the GDPR for the use of data, and the Product Liability Directive), there was a concern that those laws did not adequately address liability for AI-specific issues. A dedicated AI Liability Directive was designed to address those gaps and to provide clear guidance within a single framework.
In particular, it was designed to revise ‘traditional’ fault-based liability rules by harmonising certain procedural aspects of AI liability across EU Member States. The goal was to make it easier for claimants to bring claims by harmonising aspects of the notoriously fragmented Member State rules. The proposal sought to do this by, for example, importing into national fault-based rules the same terms and definitions used in the AI Act, together with tools similar to those in the Product Liability Directive (PLD), such as disclosure of evidence and rebuttable presumptions.
Key features of the AILD
- Scope: The scope was broad, allowing claims to be brought not only against the manufacturer but against any person whose fault influenced the AI system that caused the damage.
- Damage: The AILD applied to any type of damage covered under national law, including damage resulting from discrimination or a breach of fundamental rights such as privacy, which could in some cases be even broader than the expanded concept of damage under the revised PLD.
- Disclosure of evidence: Courts would have had the power, at the request of a (potential) claimant, to order providers of AI systems to disclose or preserve relevant evidence at their disposal about a specific AI system suspected of having caused damage.
- Rebuttable presumptions: The AILD included a presumption of fault where the defendant failed to comply with a court order to disclose or preserve evidence. There was also a presumption of a causal link between the fault of the defendant and the output produced by the AI system where certain conditions were met.
Criticism
The AILD as proposed proved highly contentious. Those against the proposal argued that existing frameworks were sufficient; in particular, that the vast range of ex ante EU law already protects and ensures the safety of AI on the EU market. Critics were also concerned that adding another piece of legislation to the mix would create additional complexity and amount to regulatory overreach.
An assessment by the European Parliament’s research unit, published in September 2024, recommended a different approach. It suggested replacing the PLD and the AILD with a revised product liability regulation and an AI liability regulation, reframed as a software regulation. The assessment formed part of a wider, long-running narrative around the lack of clarity on the added value of the AILD and its overlap with the PLD.
Why was the AILD dropped?
The delay in progress can be attributed to a tacit agreement between co-legislators to put the AILD on hold while they worked on the AI Act. The intent was to ensure full consistency with the AI Act and product safety legislation.
However, the decision to drop the AILD was, ultimately, sudden and mostly political. The move followed pressure from EU digital chief Henna Virkkunen and formed part of a more general impetus to reduce and simplify the AI regulatory framework. When work resumed on the AILD once the AI Act text was finalised, legislators faced pushback: a French-led coalition of countries questioned whether the AILD was needed and viewed it as overly complex, while internal opposition within the European institutions created significant hurdles to progress. Ultimately, the Commission’s motivation appears to have been largely economic competitiveness, particularly in light of the Draghi report and the ambition to keep the EU competitive. Its wider drive to simplify technology laws no doubt also played into the decision.
Current status
The International Association of Privacy Professionals (IAPP) recently held its AI Governance Global Europe 2025 conference in Dublin. Kai Zenner, Head of Office and Digital Policy Adviser to Axel Voss MEP, led the session on AI liability. He is an advocate for the AILD and wants to see it back on the EU legislative books. While he is confident this is possible from a technical legislative perspective, he is less confident given the current political climate and the Commission’s low appetite to push for the law.
Current framework
In the absence of AI-specific liability rules, certain challenges, especially for claimants, are likely to remain in the AI sector. In particular, claimants must rely on the existing national frameworks for non-contractual liability, which vary across the EU. For example:
- Duty of care: In the context of AI, where there are various actors, it may be difficult to establish which party owes another a duty of care.
- Agentic AI and autonomy: It may be difficult to establish at what point a user is at fault versus the provider/deployer.
- Causation: Given the long and complex value chain, particularly in the case of multi-agent systems, it may be difficult to establish causation.
- Open source: The liability of open-source providers is a complex and open point.
- Unpredictability: The unpredictability of AI means that it may be difficult to establish that the harm was reasonably foreseeable.
- Harms: It may be difficult to measure and substantiate that the harms derive specifically from the AI.
In the absence of a single legal framework for establishing fault-based claims, market participants may need to rely on cyber insurance. In addition, liability is likely to continue to be pushed downstream, prejudicing smaller companies. Unless and until there is an about-turn in policy, market participants will need to consider carefully their potential liability and how it can be managed effectively in contracts with service providers and end users.
What happens next?
Dropping the AILD will broadly be welcomed as a positive for AI providers. Without it, claimants may find it more challenging to bring actions in damages against AI providers because they will not, for example, be able to rely on the pro-claimant presumptions. A potential downside is that the AILD was supposed to bring some uniformity to the process for making fault-based claims in damages against AI providers in the EU. The patchwork of Member State laws is notoriously challenging to navigate. That continued lack of uniformity may also affect the defence side, but on balance the shelving of the AILD is better for providers.
A softening of EU regulation / enforcement?
Finally, with one fewer AI law on the table, is this a sign of the EU evolving its position on regulation and enforcement? It is difficult to make any definitive call at this early stage. It is possible to read that narrative into Ursula von der Leyen’s speech at the AI Action Summit, where she said: “And safety is in the interest of business. At the same time, I know, we have to make it easier, we have to cut red tape. And we will.”
Only time will tell though.
Key takeaway
Given that no legislative development is imminent in this area, market participants should carefully consider their potential liability, whether in the context of their own use of AI or the sale and distribution of AI, and how they can manage and mitigate risk through their contracts with suppliers, service providers, customers and end users. For providers of AI models and systems, clear transparency will be key. For deployers, monitoring the deployment of AI will be essential.
For more information, contact a member of our Artificial Intelligence team.
The content of this article is provided for information purposes only and does not constitute legal or other advice.
