From health apps to software medical devices to 3D printing, AI technology is on its way to becoming an integral part of our health system. In the UK, the NHS has already partnered with a number of AI companies and has started to test and utilise their products throughout the healthcare system. The European Commission is keen to promote the adoption of AI by the public sector and is of the view that it is essential that public administrations, hospitals, utility and transport services, financial supervisors, and other areas of public interest rapidly begin to deploy products and services that rely on AI. The White Paper on AI, published in February 2020, notes that a specific focus will be on healthcare and transport, where the technology is mature enough for large-scale deployment.

Not all AI technology used in healthcare will be classified as a software medical device and therefore subject to the medical devices framework. Whether a particular AI technology is deemed to be a software medical device will require careful consideration under the Medical Devices Directive and, from May 2021, under the Medical Devices Regulation.
While this regime seeks to reduce the risks posed by software medical devices as much as possible, there will inevitably be occasions when things go wrong and injury is sustained as a result.
The European Commission published its Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics on 19 February 2020, assessing the adequacy of existing liability regimes in EU Member States in the wake of emerging digital technologies. The Commission found that while, in principle, existing EU and national liability laws are able to cope with emerging technologies, the dimension and combined effect of the challenges posed by AI could make it more difficult to offer victims compensation in all cases where this would be justified. As a result, the allocation of costs when damage occurs may be unfair or inefficient under the current rules.

Is AI a product or a service?

Whether a software medical device or not, one of the key considerations in establishing which liability (rather than regulatory) framework is going to be applicable is whether the AI technology used is considered a product or a service. This classification will depend very much on:
  • The precise AI that is used
  • The manner in which it is used
  • Whether it is used with or without hardware
If the AI used is ultimately deemed to be a product, then it will be subject to the product liability framework.

EU Product Liability Directive

The EU Product Liability Directive (PLD) applies to “products”, which are defined in Article 2 as “…all movables…even those incorporated into another movable or into an immovable”. “Movable” is not itself defined in the PLD, which has led to uncertainty over whether the regime can apply to intangible items such as software.
If the PLD is applicable, then the AI developers or manufacturers - and in some circumstances, the suppliers/distributors - will be subject to the strict liability regime. Although fault is not a requirement, consumers (patients) must still prove defect, injury and a causal link.

Determining defect

In determining whether the AI technology is defective, the court will consider the level of safety a consumer is entitled to expect. Complex issues arise in determining what the defect in the AI technology actually is, and identifying the actual failure may be particularly difficult given the range of potential sources of defect, which might include:
  • Fault(s) in the underlying algorithm
  • Corrupted training data
  • Insufficient training data
  • Misuse of the AI machine
  • Hacking of the data or the machine
  • Faulty hardware
  • Clinician error
The Expert Working Group appointed by the European Commission highlighted that the specific characteristics of these technologies and their applications – including complexity, modification through updates or self-learning during operation, limited predictability, and vulnerability to cybersecurity threats – may make it more difficult to offer victims compensation in all cases where this seems justified. To address this, the Group is of the view that certain adjustments need to be made to EU and national liability regimes.

Patient protection and safety

Determining what level of safety a consumer is entitled to expect from AI technologies is going to be very challenging in practice. This is because, in many cases, not all aspects of the technology's operation are fully understood by developers, users or consumers. Furthermore, if the AI is replacing the decision-making of a doctor, is the level of safety one is entitled to expect higher than, or equivalent to, that expected of a doctor? Do we need to look at how a reasonable AI machine operates? Are AI machines allowed to “have a bad day”? These are all issues which will ultimately need to be played out in court.
The statutory defence we are most likely to see relied upon in these claims is the development risks defence. This defence applies where the risk was not reasonably foreseeable at the time the AI technology was developed, and/or where the AI technology was developed in line with the relevant industry standards at the time. This, however, raises the question of whether industry standards can keep pace with technological developments to the extent needed to protect the public.

Negligence

Manufacturers, as well as repairers, installers and suppliers, may be sued in tort for reasonably foreseeable damage caused to those to whom they owe a duty of care. In negligence claims, it is the reasonableness of the conduct of the AI manufacturer that will be under scrutiny, rather than the level of safety a consumer or patient is entitled to expect. However, claims based in negligence are not without their difficulties either. Given the pace of change, trying to determine what might be “reasonable” at a particular point in time may be exceedingly challenging.
In the event of a defect causing injury, the extent to which the manufacturers of the AI technology warned of the risks will also likely be heavily scrutinised. Whether these warnings will ultimately feed into consent processes undertaken by clinicians with patients remains to be seen.
In addition, clinical negligence claims against doctors could arise from their use of an AI product or software medical device for myriad reasons. For example, consider an instance in which an app used by a doctor gave an apparently incorrect dosage calculation. Perhaps the doctor should have noticed the error but failed to address it. Perhaps the doctor forgot to implement a critical software update, or inputted erroneous information into the software which then led to an incorrect calculation. Alternatively, perhaps the doctor was responsible for a clinical error unconnected to the AI or software. In such cases, the doctor's actions may break the chain of causation arising from the AI software failure, meaning that ultimately it was those actions, rather than the AI, that caused the harm.
Furthermore, a patient could also potentially bring a claim against the hospital that authorised the use of the particular AI technology.

Contractual liability

Contractual liability will inevitably arise given the various contractual relationships entered into around the use of AI in healthcare, for example between the AI software manufacturer, the hardware manufacturer and the hospital authorising its supply. Contractual provisions will need very careful consideration to ensure that warranties, liability, indemnity and limitation clauses are appropriately in place.

Conclusion

Manufacturers and users of AI in healthcare need to be aware of the potential liability frameworks to which they could be exposed. Following the publication of the Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics on 19 February 2020, it remains to be seen what changes, if any, will be made to address potential uncertainties in the existing framework. The Commission has suggested that certain adjustments to the Product Liability Directive and to national liability regimes could be considered through appropriate EU initiatives on a targeted, risk-based approach, i.e. one that takes into account that different AI applications pose different risks.

For more information, please contact a member of our Product Regulation & Consumer Law team.

The content of this article is provided for information purposes only and does not constitute legal or other advice.