Artificial Intelligence Update: Liability Arising from the Use of AI in Healthcare

21 May 2019

From health apps to software medical devices to 3D printing, AI technology is currently on its way to becoming an integral part of our health system. In the UK, the NHS has already partnered with a number of AI companies and has started to test and utilise their products throughout the healthcare system.

Not all AI technology used in healthcare will be classified as a software medical device and therefore subject to the medical devices framework. Whether a particular AI technology is deemed to be a software medical device will require careful consideration under the Medical Devices Directive[1] and, from May 2020, under the Medical Devices Regulation[2]. While this regime seeks to reduce the risks posed by software medical devices as much as possible, there will inevitably be occasions when things go wrong and injury is sustained as a result.

Is AI a product or a service?

Whether a software medical device or not, one of the key considerations in establishing which liability (rather than regulatory) framework is going to be applicable is whether the AI technology used is considered a product or a service. This classification will depend very much on:

  • The precise AI that is used
  • The manner in which it is used, and
  • Whether it is used with or without hardware

If the AI used is ultimately deemed to be a product, then it will be subject to the product liability framework.

EU Product Liability Directive

The EU Product Liability Directive[3] (PLD) relates to “products”, which are defined in Article 2 as “…all movables…even though incorporated into another movable or into an immovable.” “Movable” is not defined in the PLD, which has led to uncertainty around whether it can apply to intangible things such as software[4].

If the PLD is applicable, then the AI developers or manufacturers - and in some circumstances, the suppliers/distributors - will be subject to the strict liability regime. Although fault is not a requirement, consumers (patients) must still prove defect, injury and a causal link.

Determining defect

In determining whether the AI technology is defective, the court will consider the level of safety a consumer is entitled to expect. Complex issues arise when determining what the defect is in the AI technology, and identifying the actual failure may be particularly difficult given the range of potential sources of defect. These might include:

  • Fault(s) in the underlying algorithm
  • Corrupted training data
  • Insufficient training data for the algorithm
  • Misuse of the AI machine
  • Hacking of the data or the machine
  • Faulty hardware
  • Clinician error

Patient protection and safety

Determining what level of safety a consumer is entitled to expect from AI technologies is going to be very challenging in reality. This is because, in many cases, not all aspects of the technology's operation are fully understood by developers, users or consumers. Further, if the AI is replacing the decision making of a doctor, is the level of safety one is entitled to expect higher than, or equivalent to, that expected of a doctor? Do we need to look at how a reasonable AI machine operates? Are AI machines allowed to “have a bad day”? These are all issues which will ultimately need to be played out in court.

The statutory defence which we are most likely to see relied upon in these claims is the development risks defence. This defence applies where the risk was not reasonably foreseeable at the time the AI technology was developed, and/or where the AI technology was developed in line with the relevant industry standards at that time. This, however, raises the question of whether industry standards can keep pace with technological developments to the extent needed to protect the public.

Negligence

Manufacturers, as well as repairers, installers, suppliers and others, may be sued in tort for reasonably foreseeable damage caused to those to whom they owe a duty of care. In negligence claims, it is the reasonableness of the conduct of the AI manufacturer that will be under scrutiny, rather than the level of safety a consumer/patient is entitled to expect. However, claims based in negligence are not without their difficulties either. Given the pace of change, trying to determine what might be “reasonable” at a particular point in time may be exceedingly challenging.

In the event of a defect causing injury, the extent to which the manufacturers of the AI technology warned of the risks will likely also be heavily scrutinised. Whether these warnings will ultimately feed into consent processes undertaken by clinicians with patients remains to be seen.

In addition, clinical negligence claims against doctors could arise in the context of their use of an AI product or software medical device for myriad reasons. For example, consider an instance in which an app employed by a doctor gave an apparently incorrect dosage calculation. Perhaps the doctor should have noticed the error but failed to address it. Perhaps the doctor forgot to implement a critical software update, or inputted erroneous information into the software which then led to an incorrect calculation. Alternatively, perhaps the doctor was responsible for a clinical error unconnected to the AI or software. This may have broken the chain of causation arising from the AI software failure, meaning that it was ultimately the doctor's actions, rather than the AI, that caused the harm.

Furthermore, a patient could also potentially bring a claim against the hospital if it authorised the use of the particular AI technology.

Contract

Contractual liability will inevitably arise given the various contractual relationships entered into around the use of AI in healthcare, for example between the AI software manufacturer, the hardware manufacturer and the hospital authorising its supply. Contractual provisions will need very careful consideration to ensure that warranty, liability, indemnity and limitation clauses are appropriately in place.

Conclusion

Manufacturers and users of AI in healthcare need to be aware of the potential liability frameworks they could be exposed to. Whether the existing product liability framework is fit for purpose in its application to AI remains very much under review.

For more information on the application of AI in the healthcare sector, contact a member of our Life Sciences or Technology teams.


The content of this article is provided for information purposes only and does not constitute legal or other advice.  


[1] Directive 93/42/EEC

[2] Regulation EU 2017/745

[3] Directive 85/374/EEC

[4] This is an issue the European Commission’s Expert Group on Liability and New Technologies is currently considering, and it is hoped that clarity in the form of guidelines will be produced in due course.

