
Contracting for AI Used in the Digital Health Industry

There are many sources of risk and liability when using AI. These include intellectual property rights, the GDPR, the impending AI Regulation, sector-specific regulatory requirements and even equality legislation. Contracting for AI, however, is about voluntary risk allocation, and the approach taken to this can vary widely. As contracting for AI is at a relatively early stage, there is a degree of uncertainty about how to approach it. This is particularly the case when it comes to dealing with liability issues.

Put simply, AI is software performing tasks typically associated with intelligence and acting with a degree of autonomy. There is, therefore, always an element of unpredictability in the outcome and output. Dealing with this unpredictability, and understanding how to minimise it, is a key issue when considering liability allocation in a contract for the provision of AI-driven services.

Some suppliers may take the position that, given the potential unpredictability of outcomes, they will not assume any liability arising from the AI. For the vast majority of customers this will simply not be good enough and will not be a basis on which they can contract.

It is therefore in both the supplier’s and the customer’s interests to take a considered, informed and reasonable approach to liability allocation in an AI contract. The question then is: how do you do this?

Pre-contractual due diligence

Customers should undertake due diligence on the digital health technology itself, in addition to their normal pre-contract due diligence on the supplier. The customer’s aim is to have a clear understanding of what the AI system can deliver. In our experience, this is an iterative process.

The supplier should have a set of standard documents setting out the specifics of the AI offering. This material can sometimes be hard for a non-expert to understand. The customer therefore needs to engage with the supplier in order to get down to the specifics of what the AI can deliver and how it achieves those deliverables.

Transparency and accountability

Transparency in this complex area of digital health technology is imperative for both parties. There can be a tendency to shy away from the technical complexity, but the only way the customer will be able to establish which elements of the system are within the supplier’s control, and which are not, will be to open the “black box” and get a clear view of how the system works. The elements which are in the supplier’s control should be set out in the contract as matters for which the supplier is responsible.

This transparency issue is of particular importance when it comes to responsibility for, and therefore liability for, outputs. How are outputs arrived at and what element of that is in the supplier’s control? Suppliers may seek to avoid including any commitments on the outputs the AI will deliver. However, a customer who has a clear view of how those outputs are arrived at will be able to address such an approach in an informed manner. A well-advised supplier should be able to explain and demonstrate the design and process used by the AI from the outset. This will aid customer engagement and help to build trust in the system.

Once transparency is achieved, the next linked step is accountability. The parties should consider:

  • Does the contract clearly specify the purpose of the AI?
  • Is there an understandable specification and commitment on what will be delivered, including any key performance criteria to be applied?
  • Does the contract clearly set out what is within the supplier’s control and therefore what the supplier is responsible for?
  • Are there any customer dependencies and are these specified?
  • Where have the datasets used to train the AI come from, who is responsible for them and what happens if the datasets are the root of a problem later?
  • If the product is a medical device, has it undergone the requisite conformity assessment and is it validly CE marked?
  • What intellectual property has gone into the development of the system, and is the supplier standing over its right to use that intellectual property, usually in the form of an indemnity?
  • Could any intellectual property be developed through the use of the AI? If so, who owns or has a licence to use that?
  • How does the AI adapt and learn and how could this lead to a variance in outputs?
  • What upgrades, updates and maintenance will the supplier be responsible for during the term of the contract? Could a failure to properly meet these obligations result in adverse outcomes?
  • What third parties are involved in the process, and who is responsible if something goes wrong due to those third parties?

This granular approach will often be the key to understanding responsibilities and therefore allocating liability.

Testing, reporting and audit

Testing of the AI at the start of the contract, and on an ongoing basis, will be critical. Such ongoing testing will serve to catch problems early and thereby avoid a systemic issue developing. It will also give the customer visibility and will enable the customer to hold the supplier accountable, where necessary.

Customers should ensure that suppliers are required to maintain records on the operation and testing of the AI and to make these records available on request. These reporting obligations should be backed up by audit rights for the customer. The aim of reporting and audit is to allow the customer to manage its own risk arising from the AI by being informed on a consistent basis, while also providing an accountable audit trail should an issue arise.

When dealing with AI systems deployed in, or as part of, devices regulated under the Medical Devices Regulation (EU) 2017/745 or the In Vitro Diagnostic Medical Devices Regulation (EU) 2017/746, the stringent supplier controls required under the quality management system provisions of those regulations must also be accounted for. Even where a general health or wellness product uses AI tools but is not regulated as a medical device or an in vitro diagnostic device, requirements around record-keeping and audit should still be carefully considered.

Your liability clause

As with any IT contract, suppliers will seek to limit their liability via a limitation and exclusion of liability clause. This will be no different for AI contracts. The normal rules will apply, namely that the strength of the parties’ respective negotiating positions will be the key determinative factor in the drafting of any liability clause. The issue for customers is to ensure that the supplier is not seeking to exclude liability for matters which are actually in the supplier’s control, either in whole or in part, and which the supplier should therefore be held accountable for.

The unique aspect of contracting for AI is dealing with liability arising from outcomes which are truly only in the control of the AI itself. However, in our experience, when the customer undertakes proper due diligence and achieves transparency this, in many cases, turns out not to be a significant issue.

For more information, please contact a member of our Life Sciences team.

The content of this article is provided for information purposes only and does not constitute legal or other advice.


