
The European Commission has recently advanced its programme for the regulation of artificial intelligence, and healthcare is expected to be one of the core sectors affected by the Commission’s proposals. On 19 February 2020 the Commission published its White Paper on Artificial Intelligence - A European approach to excellence and trust, which sets out its key proposals for AI-specific regulation. We will take a look at how the proposals are expected to have significant impacts on designers, creators, manufacturers and suppliers of AI in the healthcare sector.

What will be regulated?

The proposals target “high risk” AI applications. AI is not specifically defined in the White Paper but is referred to in very broad terms: any product (whether hardware or software) which includes any element of machine/deep learning, neural networks, computer vision and the like will be subject to the proposals. This will no doubt bring many digital health products within scope.

Regulation will apply to an exhaustive and specific list of high risk sectors, and healthcare is on that list. Within the healthcare sector, only high risk AI applications (eg medical devices and diagnostic applications) will be targeted; the proposed rules will not apply to every use of AI in healthcare. For example, a flaw in a hospital’s appointment scheduling system will normally not pose risks significant enough to justify legislative intervention. The assessment of the level of risk of a given use could be based on its impact on the affected parties: for instance, uses of AI applications that produce legal or similarly significant effects for the rights of an individual or a company; that pose a risk of injury, death or significant material or immaterial damage; or that produce effects that cannot reasonably be avoided by individuals or legal entities.

The threshold criteria for what constitutes “high risk” are already causing concern among AI companies, many of which are calling for the threshold to be specified in detail.

How will this AI be regulated?

The detail of the proposals is contained in six requirements with which high risk AI must comply:

  1. Data – use only data sets that are broad and representative, to avoid discrimination
  2. Records – maintain accurate records of training and test data sets, including a copy (a sketch of what this might look like follows this list)
  3. Transparency – provide clear information regarding system capabilities and limitations, and let users know they are interacting with AI and not a human
  4. Robustness and accuracy – outcomes must be reproducible, errors and inconsistencies must be dealt with, and systems must be resilient against attacks
  5. Human intervention – provide for intervention before a negative result takes effect (eg refusing social security benefits), after any result subject to a check, in real time, or by imposing operational constraints at the design phase
  6. Remote biometric identification – surveillance is highly restricted
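
By way of illustration only, the record-keeping requirement (2) might in practice translate into snapshotting each training and test data set before a model is released. The following is a minimal Python sketch, assuming a file-based data set; the file paths, function names and record format are our own illustrative choices, not anything prescribed by the White Paper:

    import hashlib
    import json
    from datetime import datetime, timezone
    from pathlib import Path

    def sha256_of_file(path: Path) -> str:
        """Return the SHA-256 digest of a file, read in chunks."""
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def record_dataset(name: str, files: list[Path], out: Path) -> None:
        """Write an audit record (file names, sizes, hashes, timestamp) for a data set."""
        record = {
            "dataset": name,
            "created_utc": datetime.now(timezone.utc).isoformat(),
            "files": [
                {"path": str(p), "bytes": p.stat().st_size, "sha256": sha256_of_file(p)}
                for p in files
            ],
        }
        out.write_text(json.dumps(record, indent=2))

    # Example: snapshot a (hypothetical) training set before the model ships.
    # record_dataset("xray_train_v1", sorted(Path("data/train").glob("*.png")),
    #                Path("records/xray_train_v1.json"))

A record like this pins down exactly which data a model was trained and tested on. Whether hashes and metadata alone would suffice, or a full copy of the data must be retained as the White Paper suggests, is one of the details the final rules will need to settle.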

How would this work in practice?

The above six requirements could have far-reaching consequences for how AI in the digital health sector is developed in the future.

Consider a machine learning tool which reviews X-rays for lesions in order to report positive or negative cancer diagnoses. Under the proposals, the creators of this technology will need to consider the scope of their training and test data sets and satisfy themselves that those sets are sufficiently broad and cover all relevant scenarios needed to avoid dangerous situations. No doubt many in this space are already considering this, but the proposal will create a pre-market compliance check requiring them to demonstrate it. There may also be an obligation on the creator to keep a copy of the training and test sets, together with documentation on the programming and training methodologies, processes and techniques used to build, test and validate the machine learning tool.
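
To make the “sufficiently broad” point concrete, one simple internal check a creator might run is a coverage report over subgroup tags attached to each image. This is a deliberately simplified Python sketch: the subgroup labels and threshold are hypothetical, and a real representativeness analysis for a diagnostic tool would be considerably more involved:

    from collections import Counter

    def coverage_report(tags, min_share=0.05):
        """Flag subgroups whose share of the data set falls below a minimum threshold."""
        counts = Counter(tags)
        total = sum(counts.values())
        return {
            group: {
                "count": n,
                "share": round(n / total, 3),
                "underrepresented": n / total < min_share,
            }
            for group, n in sorted(counts.items())
        }

    # Hypothetical metadata tags, one per X-ray image in the training set:
    tags = ["adult_female", "adult_male", "adult_male",
            "paediatric", "adult_female", "adult_male"]
    for group, stats in coverage_report(tags, min_share=0.2).items():
        print(group, stats)

Evidence of this kind, kept alongside data set records like the one sketched above, is the sort of material a pre-market conformity check could require creators to produce.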

The proposal that these requirements be checked as part of a pre-market conformity test means that AI designers in this space will need to be educated on these requirements and design them into their product development procedures. This is a significant change which will introduce design constraints and procedural changes to existing processes. The requirement to maintain, and disclose, documentation on the programming and training methodologies, processes and techniques used to build, test and validate the machine learning tool will also give rise to concern for IP/trade secret owners.

All of the above proposals will be in addition to and not in substitution for compliance with the Medical Device Directive and other existing requirements for digital health technologies.

Conclusion

The EU sees AI as a strategic sector – a target for both funding and “values”-focused regulation. There is an existing body of EU law that already regulates AI (the GDPR, product regulation) and now we have new proposed rules. All AI companies need to sit up and take notice of these proposals, because they will apply to AI products and services supplied into the EU regardless of where the producer/supplier is established. Bear in mind that, even after Brexit, the EU consumer population will be approximately 450 million. It is difficult to ignore what the EU is doing here.


The content of this article is provided for information purposes only and does not constitute legal or other advice.


