The European Commission’s Report on AI (the Report) aims to identify and examine the broader implications for, and potential gaps in, the liability and safety frameworks for AI, the IoT and robotics. The recommendations in the Report are highly relevant for the digital health sector when one considers the upward trend in the use of telemedicine, health apps for the diagnosis and monitoring of illnesses, and robotic surgery. The liability section of the Report builds on the evaluation of the Product Liability Directive, the input of the relevant expert groups and contacts with stakeholders. The Expert Group on Liability and New Technologies (whose findings are discussed in the following article) was created to provide the Commission with expertise on the applicability of the Product Liability Directive and national civil liability rules, and with assistance in developing guiding principles for possible adaptations of applicable laws relating to new technologies.

Below are some of the key recommendations discussed in the Report:

  1. Connectivity is a core feature of AI devices including, in particular, consumer and healthcare devices, apps and software. The concept of safety in the current EU product safety legislation is an extended one, aimed at protecting consumers and users. It therefore encompasses protection against all kinds of risks arising from a product, including not only mechanical, chemical and electrical risks, but also cyber risks and risks related to the loss of connectivity of devices. However, the Commission notes that explicit provisions to this effect could be considered within the scope of sector-specific EU legislation in order to provide better protection for users and more legal certainty.
  2. The Commission is of the view that unintended outcomes of the autonomous AI features of products could cause harm to users and exposed persons. Besides the risk assessment performed before a product is placed on the market, a new risk assessment procedure may need to be put in place where the product is subject to important changes during its lifetime, e.g. a different product function not foreseen by the manufacturer in the initial risk assessment. This could certainly affect sophisticated healthcare devices with embedded AI, and possibly other consumer products as well. The new risk assessment should focus on the safety impact of the product’s autonomous behaviour throughout its lifetime and should be performed by the appropriate economic operator. In addition, the relevant sector-specific EU legislation could include reinforced requirements for manufacturers on instructions and warnings for users.
  3. The Commission notes that existing EU product safety legislation does not explicitly address human oversight in the context of self-learning AI products and systems.
  4. Autonomous AI features in products also raise questions about human control. The relevant EU legislation may therefore foresee specific requirements for human oversight, as a safeguard, from product design and throughout the lifecycle of AI products and systems.
  5. The Commission has concerns about the future “behaviour” of AI applications, particularly in respect of mental health risks in the healthcare sector. Obligations for producers of, among others, AI humanoid robots to explicitly consider the immaterial harm their products could cause to users, in particular vulnerable users such as elderly persons in care environments, could be considered within the scope of relevant EU legislation.
  6. Data dependency and data quality are also concerns for the Commission. The question arises whether EU product safety legislation should contain specific requirements addressing the safety risks posed by faulty data at the design stage, as well as mechanisms to ensure that the quality of data is maintained throughout the use of AI products and systems. Such a broad-ranging change to the law would arguably affect all types of AI products, both hardware and software, that rely on machine learning and reinforcement learning techniques.
  7. Existing product safety legislation does not explicitly address the increasing risks derived from the opacity of systems based on algorithms. It is therefore necessary to consider requirements for the transparency of algorithms, as well as for robustness, accountability and, where relevant, human oversight and unbiased outcomes. These are particularly important for enforcement and for building trust in the use of these technologies. The Commission suggests that one way of tackling this challenge would be to impose obligations on the developers of algorithms to disclose the design parameters and metadata of datasets in the event of an accident. The scope of such an obligation would no doubt be a concern for proprietors of valuable IP and trade secrets in the healthcare, advertising and consumer products sectors.
  8. While existing product safety legislation takes into account the safety risks stemming from software integrated in a product at the time it is placed on the market and, potentially, subsequent software updates foreseen by the manufacturer, specific and/or explicit requirements on stand-alone software could be needed, e.g. for an ‘app’ that is downloaded separately. Particular consideration should be given to stand-alone software that ensures safety functions in AI products and systems. Additional obligations may be needed for manufacturers to ensure that they provide features to prevent the upload of software that has an impact on safety during the lifetime of AI products. Such changes to the law would no doubt increase the cost and post-production monitoring burden for producers of healthcare devices and consumer products with such AI capability and functionality.
  9. Although the Product Liability Directive’s definition of product is broad, its scope could be further clarified to better reflect the complexity of emerging technologies and ensure that compensation is always available for damage caused by products that are defective because of software or other digital features. This would better enable economic actors, such as software developers, to assess whether they could be considered producers according to the Product Liability Directive. Such a change could have a significant impact on producers whose products do not currently fall within the Product Liability Directive regime.
  10. The Commission is seeking views on whether, and to what extent, it may be necessary to mitigate the consequences of complexity by alleviating or reversing the burden of proof required by national liability rules for damage caused by the operation of AI applications, through an appropriate EU initiative.
  11. Closely related to the product safety issues of opacity and autonomy raised above, the notion of ‘putting into circulation’ currently used by the Product Liability Directive could be revisited to take into account that products may change and be altered over their lifetime. This could also help to clarify who is liable for any changes made to a product. From the perspective of certainty, this could be a welcome change for producers of products in the healthcare, consumer and advertising sectors. However, the benefits of that certainty could be significantly diminished by the increased burdens imposed on such producers.
  12. For the operation of AI applications with a specific risk profile, e.g. fully autonomous vehicles, drones and package delivery robots, or AI-based services with similar risks, such as traffic management services guiding or controlling vehicles or the management of power distribution, the Commission is seeking views on whether, and to what extent, strict liability, as it exists in national laws for similar risks to which the public is exposed, may be needed in order to achieve effective compensation of possible victims. The Commission is also seeking views on coupling strict liability with a possible obligation to take out available insurance, following the example of the Motor Insurance Directive, in order to ensure compensation irrespective of the liable person’s solvency and to help reduce the costs of damage.
  13. For the operation of all other AI applications, which would constitute the large majority of AI applications, the Commission is reflecting on whether the burden of proof concerning causation and fault needs to be adapted. In this respect, one of the issues flagged by the report of the Expert Group on Liability and New Technologies is the situation where the potentially liable party has not logged the data relevant for assessing liability, or is not willing to share such data with the victim.

Conclusion

In summary, the Commission has found that while the existing laws of Member States are, in principle, able to cope with emerging technologies, they do not specifically address new digital technologies such as AI, the IoT and robotics. These technologies raise new challenges in terms of product safety and liability, including connectivity, autonomy, data dependency, opacity, complexity of products and systems, software updates, and more complex safety management and value chains. The Commission is now clearly considering certain adjustments to the Product Liability Directive, the General Product Safety Directive and national liability regimes through appropriate EU initiatives.

For more information, please contact a member of our Product Regulation & Consumer Law team.


The content of this article is provided for information purposes only and does not constitute legal or other advice.
