
Upon taking office late last year, EU Commission President Ursula von der Leyen pledged publicly that in her first 100 days in office she would put forward legislation for a coordinated European approach to the human and ethical implications of artificial intelligence. The High Level Expert Group set up to advise the EU Commission on ethical and trustworthy AI had cautioned against imposing one-size-fits-all rules on low-risk AI applications in particular, on the basis that many are still at an early stage of development. The logic is that if we do not fully understand the scope of application and reach of this AI, how can we regulate the sector? This pro-regulation but cautious approach to the scope of regulation also seems to reflect market views (see the timely recent interview given by Sundar Pichai of Google).

That planned deadline for publication of von der Leyen’s draft legislation will pass in February and a recently leaked draft Commission white paper from mid-December 2019 gives an insight into what we might expect next month. The focus will be on process rather than on achieving specific results.

The White Paper

Some of the most interesting parts of the leaked white paper focus on:

  • The definition of AI (Section C)

  • Possible types of obligations (Section E), and

  • Possible regulatory options (Section F)

The definition of AI

A key issue for any regulatory framework is the scope of the definition of its key object. The draft white paper asserts that AI is best defined by looking at its functions. A functional definition of AI should identify the characteristics that differentiate AI from more general terms, such as software. Software is broadly defined under existing EU legislation as programs in any form, including those incorporated into hardware. With little further explanation, the draft proceeds on the basis that AI could be defined as software, either integrated in hardware or self-standing, which provides the following functions:

  • Simulation of human intelligence processes, such as learning, problem solving, reasoning and self-correction
  • Performing certain specified complex tasks, such as visual perception, speech recognition, decision making and translation with a degree of autonomy, including through self-learning processes, and
  • Involving the acquisition, processing and rational or reasoned analysis of data, typically in large quantities

It is expected that this form of a definition for AI will be sufficiently flexible to accommodate technical progress and future innovations while providing the necessary legal certainty.

Possible types of obligations

The proposed focus here is on process – reducing ex ante risks (those risks that can be forecast), for example, through process requirements, including transparency and accountability requirements that shape the design of AI systems. It also concentrates on establishing liability and possible remedies ex post, ie requirements on redress and remedies, rather than on achieving specific results, such as specifying that AI shall not discriminate.

Cited examples of the ex ante requirements include:

  • Accountability and transparency requirements for developers, as part of an ex post mechanism for enforcement, to disclose the design parameters of the AI system, metadata of datasets used for training, conducted audits etc. (The potential broad scope here might seem unpalatable to IP owners and lawyers alike!)

  • General design principles for developers to reduce the risks of the AI system

  • Requirements for users regarding the quality and diversity of data used to train AI systems

  • Obligations for developers to carry out an assessment of possible risks and to take steps to minimise them, as well as obligations to keep records of these assessments and of the steps taken to mitigate the risks

  • Requirements for human oversight, or a possible human review of automated decisions by AI, as regards non-personal data (complementing the obligations for automated decision making under the GDPR), and

  • Requirements addressing the changes to the product during its lifecycle that could affect safety of the product (eg machine learning and software updates)

Cited examples of the ex post requirements include:

  • Requirements on liability for harm/damage caused by a product or service relying on AI, including the necessary procedural guarantees (possibly differentiating between high-risk and low-risk applications), and
  • Requirements on enforcement and redress for individuals and undertakings, including access to existing alternative online dispute resolution systems.

Possible regulatory options

Given the variety of risks covered, the Commission is looking at the following five regulatory options:

  1. Voluntary labelling – including a legal instrument setting out a voluntary labelling framework for developers and users of AI ie voluntary compliance with requirements for legal and ethical AI results in the right to use the label of “ethical/trustworthy AI”.

  2. Sectorial requirements for public administration and facial recognition – targeted at use of AI by public authorities with the expectation that it could then have an important signalling effect on the private sector.

  3. Mandatory risk-based requirements for high-risk applications – legally binding requirements for developers and users of AI which build on existing EU legislation. A risk-based approach is recommended, ie the requirements would apply only to high-risk applications of AI and would not add new administrative burdens on low-risk applications.

  4. Safety and liability – it would be appropriate to consider targeted amendments to the EU safety and liability legislation (including the General Product Safety Directive, the Machinery Directive, the Radio Equipment Directive and the Product Liability Directive) to address the specific risks of AI.

  5. Governance – a strong system of public oversight which would consist of national authorities that would be entrusted with the implementation and enforcement of the future regulatory framework.


How much of the draft white paper's recommendations will make it into the anticipated draft legislation is a matter for speculation. It is possible the paper has moved on substantially since that December draft.

It is reasonable to expect that the draft AI legislation will contain a definition for AI and that alone will make for interesting reading as it will set the scope for application of this and future legislation. It is also interesting to note the number of high profile articles in the press leading up to the publication of this legislation. It’s safe to assume that this legislation could be as heavily lobbied as the EU Copyright Directive and given its potential reach, not without reason.

For more information on this topic and how the adoption of AI could impact your business, contact a member of our Intellectual Property or Technology team.

The content of this article is provided for information purposes only and does not constitute legal or other advice.
