
Understanding High-Risk AI Systems

The European Parliament published its amendments to the European Commission’s proposed AI Act in June 2023. Brian McElligott, Head of Artificial Intelligence, examines when and how an AI system may be classified as high-risk. He also explores the consequent obligations that fall on AI system providers and users in the light of these amendments.


We provide an overview of ‘high-risk’ AI systems as defined in the European Commission’s proposed AI Act, in the light of the amendments suggested by the European Parliament. We address three key questions:

  1. When is an AI system considered ‘high-risk’?
  2. Can an AI system become high-risk?
  3. What are the consequences of classifying an AI system as high-risk?

When is an AI system ‘high-risk’?

The AI Act identifies three categories of high-risk AI systems:

  1. AI systems covered by EU harmonisation legislation (Annex II – Section A)
  2. AI systems covered by EU harmonisation legislation subject to third-party conformity assessment (Annex II – Section B), and
  3. AI systems listed as involving high-risk uses (Annex III)

The Parliament proposes amendments to category (3), suggesting that AI systems falling within the listed uses should only be considered high-risk if they pose a significant risk to health, safety, or fundamental rights. In addition, AI system providers should have the opportunity to submit a notification to assert that their AI system does not pose a significant risk.

Can AI systems become high-risk?

The analysis of an AI system’s intended purpose plays a crucial role in determining its risk level. The ‘intended purpose’ refers to the specific use for which the AI system is designed and is determined by the provider. Instructions for use, which are legally required for high-risk AI systems, should convey the intended purpose and proper use of the AI system to users. The reasonably foreseeable misuse of an AI system is also relevant in determining its risk classification. Providers of AI systems should conduct an initial risk assessment to determine an AI system’s risk level, considering, at a minimum, the system’s intended purpose and its reasonably foreseeable misuse.

An AI system generally cannot become high-risk unless it has already been categorised as such before being placed on the market or put into service. However, substantial modifications to an AI system can change its risk level. A ‘substantial modification’ is an unforeseen or unplanned change, made after the system’s placing on the market or putting into service, that affects its compliance with the high-risk requirements or alters its intended purpose.

What are the consequences of classifying an AI system as high-risk?

The mandatory AI Act requirements for high-risk AI systems include:

  • A risk management system
  • Data governance and management
  • Technical documentation
  • Automatic record-keeping
  • Transparency and the provision of information to users
  • Human oversight, and
  • Accuracy, robustness and cybersecurity

Providers of high-risk AI systems are responsible for ensuring compliance with these requirements before placing such systems on the market or putting them into service. In addition, they must develop and maintain a quality management system and take corrective action if the AI system does not satisfy the AI Act requirements noted above.

Deployers of high-risk AI systems also have obligations. They have the same obligations as providers if they:

  • Apply their name or trademark to the high-risk AI system
  • Make substantial modifications to an AI system that remains high-risk, or
  • Modify an AI system in such a way that it becomes high-risk

Regarding any AI system, whether high-risk or not, deployers must:

  • Use it in accordance with its instructions for use
  • Implement human oversight
  • Monitor robustness and cybersecurity measures
  • Inform users about the AI system, and
  • Conduct a fundamental rights impact assessment, where the AI system is high-risk

Whether an AI system is high-risk depends on a number of variables, in particular its intended purpose and reasonably foreseeable misuse. Providers of AI systems should ensure that they document these issues in their initial risk assessment before placing an AI system on the market or putting it into service. The outcome of this analysis will determine whether an AI system is high-risk and the consequent obligations on providers, as well as deployers, which flow from it.

If you have any questions about this article or the AI Act, please contact a member of our Artificial Intelligence team.

The content of this article is provided for information purposes only and does not constitute legal or other advice.

People Also Ask

What is a high-risk AI system?

The AI Act identifies three categories of high-risk AI systems:

  1. AI systems covered by EU harmonisation legislation
  2. AI systems covered by EU harmonisation legislation subject to third-party conformity assessment, and
  3. AI systems listed as involving high-risk uses

The European Parliament proposes amendments to category (3), suggesting that AI systems falling within the listed uses should only be considered high-risk if they pose a significant risk to health, safety, or fundamental rights. In addition, AI system providers should have the opportunity to submit a notification to assert that their AI system does not pose a significant risk.

Can AI systems become high-risk?

An AI system generally cannot become high-risk unless it has already been categorised as such before being placed on the market or put into service. However, substantial modifications to an AI system can change its risk level. A ‘substantial modification’ is an unforeseen or unplanned change, made after the system’s placing on the market or putting into service, that affects its compliance with the high-risk requirements or alters its intended purpose.

What are the consequences of classifying an AI system as high-risk?

If an AI system is categorised as high-risk, then it is subject to certain AI Act requirements. In addition, the provider and deployer thereof are subject to certain obligations.

What is an AI system?

The European Parliament proposes to define an ‘AI system’ as “a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions, that influence physical or virtual environments”.


