
Step 4: Identify whether you are engaging with high-risk AI

05 August 2024

High-risk AI systems are dealt with in Chapter III of the AI Act, and Annex III of the AI Act lists the areas in which the deployment of AI constitutes a high risk. At a high level, these areas are as follows:

  • Biometric identification and categorisation of natural persons.
  • Management and operation of critical infrastructure.
  • Education and vocational training.
  • Employment, workers management and access to self-employment.
  • Access to and enjoyment of essential private services and public services and benefits.
  • Law enforcement.
  • Migration, asylum, and border control management.
  • Administration of justice and democratic processes.

The list of deemed high-risk applications in Annex III of the AI Act is not exhaustive: an AI system is also classified as high-risk if it falls within the meaning of Article 6(1) of the AI Act. That Article sets out two cumulative criteria for determining high-risk applications:

1. The AI system is intended to be used as a safety component of a product, or the AI system is itself a product, covered by the Union harmonisation legislation listed in Annex I.

and

2. The product whose safety component pursuant to point (a) is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment, with a view to the placing on the market or putting into service of that product, pursuant to the Union harmonisation legislation listed in Annex I.

The test is cumulative: an AI system must satisfy both criteria in order to be deemed high-risk. This potentially narrows the scope of the concept, since the two criteria are interrelated. It should be noted, however, that the list of Union harmonisation legislation in Annex I of the AI Act is lengthy.
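By way of illustration only, the cumulative test reduces to a simple boolean AND. The following minimal Python sketch uses hypothetical type and field names; none of them are terms drawn from the Act.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    # Illustrative fields; the names are assumptions, not terms from the Act.
    safety_component_or_product_under_annex_i: bool   # Article 6(1), first criterion
    requires_third_party_conformity_assessment: bool  # Article 6(1), second criterion

def is_high_risk_under_article_6_1(system: AISystem) -> bool:
    # Cumulative test: both criteria must be satisfied.
    return (system.safety_component_or_product_under_annex_i
            and system.requires_third_party_conformity_assessment)
```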

Exemption: no significant risk of harm

There is an exemption to the Annex III high-risk rule for AI which does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision-making.

Importantly, only one of the following conditions needs to apply for the exemption from high-risk AI obligations (the decision logic is sketched in code after the list):

  • The AI system is intended to perform a narrow procedural task;
  • The AI system is intended to improve the result of a previously completed human activity;
  • The AI system is intended to detect decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment, without proper human review; or
  • The AI system is intended to perform a preparatory task to an assessment relevant for the purposes of the use cases listed in Annex III.

Importantly, an AI system will always be high-risk if it engages in profiling of natural persons, regardless of whether any of the above conditions applies.
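As a minimal sketch (assuming hypothetical parameter names that are not drawn from the Act), the exemption combines an any-of test over the four conditions with an overriding profiling check:

```python
def annex_iii_exemption_applies(
    performs_profiling: bool,
    narrow_procedural_task: bool,
    improves_completed_human_activity: bool,
    detects_patterns_without_replacing_human_review: bool,
    preparatory_task_only: bool,
) -> bool:
    # Profiling of natural persons always keeps the system high-risk,
    # so the exemption can never apply.
    if performs_profiling:
        return False
    # Otherwise, only one of the four conditions needs to apply.
    return any([
        narrow_procedural_task,
        improves_completed_human_activity,
        detects_patterns_without_replacing_human_review,
        preparatory_task_only,
    ])
```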

Key contact

Jeanne Kelly

Partner

jeanne.kelly@brownejacobson.com

+353 (85) 846 3955
