
EU Artificial Intelligence (AI) Act: Guide for your business

05 August 2024

On 12 July 2024, the EU AI Act (AI Act) was published in the Official Journal of the European Union.

This guide sets out the key issues that you need to think about to assess how the AI Act may apply to you and your business.

EU Artificial Intelligence Act summary

The AI Act is aimed at regulating the sale and use of AI within the EU. It is a long-anticipated and complex regulation, which comes into force in a phased manner.

The AI Act is a proportionate risk-based regulation and not all obligations apply in all circumstances.

Timeline of enforcement

1 August 2024: The AI Act formally enters into force, with a phased commencement period.

2 February 2025: The rules on Prohibited AI (set out below) commence. These systems include those which use deceptive or subliminal techniques to alter human behaviour.

2 February 2025: The rules on AI Literacy commence. AI literacy involves ensuring key stakeholders such as employees are trained to use AI where appropriate.

2 May 2025: The EU AI Office is to have published codes of practice, which are designed to assist providers of AI systems in ensuring compliance before their upcoming deadlines.

2 August 2025: The rules on General Purpose AI commence, together with the AI Act's provisions on penalties for non-compliance.

2 February 2026: The EU Commission must issue guidance related to high-risk AI systems.

2 August 2026: Commencement of certain rules on high-risk AI systems.

2 August 2027: Commencement of the remaining rules on high-risk AI systems.

Pillars of the EU AI Act

AI and a risk-based approach

  • The higher the risk of a breach of the EU Charter of Fundamental Rights, the higher the regulatory burden the AI Act imposes.
  • The AI Act tailors the form and intensity of its rules to the risks that AI systems can generate.
  • The spectrum from unacceptable risk to minimal or no risk is as follows:
    • Prohibited AI.
    • High-Risk AI.
    • General Purpose AI.
    • Limited Risk AI.
  • The key focus of the AI Act is on Prohibited and High-Risk AI. While the regulation takes a proportionate approach overall, it casts the “high-risk” net quite broadly, which could capture a significant number of commercial use cases.
  • The four classifications above are not mutually exclusive: elements of the same AI system could be classified as both “Limited Risk” and “High-Risk”, for example where parts of that system are opaque to the outside world. In that situation a “Limited Risk” system might be subject to additional transparency obligations. This adds a layer of complexity to the classification of these systems.

Principles of AI

The AI Act adopts several principles which should frame how its more granular obligations are implemented:

  • Human agency and oversight.
  • Technical robustness and safety.
  • Privacy and data governance.
  • Transparency.
  • Diversity, non-discrimination and fairness.
  • Societal and environmental well-being.
  • Accountability.

Extraterritorial effect

  • The AI Act, like the GDPR, provides for a now familiar extraterritorial effect.
  • Those who have no establishment in the EU may be subject to the AI Act based on the internal market principle. In other words:
    • Where providers (whether inside or outside the EU) place AI systems on the market or put them into service in the EU, those providers fall within the jurisdiction of the AI Act.
    • Similarly, where the outputs of AI systems are used in the EU market, the use of those outputs can trigger obligations under the AI Act.

Key contacts

Jeanne Kelly

Partner

jeanne.kelly@brownejacobson.com

+353 (85) 846 3955

