The European Union Artificial Intelligence Act (the Act) is the world’s first comprehensive legal framework for artificial intelligence (AI). But what does the Act mean for insurers in the UK?
What does the Act do?
Risk based approach
The Act introduces an AI classification system that determines the level of risk an AI solution could present to individuals. The four risk classifications are:
- Unacceptable risk – Applications of AI that are banned within the European Union (EU), for example social scoring and monitoring of people, and AI which manipulates human behaviour or exploits people’s vulnerabilities.
- High risk – Examples include AI that controls access to financial services, critical infrastructure or employment, and AI systems that profile individuals, for example by processing personal data to assess aspects of a person’s life such as health, economic situation, interests or behaviour. High risk AI systems are subject to strict conformity assessment and monitoring.
- Limited risk – Examples include chatbots. These are subject to specific transparency obligations: for example, users should be made aware that they are interacting with AI. There are also requirements to ensure that AI generated content is identifiable.
- Minimal risk – Examples include spam filters and AI enabled video games. Minimal risk AI is unregulated under the Act.
The majority of obligations under the Act relate to high risk AI systems.
The Act also establishes AI regulatory sandboxes for controlled innovation. Testing can also occur in the real world outside the sandbox with regulatory oversight.
It also limits the use of biometric identification systems by law enforcement.
High risk AI systems
There are a number of governance requirements for high risk AI systems, such as establishing risk management and quality management systems.
High risk AI systems must be designed:
- for record keeping so that certain events and substantial modifications are automatically recorded
- to allow for human oversight
- to achieve appropriate levels of accuracy, robustness and cybersecurity
Instructions for use must be provided to downstream deployers to enable their compliance.
The Act establishes a right for consumers to submit complaints about AI systems and receive explanations about decisions based on high risk AI that affect their rights.
General purpose AI
There are a number of requirements for general purpose AI (AI with the capability to serve a variety of purposes), such as providing information and documentation to downstream providers, establishing a policy to respect copyright, and publishing a summary of the content used to train the model.
Providers of general purpose AI models released under a free and open licence need only comply with the copyright and training data summary requirements, unless the model presents a systemic risk.
Implementation
The Act will be implemented in stages, after entry into force:
- Six months for prohibited AI systems.
- 12 months for general purpose AI.
- 24 months for high risk AI systems under Annex III.
What does this mean for EU insurers?
Some insurance markets have been labelled high risk and some technologies closed off entirely. Classification as high risk means that the use of AI will be subject to stricter requirements.
The Act lists AI systems used for risk assessment and pricing in life and health insurance as high risk. This is because such systems could have a significant impact on a person’s life and health, including through financial exclusion and discrimination.
However, the Act says that systems used for the purpose of detecting fraud in financial services, and for prudential purposes to calculate the capital requirements of credit institutions and insurance undertakings, should not be considered high risk. AI systems used to evaluate creditworthiness are otherwise classified as high risk. The Act will nevertheless still have an impact on how AI can be used to detect fraud: for example, certain uses of biometric data have been classified as high risk or prohibited altogether.
Fundamental rights impact assessments are only required for bodies governed by public law, private actors providing public services, and banking and insurance providers using AI systems listed as high risk. The aim of the fundamental rights impact assessment is for the deployer to identify the specific risks to the rights of individuals, or groups of individuals, likely to be affected, and to identify the measures to be taken should those risks materialise.
Financial services regulators are designated, within their respective competences, as competent authorities for the purposes of supervising and implementing the Act, unless member states decide to designate another authority to fulfil these market surveillance tasks.
What does this mean for the UK and UK insurers?
A number of businesses and insurers operate in both jurisdictions. The Act applies to those that place on the market or put into service AI systems in the EU, regardless of whether they are based in the EU or a third country. It also applies to third country providers where the AI system’s output is used in the EU.
The UK currently relies on existing insurance laws and regulations, which are broad enough to apply to new technologies. It has implemented a cross-sector and outcome-based framework for regulating AI, underpinned by five core principles:
- Safety, security and robustness
- Appropriate transparency and explainability
- Fairness
- Accountability and governance
- Contestability and redress
With the Act, the EU is hoping to play a leading global role in AI regulation. The UK, by contrast, is holding off on introducing legislation until the challenges and risks of AI are better understood. The Government says:
“We recognise the need to build a stronger evidence base before making decisions on statutory interventions. In doing so, we will ensure that we strike the right balance between retaining flexibility in our iterative approach and providing clarity to businesses”
Legislation in this area is expected in the future. The EU example may influence future legal developments in the UK, particularly if it proves to be successful.
UK regulators such as the Financial Conduct Authority currently have a large amount of autonomy in how they approach AI. As a result, AI technologies are regulated through a complex patchwork of legal requirements. This current patchwork of legal frameworks is unlikely to sufficiently address the risks that AI can pose.
The Government says:
“Instead of giving responsibility for AI governance to a new single regulator, the government will empower existing regulators - such as the Health and Safety Executive, Equality and Human Rights Commission and Competition and Markets Authority - to come up with tailored, context-specific approaches that suit the way AI is actually being used in their sectors.”
The Government details its plans for implementing AI regulation in a white paper dated March 2023, ‘A pro-innovation approach to AI regulation’ (the UK AI Paper).
The UK AI Paper says that the Government received feedback from industry that the absence of cross-cutting AI regulation creates uncertainty and inconsistency which can undermine business and consumer confidence in AI, and stifle innovation. It also acknowledges that some AI risks arise across, or in the gaps between existing regulatory remits.
The UK AI Paper says:
“Our framework is context-specific. We will not assign rules or risk levels to entire sectors or technologies. Instead, we will regulate based on the outcomes AI is likely to generate in particular applications. For example, it would not be proportionate or effective to classify all applications of AI in critical infrastructure as high risk. Some uses of AI in critical infrastructure, like the identification of superficial scratches on machinery, can be relatively low risk. Similarly, an AI-powered chatbot used to triage customer service requests for an online clothing retailer should not be regulated in the same way as a similar application used as part of a medical diagnostic process. A context-specific approach allows regulators to weigh the risks of using AI against the costs of missing opportunities to do so… To best achieve this context-specificity we will empower existing UK regulators to apply the cross-cutting principles. Regulators are best placed to conduct detailed risk analysis and enforcement activities within their areas of expertise. Creating a new AI-specific, cross-sector regulator would introduce complexity and confusion, undermining and likely conflicting with the work of our existing expert regulators.”
Following implementation of the UK’s new framework to regulate AI, joint guidance on AI produced by the Financial Conduct Authority (FCA) and other relevant regulatory authorities is expected. This will increase clarity on the regulatory requirements relevant to AI and on how to satisfy those requirements in the context of insurance, including consumer services and products.
Annex A of the UK AI Paper sets out factors that the Government believes regulators may wish to consider when providing guidance on, or implementing, each of the five core principles. The FCA will be considering these factors in determining its approach to AI. A full list can be found from page 68 of the UK AI Paper. We have selected some highlights as follows:
- “Set explainability requirements, particularly of higher risk systems, to ensure appropriate balance between information needs for regulatory enforcement (e.g. around safety) and technical tradeoffs with system robustness.”
- “Interpret and articulate ‘fairness’ as relevant to their sector or domain.”
- “Decide in which contexts and specific instances fairness is important and relevant (which it may not always be).”
- “Design, implement and enforce appropriate governance requirements for ‘fairness’ as applicable to the entities that they regulate.”
- “Where a decision involving use of an AI system has a legal or similarly significant effect on an individual, regulators will need to consider the suitability of requiring AI system operators to provide an appropriate justification for that decision to affected parties.”
- “AI systems should comply with regulatory requirements relating to vulnerability of individuals within specific regulatory domains. Regulators will need to consider how use of AI systems may alter individuals’ vulnerability, pursuant to their existing powers and remits.”
The Financial Times has recently reported that the Government may already be rethinking its approach and considering legislation on AI, following alarm over its potential risks.
The Word, April 2024
Key contact
Tim Johnson
Partner
tim.johnson@brownejacobson.com
+44 (0)115 976 6557