
Explaining artificial intelligence use to insurance customers

28 March 2024

How can insurers and insurance intermediaries explain their artificial intelligence (“AI”) use in a way that is meaningful to customers? If insurers and brokers do not meet legal requirements around explaining decisions made or assisted by AI and treating people fairly, regulatory action could result.

The Information Commissioner’s Office (“ICO”) has produced an AI toolkit which says:

“The lack of transparency, interpretability and/or explainability is caused by choices about how an AI system is designed and developed. As a consequence, individuals lack the understanding about how their data is being used, how the AI system affects them, and how to exercise their individual rights.” This entry in the toolkit references UK General Data Protection Regulation (GDPR) Articles 5(2) and 24 and Recitals 39 and 74.

Explainability refers to whether the AI model and its output can be explained at an acceptable level in a way which “makes sense” to a human being. 

The Department for Science, Innovation & Technology (DSIT) says:

“AI systems should be appropriately transparent and explainable”

AI models need to be developed in an explanation-aware way. Insurers who make ‘explainability’ a key requirement will also have better oversight of what their systems do and why. Considering how to explain AI use should not be left until the last moment, once the system has already been developed.

ABI’s AI Guide, which was released in February 2024, says:

“When building an in-house model, we recommend that you use the simplest model in terms of technique and variables. There is often a trade-off between accuracy and explainability, therefore, different techniques and models should be tried to ensure there is an optimal balance between these two considerations.”
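
For illustration only, the sketch below shows one way this trade-off might be explored in practice, assuming a Python environment with scikit-learn; the dataset, models and metric are hypothetical stand-ins rather than anything recommended by the ABI.

```python
# Illustrative sketch only: compares a simple, interpretable model with a more
# complex one, in the spirit of the ABI's recommendation to start simple.
# The dataset, features and models are hypothetical stand-ins.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Stand-in for an insurer's own underwriting or claims data.
X, y = make_classification(n_samples=5_000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

simple = LogisticRegression(max_iter=1_000).fit(X_train, y_train)
complex_model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

for name, model in [("logistic regression", simple),
                    ("gradient boosting", complex_model)]:
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")

# The simple model's coefficients can be read directly as part of an explanation.
# If the accuracy gap to the complex model is small, the simpler model may give
# a better balance between accuracy and explainability.
print("logistic regression coefficients:", simple.coef_[0].round(2))
```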

Insurers and brokers need to be able to explain to customers why they are using AI and how the customer could be affected. Explanations need to be tailored to the skills and needs of different customer groups, with particular thought given to those in “vulnerable” groups. All players in the insurance distribution chain need to ensure that they are considering the interests of their customers, and explanations of AI-assisted decisions will help ensure that use of AI is human-centric.

Key issues involve:

  1. ensuring that explanations are accurate, relevant and understandable;
  2. identifying when statements on AI use need to be provided to enable customers to make decisions;
  3. measuring explainability so that insurers know that information is clear;
  4. clearly communicating how decisions are made when AI systems are complex and difficult to understand; and
  5. communicating where AI outputs are part of a wider process and human intervention takes place.

What legislation and regulations apply to insurers’ explanations of AI use to customers?

Where AI involves the use of personal data, it will trigger compliance with data protection laws. Decisions that AI makes about individuals, even if they are only predictions or inferences, are classed as personal data.

Individuals have clear rights to request information about, challenge and correct decisions made by AI systems that process their personal data.

Individuals have the right not to be subject to decisions about them based solely on automated processing (i.e. without any active human intervention), including profiling, which produces legal or similarly significant effects (see Article 22 GDPR). Some decisions made by insurers could be considered to produce ‘similarly significant effects’, such as declining to offer insurance cover or refusing an insurance claim, either of which could have significant effects on an individual’s financial circumstances.

Further, decisions that might generally have little impact may have significant impacts on those who are more vulnerable, a point the ICO recognises. The right to obtain human intervention is highly relevant to explanations of AI.

GDPR Article 15 gives individuals the right to access information on the existence of solely automated decision-making producing legal or similarly significant effects. This includes meaningful information about the logic involved, and the significance and envisaged consequences for the individual. Recital 71 provides interpretative guidance on rights related to automated decision-making and makes it clear that individuals have the right to obtain an explanation of a solely automated decision after it has been made.

Processes should also be made available to customers to allow them to flag inaccurate information used in AI decision-making and have it corrected (see GDPR Articles 16 and 17). However, there are clear difficulties in putting customers in a position to identify inaccurate data used in back-end processes. Any inaccurate information should be updated to ensure accurate outcomes.

GDPR accountability principles mean that customers must have mechanisms to allow them to challenge AI decisions and outcomes which negatively impact them. When a decision is made by a human, explanations of why that decision was made can clearly be obtained from the individual(s) concerned. Where AI is involved, responsibility for decisions can become less clear. There should be no loss of accountability for decisions when they are made with AI assistance. Those accountable for AI systems and any ‘humans in the loop’ should be able to explain decisions. If a decision is not what a customer wanted or expected, clear and effective explanations of AI allow them to assess whether they believe the reasoning behind the decision to be flawed.

GDPR transparency requirements include the need to provide meaningful information about the logic, significance and envisaged consequences of AI decisions. In cases where there is a “human in the loop”, transparency requirements still need to be complied with. Insurers should consider information about the decisions or recommendations made by the system and how these inform the human decision (see GDPR Articles 13, 14 and 15 for further information on providing meaningful information about the logic involved in automated decision-making).
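
As a purely illustrative sketch of how per-decision information about the logic involved might be produced, the following assumes a simple, interpretable linear model whose per-feature contributions can be read off directly; the feature names, coefficients and wording are hypothetical and would need to be adapted to a firm’s own model and customer communications.

```python
# Hypothetical sketch: turning a linear model's per-feature contributions into
# plain-language reasons for a single decision. The feature names, coefficients
# and wording are invented for illustration only.
import numpy as np

# Assumed coefficients from a fitted, interpretable model (e.g. logistic regression).
feature_names = ["claims_in_last_5_years", "years_licence_held", "annual_mileage_thousands"]
coefficients = np.array([0.9, -0.4, 0.2])
intercept = -0.5

def explain_decision(applicant: dict, threshold: float = 0.0) -> None:
    values = np.array([applicant[name] for name in feature_names])
    contributions = coefficients * values
    score = intercept + contributions.sum()
    outcome = "refer to underwriter" if score > threshold else "accept"
    print(f"Outcome: {outcome} (score {score:.2f})")
    # List the factors in order of how strongly they influenced this decision.
    for idx in np.argsort(-np.abs(contributions)):
        direction = "increased" if contributions[idx] > 0 else "reduced"
        print(f"- {feature_names[idx]} = {values[idx]} {direction} the score by {abs(contributions[idx]):.2f}")

explain_decision({"claims_in_last_5_years": 2,
                  "years_licence_held": 3,
                  "annual_mileage_thousands": 12})
```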

The Equality Act 2010 is highly relevant. If insurers are using AI in decision-making processes, they need to ensure, and be able to show, that such decisions do not result in unlawful discrimination.

The FCA’s Consumer Duty rules (FG22/5: Final non-Handbook Guidance for firms on the Consumer Duty (fca.org.uk)) include a requirement to act in good faith. AI use must therefore support and aid customer understanding.

When and how should explanations be provided?

The ICO has produced guidance on explaining decisions made with AI.

The guidance notes the particular importance of explaining decisions where personal data is being used by the AI model.

Considerations on what to communicate to customers about AI include the following (a sketch of how these points might be recorded internally follows the list):

  • When and how is AI used?
  • What purpose is the AI system being used for?
  • What are the reasons that led to a decision?
  • Does the explanation need to be accessible and delivered in a non-technical way?
  • Does a contact for a human review of a decision need to be included?
  • What data has been used in a particular decision and how?
  • Does information need to be provided on training data?
  • Should information on what steps have been taken to ensure that the decisions the AI supports are unbiased and fair be included?
  • Should information on what steps have been taken to maximise the accuracy, reliability, security and robustness of decisions and behaviours be included?
  • Should information on steps taken to monitor the impacts of the use of an AI system and its decisions be included?
  • Is there any additional relevant information relating to specific decisions or types of decisions?
  • Who is the information being communicated to? Are they vulnerable?
  • What logic and processes have been used?
  • Is there any other information which supports the explainability of decision making and outcomes?
  • What needs to be communicated to assist in ensuring accountability of the AI?
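
The checklist above could also be captured internally as a structured record, so that the same points are addressed consistently for each type of AI-assisted decision. A minimal sketch follows; the field names and example values are hypothetical and simply mirror the questions in the list.

```python
# Hypothetical sketch of an internal record that mirrors the checklist above,
# so that each AI-assisted decision type has a documented, consistent explanation.
# All field names and example values are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class AIDecisionExplanation:
    decision_type: str                 # e.g. "claim triage"
    purpose: str                       # what the AI system is used for
    when_ai_is_used: str               # stage(s) of the process involving AI
    reasons_template: str              # how reasons for a decision are expressed to customers
    data_used: list = field(default_factory=list)                    # data categories informing the decision
    fairness_and_accuracy_steps: list = field(default_factory=list)  # bias, accuracy and monitoring steps
    human_review_contact: str = ""     # contact point for requesting human review
    audience_notes: str = ""           # adjustments for vulnerable or less expert customers

record = AIDecisionExplanation(
    decision_type="claim triage",
    purpose="prioritise claims for fast-track settlement",
    when_ai_is_used="at first notification of loss",
    reasons_template="plain-language summary of the main factors behind the triage outcome",
    data_used=["claim description", "policy details"],
    human_review_contact="claims review team",
)
print(record)
```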

Different customer groups will have different levels of knowledge, and this will affect the level of detail and the language that need to be used.

Explaining the ‘why’ of an AI or AI-assisted decision will help people understand the reasons that led to it. This needs to be done in an accessible way. It is vital that individuals understand the reasons underlying the outcome of automated or AI-assisted decisions in order to challenge them effectively. Knowing the reasoning behind decisions allows customers to formulate coherent arguments on why they think a decision was incorrect.

Is it possible for the insurer or broker to communicate the AI use in a way that is accessible to customers? Explainability refers to the extent to which it is possible for relevant parties to access, interpret and understand the decision-making processes of an AI system. The adaptability of AI has the potential to make it extremely difficult to explain the intent or logic of a system’s outcomes to customers. However, DSIT has said that “in some cases, a decision made by AI may perform no worse on explainability than a comparable decision made by a human”.

The potential impact of AI models also informs how comprehensive explanations need to be. What are the risks for a person receiving an AI or AI-assisted decision? Real-world examples help contextualise information for customers in insurance, helping them understand what information means to them. Can these be used to help explain AI?

Communicating to customers the steps taken to maximise the accuracy and reliability of decisions, to monitor decisions and to address unfair bias is important: it enables customers to make informed choices about whether they want to contest an AI decision on the basis that it may be incorrect or made in an unsafe or unreliable manner. This is linked to a customer’s ability to challenge decisions on the basis of fairness. There are circumstances where insurers will need to be able to show that explanations of AI use have been provided to customers.

That said, insurers should be aware that research suggests AI explanations can sometimes increase trust too much, which could decrease customers’ willingness to challenge decisions. There is a difficult balance to strike between providing information and not creating unwarranted trust in AI. Some commentators say that explaining how models work might distract firms from working out what their customers really need to know.

There should be designated and capable human points of contact for individuals to query or challenge decisions. Staff who deal with customer queries regarding AI decisions and outcomes need training, and information at their disposal, to explain AI outputs to customers.

To fulfil requirements to act in good faith, insurers should proactively make people aware of AI-enabled decision-making concerning them, in advance of making the decision. There should be meaningful explanations of decisions which are truthful, presented appropriately and delivered at the right time.

The autonomy of AI means that organisations need to take steps to ensure accountability and put themselves in a position of compliance with the GDPR’s accountability principle. Insurers should identify individuals within their organisations to manage and oversee the ‘explainability’ requirements of an AI decision system and assign ultimate responsibility for this. Evidence how your organisation is actively considering and making choices about how to design and deploy AI models that are appropriately explainable to individuals.

We await further guidance from regulators on striking an appropriate balance between the information needs of customers, regulatory enforcement and technical system robustness.

Explainability of AI decision making processes is also important so that regulators can be provided with sufficient understandable information about AI systems and their inputs and outputs.
