In April 2024, the Financial Conduct Authority (FCA) published its response to the UK Government’s policy paper on its approach to AI regulation.
The UK Government identified five principles as key when it comes to regulating AI in the UK:
- safety, security and robustness;
- appropriate transparency and explainability;
- fairness;
- accountability and governance; and
- contestability and redress.
The principles apply to the use cases of the technology as opposed to the technology itself.
The UK Government has concluded that imposing legislative requirements on businesses at this stage might stifle innovation. It has therefore determined that, for the time being, the principles will be issued on a non-statutory basis and implemented by regulators.
The UK Government’s approach contrasts with that of the EU, which has not only implemented legislation but has also regulated the AI technology itself and gone further by banning certain types of AI technology.
The FCA’s response
In its response to the UK Government’s policy paper, the FCA makes clear that it will work in harmony with the Government’s objectives, with the safe and responsible use of AI as a key pillar, whilst driving beneficial innovation.
Whilst the current regulatory approach is targeted at use cases, the FCA has not ruled out going further:
“…our regulatory approach is to identify and mitigate risks to our objectives, including from regulated firms’ reliance on different technologies and the harms these could potentially create for consumers and financial markets. In practice, this means that when we consider regulated firms’ use of any given technology, such as AI, blockchain, cloud infrastructure etc., we objectively assess the risks and any adverse implications for our objectives and the regulatory outcomes we are seeking. This includes considering the impact the use of technologies can have at the level of the market.
The principle of proportionality also informs our thinking and approach to AI, including any potential future regulatory interventions. This is one of the regulatory principles under the Financial Services and Markets Act 2000 (FSMA) that the FCA must have regard to when discharging its general functions…”
Rather than introducing any new regulatory requirements, the FCA has for the time being mapped the existing regulatory framework onto the principles. For example:
Safety, security and robustness
- The FCA’s Principles for Businesses provide a general statement of the fundamental obligations of firms and other persons to whom they apply. Under the Principles, firms must conduct their business with due skill, care and diligence (Principle 2) and take reasonable care to organise and control their affairs responsibly, effectively and with adequate risk management systems (Principle 3).
- A number of the FCA’s Threshold Conditions (which represent the minimum conditions to be satisfied by firms with a Part 4A permission) are also relevant such as the requirement that a firm’s business model must be suitable.
- There are more specific rules and guidance relating to systems and controls under the Senior Management Arrangements, Systems and Controls (SYSC) sourcebook which apply to different categories of firms. These include provisions related to risk controls under SYSC 7, general organisational requirements under SYSC 4 (including requirements for relevant firms to have sound security mechanisms in place relating to data) and requirements related to business continuity under SYSC 4.1. SYSC 15A should also be noted, which aims to ensure relevant firms are able to respond to, recover from, learn from and prevent future operational disruptions.
- SYSC 8 and SYSC 13 (in respect of insurers) contain specific rules and guidance on outsourcing, including in relation to operational risk.
- Guidance is also included in the FCA’s 'Guidance for firms outsourcing to the ‘cloud’ and other third party IT services'.
Appropriate transparency and explainability
- The FCA recognises that its regulatory framework does not currently specifically address the transparency and explainability of AI systems. Reliance is currently placed on its approach to consumer protection.
For more on the explainability of AI systems see our article:
Explaining artificial intelligence use to insurance customers
Fairness
- The FCA’s regulatory approach to fairness is mapped predominantly to its approach to consumer protection, which comprises a combination of the FCA’s Principles for Businesses and other high-level rules, detailed rules and guidance, including the Consumer Duty.
- The Guidance for firms on the fair treatment of vulnerable customers also sits under the FCA’s Principles for Businesses.
- A number of the FCA’s Threshold Conditions are also relevant such as the requirements relating to a firm’s suitability and business model.
- The FCA is also relying on statutory legislation such as the Equality Act 2010 to ensure compliance with this principle along with enforcement by other regulatory bodies such as the Information Commissioner’s Office in respect of the processing of personal data.
Accountability and governance
- The FCA is relying on the rules set out in its Threshold Conditions, Principles for Businesses and the SYSC sourcebook to ensure compliance with the principle of accountability and governance.
- The FCA’s view is that the Senior Managers and Certification Regime is also applicable to the principle of accountability and governance in terms of its relevance to the safe and responsible use of AI. The Regime is currently under review by the FCA.
- In addition, the Consumer Duty requires firms to ensure that their obligation to act to deliver good outcomes for retail customers is reflected in their strategies, governance and leadership.
Contestability and redress
- The FCA has mapped this principle to Chapter 1 of the Dispute Resolution: Complaints sourcebook (DISP), which contains rules and guidance for firms on how to deal with complaints.
- The FCA is also placing reliance on Article 22 of the UK GDPR, under which data subjects have a general right not to be subject to decisions based solely on automated processing. The FCA recognises that where exceptions to this apply, safeguards must be put in place, including the right to contest automated decisions.
Scope for further regulation
The FCA has not ruled out further regulation coming down the track. At the moment, it is exploring in further detail the full capabilities of AI and large language models (LLMs) so that any future regulation is tailored to address and mitigate identified risks in an appropriate and proportionate manner, reflecting a pro-innovation approach.
Other regulators are also likely to implement further rules and guidance which will be equally applicable to firms regulated by the FCA. For example, the ICO has published its guidance on AI and data protection.
The Bank of England, PRA and FCA are currently assessing their approach to critical third parties (which are intended to include, amongst other service providers, large providers of technology services). The proposed requirements are designed to manage the potential risks to the stability of, or confidence in, the UK financial system that may arise from a failure in, or disruption to, the services that critical third parties provide. The regime is not specific to AI but is broad enough to encompass some of the principles of the UK Government’s policy paper on AI.
There will also inevitably be an element of self-regulation by the industry. For example, the Association of British Insurers has produced a useful guide for insurance firms on AI, applying the five guiding principles in an insurance context.
Conclusion
Whilst the existing regulatory landscape goes some way to ensuring implementation of the principles set out in the UK Government’s policy paper, the FCA recognises it is not fully there yet. We therefore anticipate further regulation and legislation down the line through FCA collaboration with other regulatory bodies such as the Information Commissioner’s Office, the Payment Systems Regulator and the Bank of England.
The Digital Regulation Cooperation Forum (DRCF) has also been established, bringing together four UK regulators (the FCA, the Competition and Markets Authority, the Information Commissioner’s Office and the Office of Communications) to deliver a coherent approach to digital regulation.
The legislative and regulatory landscape is therefore evolving, and firms should keep a close eye on it to ensure they do not inadvertently find themselves in breach, leading not only to potential sanctions but also to wasted time, cost and effort on AI procurements that do not comply with the regulatory regime.