This article was first published by Healthcare Markets International
Recent developments in AI, including DALL-E 2 and ChatGPT, have reignited widespread excitement about the potential of artificial intelligence (AI) across all aspects of our lives. These developments have also opened up the AI landscape, with commentators anticipating more rapid innovation as the industry democratises. Charlotte Harpin, partner at Browne Jacobson, outlines the risks and opportunities associated with the greater use of AI in healthcare.
Business adoption of AI has reportedly more than doubled since 2017, with robotic process automation, natural-language text understanding and deep learning increasingly embedded in business settings. Generative-AI products like ChatGPT have also brought AI into people’s homes in a very relatable and exciting way. It’s not hard to imagine how generative AI might form part of the health landscape, perhaps replacing ‘Dr Google’ with something more reliable.
In a healthcare context, a major trial was recently announced of an AI programme that predicts when people might miss appointments and offers back-up bookings, under a headline-grabbing title claiming it would “save [the NHS] billions”.
A helpful summary of the terminology involved in the AI/healthcare context can be found in The Regulation of Artificial Intelligence as a Medical Device (publishing.service.gov.uk).
Potential legal risks arising from the ongoing development and use of AI technologies in healthcare
While it is tempting to focus on novel areas of risk when developing and utilising AI in healthcare, it’s important to recognise that risks can arise across all existing areas of law. The UK government has taken a light-touch approach to regulating AI and, until AI-specific legal frameworks are developed, regulation is largely based on adapting existing laws to an AI context.
The most likely areas of risk are:
- Regulatory burden in terms of the development and deployment of medical devices that incorporate AI
- Clinical negligence claims
- Product regulation-related risks
- Data related issues and associated claims
- Copyright and IP/commercialisation
- Public law challenges to decisions that have incorporated the use of AI, such as breach of the Equality Act 2010 [AI bias is a recognised issue and eliminating bias in training is difficult, although the development of synthetic datasets may go some way to addressing it]
- Employment law issues [such as redundancy claims following the deployment of AI-technological solutions]
- Practical challenges such as how patient safety incidents will be investigated where AI technologies have been involved.
More generally, there are issues around equity of access to novel technological solutions; how these will interface with existing NHS digital solutions; and the need to ensure service-user ‘buy-in’. These will all need to be factored in when considering the use of AI technologies in a healthcare setting.
Overall, there is a risk that the development and deployment of AI technology solutions is incompletely grounded in the wider legal framework within which healthcare bodies operate. As part of developing organisational and/or system digital strategies, healthcare bodies need to ensure they have an appropriate understanding of all areas of potential risk associated with developing and implementing AI technologies, so that an informed, risk-based decision can be taken.
Understanding proposed AI regulation
The AI regulatory landscape, both generally and specifically in the healthcare setting, is complex and rapidly evolving. The Brexit transition adds a further degree of complexity but also presents an opportunity to develop and implement a UK-designed framework.
Currently, medical devices that utilise AI are regulated under the Medical Devices Regulations 2002 (as amended).
The Medicines and Medical Devices Act 2021 (MMDA) was introduced to commence the post-Brexit transition to a UK sovereign regulatory regime. However, the timeframe for the introduction of secondary legislation under the MMDA has been delayed, with an extension to the transition standstill period.
This means there is more time to develop the details of the new regulatory framework, working to the Software and AI as a Medical Device Change Programme – Roadmap, published by the MHRA in October 2022, which reflects the UK Government’s stated “five pillars” for achieving a world-leading medical device regulatory framework:
- Strengthening MHRA power to act to keep patients safe
- Making the UK a focus for innovation, the best place to develop and introduce innovative medical devices
- Addressing health inequalities and mitigating biases throughout medical device product lifecycles
- Proportionate regulation that supports business through access routes that build on synergies with both EU and wider global standards
- Setting world-leading standards – building the UKCA mark as a global exemplar
The Roadmap establishes a number of work packages, each of which has key deliverables, including the development of regulatory guidance around key issues such as:
- Risk-based classification, with a recognition around the need for flexibility to encompass novel devices [one of the major challenges within the AI landscape]
- Adverse incidents [this will be key to informing the evolution and application of existing negligence-based laws and includes adaptation of the Yellow Card system to reflect the use of AI technologies]
- Cyber security – one of the recognised key risks arising from the use of many medical devices; it will be interesting to see how this fits with existing legislative and sector requirements, including the Data Protection Act 2018, the UK GDPR, and NHS England’s digital technology/NHS toolkit frameworks
In addition, other regulatory bodies will be involved depending on the nature of the AI technology in question. In an attempt to ensure coordination, NICE, the CQC, the MHRA and the HRA have come together to form a multi-agency advisory service (MAAS) for artificial intelligence and data-driven technologies.
Some of the linked work being carried out by these other regulatory bodies neatly illustrates how wide-ranging the legal issues are:
- A project led by the HRA to streamline the review of AI and data-driven research and to modernise the technology platform used to make applications for approvals
- The HRA is also working to “streamline the review of research using confidential patient information without consent”, with the objective being to modernise the process “to enable a quicker and more robust oversight of projects and enhance the public visibility of approved studies”
- Validation of algorithms – the MHRA is leading on a synthetic data project that will help address issues around the development of algorithms against datasets that are difficult to access or obtain
Risks of non-compliance
The risks of non-compliance are significant, both in terms of direct impact and reputational damage. This is particularly so in a healthcare setting, where there has historically been resistance among service users to the implementation of technological change.
The legal risks noted above are as yet relatively untested and this, coupled with the rapidly evolving regulatory framework, presents a real challenge to those looking to develop and/or deploy AI technologies in a healthcare setting. However, this does not mean it is impossible to do so safely. Key to successfully navigating this landscape are the following:
- Active monitoring of the regulatory framework, ensuring an up-to-date understanding of the requirements
- Engagement in the various consultative processes underway, to ensure your voice is heard and reflected in the design of the regulatory framework [particularly in relation to the MHRA’s guidance, which is intended to include case studies]
- A holistic understanding of how AI technologies will be deployed and a fully informed analysis of risk, to ensure informed decision-making. Depending on the context, this may need to include data protection, human rights and equality law considerations
- Utilising services like MAAS when looking to develop or deploy novel AI technological solutions