
The AI will see you now: Liability issues from the use of AI in surgery and healthcare

01 December 2020

Technological advancements in the fields of robotics and artificial intelligence (AI) have accelerated exponentially. It is only a matter of time before we see such technology used much more widely in the health sector.

In this article, we explore the potential issues surrounding liability in clinical negligence cases involving the use of AI and robotics, and consider where and with whom liability may lie. In looking to the future, we hope to answer these questions insofar as the answers lie within the current framework for clinical negligence claims, or otherwise to pose the questions that will need grappling with.

AI and healthcare

In recent times there have been significant developments in robotics and AI, and both are becoming increasingly pervasive in our day-to-day lives. This can be seen in the embedded algorithms that learn from our online searches to serve personalised adverts, in Netflix recommendations for your next binge, and even in vacuum cleaners that navigate around your pet dog or your child’s school bag.

Surgery has, to some extent, avoided this wave of AI so far, but AI is being introduced in healthcare generally as a tool used by clinicians. For example, AI and machine learning are now being used in drug development; to analyse medical records and big data; to automate administrative tasks; to eavesdrop on emergency calls; to interpret x-rays, other imaging and bloodwork (in some cases apparently out-performing human analysts); in pattern recognition, such as the diagnosis of skin cancers; to suggest personalised regimes for disease management, including cancer; and to monitor individual patients who are at risk of deterioration. Babylon and other platforms in the health market have used AI-powered chatbots and algorithms to stream their callers. There are other existing examples of virtual nursing able to monitor patients 24/7, provide quick answers and communicate between healthcare providers and patients. Whilst technology has not yet replaced the need for microscopes or a highly trained ‘mark-1’ human eyeball, large pathology labs now resemble futuristic Yo! Sushi tracks on a giant three-dimensional scale.

Some have forecast that surgeons will become obsolete in the future as human fallibility is replaced with an AI robotic alternative. Whilst this, at least in the immediate future, is just speculation, robotics are already being used as tools in surgery in the UK. Significant reductions in in-patient stays, post-operative pain and other surgery-related complications have been suggested for robot-assisted surgery compared with human surgeons operating alone. According to the Mayo Clinic, robots help doctors perform complex procedures with a precision, flexibility and control that goes beyond human capabilities.

Currently, the use of robotics in UK operating theatres remains limited (although some simple examples have existed for 30 years). Minimally invasive surgery is an obvious example of the benefits of robots with cameras, mechanical arms and micro-surgical instruments. Robots have been utilised in hair transplants using follicular unit extraction (FUE). Surgeons at Guy’s & St Thomas’ and elsewhere are also making use of the Da Vinci robotic system, which allows the surgeon to control instruments from a console at a distance, with adjustable magnification and enhanced dexterity. It has been reported that the Heartlander miniature robot is able to enter the patient via a small incision in the chest to perform mapping and therapy over the surface of the heart.

As technology develops and healthcare needs increase, healthcare providers will likely face increased pressure to use AI-led technology in surgery too, driven both by cost considerations and by waiting-list pressures. However, the use of such technology will also, it is hoped, allow for more accurate and less invasive surgery which, in theory, could help reduce surgical error and thus potential claims against healthcare providers.

Liability issues with greater reliance on AI in surgery

There are significant ethical, legal, economic and technological challenges that need to be addressed before AI-led technology is introduced more widely in the health sector. This is particularly the case where such technology has the ability to undertake surgery or make treatment decisions autonomously or semi-autonomously.

Our focus in this article is on the legal issues arising out of the use of such technology. More specifically, we are looking at the liability issues when a patient is harmed by a ‘negligent’ error in surgery involving AI technology. Such errors might come from a multitude of sources including, but not limited to, flawed data analytics, accessing the wrong patient records, poor manufacturing quality, latent defects and operator error.

Is the current legal framework fit for purpose in cases where negligent surgery is caused by an AI-led technology? Broadly speaking, for a traditional clinical negligence claim to be successful the claimant will need to prove that:

  1. the healthcare provider owed a duty of care in law to the claimant;
  2. there was a breach of that duty;
  3. the breach of duty has caused harm; and
  4. damage or other losses have resulted from that harm.

The second and third hurdles will require re-examination in a claim with AI technology at its heart. For example, whether there was a breach of duty is addressed by a legal test that has changed relatively little in the last six decades. Put simply, it will not be a breach of duty if the treatment would be accepted by a responsible body of medical practitioners in the relevant field, even if not a majority body, provided the treatment also stands up to a logical analysis of risks and benefits. Will that ‘Bolam’ test suffice (and if so, are we talking about responsible bodies of human surgeons, comparison with data from other AI, or a hybrid comparator?), or will a totally new test be required?

For fully autonomous AI, healthcare providers may argue that they should not foot the bill for such claims and that liability should ultimately fall on the supplier of the technology which caused the harm.

If the technology is proven to be defective, then the normal principles of product liability may well apply without the need to prove negligence. Under the Consumer Protection Act 1987, a patient (or healthcare provider) could recover damages from a manufacturer or supplier simply by proving that the product was defective. Doubtless suppliers’ lawyers will consider this in the contractual indemnity documentation at the point of supply. Health providers and individual doctors will want to check whether product liability is excluded from their own medmal insurance (as it often is) or whether the indemnity level is adequate, and insurers too will want to consider their potential exposure and premiums. Product liability litigation is expensive, often involving group actions. Such litigation in the context of PIP breast implants, for example, led to some manufacturers, suppliers and healthcare providers going into insolvency and some insurers becoming unwilling to underwrite medmal policies in the UK.

Much may depend on the nature of the AI used and its ability to act autonomously or semi-autonomously. Given the complexities of AI, including the underlying code and the algorithms on which it is built, the courts may deem it unfair to hold a healthcare provider liable for its AI where the ‘mistakes’ made by the AI are beyond the control or correction of the surgeon. Where the harm has been caused not by a defect per se, but by the AI itself in the way it has learned from past ‘experience’, can the healthcare provider argue that the AI ‘brain’ is equivalent to a third-party clinician or specialist providing its own medical care independently, thereby removing responsibility from the healthcare provider itself? How far such an argument would get remains to be seen, but it seems unlikely that either the courts or the state will allow such a lacuna to exist whereby injured patients are shut out from compensation (unless the AI is also independently wealthy from speculating on the stock markets!).

For instance, a claimant may argue that consent to the AI-led treatment was invalid, or not Montgomery-compliant, where material risks, alternatives or variants were not warned about and documented. Where a healthcare provider decides to delegate functions to AI, it will likely be argued that the duty of care was non-delegable, or even that the AI is ‘akin to an employee’ for whom vicarious liability is owed by the ‘employer’. These would arguably be relatively modest extensions of existing case law to meet a changing need.

Proving that an autonomous AI was defective and was the effective cause of the damage may prove problematic; cross-examination of witnesses at trial by a barrister in an 18th-century wig may no longer be the trump card it once seemed! Lawyers will need to develop new skillsets and tools to interrogate the data, and a new breed of expert witness will probably be required to assist the court.

Whilst the above opens up some interesting questions and considerations, AI in its current form, at least, is seldom entirely autonomous. It seems likely that surgical AI, at least that which will be used in the very near future, will require input of data from a surgeon or technician. There will, therefore, no doubt be a blurring of the lines.

In cases involving negligence where AI has been used, the court will likely need to undertake a detailed analysis of where the error arose and at whose ‘hands’. It will be relatively simple where there has been operator error, by means of a failure to programme, set up, maintain or otherwise use the equipment properly, or where a ‘human’ surgeon was also involved and ought to have intervened. As the AI technology advances, the analysis may also depend on whether the court views the AI as, effectively, just another tool used by the surgical team, or alternatively as another form of clinician providing treatment of its own accord.

Ultimately, legal liability will likely be determined on a case-by-case basis. The courts will look at how involved the surgeon was in the decision-making process of the AI technology (including the decision to use it for the particular surgery at all). If there is a clear technological defect then, in principle, liability will likely rest with the supplier of the technology. However, if there is an element of human error too, then liability may need to be apportioned accordingly. Regardless, it is likely that healthcare providers using such technology, including the NHS, will be the first target for claimants seeking compensation, even if liability is eventually passed on to the technology supplier by way of indemnity.

The future

We can expect AI to cause future disruption both in the technological implications for healthcare and in the legal framework governing liability. If anything is as certain as death and taxes, it is probably litigation. The cases will come, particularly in the early stages while training and new ways of working are implemented, and hopefully then reduce as the promised reduction in harm is realised. The cost of investigating claims may increase and new protocols will likely be agreed. It is likely that we will see a greater overlap between commercial law, insurance and clinical negligence when determining liability in claims brought against healthcare providers who use AI technology in their practice. As ever, a bit of prevention will be better than cure, and all those in this sphere need to be thinking in a joined-up fashion now.

For further information relating to clinical liability issues arising from the use of AI please contact us.

Contact

Mark Hickson

Head of Business Development

onlineteaminbox@brownejacobson.com

+44 (0)370 270 6000
