Project and product risk in artificial intelligence (AI) implementation is strongly tied to an organisation’s data maturity and the ability to set robust, focused project parameters.
Risk management professionals can be instrumental in supporting successful outcomes.
Analogue to digital
The shape of future adult and children's services is still emerging, but the UK Government's commitment to data-driven services, automation and AI is clear, both from the AI Opportunities Action Plan and the 3 January 2025 announcement on social care.
With staff and capacity challenges, tight budgets and efficiency programmes, authorities responsible for care need technology. AI in care is accelerating, with ambitions to deliver projects at pace, using test-and-learn approaches or rapid scaling of pilots.
Data foundations
Data fuels AI. Poor data drives poor outcomes, and AI may amplify data issues such as bias. The Care Data Matters roadmap of December 2023 set out to tackle this, but in February 2025 the Care Quality Commission identified that 'data is used inconsistently' in adult services. Analysis of assessment reports to date shows inconsistency on performance, equality, advocacy and outcomes, and on data sharing. In Wales, Alma Economics has found issues with data quality, including incompleteness and a lack of automated checking.
Assessing data maturity at the outset of an AI project is crucial for success. This includes testing existing data quality and confidence in future data. Proposals to ‘fix’ data or significantly change data capture practices are difficult and slow to deliver and should be viewed with caution.
Off-the-shelf, closed-source solutions require close attention. The decision-making model will not be transparent, and the data used to train the system may not be disclosed. This creates uncertainty over whether the training data sufficiently reflects the demographic of local citizens or service users.
For example, where social workers are asked to make significant decisions based on a model they do not understand, questions of ethics arise. Such models also reduce the ability to deliver transparency or explain decisions to service users, families, or in a court setting if decisions are challenged.
‘Everythingism’
The thinktank Reform describes 'everythingism' as "the belief that every proposal, project or policy is a means for promoting every… objective, all at the same time" and as "the pathology holding back the state".
Everythingism refers to the (often false) hope that one solution can solve all the problems an organisation is facing. It is endemic in AI projects and a very significant risk.
To succeed, AI projects must have clearly defined and focused deliverables, based on service delivery priorities. Opportunities to add value or achieve additional benefits, while well intentioned, should be recognised as potential distractions which could derail projects. It is better to keep on mission for a focused first deployment, capturing other opportunities for later review and possible development in future phases.
Soft risks: Politics, trust and engagement
Data collection and use are contentious. Citizen trust in public bodies is low, and the trust deficit is often greater in marginalised communities, where it already creates barriers to social care engagement.
The 'AI Opportunities Action Plan' seeks to support safe and trusted AI development and adoption through regulation, safety and assurance. This is crucial, but it does not displace the need for each organisation to consider trust and engagement in every AI project.
Projects can be derailed by data discomfort among the public, which can flare into a political issue. For projects which weather this, product performance will be significantly reduced if data discomfort leads to reduced engagement, such as refusal to provide data or requests for data deletion.
Effective strategies to engage those directly affected by an AI project, and the wider body of citizens, are crucial to the successful use of technology in social care.
The sections above emphasise the need to start with:
- An understanding of current data maturity.
- Clear, focused project deliverables.
These lead to a key question: What technology does the project need?
GenAI is not necessarily needed, and may be unsuitable, for many data-driven projects in social care. Analytical AI or rules-based automation may be more appropriate. Alternatively, an AI tool may have a defined role within a project of wider scope.
AI and automation are simply tools (though powerful ones) that can be deployed in service improvement projects. Success and risk will depend on ensuring they are only used where they are the right tool for the job.
Risk management in AI projects
Pressure on social care systems, the promises of AI advocates, competing priorities, and varying levels of data and technology maturity mean data and AI projects are noisy. From 'AI optimists' to 'data defeatists', every stakeholder is likely to have different insights and views on opportunity and risk. This, in itself, can impact project delivery.
Work through a carefully designed risk checklist, covering:
- Deliverables.
- Data maturity.
- Product knowledge and choice.
- Data management.
- Stakeholder management.
Risk managers can take a critical role in objectively assessing 'do nothing', project and product risks, bringing clarity to the risk landscape around any AI deployment and supporting informed decisions and successful outcomes.
Contact

James Arrowsmith
Partner
james.arrowsmith@brownejacobson.com
+44 (0) 330 045 2321