
How various international jurisdictions are approaching AI regulation

12 December 2024
Gerard Hanratty

This article was first published in Healthcare World Magazine.

The AI hype train left the station a long time ago, but governments are finally jumping on board with proposals for regulatory checks to ensure safe development.

While the technology moves along the track at supersonic speed, countries are attempting to keep pace by striking a delicate balance between promoting innovation and ensuring accountability.

Establishing clear ethical guidelines and standards for AI development and deployment, underpinned by rigorous testing and evaluation, must be a priority in any regulatory framework.

Key issues to address include data privacy, bias and transparency. Mechanisms should be in place to hold AI developers and users accountable for any harm caused by these systems.

By regulating in a responsible and thoughtful manner, we can harness the technology’s potential while minimising its risks.

Countries including the US and UK have begun laying the groundwork for AI regulation, but in the absence of comprehensive federal or national laws they may look to other jurisdictions for key lessons.

EU’s approach to AI regulation

The EU has taken a risk-based approach, with different levels of regulation depending on the level of risk posed by the AI system.

Its AI Act, approved in March 2024, establishes the world's first comprehensive legal framework for AI, including both mandatory requirements and voluntary codes of conduct.

With this legislation, the EU has created a 'black letter' law approach to regulating AI, including ethical guidelines for its development and use, safeguards on general-purpose AI, and transparency and accountability requirements for AI systems.

However, detractors have highlighted loopholes for law enforcement, opt-outs for developers and gaps in penalties for the most dangerous AI systems.

Ireland's and the UAE's approaches to AI

AI strategies adopted by individual countries also provide useful examples. The Irish government’s National AI Strategy, for example, is a policy document that outlines its vision for the development and use of AI, taking into account its EU membership.

Three core principles underpin the strategy's aim of embracing AI's opportunities: adopting a human-centric approach to the application of AI; staying open and adaptable to new innovation; and developing strong governance to build the trust and confidence needed for innovation to flourish.

The strategy aims to position Ireland as a global AI leader by promoting research and innovation, developing AI skills and talent, and establishing ethical and trustworthy AI practices.

The UAE has also taken a proactive approach to AI, with an adaptable regulatory strategy designed to flex with new developments in the technology.

Among its key features are a "regulatory sandbox" that allows companies to test new AI products and services in a controlled environment, and a certification programme that gives companies a way to demonstrate that their AI systems meet certain standards.

Future regulation in US and UK

The US federal government took its first steps towards regulation with the National AI Initiative Act of 2020, signed into law as part of the National Defense Authorization Act to co-ordinate federal AI research and policy.

It created an AI advisory committee and supports the development of ethical AI that is trustworthy, respects privacy and upholds civil liberties.

Various frameworks and guidelines have also been developed, including the White House's Blueprint for an AI Bill of Rights, which asserts principles of equitable access to and use of AI systems and should pave the way for future legislation.

In the UK, the Centre for Data Ethics and Innovation was established to develop ethical guidelines, and an AI Council was created to advise the government on AI policy and strategy.

An AI regulation white paper was published in March 2023, and the previous Conservative government published its response to the consultation in February 2024. While the new Labour government has specifically pledged to regulate developers of the most powerful AI models, it has yet to introduce an AI Bill.

However, as members of the G7, the US and UK co-signed the Hiroshima Process International Code of Conduct for Organisations Developing Advanced AI Systems, which sets out in detail the actions expected of AI developers. Its most specific areas of focus relate to identifying risks.

Key contact

Gerard Hanratty

Partner

gerard.hanratty@brownejacobson.com

+44 (0)330 045 2159
