AI Regulation Gets £100M Boost from UK Government

The UK government has pledged over £100 million in new funding to support an agile and sector-specific approach to AI regulation, aiming to balance innovation and safety.

The funding includes £10 million to prepare and upskill regulators to address the risks and opportunities of AI across sectors like telecoms, healthcare, and education.

The announcement comes at a critical time, as a survey by Thoughtworks shows that 91% of British people want government regulations to do more to hold businesses accountable for their AI systems. The public also wants greater transparency, with 82% of consumers preferring businesses that proactively communicate how they regulate generative AI.

UK’s Context-Based Approach to AI Regulation

In a government response published today to last year’s AI Regulation White Paper consultation, the UK outlined its context-based approach to AI regulation that empowers existing regulators to address AI risks in a targeted way, while avoiding rushed legislation that could stifle innovation.

The government also revealed its thinking on potential future binding requirements for developers building advanced AI systems, to ensure accountability for safety – a measure supported by 68% of the public.

The response stated that all key regulators will publish their approach to managing AI risks by 30 April, detailing their expertise and plans for the coming year. The aim is to give businesses and citizens confidence through greater transparency. However, 30% of the public still do not believe increased AI regulation is for their benefit, indicating that skepticism remains.

UK’s Investment in Responsible AI Development

The government also announced nearly £90 million to launch nine new research hubs across the UK and a US partnership focused on responsible AI development. The research hubs will tackle challenges such as improving healthcare, enhancing online privacy, and reducing carbon emissions.

Separately, £2 million in funding will support projects defining responsible AI across sectors such as policing, with 56% of the public wanting improved user education around AI.

Industry Reaction to AI Regulation

Tom Whittaker, Senior Associate at independent UK law firm Burges Salmon, said: “The technology industry will welcome the large financial investment by the UK government to support regulators continuing what many see as an agile and sector-specific approach to AI regulation.

“The UK government is trying to position itself as pro-innovation for AI generally and across multiple sectors. This is notable at a time when the EU is pushing ahead with its own significant AI legislation that the EU considers will boost trustworthy AI but which some consider a threat to innovation.”

UK’s Leadership in AI Safety and Development

Science Minister Michelle Donelan said the UK’s “innovative approach to AI regulation” has made it a leader in both AI safety and development. She said the agile, sector-specific approach allows the UK to “grip the risks immediately”, paving the way for it to reap AI’s benefits safely.

The wide-ranging funding and initiatives aim to cement the UK's position as a pioneer in safe AI innovation while assuaging public concerns. This builds on previous commitments such as the £100 million AI Safety Institute, which evaluates emerging models.

Greg Hanson, GVP and Head of Sales EMEA North at Informatica, commented: “Undoubtedly, greater AI regulation is coming to the UK. And demand for this is escalating – especially considering half (52%) of UK businesses are already forging ahead with generative AI, above the global average of 45%.

“Yet with the adoption of AI, comes new challenges. Nearly all businesses in the UK that have adopted AI admit to having encountered roadblocks. 43% say AI governance is the main obstacle, closely followed by AI ethics (42%).”

Overall, the package of measures amounts to over £100 million of new funding towards the UK’s mission to lead on safe and responsible AI progress. The approach seeks to balance harnessing AI’s potential economic and societal benefits with targeted regulation of its very real risks.
