The UK Government has set out plans to regulate artificial intelligence (AI) with new guidelines.
The government has issued a whitepaper that proposes five principles as part of a blueprint “to drive responsible innovation and maintain public trust” in AI.

“As AI continues developing rapidly, questions have been raised about the future risks it could pose to people’s privacy, their human rights or their safety,” said a summary by the Department for Science, Innovation and Technology. “There are concerns about the fairness of using AI tools to make decisions which impact people’s lives, such as assessing the worthiness of loan or mortgage applications.”
Additionally, it said organisations are currently being held back from using AI to its full potential because “a patchwork of legal regimes causes confusion and financial and administrative burdens for businesses trying to comply with rules.”
As such, the new proposals “will help create the right environment for artificial intelligence to flourish safely in the UK.”
The principles are:
- Safety, security and robustness: applications of AI should function in a secure, safe and robust way where risks are carefully managed
- Transparency and explainability: organisations developing and deploying AI should be able to communicate when and how it is used, and explain a system’s decision-making process at a level of detail appropriate to the risks posed by the use of AI
- Fairness: AI should be used in a way which complies with the UK’s existing laws, for example the Equality Act 2010 or UK GDPR, and must not discriminate against individuals or create unfair commercial outcomes
- Accountability and governance: measures are needed to ensure there is appropriate oversight of the way AI is being used and clear accountability for the outcomes
- Contestability and redress: people need to have clear routes to dispute harmful outcomes or decisions generated by AI
The government said it will avoid “heavy-handed legislation which could stifle innovation” and take an adaptable approach to regulating AI. Instead of giving responsibility for AI governance to a new single regulator, the government will empower existing regulators – such as the Health and Safety Executive, Equality and Human Rights Commission and Competition and Markets Authority – to come up with tailored, context-specific approaches that suit the way AI is actually being used in their sectors.
Over the next 12 months, regulators will issue guidance to organisations, as well as other tools and resources like risk assessment templates, to set out how to implement these principles in their sectors. When parliamentary time allows, legislation could be introduced to ensure regulators consider the principles consistently.
Supporting regulators will be critical
Ashley Williams, partner at law firm Mishcon de Reya, said the new AI whitepaper can be neatly summarised by the following statement: no new legislation, no new regulator.
“The proposal is to implement a framework underpinned by five principles – very similar to existing OECD principles – to guide and inform the responsible development and use of AI in all sectors of the economy,” said Williams.
“It identifies that major emerging technologies such as autonomous vehicles and large language models (such as ChatGPT) are unlikely to be directly ‘caught’ within the remit of any single regulator, and that there is further work to be done to identify these gaps and address them.
“It also articulates the headache of allocating responsibility across existing supply chain actors within the AI life cycle and therefore proposes not to intervene at this stage. Contracts will need to continue to do the heavy lifting of allocating responsibility.
“For many, this will stand in stark contrast to the EU’s rule-based approach. The proposed UK approach has some upsides, such as flexibility balanced with a pragmatic approach, but several downsides, most notably the continuing lack of certainty.
“For the UK approach to really work, it is important to acknowledge that some regulators will be under-resourced and lack AI experience to really deliver. Others may be too heavy-handed in their approach without a clear steer on how they should implement the framework.
“Supporting regulators will be critical in making this approach workable and ensuring specific sector guidance is issued in a timely manner with real cooperation across the regulators. Regulators will be supported by a centralised function which will require substantive investment in terms of resource and expertise.”
The UK’s AI industry employs more than 50,000 people and contributed £3.7 billion to the economy last year. Britain is home to twice as many companies providing AI products and services as any other European country.
