
UK Government launches new ‘AI Playbook’

07 March 2025

In this article we provide an overview of the UK Government’s Artificial Intelligence Playbook (“AI Playbook”), published in February 2025. The AI Playbook replaces the Generative AI Framework for HMG, published in January 2024, and seeks to guide the public sector on harnessing "the power of a wider range of AI technologies safely, effectively and responsibly".  

What are the AI Playbook's aims?

The AI Playbook builds on both the 2021 National AI Strategy, which sets out a 10-year vision to make the UK a global AI superpower, and the 2023 white paper 'A pro-innovation approach to AI regulation', which outlines the UK Government’s proposals for implementing and regulating AI. The AI Playbook provides comprehensive guidance and support on the safe and effective use of artificial intelligence in the public sector by setting out 10 core principles.

What are the ten principles?

The AI Playbook establishes 10 principles to guide the "safe, responsible and effective use of artificial intelligence in governmental organisations," which are:

Principle 1: You know what AI is and what its limitations are

The results AI can produce are dependent on the tools and information provided. There are also limitations to its use, such as bias, and there is no guarantee as to the accuracy of outputs. Consequently, a good understanding of what AI is will be paramount to successful use. The AI Playbook provides information on understanding AI, its limitations and the various branches such as machine learning and generative AI.

Principle 2: You use AI lawfully, ethically and responsibly

The use of AI will bring about compliance, legal and ethical considerations. As such, the AI Playbook directs governmental organisations to seek legal advice on the use and development of AI systems early on, particularly in relation to data protection, so that issues which may cause harmful effects can be addressed. Equalities implications, fairness and biases should be taken into consideration during the development stage to enable the production of ethical outputs.

Principle 3: You know how to use AI securely

There are security risks to consider when using AI, which can be minimised by building in safeguards and technical controls. Governmental organisations must ensure that any AI developed is secure, safe and resilient to cyber attacks and complies with: the UK Government's Cyber Security Strategy; the Secure by Design principles; and the UK Government's Cyber Security Standard.

Principle 4: You have meaningful human control at the right stages

Governmental organisations need to monitor AI behaviour and ensure that systems are in place to deter negative uses. The AI Playbook highlights the importance of implementing human intervention, especially with high-risk decisions influenced by AI, and recommends undertaking full product testing before deployment and implementing regular checks of the AI during deployment.

Principle 5: You understand how to manage the full AI life cycle

AI will have a full product lifecycle, just like any other piece of technology, which will include set-up and development; updates to systems; and resources for day-to-day management. Governmental organisations should monitor the AI and implement mitigations to address potential drift and bias. Robust testing is recommended to catch these problems.

Principle 6: You use the right tool for the job

There is a myriad of AI solutions available in the market, and selecting the correct tool for different tasks can be crucial in harnessing the benefits of AI. In the public sector, the use of AI solutions is encouraged where "they can allow organisations to develop new or faster approaches to the delivery of public services, can provide a springboard for more creative and innovative thinking about policy and public sector problems, and help your team with time-consuming tasks." Organisations should know the limits of AI solutions and be aware that such solutions may not always be the most effective tool for the job.

Principle 7: You are open and collaborative

The AI Playbook encourages governmental organisations to be transparent about the use of AI solutions and engage with each other to address similar issues, as well as share ideas, code and infrastructure. The AI Playbook also recommends, where possible, engagement with wider societal groups including communities, non-governmental organisations, academic groups and public representative organisations. Collaborating with these groups will assist in ensuring the AI solution being developed will produce tangible benefits to individuals and society as a whole.

Principle 8: You work with commercial colleagues from the start

The AI Playbook advises reaching out to commercial colleagues early on in order to understand how to use AI from a commercial perspective and ensure the ethical use of AI across all systems, particularly in relation to using AI developed by a third party.

Principle 9: You have the skills and expertise needed to implement and use AI solutions

Governmental organisations that use AI solutions should understand, and have in place, the technical and ethical requirements that underpin them. Everyone involved in the use of AI solutions, from decision makers and policy makers to senior responsible owners, should develop the skills needed to "understand the risks and opportunities of AI, including its potential impact on organisational culture, governance, ethics and strategy."

Principle 10: You use these principles alongside your organisation’s policies and have the right assurance in place

Governmental organisations are reminded to adhere to their own policies and structures, particularly in relation to cyber security and data protection, and they must also regularly review AI practices. The use of AI should be supported by documented review and escalation processes, and setting up an AI review board or programme-level board is encouraged.

Concluding comments on practical considerations

It is important for governmental organisations to develop a clear AI strategy from the outset, and AI implementation should meet the AI Playbook principles as well as align with the policies of each organisation. The success of any AI implementation depends on regulatory and legal compliance, with specific consideration needed for data protection and privacy laws. Organisations will need to regularly evaluate the performance of implemented AI and update it as required, paying close attention to human intervention where needed. It is crucial for organisations to ensure the ethical use of AI, including transparency and consideration of potential biases to avoid discrimination.

The AI Playbook is a pivotal resource for the public sector, providing a robust framework with key considerations to work towards success, and its publication signifies the UK Government’s intention to encourage and support AI use to drive growth in the public sector and beyond.

With assistance from Rima Legah, who is a trainee in our Commercial Team.
