Where do you see government organisations currently in their AI journeys?
Government organisations are at a pivotal point in their AI journeys, particularly with generative AI. Across the public sector, departments are actively embracing innovation through strategic experimentation and thoughtful exploration of AI capabilities, and that enthusiasm is evident in the numerous proofs of concept (PoCs) being developed and tested across different agencies.

Early success stories are emerging as some government departments move their AI projects from PoC into live, real-world operation. These early adopters are paving the way, demonstrating the practical applications and benefits of AI in government. However, while many departments are actively experimenting with AI, most are still working to scale their pilot projects into full production systems.
While organisations are rightly taking a measured approach to generative AI implementation, particularly in sensitive areas, building the right foundations for widespread adoption is now a major focus. The growing availability of secure, scalable AI tools, combined with increasing practical experience and successful use cases, positions the public sector for accelerated AI adoption in the coming years. The question is shifting from “if” to “how” AI can best be leveraged to enhance public services and operational efficiency.
UK public sector organisations have made significant progress with AI, and many departments are already seeing real benefits from their work. As more success stories come to light and teams share what they’ve learnt, other departments can follow their lead with confidence. The next few years look promising as teams build on and scale what they’ve started, leading to smarter, more efficient public services. With increasingly capable AI tools becoming available and knowledge growing within government teams, the future of AI in the public sector is full of opportunity.
What challenges do they face?
Government organisations are working through several challenges as they operationalise AI at scale. The main one is learning how to manage the risks of generative AI effectively. Organisations are wrestling with the intricacies of establishing robust operational frameworks that align with their vision and values, whilst ensuring that the decisions and responses generated by AI systems are appropriate and acceptable.
The rapid pace of technological advancement in AI presents another hurdle. Public sector bodies are striving to keep up to date with the latest developments, such as the release of new large language models (LLMs) and the emergence of agentic AI, and this constant evolution necessitates an adaptive approach to AI implementation. Organisations are navigating the delicate balance of managing AI-generated content, ensuring that it maintains the appropriate tone, messaging, and factual accuracy. They’re also carefully choosing which AI models work best for different tasks while keeping costs in check – something particularly important when working with public funds.
Organisations are learning to implement continuous improvement processes that can harness the potential of the latest models and technological advancements. This requires not only technical expertise but also a cultural shift towards embracing iterative development and ongoing optimisation. As public sector bodies continue to explore and expand their use of AI, they also contend with the ethical implications and regulatory requirements surrounding AI deployment in government services. This multifaceted challenge demands a holistic approach that combines technical proficiency, strategic foresight, and a commitment to responsible AI use in service of the public good.
What internal considerations do organisations need to think about?
As organisations progress in their AI journeys, it’s crucial to establish effective mechanisms for sharing lessons learnt across teams. Most public sector departments have now experimented with AI to some degree, and although many are still transitioning from PoC to production, there is already a wealth of valuable experience to be shared. This knowledge exchange is vital for overcoming common hurdles and accelerating AI adoption throughout the organisation. Moreover, it’s essential to focus on building reusable assets that can benefit the entire organisation in key areas such as security, governance, monitoring, model selection, operations, and responsible AI.
Do you have any practical steps or advice for how organisations can do generative AI at scale?
To effectively implement generative AI at scale, organisations should adopt a two-pronged approach, focusing on both development team practices and organisational strategies. At the development level, teams should be encouraged to experiment swiftly and frequently. With services like Amazon Bedrock, which provides API access to a variety of LLMs, the barrier to entry for AI experimentation has fallen significantly. Teams should take advantage of this accessibility to compare different LLMs, identifying the most suitable and cost-effective options for their specific use cases. It’s also crucial that developers incorporate techniques to enhance accuracy and efficiency, such as prompt engineering, retrieval augmented generation (RAG), few-shot learning, and fine-tuning; these methods can significantly improve the performance and reliability of AI solutions.
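As a rough illustration of how low that barrier now is, the sketch below (an assumption-laden example, not a prescribed pattern) sends the same prompt to two candidate models through Amazon Bedrock’s Converse API using the boto3 SDK; the model IDs and region are illustrative and would need to be enabled in your own account.

```python
import boto3

# Illustrative sketch: compare two candidate models on the same prompt.
# Model IDs and region are examples only and must be enabled in your account.
bedrock = boto3.client("bedrock-runtime", region_name="eu-west-2")

candidate_models = [
    "anthropic.claude-3-haiku-20240307-v1:0",
    "amazon.titan-text-express-v1",
]

prompt = "Summarise this HR policy for a new starter: ..."

for model_id in candidate_models:
    response = bedrock.converse(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig={"maxTokens": 512, "temperature": 0.2},
    )
    answer = response["output"]["message"]["content"][0]["text"]
    usage = response["usage"]  # token counts feed directly into cost comparisons
    print(f"--- {model_id} ---")
    print(answer[:300])
    print(f"tokens in/out: {usage['inputTokens']}/{usage['outputTokens']}")
```

Running the same harness across prompt variants, RAG-augmented prompts, or few-shot examples gives teams a quick, like-for-like view of quality and cost before committing to a model.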
At the organisational level, several key frameworks and processes need to be established. A robust monitoring approach should be defined, outlining the tools and metrics to be used in overseeing AI applications. In addition, a comprehensive responsible AI framework should be developed, clearly articulating how the organisation will ensure all AI solutions adhere to ethical standards and organisational values; this framework should specify the tools and mechanisms to be employed in maintaining responsible AI practices. Furthermore, organisations should proactively define a framework for data access and usage in AI-based solutions.
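To make the monitoring point concrete, here is a minimal sketch, assuming boto3 and a CloudWatch namespace and dimension names that are purely illustrative, of how every model invocation could emit latency and token-usage metrics for a central dashboard or alarm to consume.

```python
import boto3

# Minimal monitoring sketch: record latency and token usage for each call.
# The CloudWatch namespace and dimension names below are assumptions.
bedrock = boto3.client("bedrock-runtime")
cloudwatch = boto3.client("cloudwatch")

def monitored_converse(model_id: str, prompt: str) -> str:
    response = bedrock.converse(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    dimensions = [{"Name": "ModelId", "Value": model_id}]
    cloudwatch.put_metric_data(
        Namespace="GenAI/Applications",  # assumed namespace
        MetricData=[
            {"MetricName": "LatencyMs", "Dimensions": dimensions,
             "Value": response["metrics"]["latencyMs"], "Unit": "Milliseconds"},
            {"MetricName": "OutputTokens", "Dimensions": dimensions,
             "Value": response["usage"]["outputTokens"], "Unit": "Count"},
        ],
    )
    return response["output"]["message"]["content"][0]["text"]
```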
By establishing these guidelines early on, development teams can confidently validate their proposed solutions against approved processes, understanding which data sets they can utilise and which models are sanctioned. This approach not only streamlines the development process but also instils confidence that solutions developed within these parameters will be viable for production deployment at a later stage.
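A lightweight way of encoding such guidelines, shown purely as a hypothetical example with invented model and data set names, is a published allow-list that teams can check a proposed solution against before any build work starts.

```python
# Hypothetical allow-list check; all model and data set names are invented.
APPROVED_MODELS = {"anthropic.claude-3-haiku-20240307-v1:0"}
APPROVED_DATASETS = {"hr-policies-published", "service-guidance-public"}

def validate_proposal(model_id: str, datasets: set) -> list:
    """Return a list of policy issues; an empty list means the proposal is within policy."""
    issues = []
    if model_id not in APPROVED_MODELS:
        issues.append(f"Model not sanctioned: {model_id}")
    for name in sorted(datasets - APPROVED_DATASETS):
        issues.append(f"Data set not approved for AI use: {name}")
    return issues

print(validate_proposal("anthropic.claude-3-haiku-20240307-v1:0",
                        {"hr-policies-published", "case-files-restricted"}))
# ['Data set not approved for AI use: case-files-restricted']
```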
By implementing these practical steps, public sector organisations can create an environment that fosters innovation while maintaining necessary controls and standards. This balanced approach will enable them to harness the power of generative AI effectively and responsibly, ensuring that AI solutions are not only technologically advanced but also aligned with organisational objectives and public sector responsibilities.
Do you have any examples of this in the public sector?
A compelling example of generative AI implementation in the UK public sector comes from the Central Digital and Data Office (CDDO), now part of Government Digital Service (GDS). Last year, the CDDO collaborated with AWS’s Prototyping team to develop a generative AI proof of concept aimed at automating the handling of employee queries by the internal human resources team. This project was executed over an eight-week period, with the dual objectives of validating the use case and demonstrating potential enhancements to existing processes.
The project adhered to AWS’s approach to innovation, utilising the ‘working backwards’ methodology. This customer-centric strategy begins by defining the desired customer experience and then works backwards to determine the steps necessary to achieve that goal. This ensures clarity of thought regarding the solution to be built and maintains a strong focus on end-user needs throughout the development process.
The team actively involved end users during the PoC stage, incorporating their feedback at each sprint. This user-centric approach not only enhanced the solution’s relevance and usability but also fostered a sense of ownership among potential users. The success of this approach is evidenced by the recent deployment of the solution to production, where it now serves multiple teams within the Cabinet Office and has reduced the volume of employee queries reaching human resources teams, allowing them to focus on questions that only an expert could answer.
This case study exemplifies how public sector organisations can effectively leverage generative AI to improve internal processes and service delivery. It highlights the importance of rapid prototyping, user involvement, and a methodical approach to innovation. The CDDO’s experience serves as a valuable blueprint for other government departments looking to explore and implement AI solutions, demonstrating that with the right approach, significant improvements can be achieved in a relatively short timeframe.





