Editorial

Generative AI: A world of possibilities for the public sector

Deepak Shukla, Data and AI GTM Lead at Amazon Web Services (AWS), explores the opportunities for Generative AI in the public sector – and what it needs to do to make it happen.

Posted 30 November 2023 by Christine Horton


Everyone’s talking about Generative AI. What kind of work has AWS been doing in this area?

Generative AI (Gen AI) is on a par with some of the biggest technological innovations in history and is destined to change the way we run businesses, work, and deliver services to our customers. But Gen AI is not something new for us at Amazon; we’ve been using this technology for many years. If you’re using search on Amazon.com, it’s powered by large language models (LLMs), which are the foundation of generative AI applications. Similarly, Amazon Alexa has been using generative AI to continuously train through millions of conversations.

But what has changed recently is that this technology, which was sitting within large tech companies like Amazon and Netflix, is now becoming more accessible to smaller businesses and organisations battling with limited technology talent. Organisations can now see a path to adopting these technologies and building new services and applications that enhance their offering, deliver better value to their customers and improve efficiency.

What are some of the possibilities that you see around Generative AI in terms of government and public sector applications?

We’ve seen generative AI making an impact in three broad areas. The first is enhancing citizen experiences through applications such as Gen AI-powered chatbots, virtual assistants and hyper-personalisation. The second is boosting employee productivity and creativity through the adoption of conversational search, summarisation, and content creation. The third is optimising business processes by leveraging data augmentation and document processing applications.

Of these three, I’ve seen the public sector focusing on boosting employee productivity and exploring ideas on how to make the civil service more efficient, so it can relieve the burden on staff and deliver more with its existing workforce.

In terms of possibilities for the public sector, healthcare will benefit the most. Within healthcare, one of the most promising use cases for Gen AI is accelerating drug discovery and research by using models to create novel protein sequences with specific properties for the design of antibodies, enzymes, and vaccines, as well as gene therapy. Gen AI could also be used to create synthetic patient and healthcare data, which is useful for simulating clinical trials or studying rare diseases without access to large real-world datasets.

National security agencies also generate and consume vast amounts of data in a variety of forms, including text, cyber, geospatial, video, imagery, and signals. The most obvious use case is summarising and asking questions of specific textual data, synthesising or corroborating other analyses. Gen AI can also be used to create new images or even 3D environments to aid in training or exercise preparations.

So, the art of the possible with Gen AI is immense, with numerous use cases out there already and others in the pipeline. The key is to ensure we have the right data foundations and underlying infrastructure to scale these use cases.

What role does data play in the long term, sustainable value that can come from Generative AI initiatives?

It is important for organisations, as they envision their Gen AI ambitions, to have strong data foundations at the centre of their strategy.

There are several ways that organisations will leverage data in their generative AI applications. When you want to build Gen AI applications that are unique to your business needs, your organisation’s data is your differentiator. Every company has access to the same foundation models, but the companies that will be successful in building Gen AI applications with real business value are those that do so using their own data.

Data is the difference between generic Gen AI applications and those that know your business and your customers deeply. Using data for Gen AI doesn’t mean that you have to build your own model. While some companies will build and train their own LLMs with vast amounts of data, many more will use their organisational data to fine-tune foundation models for their unique business needs, or to add context to prompts through Retrieval Augmented Generation (RAG). Leveraging proprietary data effectively will be the key to long-term success from Gen AI initiatives.
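To make the RAG pattern mentioned above concrete, here is a minimal, illustrative sketch (not a specific AWS or vendor API): organisational documents are embedded, the most relevant ones are retrieved for a query, and they are prepended to the prompt sent to a foundation model. The `embed` and `call_foundation_model` functions are hypothetical placeholders supplied by the caller.

```python
from typing import Callable, List
import math


def cosine_similarity(a: List[float], b: List[float]) -> float:
    """Similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def retrieve(query: str,
             documents: List[str],
             embed: Callable[[str], List[float]],
             top_k: int = 3) -> List[str]:
    """Return the top_k documents most similar to the query."""
    query_vec = embed(query)
    ranked = sorted(documents,
                    key=lambda doc: cosine_similarity(embed(doc), query_vec),
                    reverse=True)
    return ranked[:top_k]


def answer_with_rag(query: str,
                    documents: List[str],
                    embed: Callable[[str], List[float]],
                    call_foundation_model: Callable[[str], str]) -> str:
    """Ground the model's answer in retrieved organisational context."""
    context = "\n\n".join(retrieve(query, documents, embed))
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
    return call_foundation_model(prompt)
```

In practice the placeholder embedding and model calls would be replaced by a managed embedding model and foundation model, and the documents would be indexed in a vector store rather than scored on the fly; the point of the sketch is simply that the organisation’s own data, not the model itself, supplies the differentiating context.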

We’re hearing a lot about responsible adoption of Generative AI. What does that mean in practical terms for organisations?

This is the area that needs the most attention. The key here is to be transparent about the risks and to recognise the threats that are coming with AI.

We were talking earlier about the importance of data in all this. We’ve seen examples where poor training data selection, or biases within the data, can perpetuate bad decision-making processes, potentially leading to some citizens being discriminated against. A lack of control over training data can also lead to abusive content being generated. Poor data, or even the deliberate manipulation of models when a specific trigger phrase appears, can lead to very convincing but factually inaccurate outputs, known as hallucinations. Photorealistic videos and images, or convincing audio, can be created by bad actors and used to convince an audience of an event that simply didn’t happen. And employees can unwittingly enter confidential data into public generative AI applications, only to find this data is used to train the model to the benefit of competitors.

All of these challenges can lead to the same issues we see with social media today: the deliberate or inadvertent weaponisation of algorithms to create divisions in society. At AWS, we always refer to a responsible AI framework with seven core pillars when we advise public sector bodies on shaping their AI governance. There are many attributes of a responsible AI programme to consider, almost all requiring human oversight, with AI augmenting human judgement rather than replacing it. A good programme goes beyond committee-based risk management by baking responsibility into your culture, for example by using teams with diverse backgrounds, perspectives, skills, and experiences when developing ML systems. If the makeup of your team does not reflect your citizen base across dimensions such as gender, race, ethnicity, age, and politics, biases or toxic phrases will inevitably creep into the inferences your models make.

Along with driving adoption of AI across government departments, the government has a role to play in regulating and ensuring the responsible adoption of technology and AI by the commercial sector. Just as government regulates financial organisations through agencies such as the Financial Conduct Authority (FCA), it will also have to establish a committee or organisation to regulate the responsible use of data, technology, and AI across the commercial sector. Through this governing body it could audit and control what organisations can and cannot do.

What advice would you give public sector organisations weighing up the use of Generative AI?

Having engaged with various organisations across the public sector, I see some being sceptical about how this will support their mission. Given the early success we’ve already seen from this technology, and industry forecasts, I encourage leaders to innovate and identify high-value use cases that AI could help them deliver.

It’s important that we don’t try to get this right first time, and instead adopt an experiment-and-learn mindset. I’ve also seen some organisations adopting LLMs for problems they could easily solve with machine learning and deep learning algorithms, so it’s important to choose the right, cost-effective solution from the variety of options available.

The government should look at revisiting the national data strategy to support its AI ambitions. Similarly, there is an opportunity for public sector bodies to build a data strategy that supports AI and generative AI applications. Organisations also need to assess their cloud strategy and underlying infrastructure to support generative AI workloads, ensuring the right controls are in place to manage cloud consumption costs. Any organisation adopting AI needs the right governance in place to ensure responsible adoption of this technology, and a skills programme to up-skill its existing workforce for AI-augmented tasks. In summary, I’d advise organisations to have a holistic framework and strategy for AI adoption at scale.

Reimagine the UK public sector through collective innovation at AWS Public Sector Day 2024.


If you are interested in this article, why not register to attend our Think Digital Government conference, where digital leaders tackle the most pressing issues facing government today.

