Editorial

Q&A: Why AI is forcing governments to rethink digital sustainability

Seto Adenuga, AI Governance & Ethics Manager at Kainos, explains how real-world AI deployment in public services is exposing hidden operational, social and environmental trade-offs – and why governments must treat sustainability as a core design constraint, not a late-stage compliance exercise.

Posted 19 February 2026 by Christine Horton


As governments accelerate the adoption of AI across public services, questions of efficiency and automation are being joined by a more fundamental concern: long-term digital sustainability. From vendor dependency and governance burden to public trust and environmental impact, the realities of implementation are reshaping how policymakers think about responsible technology, says Seto Adenuga, AI Governance & Ethics Manager at Kainos.

How has the experience of implementing AI in public services reshaped the way governments think about digital sustainability as a core principle for future technologies?

Implementing AI in public services has exposed a gap between how governments talk about sustainability and how digital systems are actually designed and run. Early AI programmes were often framed around efficiency and automation with sustainability treated as an adjacent concern rather than a core design principle.

What has shifted thinking is the realisation that digital systems don’t just deliver services; they shape behaviours, dependencies and institutional capability over time. AI has forced governments to confront questions about long-term maintainability, vendor reliance, skills erosion and public trust.

What unintended environmental, social or operational impacts of AI have emerged in the public sector, and how should those lessons inform how we approach future disruptive technologies?

Operationally, many public bodies underestimated the long-term cost of maintaining AI systems – not just financially, but in terms of skills, governance effort and reliance on third parties. Socially, there have been cases where automated systems unintentionally reinforced inequality or reduced access to human decision-makers, particularly for vulnerable groups. Environmentally, compute and data demands were rarely factored into business cases.

The key lesson is that future technologies need to be assessed not just on whether they work, but on what they displace, who they affect over time, and what ongoing effort they require to operate responsibly.

If governments were designing their AI strategies today with sustainability as the primary lens, what would they do differently from the outset?

They would start by being much clearer about purpose and limits. Instead of asking where AI could be applied, they would ask where it should be applied – and where it shouldn’t. They would assess whether AI is the appropriate solution for the use case, and whether any alternative options could achieve the same outcome.

Sustainability-led strategies would also need to explicitly acknowledge trade-offs. In some contexts, sustainability might mean slower systems, lower accuracy, or higher upfront cost. In others, such as safety-critical services, accuracy and reliability would rightly take precedence, with sustainability defined differently.

Finally, strategies would need to focus more on lifecycle governance, including monitoring, review and exit. Designing for decommissioning is just as important as designing for innovation.

How can public services embed both responsibility and sustainability into digital systems from the earliest design stages, rather than treating them as compliance or policy add-ons?

The shift needs to happen at the decision-making level, not just the policy level. Responsibility and sustainability are embedded when teams are required to explain why a system is being built, who is accountable for it over time, and how impacts will be monitored once it is live. Lightweight impact assessments, clear escalation routes, and named ownership are often more effective than complex frameworks introduced too late.

Crucially, sustainability needs to be treated as a design constraint, not a reporting requirement. When it’s treated as a compliance add-on, it arrives too late to shape decisions.

How does digital sustainability intersect with public trust, and why is long-term societal value becoming more important than short-term technological efficiency?

Public trust is inherently linked to sustainability. People are far more willing to accept digital systems when they believe those systems are understandable, accountable, and aligned with long-term public value.

Short-term efficiency gains mean little if they result in opaque decisions or reduced human oversight. AI has shown that trust is lost through a lack of transparency about how decisions are made and how harm is addressed.

Governments are increasingly recognising that long-term societal value – fairness, resilience, legitimacy – matters more than performance metrics. Sustainable systems are those the public can continue to trust, not just the ones that deliver quick wins.

What practical frameworks or decision-making models should governments put in place now to ensure future disruptive technologies are introduced in a way that is ethical, sustainable and resilient?

Rather than inventing entirely new structures, governments should focus on strengthening a few core capabilities:

  • Clear accountability models that assign ownership for decisions, not just delivery
  • Impact-based assessments that consider social, environmental and operational effects
  • Explicit trade-off evaluation to support informed decision-making
  • Continuous review mechanisms rather than one-off approvals

What should governments actually be measuring to understand whether AI and future technologies are delivering sustainable public value – not just performance or cost savings?

Performance and cost do matter, but they are incomplete indicators. Governments should also be measuring:

  • Whether systems remain understandable and contestable over time
  • The extent of human oversight and meaningful intervention
  • Long-term dependency on vendors or proprietary technologies
  • Differential impacts on different groups in society
  • The ongoing cost of governance, not just deployment

Sustainable public value is demonstrated when systems remain defensible, trusted and adaptable over time, not merely when they meet technical benchmarks.

If you are interested in this article, why not register to attend our Think AI for Government conference, where digital leaders tackle the most pressing AI-related issues facing government today.