The promise of agentic AI – autonomous systems capable of navigating complex services, like the GOV.UK Agentic AI Companion – relies on seamless, programmatic access to vast amounts of high-quality, interoperable data. The current reality, however, falls short:

- Non-Interoperability: The UK’s State of Digital Government Review indicates that 70 percent of the public sector data landscape is neither coordinated nor interoperable. Systems cannot communicate, creating data silos that agents cannot bridge.
- Deployment Blockers: Global research supports this diagnosis. McKinsey’s 2025 State of AI report highlights that, for the majority of organisations experimenting with agents, the most commonly cited blocker to scaled deployment (with adoption stalling at roughly 10 percent) is data infrastructure, not policy or risk aversion. Similarly, a reported 92 percent of US federal agency pilots stall, never moving beyond proof of concept into production.
This is not a policy failure; it’s a foundation problem. Governance frameworks presuppose data readiness that often simply doesn’t exist.
The Five Rungs of Agent-Ready Data
For data to be truly “agent-ready,” it must meet increasingly stringent utility standards. The journey from static data to autonomous utility can be visualised as a ladder with five essential rungs.
Most organisations are currently stalled between Rungs 1 and 2, and the jump to Rung 3 is where most agentic pilots fail; a rough readiness check against these rungs is sketched after the table.
| Rung | Description | Agent’s Functional Need | Current Challenge |
| --- | --- | --- | --- |
| 1: Catalogued | Data is discoverable, described, and locatable. | The agent must first know what datasets exist and where they reside. | Data is hidden in silos or undocumented, rendering it invisible to the agent. |
| 2: Quality-Assured | Explicit standards for accuracy, completeness, and timeliness are defined and measured. | The agent needs reliable data to guarantee the correctness of autonomous decisions. | Agents won’t pause to check for accuracy; they act on outdated or inconsistent information, leading to faulty advice. |
| 3: Accessible | Data is exposed via API-first, programmatic interfaces for machine-to-machine retrieval. | The agent must retrieve data in seconds, without requiring any human extraction or manual query. | Reliance on human-mediated extraction (forms, emails) makes data unavailable for autonomous action. |
| 4: Observable | Metadata enables detailed audit trails, provenance tracking, and decision tracing. | The agent’s actions must be reconstructible to understand which data were accessed, inferred, and acted upon. | Lack of provenance prevents accountability and makes it impossible to address citizen complaints about wrong advice. |
| 5: Interoperable | Shared data standards and semantics enable meaningful exchange across systems, departments, and organisations. | The agent must seamlessly join up data from multiple sources (e.g., housing, employment, and local residency). | The necessity for constant, bespoke translation between internal silos is unsustainable for cross-functional use cases. |
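To make the ladder concrete, here is a minimal sketch of how a catalogue entry might be scored against the five rungs. It is an illustration only, not any GOV.UK or departmental standard: the DatasetRecord fields, the quality SLA shape, and the highest_rung logic are all assumptions invented for this example.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical catalogue entry. Field names are illustrative; a real
# record (e.g. a DCAT entry) would carry far richer metadata.
@dataclass
class DatasetRecord:
    identifier: str                        # Rung 1: catalogued and locatable
    description: str = ""
    quality_sla: Optional[dict] = None     # Rung 2: e.g. {"max_age_days": 7}
    last_refreshed_days_ago: Optional[int] = None
    api_endpoint: Optional[str] = None     # Rung 3: machine-to-machine access
    provenance_log: bool = False           # Rung 4: audit trail available
    schema_standard: Optional[str] = None  # Rung 5: shared semantics

def highest_rung(record: DatasetRecord) -> int:
    """Return the highest consecutive rung this dataset satisfies (0-5)."""
    checks = [
        # 1: Catalogued - discoverable and described
        bool(record.identifier and record.description),
        # 2: Quality-assured - an SLA exists and the data meets it
        record.quality_sla is not None
        and record.last_refreshed_days_ago is not None
        and record.last_refreshed_days_ago
        <= record.quality_sla.get("max_age_days", 0),
        # 3: Accessible - a programmatic interface exists
        record.api_endpoint is not None,
        # 4: Observable - access and use can be audited
        record.provenance_log,
        # 5: Interoperable - a shared schema gives the data meaning elsewhere
        record.schema_standard is not None,
    ]
    rung = 0
    for passed in checks:
        if not passed:
            break
        rung += 1
    return rung

housing = DatasetRecord(
    identifier="housing-waiting-list",
    description="Local authority housing register, updated weekly",
    quality_sla={"max_age_days": 7},
    last_refreshed_days_ago=3,
)
print(highest_rung(housing))  # 2 - catalogued and quality-assured, but no API
```

Scoring the rungs consecutively reflects the point above: an API over undocumented, unassured data does not make it agent-ready, and most real records would stop at 1 or 2.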
Unanswered Accountability Questions
The failure to establish this data foundation introduces critical, unresolved risks that governance frameworks must contend with:
- Accountability in Error: If a citizen receives incorrect advice because the underlying dataset was outdated (a Rung 2 failure), who holds responsibility: the data owner, the agent deployment team, or the vendor? Current frameworks are ill-equipped for this nuanced accountability question.
- Oversight vs. Autonomy: Agents are designed for autonomous action, yet democratic accountability demands human responsibility for decisions affecting citizens. Defining the boundary between necessary autonomy and mandatory oversight remains a significant challenge.
- Rollback Strategy: Organisations lack clear plans for discovery and remediation if an agent is found to have been dispensing incorrect advice for an extended period. How are the affected citizens traced, and how are the downstream consequences of those decisions undone? (A provenance-based trace of the kind Rung 4 enables is sketched after this list.)
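Rung 4 observability is what makes such a rollback tractable even in principle. The sketch below assumes a hypothetical audit log in which every agent decision records which dataset version it read; given the window during which a dataset is known to have been wrong, the affected decisions, and the citizens behind them, can at least be enumerated. The record shape and field names are invented for illustration.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical audit record: the minimum an agent decision would need
# to log for Rung 4 observability. Field names are illustrative only.
@dataclass
class DecisionRecord:
    decision_id: str
    citizen_ref: str        # pseudonymous reference, not raw personal data
    dataset_id: str
    dataset_version: str
    decided_at: datetime

def decisions_affected(
    log: list[DecisionRecord],
    dataset_id: str,
    bad_versions: set[str],
    window_start: datetime,
    window_end: datetime,
) -> list[DecisionRecord]:
    """Enumerate decisions that relied on a faulty dataset version
    during the period it is known to have been wrong."""
    return [
        rec
        for rec in log
        if rec.dataset_id == dataset_id
        and rec.dataset_version in bad_versions
        and window_start <= rec.decided_at <= window_end
    ]

# Usage: version v17 of the (hypothetical) housing register was wrong
# between 1 and 8 March; trace every decision that relied on it.
log = [
    DecisionRecord("d-001", "c-42", "housing-waiting-list", "v17",
                   datetime(2025, 3, 2)),
    DecisionRecord("d-002", "c-43", "housing-waiting-list", "v18",
                   datetime(2025, 3, 9)),
]
affected = decisions_affected(
    log, "housing-waiting-list", {"v17"},
    datetime(2025, 3, 1), datetime(2025, 3, 8),
)
print([rec.citizen_ref for rec in affected])  # ['c-42']
```

Without those provenance columns the query is impossible: an organisation may know the advice was wrong, but cannot say who received it or which decisions to unwind.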
Constructive Optimism
The work required to achieve agent-ready data (cataloguing, quality assurance, API strategies, and interoperability standards) is not fundamentally new. Agentic AI simply raises the urgency and magnifies the cost of not completing the foundational data work that should have been undertaken years ago.
The necessary investments are the same ones we’ve long identified. Agents do not require a new kind of data infrastructure; they demand the successful implementation of the data strategies already on the books. By framing this challenge as one of accelerated foundation-building, we can move past pilot purgatory and lay the essential groundwork for responsible, scalable autonomous systems.