Editorial

The invisible workforce: What you don’t know can hurt you

As organisations rush to deploy agentic AI tools, many are creating a fast-growing “invisible workforce” operating beyond traditional governance controls. In this Q&A, SailPoint’s David Tyrrell explains why AI agents are emerging as a new class of digital identity, how shadow AI is already creating security and compliance risks, and what organisations must do to regain visibility and control over non-human access to sensitive data.

Posted 14 May 2026 by Christine Horton


Agentic AI is on a lot of people’s lips right now. At the extreme end, an agent is a fully autonomous Large Language Model (LLM) running on a loop. Many of us use chatbots on our phones for advice, but those chatbots don’t have access to the tools needed to act on our behalf. With agentic technology inside your organisation, that changes. These agents reason, act, and pursue the goals you’ve given them, and crucially, they can access your data, tools, and applications. We essentially have a new class of identity among us, which means we urgently need to start thinking about data governance.

In this Q&A, David Tyrrell, Principal Strategist at SailPoint, discusses the concept of the invisible workforce, the present-day reality of agentic AI, and how organisations can establish clear governance over non-human identities.

Given the access to data that some agents have, why is the cybersecurity risk so high?

Fundamentally, AI is driven by data. The productivity and competitive gains from agentic AI are very real, and people have moved quickly to capture them, often before the necessary identity, security, and governance frameworks were in place. The technology simply outran the controls. Think about the data an agent has access to: a broadly scoped agent might have exactly the same access as a human knowledge worker. Large language models are ultimately driven by the input we give them, so if they receive instructions that run counter to our intentions, the security impact could be devastating.

We often hear about the difference between hype and hope with AI. Is the idea of ‘shadow AI’ a real, present-day threat, or is this more of a future-looking concern?

It is absolutely a present-day reality. In all of our organisations, there are curious people who are experimenting and downloading agents to supercharge their productivity. Some workers may be completely unaware of the broader risks and are using public chatbots for work-related issues, potentially uploading sensitive information.

To give you a concrete example, take Claude Code: you can download it and run it as an agent on your laptop right now. With its “computer use” feature, it can see your screen and your apps, and potentially access all your files. There are people right now running autonomous agents on a loop to manage their inboxes or plan their days. This is not hype.

You’ve described AI as a ‘new form of identity’. How is an AI agent different from a traditional software application?

If you look at the anatomy of agentic AI, the brain is the LLM, the tools are its hands, and the harness controls its behaviour. A human identity at work is governed by organisational policies, training, and a general understanding of acceptable behaviour. An AI agent has no innate understanding of any of this.

A traditional piece of software is something you launch and forget about. An agent is an identity that can run 24 hours a day; it has memory and context, and it interacts with humans. In identity security, we worry about who has access to what when they join, move, or leave a company. Now, we also have to ask what processes and tools an agent has access to. If an agent doesn’t intuitively understand a concept like separation of duties, it might execute a restricted process simply because it has the technical access to do so.
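
To make that separation-of-duties point concrete, here is a minimal sketch of the kind of check a governance layer could run before an agent exercises an entitlement. The function name, the entitlement strings, and the policy shape are all hypothetical illustrations, not any specific product’s API:

    # Hypothetical separation-of-duties check for an agent identity.
    # Entitlement names and the policy shape are illustrative only.

    RESTRICTED_PAIRS = {
        # An identity that can raise a purchase order must not also approve one.
        ("raise_purchase_order", "approve_purchase_order"),
    }

    def violates_separation_of_duties(granted: set[str], requested: str) -> bool:
        """True if `requested` plus the existing grants would let one identity
        perform both halves of a restricted pair."""
        for a, b in RESTRICTED_PAIRS:
            if (requested == a and b in granted) or (requested == b and a in granted):
                return True
        return False

    # An agent with raw technical access to both actions would sail through a
    # simple permission check, but this policy check catches the combination:
    agent_entitlements = {"raise_purchase_order"}
    print(violates_separation_of_duties(agent_entitlements, "approve_purchase_order"))  # True

The point is that a human approver internalises this rule through training and policy; for an agent, the rule only exists if it is encoded and enforced somewhere outside the model.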

Are there different types of AI agents we should be worried about?

Not all agents are created equal, and there is a real spectrum. On one end, you have professionally coded agents being built top-down in well-governed harnesses. On the other end, we are seeing the rapid rise of the personal agent. There are open-source and downloadable agents out there, and the barrier to entry is shrinking every month. You no longer need to be a tech guru to build or use one, which makes the tension between sanctioned, top-down agents and personal, unsanctioned agents a key issue.

What are the key takeaways and advice you’d give to organisations trying to manage this?

From a security and governance standpoint, you need to adopt a “crawl, walk, run” approach (a minimal sketch of the discovery stage follows the list):

  • Crawl (Discovery): Find the agents in your organisation. Build a registry, establish clear ownership, and understand exactly what data they can access.
  • Walk (Governance): You need to wrap policies around these agents to control their data access. Understand the relationships between your human and non-human users, and govern them under a unified model.
  • Run (Enforcement): You must monitor agent behaviour. Because agents operate at a speed humans cannot keep up with manually, you need enforcement mechanisms in place that can quickly involve security teams and shut things down in the event of a breach or anomaly.
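
As a rough illustration of the “crawl” stage, the sketch below models a minimal agent registry with an accountable owner and explicit data scopes. Every name and field here is an assumption for illustration, not any vendor’s schema:

    # Minimal illustrative agent registry for the "crawl" (discovery) stage.
    # All field names are assumptions, not any vendor's schema.
    from dataclasses import dataclass, field

    @dataclass
    class AgentRecord:
        agent_id: str            # unique identifier for the non-human identity
        owner: str               # accountable human owner
        purpose: str             # why the agent exists
        data_scopes: list[str] = field(default_factory=list)  # what it may touch

    registry: dict[str, AgentRecord] = {}

    def register_agent(record: AgentRecord) -> None:
        """Add a discovered agent; refuse agents without an accountable owner."""
        if not record.owner:
            raise ValueError(f"Agent {record.agent_id} has no accountable owner")
        registry[record.agent_id] = record

    register_agent(AgentRecord(
        agent_id="inbox-triage-01",
        owner="jane.doe@example.com",
        purpose="Summarise and triage incoming mail",
        data_scopes=["mail:read", "calendar:read"],
    ))

The “walk” and “run” stages then layer policy checks (like the separation-of-duties sketch earlier) and runtime monitoring over these same records.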


If you are interested in this article, why not register to attend our Think AI for Government conference, where digital leaders tackle the most pressing AI-related issues facing government today?

