Editorial

Could AI deliver on the broken promise of “many eyes” in cybersecurity?

A discussion at this week’s InfoBip Shift developer conference put the limits of human oversight under the spotlight.

Posted 17 September 2025 by Christine Horton


For years, the tech industry has relied on the “many eyes” principle: if lots of people review the same open source software, someone will spot vulnerabilities before attackers can exploit them. More eyes, in theory, meant more safety.

However, a discussion at this week’s InfoBip Shift developer conference put the limits of that human oversight under the spotlight.

“It turned out that many eyes didn’t make that many eyes,” explained AI and cybersecurity expert Daniel Miessler.

While open source was supposed to be safer because “the whole world… can build open source,” the reality is that most packages receive little or no meaningful review, he added.

“There’s too many packages that are changing too fast, and there’s not enough people,” said Miessler. Even when humans do attempt review, the complexity of modern code makes it nearly impossible to catch everything.

So the question was raised: if “many eyes” failed in practice, what can replace it?

The case for AI as scalable oversight

The response was that AI may finally make the “many eyes” idea possible – not by relying on armies of human reviewers, but by deploying machines.

Miessler gave the example of trying to manage 20,000 suppliers with a four-person vendor risk team: adding one more person won’t make a dent, he said. But AI systems can monitor all 20,000 continuously, correlating data far beyond what people can handle.

Traditional approaches to security assurance – periodic audits, point-in-time code reviews, vendor questionnaires – also leave long blind spots, said Miessler. “You have to be constantly gathering… constantly reviewing.”

The solution he envisions is thousands of lightweight agents embedded across IT systems, feeding into a central system that can answer real-time questions such as: Which suppliers have changed their security posture? What new vulnerabilities appeared in our code this week? Has any binary file been tampered with?
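To make the idea concrete, here is a minimal sketch of what one such lightweight agent might look like: a script that baselines the hashes of a few binaries and reports any drift to a central collector. The endpoint URL, watched file paths, and check interval are illustrative assumptions for this sketch, not anything Miessler described.

```python
import hashlib
import json
import time
import urllib.request

# Hypothetical central collector and watchlist -- illustrative only.
CENTRAL_ENDPOINT = "https://security-hub.example.com/api/findings"
WATCHED_BINARIES = ["/usr/local/bin/app-server", "/usr/local/bin/worker"]
CHECK_INTERVAL_SECONDS = 300  # assumed five-minute cadence

def sha256_of(path: str) -> str:
    """Hash the file in chunks so large binaries don't exhaust memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def report(finding: dict) -> None:
    """POST a JSON finding to the (hypothetical) central system."""
    req = urllib.request.Request(
        CENTRAL_ENDPOINT,
        data=json.dumps(finding).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

def main() -> None:
    # Take an initial baseline, then re-hash on an interval and report
    # any drift -- streaming oversight rather than a point-in-time audit.
    baseline = {path: sha256_of(path) for path in WATCHED_BINARIES}
    while True:
        time.sleep(CHECK_INTERVAL_SECONDS)
        for path, expected in baseline.items():
            observed = sha256_of(path)
            if observed != expected:
                report({
                    "agent": "binary-integrity",
                    "path": path,
                    "expected": expected,
                    "observed": observed,
                    "timestamp": time.time(),
                })

if __name__ == "__main__":
    main()
```

The point of the sketch is the shape, not the code: each agent does one narrow check continuously and pushes structured findings to a central system, which is what lets that system answer live questions across thousands of suppliers and components.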

For business leaders, the shift is about moving from snapshots to streaming oversight. Just as finance teams monitor cash flow daily rather than waiting for an annual statement, security teams need live visibility rather than a compliance checklist once a year.

On the concern about AI hallucinations, Miessler said that right now, companies may only be reviewing “one percent of what we’re supposed to be looking at.” If AI enables coverage of 50 percent with some false positives, that is still a massive step forward.

He drew an analogy to self-driving cars: people mistrust them because they occasionally fail, yet their accident rate is already lower than that of human drivers. “We expect the level of perfection from AI that we absolutely want to expect and never get from humans.”

A supply chain problem, not just a software problem

Importantly, Miessler stressed that this is not just about open source software. Business software is usually composed of open source, proprietary, and commercial components assembled together. Once those components are bundled into a vendor’s application, organisations rarely know what’s inside.

That makes software security a supply-chain challenge. During the session, Sasa Zdjelar, chief trust officer at supply chain security company ReversingLabs, also noted: “It feels like the problem within open source… is deeply, deeply related to the problem we already have with software supply chain security. It’s just another component in the supply chain.”
