Editorial

Data: Escaping the Algorithm Prison

As algorithms increasingly shape decisions in justice, welfare and employment, a growing number of people are finding themselves trapped in an “algorithm prison.” Snowflake’s Fawad Qureshi provides advice to unlock it.

Posted 31 July 2025 by Christine Horton


In a world increasingly governed by automated decision-making, there is an accompanying risk that people could be denied opportunities or the chance for meaningful change. The “algorithm prison” is a self-perpetuating system of machine-made decisions that can lock people out of jobs, housing, benefits and, in some cases, justice.

“We’re talking about a self-fulfilling loop,” explained Fawad Qureshi, global field CTO at Snowflake. “AI is consuming itself. Decisions made by one algorithm become input for the next. So, if you were rejected once, you’re likely to be rejected again – because the system assumes it must’ve been right the first time.”

For example, thousands of positions go unfilled in the modern job market, yet applicants say they can’t get hired. “On one end, you’ve got AI writing and submitting thousands of tailored CVs a day. On the other, hiring managers use AI to filter and rank them. So, you end up with AI talking to AI, no humans in the loop. What happens then? There’s no feedback, no second chance, no appeal. You’re stuck,” said Qureshi. “It is AI Autophagy in action.”
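To make the shape of that loop concrete, here is a toy sketch in Python. The applicant fields, the 0.2 penalty and the 0.5 threshold are all invented for the example; it illustrates the dynamic Qureshi describes rather than any real screening product.

```python
# Illustrative only: a toy model of the feedback loop described above.
# The field names, the 0.2 penalty and the 0.5 threshold are invented
# for this sketch; they do not describe any real hiring tool.

def screen_application(applicant: dict, history: dict) -> bool:
    """Return True if the applicant passes the automated screen."""
    score = applicant.get("fit_score", 0.0)
    # Earlier automated rejections are fed back in as a negative signal,
    # so one rejection lowers the odds of ever passing again.
    score -= 0.2 * history.get(applicant["id"], 0)
    return score >= 0.5

history: dict = {}
applicant = {"id": "cand-42", "fit_score": 0.45}

for round_number in range(1, 4):
    if screen_application(applicant, history):
        print(f"round {round_number}: advanced")
    else:
        history[applicant["id"]] = history.get(applicant["id"], 0) + 1
        print(f"round {round_number}: rejected")
# A near miss in round 1 becomes part of the record, so every later
# round is rejected by a wider margin: the loop "assumes it must've
# been right the first time".
```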

This loop of automated judgments isn’t just inefficient; it’s dangerous, especially when used in public services such as welfare, immigration, and criminal justice. From biased algorithms in US courtrooms to AI-driven tax fraud detection scandals in the Netherlands, the consequences have already proven devastating.

To break free from the algorithm prison, Qureshi proposes five urgent policy actions.

1. Enact “Glass Box” Transparency Laws

“Transparency leads to trust,” said Qureshi. “We need to see inside the machine.”

Glass box laws mandate that any algorithm used in public services must be fully transparent, from source code to training data to logic flow.

“This isn’t about trade secrets anymore,” said Qureshi. “When AI is used to decide who gets welfare or who is labelled high-risk in the justice system, you can’t hide behind proprietary software. That’s not just unethical, it’s unacceptable.”

Qureshi points to the EU’s AI Act, where only low-stakes systems like spam filters are allowed to remain opaque. “The higher the stakes – justice, healthcare, immigration – the clearer the system must be.”

2. Legislate an Unbreakable Right to Human Appeal

Every person, said Qureshi, must have the legal right to challenge an automated decision, and get a real human response.

“Imagine you’re denied housing benefits or wrongly marked as a fraud risk. You ask why, and the answer is: ‘Computer says no.’ That’s dystopian. We need a formal process where a trained, accountable human can look at your case, understand the nuance, and override the system,” he said.

This isn’t a radical concept; it’s rooted in principles already embedded in data protection laws like GDPR’s right to explanation. But implementation remains patchy and inconsistent.

3. Build “Right to Re-evaluation” into System Design

“If an algorithm gives you a low trust score or flags you as high-risk, that shouldn’t be a life sentence,” said Qureshi. “Just like prison is supposed to be about rehabilitation, algorithms need to be designed with second chances built in.”

He cites examples from recruitment. Some firms let candidates reapply after six months. But many systems, especially in criminal justice or finance, keep punitive labels for years. “You make a mistake, serve your time, and the system still sees you as guilty forever. That’s an algorithm prison.”

To fix this, every automated system should include mandatory sunset clauses or re-evaluation triggers to enable people to demonstrate change and move forward.
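As a rough illustration of what such a trigger could look like in code, the Python sketch below attaches an expiry to a risk flag. The 180-day window and the field names are assumptions made for the example, not a description of any existing system.

```python
# Illustrative only: one possible shape of a "sunset clause" on an
# automated risk flag. The 180-day window and field names are
# assumptions for this sketch.
from dataclasses import dataclass
from datetime import date, timedelta

SUNSET = timedelta(days=180)  # flag lapses unless actively re-confirmed

@dataclass
class RiskFlag:
    label: str
    issued_on: date

    def is_active(self, today: date) -> bool:
        return today - self.issued_on < SUNSET

def effective_label(flag: RiskFlag, today: date) -> str:
    # An expired flag drops out of decision-making and queues the case
    # for a fresh re-evaluation instead of persisting forever.
    return flag.label if flag.is_active(today) else "re-evaluation required"

flag = RiskFlag(label="high-risk", issued_on=date(2025, 1, 10))
print(effective_label(flag, date(2025, 3, 1)))   # high-risk (inside window)
print(effective_label(flag, date(2025, 9, 1)))   # re-evaluation required
```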

4. Mandate a “Human-in-Command” for Critical Decisions

High-stakes decisions like sentencing, parole, or benefit cuts should never be fully automated.

“AI should not make decisions; it should make recommendations,” said Qureshi. “There needs to be a named human official, legally accountable, with the power and training to interpret, challenge, and ultimately decide.”
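A minimal sketch of that pattern, again with invented field names, might look like the Python below: the model can only produce a recommendation, and nothing becomes a decision until a named official is recorded against the case.

```python
# Illustrative only: a minimal "human-in-command" pattern in which the
# model output is a recommendation and no action is taken until a named
# official records a decision. Field names are invented for the sketch.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Recommendation:
    case_id: str
    suggested_action: str              # produced by the model
    rationale: str                     # must be explainable to the reviewer
    decided_by: Optional[str] = None   # named, accountable human
    final_action: Optional[str] = None

def finalise(rec: Recommendation, official: str, action: str) -> Recommendation:
    """Only a recorded human decision turns a recommendation into an action."""
    rec.decided_by = official
    rec.final_action = action
    return rec

rec = Recommendation(
    case_id="benefit-7731",
    suggested_action="suspend payments",
    rationale="anomaly score above threshold",
)
assert rec.final_action is None        # the system alone cannot act
rec = finalise(rec, official="J. Smith", action="keep payments, request documents")
print(rec.decided_by, "->", rec.final_action)
```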

Recent events show why this matters. In the Dutch childcare benefits scandal, an automated fraud-detection system run by the tax authority wrongly accused thousands of families of fraud. Lives were ruined, and trust in government collapsed.

“If a human had looked at just a few of those cases, the injustice could’ve been stopped. But the system was on autopilot,” said Qureshi.

5. Define and Prohibit Algorithmic Use in “No-Go Zones”

Lastly, some areas are simply too sensitive for automated decision-making. “We need to define ‘no-go zones’ for AI,” said Qureshi. “Should an algorithm ever decide to remove a child from their parents? Or determine someone’s prison sentence? Absolutely not.

“Yet now we’re handing critical life decisions to machines that can’t understand sarcasm, let alone context.”

He references the case of Brisha Borden and Vernon Prater in Florida in the United States. Borden, a young Black woman, was flagged as high risk by an algorithm after a petty theft. Prater, a white man with a history of armed robbery, was labelled minimal risk. Two years later, Borden hadn’t reoffended. Prater was in prison again.

“The algorithm wasn’t just wrong – it was racially biased,” said Qureshi. “And yet it made the call.

“We have to stop pretending that machines are neutral. They reflect our data, our biases, our blind spots. The goal should never be to remove humans from the loop. It should be to support them, with technology that is accountable, explainable, and, above all, human.

“We can’t build a world that is devoid of second chances. To err is human; we need to make sure that error doesn’t become a life sentence.”


If you are interested in this article, why not register to attend our Think Data for Government conference, where digital leaders tackle the most pressing issues facing government today?

