Editorial

Binary Over Flesh: The Case for Ethically Architected Machine Learning

As artificial intelligence accelerates into every corner of modern life, Sherlock di Schiavi warns that an obsession with innovation is outpacing our ethical and architectural foundations, and argues that without security, accountability, and moral design we risk constructing a fragile digital civilisation on unstable ground.

Posted 5 November 2025 by Christine Horton


For the past three years I have been writing and speaking at conferences about the architecture of artificial intelligence systems, and with some regularity I have been accused of being overly concerned, of warning too loudly, of taking things too seriously. Yet concern, when justified, is not paranoia; it is foresight. I want to take a moment to explain why I am concerned, and why I believe that what many call innovation today may, in fact, be setting the stage for long-term systemic instability.

Let me begin with a confession: I am not anti-AI. In truth, I am a strong proponent of well-architected machine learning. I would even say that I prefer binary logic over human behaviour. Machines, at least in theory, follow the rules they are given. Humans, by contrast, are messy, emotional, and far too unpredictable for my liking. They are driven by bias, ego, and sentiment, and those flaws have historically caused more harm than any algorithm ever has.

Mathematical, data-driven decision-making holds a certain purity. It is consistent. It can be tested. It can be proven wrong and corrected. Human decision-making, on the other hand, is riddled with cognitive distortions that go unchecked. We have seen this across every major system, from governance and economics to environmental policy and security design. Humans are the most dangerous variable in any equation.

So my criticism is not of artificial intelligence itself, but of how it is being built, deployed, and trusted. Governments and large organisations are rushing headlong into machine learning adoption with a religious zeal that borders on recklessness. In the race to appear modern, to digitise everything, the most fundamental questions of ethics and security have been left unanswered. The frameworks that should underpin AI, those that determine accountability, data integrity, and secure system design, are too often treated as afterthoughts.

The result is a growing infrastructure of systems that look intelligent but are, in truth, fragile. Their intelligence is statistical mimicry, not sentience. Their behaviour is a mirror of their training data, which is itself full of human prejudice and error. We are deploying fallible systems at a scale never before attempted, and we are doing so with an optimism that borders on negligence.

The near future is therefore predictable. We will see a surge of data breaches, model manipulations, and privacy violations. Attackers will not need to outsmart humans; they will simply exploit weaknesses in the AI supply chain: poisoned datasets, insecure APIs, unvalidated prompts, and poorly configured cloud services. These systems are being plugged directly into sensitive decision-making environments, from healthcare and finance to law enforcement and national infrastructure, without adequate threat modelling or resilience testing.

Every era of technological progress has brought its share of vulnerability, but what makes this different is the scale and opacity. In traditional systems, a breach exposes data. In machine learning systems, a breach can expose or alter the logic itself, the very mechanism of reasoning. Once that trust is lost, it cannot be easily rebuilt.

Beyond the immediate security risks lies a second, slower crisis: the collapse of meaningful occupational pathways. Machine learning systems are not just automating tasks; they are automating judgment. The professions that once relied on accumulated experience (law, medicine, journalism, even elements of engineering) are now facing encroachment from algorithms trained on their historical outputs. If that training data reflects systemic bias, then we are encoding our past errors as permanent features of our future.

The ethical dimension of this shift is profound. We risk creating an economy where efficiency trumps understanding, and where human capability is reduced to the management of machines rather than the pursuit of knowledge. We are told this will free us from repetitive labour. But what happens when the only tasks left are either too trivial or too complex for the average worker to perform? What happens when the middle ground, the space where most of society lives, disappears?

The truth is that the current trajectory of machine learning is neither sustainable nor secure. Without rigorous design principles grounded in ethics, mathematics, and cyber security, we are building a digital civilisation on sand. We need security by design, privacy by default, and auditability as a non-negotiable foundation. These principles must be built into every stage of the AI lifecycle, from model conception to retirement.

We also need to reintroduce humility into the discussion. AI is not magic; it is mathematics and probability, nothing more. It can extend human capability but it cannot replace human wisdom. The architects of these systems must recognise that technology is not morally neutral; it reflects the intent and ignorance of its makers.

The paradox of our time is that the same people who champion AI as the saviour of humanity are often the ones neglecting to secure it. They speak of responsible AI but fund systems with no formal verification, no adversarial testing, and no ethical oversight. It is as if we have learned nothing from the last fifty years of computing history, from the worms, breaches, and failures that taught us to design with caution.

In the end, my so-called concern is not about technology itself but about the culture surrounding it. We are building faster than we are thinking. We are deploying before we understand. And unless we start to value architecture and ethics as much as innovation and speed, we may find that our binary creations inherit not our intelligence, but our carelessness.

The future does not have to be dystopian. It can still be intelligent, ethical, and secure, but only if we choose to build it that way.

If you would like to hear more from Sherlock, he is speaking at our conference Think Digital Government on 12 November. You can register for a ticket here.


If you are interested in this article, why not register to attend our Think AI for Government conference, where digital leaders tackle the most pressing AI-related issues facing government today?

