Editorial

NHS’s biggest barrier to AI isn’t technology, says BCN

AI could help ease pressure on waiting lists, diagnostics and admin across the NHS. But without interoperable data, clear ethics and strong governance, its promise risks becoming another costly distraction, warns Fraser Dear, head of AI and innovation at BCN.

Posted 18 December 2025 by Christine Horton


Many people still assume the NHS operates as a single, unified organisation. According to Fraser Dear, head of AI and innovation at technology service provider BCN, that belief is one of the biggest obstacles to meaningful AI adoption.

“There’s a fallacy that the NHS is a single entity, and it’s not,” he said. “Every NHS entity has multiple operational areas that don’t usually collaborate or connect their data.”

Within a single county, Acute, Community and Teaching Foundation Trusts often run entirely different systems, use different data formats and rarely share platforms. As a result, data is fragmented, duplicated and difficult to reconcile – a fundamental problem when AI relies on clean, connected information.

“The challenge is that the NHS is not a ‘national’ health service. The operating systems are all different, and none of them talk to each other. That is a massive challenge because data equals AI,” said Dear.

Even basic tasks, such as matching patient identities across systems, become complex. “How do you identify Bob Smith when there are 17 Bob Smiths in 20 different systems?” he asked.
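The identity-matching problem Dear describes is what data engineers call record linkage. A minimal sketch of the idea follows; the field names and matching rules are illustrative assumptions, not a description of any NHS or BCN system (in practice the NHS number, where present and trusted, is the reliable key):

```python
# Illustrative record-linkage sketch: matching "Bob Smith" across systems.
# Field names and matching rules are assumptions for illustration only.

def normalise(record):
    """Reduce a patient record to comparable, canonicalised key fields."""
    return (
        record.get("nhs_number"),
        record.get("name", "").strip().lower(),
        record.get("dob"),
        record.get("postcode", "").replace(" ", "").upper(),
    )

def same_patient(a, b):
    """Treat two records as one patient if their NHS numbers match, or,
    failing that, if name, date of birth and postcode all agree."""
    na, nb = normalise(a), normalise(b)
    if na[0] and nb[0]:
        return na[0] == nb[0]
    return na[1:] == nb[1:]

# Three "Bob Smith" records from different systems, in different formats.
system_a = {"name": "Bob Smith", "dob": "1958-03-14", "postcode": "m1 1ae"}
system_b = {"name": "bob smith", "dob": "1958-03-14", "postcode": "M1 1AE"}
system_c = {"name": "Bob Smith", "dob": "1971-07-02", "postcode": "M1 1AE"}

print(same_patient(system_a, system_b))  # True: same person, different formats
print(same_patient(system_a, system_c))  # False: different date of birth
```

Even this toy version shows why fragmentation matters: the rule only works once every system agrees on how names, dates and postcodes are normalised.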

AI must support clinicians – not replace them

For Dear, the language used around AI is as important as the technology itself. Framing AI as a replacement for clinicians is a fast route to resistance.

“If we’re saying AI is going to be the ‘Doctor of the Future’, you will not get a clinical healthcare professional wanting to use that system,” he said. “Effectively, we’ll be doing the job for them.”

Dear draws an ethical line between assistance and autonomy. “AI should be about supporting clinical decision-making, not replacing it. Humans should be making decisions about humans from a healthcare perspective, not AI.”

That distinction matters because healthcare decisions are not purely statistical. While AI can highlight probabilities or suggest pathways, it lacks context, judgement and lived understanding of the patient.

“AI doesn’t know who I am,” said Dear. “Should AI be making clinical determinations about me? That’s a moral and ethical question.”

Speed is not the same as safety

A recent Channel 4 Dispatches episode illustrated both the promise and the risk of AI in healthcare. In a diagnostic test, an AI system correctly diagnosed four out of six patients in 25 minutes. A human doctor diagnosed all six correctly, but took 67 minutes.

“The human doctor was concerned that the AI missed some key indicators for more serious illnesses,” said Dear. “This level of risk is not the kind of risk we want to consider within the healthcare sector.”

However, he is clear that the time-saving potential should not be ignored. “There is insight into the more than 50 per cent time saving that could happen in advance of the cases coming to the clinician,” he said.

Used correctly, AI can reduce preparation time and surface insights – but final decisions must remain human-led. “AI needs to be the clinician’s assistant, not the lead,” said Dear.

Inequality risks go beyond biased data

Much of the debate around AI and health inequality focuses on bias in training data. Dear argues the problem is broader.

“AI systems can introduce further health inequalities if not considered carefully,” he said, pointing to digital exclusion as a major risk.

Smartphone apps, online portals and AI-driven interfaces can unintentionally exclude older people, those with accessibility needs, communities with poor internet connectivity, or individuals who cannot afford digital devices.

“If anything, it’s going to make it harder for someone to interact with the healthcare system if we use AI exclusively to engage with them,” he said.

“These observations are not intended to say, ‘we should not do these things’,” Dear said. “But they are considerations that need to be made to ensure inequalities are not compounded.”

Treat AI like a member of staff

Governance, not technology, is where many AI programmes fail. Dear believes NHS organisations should manage AI in the same way they manage people.

“We need to create a governance structure that treats AI like any other member of staff,” he said.

That means defined roles, restricted access and continuous oversight. “You wouldn’t take a new member of staff and grant them global admin rights to access all systems,” he said. “Your data access and controls, and AI, should be handled in the same way.”
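Dear’s “treat AI like a member of staff” principle maps naturally onto ordinary role-based access control. A hedged sketch of what scoping an AI service account like a new starter might look like; the role names, permissions and checking logic are invented for illustration:

```python
# Sketch of role-based access for an AI service account.
# Role and resource names are illustrative assumptions, not any real system.

ROLES = {
    # An AI assistant gets narrowly scoped, read-only permissions...
    "clinical_assistant_ai": {"read:appointments", "read:referrals"},
    # ...never the global admin rights Dear warns against.
    "global_admin": {"read:*", "write:*"},
}

def allowed(role, action, resource):
    """Grant access only if the role explicitly holds the permission."""
    perms = ROLES.get(role, set())
    return f"{action}:{resource}" in perms or f"{action}:*" in perms

print(allowed("clinical_assistant_ai", "read", "appointments"))      # True
print(allowed("clinical_assistant_ai", "write", "patient_records"))  # False
```

The point of the analogy is that the access question for an AI system is answered by the same machinery, and the same review cadence, as for a human hire.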

AI systems should be regularly reviewed to ensure they are still behaving as expected. “Is it doing what we asked it to do? Is it still doing it in six months?” asked Dear. “If not, we need to go back through that development loop.”

Crucially, accountability always sits with the organisation. “It is the organisation that is accountable for the outcome, not the AI,” he said.

Automation first, AI second

Dear is clear that many operational gains attributed to AI are actually the result of simpler process automation – and that’s no bad thing.

“Waiting list optimisation is about using process automation,” he said. Automating appointment rebooking or cancellations can save time and money without complex AI models.

Similarly, administrative tasks such as generating letters can use templated, rules-based systems, sometimes enhanced by generative tools. “Really that is more process automation using AI to do content creation,” he said.
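The distinction Dear draws between automation and AI is easy to see in code: appointment letters are a template plus eligibility rules, with no model required. A minimal sketch, where the template wording and field names are assumptions for illustration:

```python
# Rules-based letter generation: a template plus a simple rule, no AI model.
# Template wording and field names are illustrative only.

from string import Template

LETTER = Template(
    "Dear $name,\n\n"
    "Your appointment at $clinic on $date has been $action.\n"
    "Please contact the booking team if this is no longer suitable.\n"
)

def appointment_letter(patient):
    """Pick the action by rule, then fill the template."""
    action = "rebooked" if patient["missed_previous"] else "confirmed"
    return LETTER.substitute(
        name=patient["name"],
        clinic=patient["clinic"],
        date=patient["date"],
        action=action,
    )

letter = appointment_letter(
    {"name": "Bob Smith", "clinic": "Cardiology",
     "date": "4 March", "missed_previous": True}
)
print(letter)
```

A generative model might later polish the wording, but, as Dear notes, the time and cost savings come from the automation itself.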

True AI value emerges when analysing large volumes of structured and unstructured data to support diagnostics and predict outcomes – but only once the foundations are in place.

For Trusts eager to launch AI pilots, Dear said: “If a Trust doesn’t have a data strategy and isn’t in the cloud, don’t start thinking about AI pilots. You don’t have the foundation of your ‘AI building’ right.”

Many strategies are outdated, he added. “Some clients are working from data strategies that were conceived five years ago,” he said. “The world has changed so much… these strategies are simply no longer relevant.”
