Editorial

Legacy and innovation: ensuring technology debt doesn’t sink digital transformation

Tanium’s Chief IT Architect in EMEA, Oliver Cronk, writes a guest blog post on legacy systems and innovation.

Posted 18 August 2020 by Matt Stanley


The harsh reality is that the hot technologies of today will become the legacy systems of tomorrow. That’s why digital transformation doesn’t have a completion date. It’s a continuous process of adopting new systems and phasing out those that cannot keep pace with modern digital business reality. This continuous approach is particularly difficult in the public sector, where few organisations have had massive transformation funding. Most public sector organisations have layers of different systems that have been built up over time. Transformation programmes often run over budget or get cut due to changes in political priorities, leading to complex architecture that straddles different environments: a blend of older, mission-critical on-premises systems and a host of hybrid cloud-hosted, cloud-native and SaaS capabilities that aren’t easily monitored, managed or secured by legacy on-premises tools.

Unfortunately, this complexity, and the technical debt it often generates, can kill digital transformation projects. The answer, of course, requires layers of people, process and technology to drive insight and control over complex, heterogeneous environments. For organisations that need to manage and secure both legacy and emerging technology, broad, platform-agnostic visibility is key. Unified endpoint management and security can help to provide that visibility, as well as control, across heterogeneous IT environments.

Oliver Cronk – Chief IT Architect EMEA at Tanium

Technology Archaeology

The pressure to innovate and deliver faster, coupled with staff turnover, means that organisational memory and documentation of systems can often be lacking. A kind of “technology archaeology” is therefore required to help public sector IT bosses understand what they have today and where they can build on top of it to digitally transform. You need to be able to answer questions like:

  • What can and should we rip and replace, and where should we keep the lights on and augment?
  • What did my predecessors and colleagues put in place that is undocumented and unknown? What does that server that isn’t on any asset list actually do?
  • What technology assets have completely fallen off the radar of IT management systems (such as the CMDB) because they rely on manual registration processes that get bypassed during crunch, “just do it” periods? (A minimal reconciliation sketch follows below.)
  • What connects to what? If I take something out, upgrade it or shift it to the cloud (or suddenly need to support thousands of remote users), what impact will it have on upstream and downstream systems?
  • What tech do we have that is no longer supported by the vendor, our in-house team or suppliers?
  • What state is it in? What is the level of IT hygiene (patch status, administrator accounts, security configuration and so on)?

Similar questions were faced by Network Rail in a recent hybrid cloud project.
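
Answering the question about assets that have fallen off the CMDB’s radar often starts with reconciling data sources that have drifted apart. As a purely illustrative sketch (the file names and column layout here are assumptions, not any particular tool’s export format), comparing a live endpoint scan against a CMDB export might look something like this:

```python
# Minimal sketch: compare what an endpoint scan actually found with what the
# CMDB thinks exists. The file names and column layout are assumptions for
# illustration, not any particular tool's export format.
import csv


def load_hostnames(path: str, column: str) -> set[str]:
    """Read one column of hostnames from a CSV export into a set."""
    with open(path, newline="") as f:
        return {
            row[column].strip().lower()
            for row in csv.DictReader(f)
            if row.get(column)
        }


if __name__ == "__main__":
    discovered = load_hostnames("endpoint_scan.csv", "hostname")  # what is really out there
    recorded = load_hostnames("cmdb_export.csv", "hostname")      # what the CMDB knows about

    unknown = discovered - recorded   # running on the network, but on no asset list
    stale = recorded - discovered     # on the asset list, but not seen on the network

    print(f"{len(unknown)} assets discovered but missing from the CMDB:")
    for host in sorted(unknown):
        print(f"  {host}")

    print(f"{len(stale)} CMDB records with no matching live asset:")
    for host in sorted(stale):
        print(f"  {host}")
```

In practice the discovered list would come from continuous endpoint discovery rather than a one-off CSV export, but the principle is the same: the gap between what is actually running and what is recorded is where technology archaeology starts.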

Don’t get killed by your complexity

The difficulty, of course, is continuing to improve public services by leveraging the power and flexibility of emerging tech, whilst keeping the challenges and risks of older systems in check. Impact assessments of change projects are becoming harder, as it’s not always obvious which systems depend on each other. Projects are often delayed by a system that no one knew was consuming data from the platform being upgraded.

If organisations don’t have a complete picture of their IT environment, pushing ahead with digital transformation can have severe consequences. It may:

  • Expose failings in your legacy backend, if you don’t sanitise and manage input
  • Expose vulnerabilities, unless you understand what legacy systems are running and whether they need to be patched, rebooted (to apply patches), and/or mitigated in other ways
  • Accidentally DDoS your legacy systems through public APIs, unless you manage capacity and think about integration architecture, perhaps leveraging event-driven buffering and eventual consistency (see the sketch below)
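
To make that last point concrete, here is a minimal, illustrative sketch of event-driven buffering in front of a capacity-limited legacy system. The function names and the capacity figure are assumptions for illustration only; the point is simply that the public-facing side accepts requests immediately, while a worker drains them at a rate the legacy backend can tolerate:

```python
# Minimal sketch of event-driven buffering in front of a capacity-limited
# legacy system. All names (handle_public_request, forward_to_legacy) and the
# capacity figure are hypothetical placeholders, not any specific product's API.
import asyncio

LEGACY_MAX_PER_SECOND = 5                            # assumed legacy capacity
buffer: asyncio.Queue = asyncio.Queue(maxsize=1000)  # bounded buffer


async def handle_public_request(payload: dict) -> None:
    """Accept the request immediately; callers see eventual consistency."""
    await buffer.put(payload)        # applies back-pressure if the buffer fills


async def forward_to_legacy(payload: dict) -> None:
    """Stand-in for the real call into the legacy system."""
    await asyncio.sleep(0.01)
    print(f"forwarded {payload}")


async def drain_buffer() -> None:
    """Drain the buffer at a rate the legacy system can handle."""
    while True:
        payload = await buffer.get()
        await forward_to_legacy(payload)
        buffer.task_done()
        await asyncio.sleep(1 / LEGACY_MAX_PER_SECOND)


async def main() -> None:
    worker = asyncio.create_task(drain_buffer())
    # Simulate a burst of public API traffic that would otherwise hit the
    # legacy system all at once.
    await asyncio.gather(*(handle_public_request({"id": i}) for i in range(20)))
    await buffer.join()              # wait until the burst has been drained
    worker.cancel()


if __name__ == "__main__":
    asyncio.run(main())
```

Callers get an acknowledgement straight away and the data becomes consistent shortly afterwards, which is often an acceptable trade-off when the alternative is overwhelming a backend that was never sized for internet-scale traffic.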

Visibility everywhere

There’s no shortage of digital ambition in government. GDS head of technology policy Rhiannon Lawson declared a few months ago that “cloud first is here to stay”, while the National Cyber Security Centre (NCSC) has been developing guidance for organisations taking their first steps in the public cloud. But public sector IT leaders must first have visibility and control over all of their assets, and understand how they map and fit together, if they are to avoid the risk of project failure, outages and potential threats.

It’s somewhat concerning, for example, that over half (53%) of those we spoke to for a recent study said that a lack of visibility into their network leaves them vulnerable to cyber-attacks.

This is where Tanium can help. 

Our unified endpoint management and security platform enables organisations to gain complete insight into, and control of, their legacy and digital technology assets, including laptops, servers, virtual machines, containers and cloud infrastructure. Our unique linear chain architecture means we can do this at incredible speed and scale, delivering data in near real time to keep CMDBs up to date, vulnerable endpoints patched and much more. With Tanium, public sector IT leaders get the insight they need to ensure digital transformation projects aren’t derailed by technical debt.

This is especially important in the current climate. As recent events have shown, crisis moments can come out of nowhere and blind-sided IT teams are often stuck in reactive mode. That’s when you need real-time visibility into performance bottlenecks and creaking architecture, to pre-empt problems and make sure they don’t become the next fire to fight after the home working challenge has been tackled.

About Oliver:

Oliver Cronk is the Chief IT Architect for EMEA at Tanium. Oliver is an IT architecture and DevOps leader with over 18 years’ experience in a variety of IT roles across energy, government, telecommunications, banking and professional services.

Oliver joined Tanium from Deloitte, where he spent three years as the Chief Architect for Risk Advisory, responsible for driving innovation, digital transformation and the architecture of client solutions across a variety of risk areas. Oliver has led on the architecture of on-premises, private cloud, Azure, Office 365 and AWS initiatives and is AWS certified. He leads on the Tanium Reference Architecture for IT Operations and Cyber capabilities.

Oliver is a seasoned speaker and advisor on Innovation, Architecture and DevOps. Oliver has a BSc (Hons) in Computer Science from the University of Essex and has been a BCS Chartered IT Professional since 2012.


If you are interested in this article, why not register to attend our Think Digital Government conference, where digital leaders tackle the most pressing issues facing government today.


Register Now