National Grid is exploring how artificial intelligence (AI) can help its risk and compliance teams monitor cybersecurity threats and regulatory changes across complex infrastructure systems.

Speaking at the ServiceNow AI Summit in London, Jody Elliott, head of risk and sustainability at the energy infrastructure operator, said AI is becoming an important tool for analysing operational data at a scale that human teams struggle to manage.
Utilities like National Grid run vast digital estates supporting electricity transmission networks in the UK and parts of the United States. That environment generates large volumes of data across hundreds of technology projects, making it difficult for risk teams to maintain oversight.
“In large organisations you’ve got multiple agile projects running. From a risk perspective, how do I have my sight of every story and feature in every planning session and every backlog that’s continuously running?” said Elliott.
Embedding risk specialists directly within every project is impractical, he added, leaving organisations reliant on governance frameworks and policies to monitor development activity. Generative AI (gen AI) offers a way to analyse those environments more efficiently.
“Generative AI in particular gives us the opportunity to analyse all that unstructured data,” said Elliott, explaining that the technology can highlight emerging risks across development backlogs and operational systems.
Rather than reviewing thousands of updates manually, AI tools can identify the most significant issues and flag them for investigation. That allows risk teams to focus on the areas where security or regulatory problems are most likely to occur.
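The kind of triage described above can be illustrated with a deliberately simple sketch. A trivial keyword-scoring pass stands in for the generative AI analysis Elliott describes; the backlog items, risk terms and scoring are all invented for illustration, not National Grid's actual tooling.

```python
# Illustrative stand-in for AI-driven backlog triage: flag items that
# mention risk-relevant topics so human reviewers see them first.
# Terms, items and scoring are invented examples.

RISK_TERMS = {"authentication", "pii", "third-party", "encryption", "payment"}

def triage_backlog(items):
    """Score each backlog item by risk-relevant terms and return
    only the items worth a human risk review, riskiest first."""
    flagged = []
    for item in items:
        words = set(item["description"].lower().split())
        score = len(words & RISK_TERMS)
        if score > 0:
            flagged.append({"id": item["id"], "score": score})
    return sorted(flagged, key=lambda f: f["score"], reverse=True)

backlog = [
    {"id": "STORY-101", "description": "Refactor login to use new authentication provider"},
    {"id": "STORY-102", "description": "Update button colours on the dashboard"},
    {"id": "STORY-103", "description": "Store customer PII with encryption at rest"},
]

for item in triage_backlog(backlog):
    print(item["id"], item["score"])
```

A real deployment would replace the keyword match with a language model reading the full unstructured story text, but the shape of the workflow is the same: score everything, surface the few items that need human eyes.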
Prioritising cybersecurity threats
National Grid is also testing AI tools designed to improve vulnerability management across its technology estate.
The organisation already collects extensive endpoint data from systems across its network, including information on operating systems and patch levels. However, correlating that data with information about newly disclosed vulnerabilities can be time-consuming.
“You could do it with a human, but it would take you some time and you’d be doing it as a full-time job,” said Elliott.
To address this, the company developed an AI agent that automatically combines endpoint data with information on known vulnerabilities and exploit reports. The system can analyse those data sources in near real time and identify the most critical security risks.
“We built the agent in about an hour,” said Elliott. Once operational, it took roughly “90 seconds to run and output the results.”
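The correlation step the agent performs might look something like the following sketch: join an endpoint inventory against a feed of disclosed vulnerabilities, keep endpoints whose patch level is below the fixed version, and surface exploited-in-the-wild issues first. All data, field names and logic here are assumptions for illustration; National Grid's actual agent is not public.

```python
# Hypothetical sketch: correlate endpoint data with vulnerability
# advisories and exploit reports. Hosts, CVE IDs and patch levels
# are invented examples.

def correlate(endpoints, advisories):
    """Return (hostname, cve) pairs for endpoints exposed to a known
    vulnerability, with actively exploited issues sorted first."""
    findings = []
    for adv in advisories:
        for ep in endpoints:
            if (ep["os"] == adv["os"]
                    and ep["patch_level"] < adv["fixed_in"]):
                findings.append((ep["host"], adv["cve"]))
    # Exploit reports drive priority: known-exploited CVEs come first.
    exploited = {a["cve"] for a in advisories if a["exploited"]}
    return sorted(findings, key=lambda f: f[1] not in exploited)

endpoints = [
    {"host": "scada-01", "os": "windows", "patch_level": 10},
    {"host": "hr-laptop-7", "os": "windows", "patch_level": 12},
]
advisories = [
    {"cve": "CVE-0000-0001", "os": "windows", "fixed_in": 11, "exploited": True},
    {"cve": "CVE-0000-0002", "os": "windows", "fixed_in": 13, "exploited": False},
]

print(correlate(endpoints, advisories))
```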
Operational teams then spent several days validating the findings to confirm the accuracy of the analysis. A key advantage of the approach is the ability to incorporate business context into cybersecurity decisions.
“If you overlay that with HR data,” said Elliott, organisations can identify whether vulnerable devices belong to senior executives or critical operational teams.
That context allows security teams to prioritise remediation efforts based on potential business impact rather than technical severity alone.
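One way to picture that overlay: weight each finding by who uses the affected device, so a moderate vulnerability on an executive's or control-room device can outrank a technically worse one on a standard workstation. The roles, weights and device records below are invented for illustration.

```python
# Sketch of business-context prioritisation: rank vulnerability
# findings by owner role as well as technical severity.
# Roles, weights and data are illustrative assumptions.

ROLE_WEIGHT = {"executive": 3, "control-room": 3, "engineer": 2, "staff": 1}

def prioritise(findings, hr_records):
    """Sort findings so high-business-impact devices come first."""
    owner_role = {r["device"]: r["role"] for r in hr_records}

    def impact(finding):
        role = owner_role.get(finding["device"], "staff")
        return ROLE_WEIGHT[role] * finding["severity"]

    return sorted(findings, key=impact, reverse=True)

findings = [
    {"device": "laptop-22", "severity": 7},  # higher technical severity
    {"device": "laptop-09", "severity": 5},  # lower severity...
]
hr_records = [
    {"device": "laptop-22", "role": "staff"},
    {"device": "laptop-09", "role": "executive"},  # ...but an executive's device
]

print([f["device"] for f in prioritise(findings, hr_records)])
```

Here the executive's device jumps the queue despite its lower raw severity score, which is exactly the reordering that severity-only triage would miss.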
“It’s that business context piece that AI really elevates,” he said.
Monitoring regulatory change
Another area where National Grid is experimenting with AI is regulatory compliance.
Energy companies operate under extensive regulatory frameworks across multiple jurisdictions, requiring teams to monitor changes in legislation and ensure internal policies remain compliant.
Elliott said the company has developed an AI agent that tracks regulatory updates across multiple sources, including UK government policy changes and regulatory developments in US states where the company operates.
The system scans updates from frameworks such as SIP, SOX and PCI and compares them with the organisation’s internal control structures. By analysing a rolling 12-month window of regulatory updates and projecting future developments, the tool helps identify areas where policies or controls may need to change.
“That agent is looking at a 12-month trailing update of all of those regulations,” said Elliott, while also analysing the company’s control framework to determine “what we need to think about changing”.
The analysis also looks ahead, giving teams a forward view of regulatory developments over the next year.
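A minimal sketch of the trailing-window idea, under heavy assumptions: filter regulatory updates to the past 12 months and map each affected framework to the internal controls that may need review. The framework-to-control mapping, control IDs and dates are all invented; the real agent's sources and schema are not public.

```python
# Illustrative trailing-window scan: which internal controls were
# touched by regulatory updates in the last 12 months?
# Frameworks, control IDs and dates are invented examples.

from datetime import date, timedelta

# Hypothetical mapping from regulatory framework to internal controls.
CONTROL_MAP = {"SOX": ["FIN-01"], "PCI": ["SEC-07", "SEC-12"]}

def controls_to_review(updates, today):
    """Return control IDs affected by updates in the trailing 12 months."""
    cutoff = today - timedelta(days=365)
    touched = set()
    for upd in updates:
        if upd["published"] >= cutoff:
            touched.update(CONTROL_MAP.get(upd["framework"], []))
    return sorted(touched)

updates = [
    {"framework": "SOX", "published": date(2025, 3, 1)},  # inside the window
    {"framework": "PCI", "published": date(2023, 6, 1)},  # too old, ignored
]

print(controls_to_review(updates, today=date(2025, 9, 1)))
```

The forward view Elliott mentions would sit on top of this: the same mapping applied to announced-but-not-yet-in-force changes rather than past ones.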
Balancing speed and trust
Despite the potential benefits, Elliott said organisations must ensure employees understand the limits of AI systems. One challenge is the risk that staff begin trusting AI outputs without questioning them.
“There’s a risk that people become subject matter experts when they’re not subject matter experts,” he said.
To address this, National Grid has implemented AI training programmes across the organisation, covering employees from executive leadership to technical specialists. The aim is to ensure staff understand how AI systems work and where human judgement remains essential.
“It’s not a one-and-done,” said Elliott. “We need to reinforce that continually.”
