The UK Government said it is putting improvements to public services and cyber resilience at the top of its AI agenda.

DSIT announced this week that researchers focused on boosting resilience against AI risks such as deepfakes, misinformation, and cyberattacks can now access government grants.
The scheme, in partnership with the Engineering and Physical Sciences Research Council (EPSRC) and Innovate UK, part of UK Research and Innovation (UKRI), is focused on how society can be protected from the potential risks of AI. It will also support research to tackle the threat of AI systems failing unexpectedly, for example in the financial sector.
The government said ensuring public confidence in AI is central to its plans, as the UK “harnesses the technology to drive up productivity and deliver public services which are fit for the future.”
The government has also committed to introducing targeted legislation for the handful of companies developing the most powerful AI models, ensuring a proportionate approach to regulation rather than new blanket rules on AI use.
The new programme aims to drive research that identifies the most pressing risks of frontier AI adoption in critical sectors such as healthcare and energy, and to develop solutions that can be turned into practical tools for tackling those risks.
“My focus is on speeding up the adoption of AI across the country so that we can kickstart growth and improve public services. Central to that plan though is boosting public trust in the innovations which are already delivering real change,” said Secretary of State for Science, Innovation, and Technology, Peter Kyle.
“That’s where this grants programme comes in. By tapping into a wide range of expertise from industry to academia, we are supporting the research which will make sure that as we roll AI systems out across our economy, they can be safe and trustworthy at the point of delivery.”
With the formal launch of its Systemic Safety Grants Programme, the UK's AI Safety Institute is looking to back around 20 projects with funding of up to £200,000 each during the programme's first phase, worth £4 million. In total the fund is worth £8.5 million, with the additional money to become available as further phases are launched.
Applicants will be assessed on the potential issues their research could solve and the risks it addresses, and have until November 26 to submit their proposals. Successful applicants will be confirmed by the end of January 2025, with the first round of grants then set to be awarded in February.