The government says it’s solved the problem YouTube hasn’t – and used AI (artificial intelligence) to help stop jihadist video content being seen on the Web.
Home Secretary Amber Rudd, speaking to the BBC during a trip to Silicon Valley to promote the solution, said she might also look to make its use obligatory under UK law.
The government commissioned a London-based AI firm called ASI Data Science to train a model to spot such material, investing £600,000 in the project.
The vendor claims the tool can detect 94% of Daesh (so-called Islamic State) online propaganda with an accuracy of 99.995%; grey-area material is passed on to humans for checking.
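The Home Office has not published how the tool works, but the behaviour described – automatic decisions at high confidence, human review for the grey area – can be sketched as a simple confidence-threshold rule. Everything below (function names, thresholds) is an illustrative assumption, not ASI's actual system:

```python
def route_video(score, block_threshold=0.99, clear_threshold=0.05):
    """Illustrative triage rule; thresholds are invented, not ASI's.

    score: a model's estimated probability that the video is extremist content.
    Returns one of "block", "allow", or "human_review".
    """
    if score >= block_threshold:
        return "block"          # high confidence: remove automatically
    if score <= clear_threshold:
        return "allow"          # high confidence: clearly benign
    return "human_review"       # grey area: escalate to a human moderator

# Only the mid-range score is escalated for manual checking.
decisions = [route_video(s) for s in (0.999, 0.5, 0.01)]
print(decisions)  # ['block', 'human_review', 'allow']
```

In a scheme like this, the headline accuracy figure would apply only to the automatic decisions, with the human-review queue absorbing the cases the model is least sure about.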
“[This software] is a very convincing example of the fact that you can have the information you need to make sure this material doesn’t go online in the first place,” Rudd told the broadcaster.
“The technology is there. There are tools out there that can do exactly what we’re asking for. For smaller companies, this could be ideal.”
Facebook and Google have committed to unveiling their own solutions to the problem, in response to criticism from MPs over their alleged slowness in dealing with the issue.
But it’s not just the internet giants facing the issue – according to Home Office figures, between July and the end of 2017 extremist material appeared on almost 150 sites that had not previously been used for such propaganda.
“I remain convinced that the best way to take real action, to have the best outcomes, is to have an industry-led forum like the one we’ve got,” Rudd concluded.
She was speaking at The Global Internet Forum to Counter Terrorism, launched last year, an event that brings together several governments, including the US and UK, and major internet firms such as Facebook, Google and Twitter.