The AI programme analyses the audio and images of a video during the upload process, so the majority of such material could be stopped before it even reaches the internet.
The tech can detect 94 per cent of Isis propaganda with a 99.995 per cent accuracy rate, meaning only 50 of every 1 million randomly selected videos would require additional human review.
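The reported figures can be sanity-checked with a back-of-the-envelope calculation. A minimal sketch, assuming the accuracy rate refers to correctly passing non-propaganda videos (the article does not define it precisely):

```python
# Illustrative arithmetic only -- the variable names and the
# interpretation of "accuracy" here are assumptions, not details
# from the Home Office announcement.
accuracy = 0.99995                  # reported 99.995% accuracy rate
false_positive_rate = 1 - accuracy  # share of benign videos wrongly flagged
videos = 1_000_000                  # randomly selected uploads

flagged_for_review = videos * false_positive_rate
print(round(flagged_for_review))    # → 50, matching the article's figure
```

On this reading, a 0.005 per cent false-positive rate across one million uploads yields the 50 videos per million that would need human review.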
Major tech companies Facebook and Twitter already use similar technology, but the new model, developed by the Home Office in partnership with ASI Data Science, will be shared with smaller platforms such as Vimeo, Telegra.ph and pCloud to help combat increasing abuse by terrorists.
Isis used 400 different websites to upload its content last year, research has found.
Home secretary Amber Rudd said: “Over the last year we have been engaging with internet companies to make sure that their platforms are not being abused by terrorists and their supporters.
“I have been impressed with their work so far following the launch of the Global Internet Forum to Counter-Terrorism, although there is still more to do, and I hope this new technology the Home Office has helped develop can support others to go further and faster.”
She added: “The purpose of these videos is to incite violence in our communities, recruit people to their cause, and attempt to spread fear in our society.
“We know that automatic technology like this can heavily disrupt the terrorists’ actions, as well as prevent people from ever being exposed to these horrific images.”
It comes as Ms Rudd travels to Silicon Valley to discuss tackling terrorist content online.
As well as meeting tech CEOs in San Francisco, the home secretary will meet Kirstjen Nielsen, the US secretary of Homeland Security, to discuss how the US and UK can work together to tackle terrorist content online.