Alphabet Inc.’s Google says it is creating new policies and practices to suppress terrorism-related videos, a response to U.K. lawmakers who have said the internet is a petri dish for radical ideology.
Google will increase its use of technology to identify extremist and terrorism-related videos across its sites, which include YouTube, and will boost the number of people who screen for terrorism-related content, Google’s General Counsel Kent Walker wrote in an editorial in the Financial Times Sunday. The company will also be more aggressive in putting warnings on and limiting the reach of content that, while not officially forbidden, is still inflammatory.
“While we and others have worked for years to identify and remove content that violates our policies, the uncomfortable truth is that we, as an industry, must acknowledge that more needs to be done,” Walker wrote.
Google and other social media and search sites are facing pressure to quickly remove posts by terrorist groups such as Islamic State, also known as ISIS. Since seven people were killed and 48 injured in an attack in London this month, U.K. officials have focused on sites seen as enabling extremists to recruit followers, coordinate attacks and spread propaganda. Lawmakers have proposed new laws to regulate how social media platforms counter extremism online.
“We cannot allow this ideology the safe space it needs to breed,” U.K. Prime Minister Theresa May said earlier this month. The proposed legislation would force social networks to make user data available to domestic security forces.
In response, a spokesperson for the U.K.’s Home Office called on companies to work toward implementing technology to identify, remove and even prevent extremist content from being widely distributed on their sites.
“The measures being implemented by Google, particularly those relating to hateful speakers, are encouraging first steps,” the spokesperson said in an e-mailed statement Sunday. “However, we feel the technology companies can and must go further and faster, especially in identifying and removing hateful content itself.”
Google is also facing scrutiny over data security and antitrust issues in Europe. Earlier this year, advertisers pulled spending from Google’s YouTube over ads running alongside offensive content, including terrorism videos. In response, Google said it was increasing computing and staff resources to better flag such videos.
Walker said Google is also increasing its counter-propaganda efforts, employing technology that “harnesses the power of targeted online advertising to reach potential Isis recruits and redirects them towards anti-terrorist videos that can change their minds about joining.”
“Together, we can build lasting solutions that address the threats to our security and our freedoms,” he wrote in the op-ed. “It is a sweeping and complex challenge. We are committed to playing our part.”