Social media firms say they are keeping terrorists off their pages more and more
Senate Commerce Committee chairman Sen. John Thune (R-S.D.) said the hearing was “a really good first step.” (Official photo)

WASHINGTON — Top executives from Twitter, YouTube, and Facebook told a Senate committee on Wednesday that more resources and more personnel are going toward ferreting out and taking down terrorist postings even though there are no “magic algorithms” to locate hateful content.

“As is the case with a determined adversary, as we make it harder for terrorists to use Twitter, their behavior evolves,” said Carlos Monje, Twitter’s head of public policy and philanthropy.

“To stay in front of this, we continue to invest in technology to prevent new accounts being opened to replace those we suspend, while also developing further the tools that prevent the distribution of propaganda in the aftermath of attacks.”

Members of the Senate Commerce, Science and Transportation Committee acknowledged the efforts but questioned whether the companies were doing enough.

“This is a really important issue,” Sen. Jon Tester (D-Mont.) said. “Our democracy is at risk.”

The tech giants have come under fire in the U.S. and Europe for allowing their websites and programs to be used by terrorist groups and other extremists for recruiting and propaganda.

“Based on results, you’re not where you need to be for us to be reassured that you’re securing our democracy,” Sen. Brian Schatz (D-Hawaii) said. “How can we know that you’re going to get this right before the midterms?”

After the hearing, committee chairman Sen. John Thune (R-S.D.), told reporters the exchanges were “a really good first step.” Thune said overall that those testifying “were pretty responsive and I think we got a better sense for the things that they’re already doing.”

Some experts were less complimentary.

“Today Facebook, Twitter and YouTube representatives stonewalled members of the U.S. Senate Commerce, Science and Transportation Committee, by only boasting about their ineffective policies and not providing any solutions to combat the continuing threat of terrorist activity on their platforms,” Eric Feinberg, founder of GIPEC, told TMN. Since 2012, GIPEC has been monitoring, tracking, exposing, reporting and taking down ISIS and other terrorist activity, including recruitment efforts and plots, across all social media platforms.

“It makes no sense that social media platforms are not regulated by the FCC just like traditional media platforms,” he said.

Terrorist groups have stepped up the use of bots and other techniques to fight the measures the social media companies use to thwart them.

In addition, the groups are expanding to smaller platforms and messaging apps with encryption and less ability to police users, like Telegram, Reddit and WhatsApp, despite those platforms having a more limited reach.

Much of the hearing focused on advances in machine learning techniques that have greatly helped detect terrorist content, on partnerships between tech companies to coordinate anti-terrorist information and data sharing, and on algorithms that quickly detect and delete attempts to re-upload terrorist content.

“Since June, YouTube has removed over 160,000 violent extremist videos and has terminated approximately 30,000 channels for violation of our policies against terrorist content,” said Juniper Downs, YouTube’s public policy director. “We achieved these results through tougher policies, enhanced enforcement by machines and people, and collaboration with outside experts.”

Monje said Twitter has used algorithms to suspend 1.1 million terrorist accounts since 2015, including nearly a half million in 2017 alone. Of those suspended accounts, Monje said 75 percent are suspended before they tweet even once.

There is also the idea of counter-direct, which sends anti-terror messages to people likely to seek out extremist content, according to Monika Bickert, Facebook’s head of global policy management.

While the chief focus was on efforts to block terrorist groups, the hearing also raised questions about how social media companies deal with other hateful, conspiratorial or abusive content on their platforms, including racist, neo-Nazi and alt-right messaging.

Sen. Ed Markey (D-Mass.) said the tech companies also need to crack down on weapons sales made available through their social platforms, including those in “groups” on the different media.
