In a speech outlining her strategy for combating the Islamic State after the Paris terror attacks, Democratic presidential candidate Hillary Rodham Clinton urged social media companies to help shut down the radicalization and recruitment happening online.
“There is no doubt we have to do a better job contesting online space, including websites and chat rooms where jihadists communicate with followers,” Clinton said Thursday in remarks before the Council on Foreign Relations. “We must deny them virtual territory, just as we deny them actual territory.”
The former U.S. Secretary of State said private social media companies should assist in this effort by swiftly shutting down terrorist accounts so they cannot be used to “plan, provoke or celebrate violence.”
The hacker activist group Anonymous has taken this project on for itself, claiming responsibility for disabling thousands of pro-ISIS Twitter accounts. The group posted lists of accounts to the website Pastebin and encouraged Twitter users to flag the accounts through the social network’s standard reporting channels.
Twitter's policies ban using the service to make threats or promote violence, and if the company receives a report that an account is violating those policies, it will remove the account, according to a Twitter spokesperson.
YouTube’s use policy allows it to remove videos that feature real depictions of graphic violence when such footage is posted without context or educational information.
Update:
“YouTube has clear policies prohibiting terrorist recruitment and content intending to incite violence and we remove videos violating these policies when flagged by our users,” a YouTube spokesperson said in a statement. “We also terminate accounts run by terrorist organizations or those that repeatedly violate our policies. We allow videos posted with a clear news or documentary purpose to remain on YouTube, applying warnings and age-restrictions as appropriate.”
Facebook issued a statement saying that “there is no place for terrorists on Facebook.” The social network said its global team works around the clock and immediately reviews posts that users flag as raising safety concerns.
“We work aggressively to ensure that we do not have terrorists or terror groups using the site, and we also remove any content that praises or supports terrorism,” Facebook said in a statement. “We have a community of more than 1.5 billion people who are very good at letting us know when something is not right. We make it easy for them to flag content for us and they do. We have a global team responding to those reports around the clock, and we prioritize any safety-related reports for immediate review.”
Once a Facebook user is found to be violating the company’s guidelines, Facebook will start proactively searching for inappropriate content related to that account. Here’s how Monika Bickert, head of content policy at Facebook, described it to us last September:
“In some [instances], if we find a violation, we’ll then use our special teams to do a deeper investigation into the account that was responsible for that violation. We’ll also use automated tools to try and find associated accounts or [inappropriate] content.”