Facebook, Apple remove most of U.S. conspiracy theorist’s content

FILE PHOTO: Alex Jones from Infowars.com speaks during a rally in support of Republican presidential candidate Donald Trump near the Republican National Convention in Cleveland, Ohio, U.S., July 18, 2016. REUTERS/Lucas Jackson/File Photo

By Rich McKay

ATLANTA (Reuters) – Facebook Inc announced on Monday that it had removed four pages belonging to U.S. conspiracy theorist Alex Jones for “repeatedly posting content over the past several days” that broke its community standards.

The company said it removed the pages “for glorifying violence, which violates our graphic violence policy and using dehumanizing language to describe people who are transgender, Muslims and immigrants, which violates our hate speech policies.”

“Facebook bans Infowars. Permanently. Infowars was widely credited with playing a key role in getting Trump elected. This is a co-ordinated move ahead of the mid-terms to help Democrats. This is political censorship. This is culture war,” Infowars editor-at-large Paul Joseph Watson said on Twitter (https://twitter.com/PrisonPlanet/status/1026433061469257733).

Neither Jones nor a representative for Infowars was available for comment.

Since founding Infowars in 1999, Jones has built a vast audience. Among the theories he has promoted is that the Sept. 11, 2001, attacks on New York and Washington were staged by the government.

In late July, Facebook had suspended the radio and internet host’s personal profile from its site for 30 days for what the company said was bullying and hate speech.

Most of Jones’s podcasts from his right-wing media platform Infowars have been removed from Apple Inc’s iTunes and podcast apps, the news website BuzzFeed quoted a company spokesman as saying on Sunday.

Apple told BuzzFeed that it had removed the entire library for five of Jones’s six Infowars podcasts, including the shows “War Room” and the daily “The Alex Jones Show.”

Only one program provided by Infowars, “RealNews with David Knight,” remained on Apple’s platforms on Sunday, according to news media accounts.

The moves by Apple and Facebook are the most sweeping yet in a recent crackdown on Jones’s programs by online sites, which have suspended or removed some of his conspiracy-driven content. An Apple spokeswoman said in a statement that the company “does not tolerate hate speech” and publishes guidelines that developers and publishers must follow.

“Podcasts that violate these guidelines are removed from our directory, making them no longer searchable or available for download or streaming,” Apple said in a statement. “We believe in representing a wide range of views, so long as people are respectful to those with differing opinions.”

Spotify, a music and podcast streaming company, said on Monday that it had now removed all of Jones’s Infowars programs from its platform, after pulling only some specific programs last week.

“We take reports of hate content seriously and review any podcast episode or song that is flagged by our community,” a representative said on Monday.

“Due to repeated violations of Spotify’s prohibited content policies, The Alex Jones Show has lost access to the Spotify platform,” the representative said.

Jones has also promoted a theory that the 2012 Sandy Hook school massacre was faked by left-wing forces to promote gun control. The shooting at a Connecticut elementary school killed 20 children and six adults.

He is being sued in Texas by two Sandy Hook parents who are seeking at least $1 million and who say they have been subjected to harassment driven by his programs.

(Reporting by Rich McKay; Additional reporting by Ishita Chigilli Palli and Arjun Panchadar in Bengaluru and Stephen Nellis in San Francisco; Editing by Emelia Sithole-Matarise, Mark Potter, Susan Thomas, Bernard Orr and Jonathan Oatis)

Facebook says posts with graphic violence rose in early 2018

FILE PHOTO: Silhouettes of mobile users are seen next to a screen projection of Facebook logo in this picture illustration taken March 28, 2018. REUTERS/Dado Ruvic/Illustration/File Photo

By David Ingram

MENLO PARK, Calif. (Reuters) – The number of posts on Facebook showing graphic violence rose in the first three months of the year compared with the previous quarter, possibly driven by the war in Syria, the social network said on Tuesday in its first public release of such data.

Facebook said in a written report that of every 10,000 pieces of content viewed in the first quarter, an estimated 22 to 27 pieces contained graphic violence, up from an estimated 16 to 19 late last year.

The company removed or put a warning screen for graphic violence in front of 3.4 million pieces of content in the first quarter, nearly triple the 1.2 million a quarter earlier, according to the report.

Facebook does not fully know why people are posting more graphic violence but believes continued fighting in Syria may have been one reason, said Alex Schultz, Facebook’s vice president of data analytics.

“Whenever a war starts, there’s a big spike in graphic violence,” Schultz told reporters at Facebook’s headquarters.

Syria’s civil war erupted in 2011. It continued this year with fighting between rebels and Syrian President Bashar al-Assad’s army. This month, Israel attacked Iran’s military infrastructure in Syria.

Facebook, the world’s largest social media firm, has never previously released detailed data about the kinds of posts it takes down for violating its rules.

Facebook only recently developed the metrics as a way to measure its progress, and would probably change them over time, said Guy Rosen, its vice president of product management.

“These kinds of metrics can help our teams understand what’s actually happening to 2-plus billion people,” he said.

The company has a policy of removing content that glorifies the suffering of others. In general, it leaves graphic violence up behind a warning screen if it was posted for another purpose.

Facebook also prohibits hate speech and said it took action against 2.5 million pieces of content in the first quarter, up 56 percent from a quarter earlier. It said the rise was due to improvements in detection.

The company said in the first quarter it took action on 837 million pieces of content for spam, 21 million pieces of content for adult nudity or sexual activity and 1.9 million for promoting terrorism. It said it disabled 583 million fake accounts.

(Reporting by David Ingram; Editing by Clarence Fernandez)

CEO Zuckerberg says Facebook could have done more to prevent misuse

FILE PHOTO: Facebook CEO Mark Zuckerberg speaks on stage during the Facebook F8 conference in San Francisco, California, U.S., April 12, 2016. REUTERS/Stephen Lam/File Photo

By Dustin Volz and David Shepardson

WASHINGTON (Reuters) – Facebook Inc Chief Executive Mark Zuckerberg told Congress on Monday that the social media network should have done more to prevent itself and its members’ data from being misused, and offered a broad apology to lawmakers.

His conciliatory tone precedes two days of congressional hearings at which Zuckerberg is set to answer questions about Facebook user data being improperly appropriated by a political consultancy and about the role the network played in the 2016 U.S. election.

“We didn’t take a broad enough view of our responsibility, and that was a big mistake,” he said in remarks released by the U.S. House Energy and Commerce Committee on Monday. “It was my mistake, and I’m sorry. I started Facebook, I run it, and I’m responsible for what happens here.”

Zuckerberg, surrounded by tight security and wearing a dark suit and a purple tie rather than his trademark hoodie, was meeting with lawmakers on Capitol Hill on Monday ahead of his scheduled appearance before two Congressional committees on Tuesday and Wednesday.

Zuckerberg did not respond to questions as he entered and left a meeting with Senator Bill Nelson, the top Democrat on the Senate Commerce Committee. He is expected to meet Senator John Thune, the Commerce Committee’s Republican chairman, later in the day, among others.

Top of the agenda in the forthcoming hearings will be Facebook’s admission that the personal information of up to 87 million users, mostly in the United States, may have been improperly shared with political consultancy Cambridge Analytica.

But lawmakers are also expected to press him on a range of issues, including the 2016 election.

“It’s clear now that we didn’t do enough to prevent these tools from being used for harm…” his testimony continued. “That goes for fake news, foreign interference in elections, and hate speech, as well as developers and data privacy.”

Facebook, which has 2.1 billion monthly active users worldwide, said on Sunday that, starting Monday, it would begin telling users whose data may have been shared with Cambridge Analytica. The company’s data practices are under investigation by the U.S. Federal Trade Commission.

London-based Cambridge Analytica, which counts U.S. President Donald Trump’s 2016 campaign among its past clients, has disputed Facebook’s estimate of the number of affected users.

Zuckerberg also said that Facebook’s major investments in security “will significantly impact our profitability going forward.” Facebook shares were up 2 percent in midday trading.

ONLINE INFORMATION WARFARE

Facebook has about 15,000 people working on security and content review, a number that will rise to more than 20,000 by the end of 2018, Zuckerberg’s testimony said. “Protecting our community is more important than maximizing our profits,” he said.

Like other Silicon Valley companies, Facebook has been resistant to new laws governing its business. On Friday, however, it backed proposed legislation requiring social media sites to disclose the identities of buyers of online political campaign ads, and it introduced a new verification process for people buying “issue” ads, which do not endorse any candidate but have been used to exploit divisive subjects such as gun laws or police shootings.

The steps are designed to deter online information warfare and election meddling that U.S. authorities have accused Russia of pursuing, Zuckerberg said on Friday. Moscow has denied the allegations.

Zuckerberg’s testimony said the company was “too slow to spot and respond to Russian interference, and we’re working hard to get better.”

He vowed to make improvements, adding that it would take time, but said he was “committed to getting it right.”

A Facebook official confirmed that the company had hired a team from the law firm WilmerHale and outside consultants to help prepare Zuckerberg for his testimony and for the questions lawmakers may put to him.

(Reporting by David Shepardson and Dustin Volz; Editing by Bill Rigby)

Social media companies accelerate removals of online hate speech

A man reads tweets on his phone in front of a displayed Twitter logo in Bordeaux, southwestern France, March 10, 2016. REUTERS/Regis

By Julia Fioretti

BRUSSELS (Reuters) – Social media companies Facebook, Twitter and Google’s YouTube have accelerated removals of online hate speech in the face of a potential European Union crackdown.

The EU has gone as far as to threaten social media companies with new legislation unless they increase efforts to fight the proliferation of extremist content and hate speech on their platforms.

Microsoft, Twitter, Facebook and YouTube signed a code of conduct with the EU in May 2016, committing to review most complaints within 24 hours. Instagram and Google+ will also sign up to the code, the European Commission said.

The companies reviewed complaints within a day in 81 percent of cases during a six-week monitoring period toward the end of last year, EU figures released on Friday show, up from 51 percent in May 2017, when the Commission last examined compliance with the code of conduct.

On average, the companies removed 70 percent of the content flagged to them, up from 59.2 percent in May last year.

EU Justice Commissioner Vera Jourova has said that she does not want to see a 100 percent removal rate because that could impinge on free speech.

She has also said she is not in favor of legislating as Germany has done. A law providing for fines of up to 50 million euros ($61.4 million) for social media companies that do not remove hate speech quickly enough went into force in Germany this year.

Jourova said the results unveiled on Friday made it less likely that she would push for legislation on the removal of illegal hate speech.

‘NO FREE PASS’

“The fact that our collaborative approach on illegal hate speech brings good results does not mean I want to give a free pass to the tech giants,” she told a news conference.

Facebook reviewed complaints in less than 24 hours in 89.3 percent of cases, YouTube in 62.7 percent of cases and Twitter in 80.2 percent of cases.

“These latest results and the success of the code of conduct are further evidence that the Commission’s current self-regulatory approach is effective and the correct path forward,” said Stephen Turner, Twitter’s head of public policy.

Of the hate speech flagged to the companies, almost half was found on Facebook, the figures show, while 24 percent was on YouTube and 26 percent on Twitter.

The most common ground for hatred identified by the Commission was ethnic origin, followed by anti-Muslim hatred and xenophobia, including expressions of hatred against migrants and refugees.

Pressure from several European governments has prompted social media companies to step up efforts to tackle extremist online content, including through the use of artificial intelligence.

YouTube said it was training machine learning models to flag hateful content at scale.

“Over the last two years we’ve consistently improved our review and action times for this type of content on YouTube, showing that our policies and processes are effective, and getting better over time,” said Nicklas Lundblad, Google’s vice president of public policy in EMEA.

“We’ve learned valuable lessons from the process, but there is still more we can do.”

The Commission is likely to issue a recommendation at the end of February on how companies should take down extremist content related to militant groups, an EU official said.

(Reporting by Julia Fioretti; Additional reporting by Foo Yun Chee; Editing by Grant McCool and David Goodman)