Facebook removed 583 million fake accounts

Doris Richards
May 18, 2018

Apart from removing the fake accounts, it also moderated 1.9 million pieces of content related to terrorism, 2.5 million posts meant to spread hatred, 21 million pieces of pornographic content, and 3.4 million graphic posts depicting violence. The social media giant has long been under pressure from governments, activists, and academics to reveal how it handles such posts.

With over 2 billion active monthly users, filtering out hate speech, violence, and sex is a complex issue.

For its part, Facebook has promised to take down content similar to that spotted by the nonprofit, wherever it appears on the platform.

It may be only a fraction of the material Facebook has taken down these past few months, but terrorism-related content on the platform still numbered in the millions, according to the report.

Facebook has a clear problem with hate speech and terrorism-related content, and this week's revelations underscored just how prevalent these kinds of content are on the social media platform. The company also took down or applied warning labels to 3.5 million posts of graphic violence, and removed 2.5 million hate speech posts, 837 million spam posts, and 583 million fake accounts in the first quarter of this year.

The majority of fake accounts were blocked within minutes of registration, Facebook said, touting its artificial intelligence (AI) technologies that automatically flag and remove them.

The company plans to share such reports every six months. In terms of numbers, the amount of objectionable content floating around on Facebook, at least the content it has caught, is simply staggering. For instance, the company estimated that for every 10,000 times people looked at content on its social network, 22 to 27 of those views may have included posts containing impermissible graphic violence.

Despite the size of the report, it wasn't a complete one.

Facebook has been facing increased pressure to remove hate speech and inappropriate content from its platform, and moderators are among the worst affected.

It came in the form of the first release of the company's Community Standards Enforcement Report, and it was stuffed with the type of detail that Mark Zuckerberg told so many Congresspeople he'd need to get back to them on when he was first lightly sautéed and then flame-grilled in two days of testimony.

Jillian York, the Electronic Frontier Foundation's director for international freedom of expression, said she was happy with the report, calling it a good and long-awaited move. The report did not, however, give specifics about what types of content or data governments were asking the company for. The next step, she said, needs to be more transparency around how Facebook classifies content and what it will remove in the future.
