Facebook Shuts Down 583 Million Accounts

Alvine Chaparadza

Facebook has released figures showing how it is acting against fake accounts and people producing inappropriate content. The social media giant removed 583 million fake accounts, roughly a quarter of its total accounts, in the first three months of 2018 as part of enforcing its community standards.

Facebook’s new report, which it plans to update twice a year, comes a month after the company published its internal rules for how reviewers decide what content should be removed.

The report follows the intense scrutiny the company came under earlier this year over the use of private data and the impact of harmful content on its 2.2 billion monthly users, with governments around the world questioning its policies.

Facebook said those closures came on top of blocking millions of attempts to create fake accounts every day. Even so, the company said fake profiles still make up between 3 and 4 percent of all active accounts. It also claimed to have detected almost 100 percent of spam, removing 837 million spam posts.

Along with fake accounts, Facebook said in its transparency report that it had removed 21 million pieces of content featuring sex or nudity, 2.5 million pieces of hate speech and almost 2 million items related to terrorism.

How did Facebook do it?

Facebook said artificial intelligence played an essential role in helping it flag offending content. CEO Mark Zuckerberg credited AI with taking down much of the material, but added that the technology still struggles to pick up the many variations of hate speech. He wrote:

AI still needs to get better before we can use it to effectively remove more linguistically nuanced issues like hate speech in different languages, but we’re working on it.

But humans still help

However, Facebook’s AI was far less effective at catching hate speech, with just 38 percent of the 2.5 million pieces of hate speech it removed flagged before users reported them. As Guy Rosen, Facebook’s VP of Product Management, put it:

Artificial intelligence isn’t good enough yet to determine whether someone is pushing hate or describing something that happened to them so they can raise awareness of the issue.

To increase its effectiveness, the company says it has 10,000 human moderators helping it to remove objectionable content and plans to double that number by the end of the year.
