Facebook continued to give the public a peek behind the curtain, releasing a major report announcing that the Silicon Valley company removed more than a billion fake accounts. Facebook also said it purged millions of posts that violated its rules over the past year.
The first-ever “Community Standards Enforcement Report,” a robust 81 pages, details the company’s efforts to weed out unsavory content, including violence and terrorist propaganda. The report covers the fourth quarter of 2017 and the first quarter of 2018.
Facebook disabled nearly 1.3 billion “fake” accounts over the past two quarters, many of them bots “with the intent of spreading spam or conducting illicit activities such as scams,” the company said on Monday.
Facebook disabled 583 million accounts in Q1 2018, down from 694 million in Q4 of last year, a decrease the company attributes to the “variability of our detection technology’s ability to find and flag them.”
Most of the accounts “were disabled within minutes of registration,” Facebook said in a blog post, but the company doesn’t catch all fake accounts. It estimates that 3 to 4 percent of its monthly active users are “fake,” up from 2 to 3 percent in Q3 2017, according to its regulatory filings.
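For a sense of scale, here is a back-of-the-envelope sketch of what that prevalence range implies in raw numbers. The roughly 2.2 billion monthly-active-user figure is an assumption drawn from Facebook’s Q1 2018 earnings, not from the enforcement report itself:

```python
# Back-of-the-envelope estimate of fake accounts implied by the prevalence range.
# ASSUMPTION: ~2.2 billion monthly active users (Facebook's reported Q1 2018 MAU).
# The 3-4 percent prevalence range is the figure cited above.
monthly_active_users = 2_200_000_000

for prevalence in (0.03, 0.04):
    fake = monthly_active_users * prevalence
    print(f"{prevalence:.0%} prevalence -> ~{fake / 1e6:.0f} million fake accounts")
```

Under those assumptions, the estimate works out to somewhere between roughly 66 million and 88 million fake accounts active in any given month.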
Those numbers are big, a reminder of what Facebook is up against just 18 months after it emerged that a Russian troll farm used Facebook to try to influence the 2016 U.S. presidential election.
Facebook says it finds most of the fake accounts on its own using software algorithms; only a small share, about 1.5 percent of the disabled accounts, were discovered after Facebook users flagged them.
The numbers Facebook is sharing this time focus on major content categories. The company removed 21 million “pieces of adult nudity or porn,” for example, the vast majority of which was discovered using software programs. It also removed 2.5 million pieces of “hate speech,” 56 percent more content than the 1.6 million pieces it removed in Q4.
Unlike nudity or terrorism-related content, though, hate speech is still primarily discovered by humans, not software programs. Only 38 percent of the hate speech Facebook removed in Q1 was first identified by algorithms. That’s an improvement over 23.6 percent in Q4, but still far below the detection rates for some of the other content categories Facebook polices.
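The underlying measure is simple: of everything Facebook acted on, what share did its own systems flag before any user reported it? A minimal sketch of that arithmetic, using only the two hate-speech rates cited above (the counts passed to the function are placeholders for whatever a given quarter produces):

```python
# Proactive detection share: fraction of actioned content flagged by
# Facebook's systems before any user report. The function sketches the
# definition; the quarterly rates below are the figures cited in the article.

def proactive_rate(flagged_by_systems: int, total_actioned: int) -> float:
    """Share of actioned content found proactively, before user reports."""
    return flagged_by_systems / total_actioned

q4_2017 = 0.236  # hate speech, Q4 2017
q1_2018 = 0.38   # hate speech, Q1 2018

print(f"Gain: {(q1_2018 - q4_2017) * 100:.1f} percentage points")
print(f"Relative improvement: {q1_2018 / q4_2017 - 1:.0%}")
```

By that yardstick, the quarter-over-quarter gain works out to about 14 percentage points, a roughly 61 percent relative improvement.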
That makes sense, as “hate speech” is much more subjective than nudity. What one person might describe as hate speech, another might describe as free speech. The fact that Facebook still has trouble detecting it without human help shows that the problem won’t go away anytime soon.
“Hate speech is really hard,” said Alex Schultz, Facebook’s VP of analytics, in a briefing with reporters. “There’s nuance, there’s context. The technology just isn’t there to really understand all of that, let alone in a long, long list of languages.”
Here’s a snapshot of some of the areas Facebook cracked down on.
Bogus Accounts: Facebook disabled 583 million fake accounts during Q1 2018, and 694 million the quarter before; a quick arithmetic check after this snapshot ties those figures back to the headline total. The social network flagged 98.5 percent of these accounts before users reported them in Q1.
Sexual Stuff: Facebook’s relationship with nudity is tricky. The company restricts sexual content and nudity because some users “may be sensitive to this type of content,” according to its guidelines. There are some allowances, however, including protests and works of art. Still, the company removed roughly 42 million pieces of racy content across the two quarters, which accounted for less than a tenth of a percent of content viewed on Facebook.
Graphic Violence: Facebook took action on 1.2 million pieces of graphic violence during Q4 2017, and 3.4 million during the first quarter of 2018. The company said the spike was due largely to improvements in its tools for finding inappropriate content.
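As a sanity check, the two quarterly fake-account totals in this snapshot sum to the “nearly 1.3 billion” headline figure at the top of the piece:

```python
# Consistency check: the quarterly totals cited in the article should sum
# to the "nearly 1.3 billion" headline number.
q4_2017_disabled = 694_000_000
q1_2018_disabled = 583_000_000

total = q4_2017_disabled + q1_2018_disabled
print(f"Disabled across both quarters: {total / 1e9:.2f} billion")  # -> 1.28 billion
```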
Mr Americana, Overpasses News Desk
May 16th, 2018