To be fair, it’s harder to solve than people think!
The end of the 2016 presidential election made many reevaluate their feelings on a number of subjects, none more important than how people process election news. Over the last decade, Facebook has become a major platform for how people get their news.
While services like Twitter cater to those who want only the hottest of takes when it comes to political commentary, Facebook tries to surface news stories tailored to your interests. This causes a whole host of other problems (more on that in a bit), but the biggest problem the 2016 presidential election exposed is just how fast fake news can go viral!
For a perfect example of this, look no further than the bizarre conspiracy theory that Hillary Clinton was part of a child sex-trafficking ring run out of a local pizzeria in DC!
Yup, people actually thought this was a thing!
So how did a conspiracy theory that started on message boards like 4chan get just under 10% of voters (!!!) believing Clinton was connected to a child sex ring? Well, the answer is a complicated one. It's a combination of many factors, including spammers at click farms trying to make fake news stories go viral, pro-Trump supporters pushing fake news stories, people not doing enough due diligence to fact-check the stuff they read, and a million other reasons. It wasn't until after the election that many started to lay the blame specifically at Facebook's feet!
In some ways, Facebook has been the biggest beneficiary of the spread of fake news. Considering Facebook's whole existence rests on people using its platform, why would it care how people use the service? The company's initial indifference to the problem could be seen in Mark Zuckerberg's original comments dismissing the idea that fake news had contributed to Donald Trump's presidential win, but after some criticism, the company has introduced an initiative to curb fake news on its platform.
In a blog post some weeks back, Facebook announced that it hopes to combat the tide of fake news by working with prominent third-party fact-checking organizations like Snopes, FactCheck.org, and PolitiFact. The idea is to flag stories that these fact-checking organizations deem suspect. While this definitely helps with spotting fake news stories, the truth is it does very little to attack the core problem: people actually believing and sharing fake news online.
The 2016 presidential election showed that Facebook's fake news problem goes deeper than just pointing it out online. Much of it stems from confirmation bias, with Facebook acting as an echo chamber for many of its users. The problem is an old one: we want our opinions to be validated. Whether it's fake Donald Trump quotes or child sex ring conspiracy theories involving Hillary Clinton, we want to believe the worst about our political opponents. Facebook isn't the reason fake news exists; it acts as gasoline on a fire that has always been there.
Facebook adding fact-checking is a solid admission that there is a problem, but in 2017, the company is still a long way from actually fixing a problem that goes much deeper.