
Can we use technology to quell the rising tide of fake news and ‘alternative facts’?

  • Unfortunately, fake news is big business
  • The consumer just can’t tell the difference anymore
  • A lot can be done and is being done to combat the ‘alternative facts’ epidemic
  • What are some of the possible measures to combat the spreading of fake news?

In the wake of a controversial US election season, allegedly influenced by ‘fake news’, much attention has turned to the fake news and ‘alternative facts’ epidemic. Fake news, ‘a combination of wild assertions, deliberate propaganda and speculation distributed via social media and ranked highly by search engines’, is a serious problem. It influences the thoughts and actions of real people, and many struggle to determine what is real news and what is fake. In fact, according to Buzzfeed, during the final weeks of the US election, fake news even outperformed real news on Facebook.

While fake news has always been around, could technologies like the Facebook algorithm have exacerbated the problem? More importantly, how can we control the proliferation of false news and deliberate propaganda? Google announced that they would ban fake news sites from using their online advertising service and Facebook has outlined various fake news curbing strategies, such as flagging suspicious content. But what else can be done? Should we implement smarter technology or rely on human intervention such as fact checking?

Unfortunately, fake news is big business

Because consumers of social media tend to have a ferocious appetite for (fake) news, producing this false news – in the form of article sharing and the creation of fake websites – has become big business. The problem is that it is remarkably easy to set up a news website: anyone can do it, and it only costs a few dollars. One fake news hotspot is the city of Veles in Macedonia, where the average monthly salary is only 350 euro. Teenagers there saw a lucrative opportunity in creating and distributing fake news, earning them in excess of 1,750 euro per month. Another fake news hub is the former Soviet republic of Georgia, where college graduates cut and paste material from many different websites, including Canadian-produced satire, and present it on social media as real news. This way, they lure people to their fake websites, where they earn money from Google ads. Even more disturbing is the fact that many readers don’t read the actual stories at all; they just react to a cleverly written headline. Some fake news website owners have said that their articles are intended as a type of infotainment that should not be taken too seriously. However, millions of people mistake these articles for genuine news.


The consumer just can’t tell the difference anymore

In this digital age, with fake news websites and hacked election campaigns dominating the headlines, how do we distinguish between advertising and opinion? How can we tell what’s real and what’s fake news? And we’re not talking about different viewpoints, but about people deliberately creating fake stories. Sadly, because it is so easy to create what looks like ‘credible content’, anyone can make up a story and distribute it, and it will be eagerly gobbled up by millions of content consumers, the younger generation in particular. Developing software and algorithms to combat the problem is one approach, but we should also teach young people how to distinguish between lies and truth in the content they consume on the Internet. To illustrate the severity of the problem, here are some statistics:

A recent survey by Buzzfeed indicated that:

  • 75 percent of American readers are fooled by headlines in the news.
  • Respondents who use Facebook as a major news source believed fake news 83 percent of the time.
  • Respondents who use Facebook as a minor news source believed fake news 76 percent of the time.
  • Respondents who didn’t use Facebook as a news source believed fake news 64 percent of the time.

These are disturbing statistics. Another recent study, published by the Stanford History Education Group, is not much more reassuring. Their survey found that:

  • Over 80 percent of US high school students believe anonymous Imgur posts are reliable sources.
  • As many as 80 percent of US middle school students are unable to differentiate between sponsored content and real news.
  • A third of the students surveyed were of the opinion that a fake Fox News account with better quality images was more trustworthy than the real account using lower quality graphics.


A lot can be done and is being done to combat the ‘alternative facts’ epidemic

Growing numbers of programmers, academics, technologists and media experts are trying to solve the increasingly problematic issue of fake news. Four student programmers recently created an open source Chrome browser extension that uses AI to classify content such as links, text and images as verified or unverified, based on the reputation of the website, comparisons against known phishing/malware sites, and automated searches on Bing and Google. The browser extension is called ‘FIB: Stop living a lie’.

The Trust Project at Santa Clara University’s Markkula Centre for Applied Ethics is currently developing an online indicator that can determine whether or not a news website is trustworthy. France’s newspaper ‘Le Monde’ is building an open source database of verified and unverified sources. It is also working on an initiative, funded by Google, to spot fake news by querying certain databases. Upworthy founder Eli Pariser started a huge, open Google document to which hundreds of people contribute strategies for combating the spread of fake news. Facebook is working on solutions consisting of tools and partnerships. Measures include options for users to dispute or flag fake stories, and issuing ‘warnings’ before users share a story flagged as fake. Furthermore, Facebook wants to penalise fraudulent websites that pose as major publishers or credible news sources. This could eventually lead to only a small group of elite companies being whitelisted as the ultimate representatives of ‘the truth’.
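As a rough illustration of how querying a database of verified and unverified sources might work, here is a minimal sketch. The database entries, function name and verdict labels are all hypothetical; a real system would query a maintained, far larger dataset rather than a hard-coded dictionary.

```python
from urllib.parse import urlparse

# Hypothetical source database: domain -> verdict. A real initiative would
# maintain thousands of entries and update them continuously.
SOURCE_DB = {
    "example-reliable.com": "verified",
    "example-hoaxes.net": "unverified",
}

def check_source(url):
    """Return the database verdict for a URL's domain, or 'unknown'."""
    # Normalise the domain: lowercase and strip a leading 'www.'
    domain = urlparse(url).netloc.lower().removeprefix("www.")
    return SOURCE_DB.get(domain, "unknown")
```

A browser extension or newsfeed could call such a check on every shared link and display the verdict alongside the headline.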

What are some of the possible measures to combat the spreading of fake news?

Many different solutions are proposed by tech companies, researchers, college students, programmers and others, ranging from technology-based measures such as algorithms to more human involvement. Many of these solutions could be used simultaneously.

Source reliability algorithm

Although algorithms are claimed to be free of personal bias, they inevitably reflect the subjective decisions of their developers. And although they are easier to manage and cheaper than humans, they also need to be transparent. For the moment, we are not quite capable of teaching artificial intelligence how to distinguish between falsehood and truth. We can, however, teach ranking algorithms to give reliable sources higher priority. One algorithmic model considered for this task is CRH. The model assumes that the less a piece of content differs from multiple reliable sources, the ‘truer’ it is. The algorithm has been tested on weather and stock data and is now being considered for more complex situations.
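The iterative idea behind CRH-style truth discovery can be sketched as follows. This is a simplified, numeric-only version with invented variable names; the real framework handles heterogeneous data types and a more careful weight regularisation. It alternates between estimating the truth as a trust-weighted average and re-weighting each source by how far its claims deviate from that estimate.

```python
import math

def crh_truth_discovery(reports, iterations=10):
    """reports: dict of source -> list of numeric claims (equal lengths).
    Returns the estimated truths and the learned source weights."""
    sources = list(reports)
    n_claims = len(next(iter(reports.values())))
    weights = {s: 1.0 for s in sources}  # start with equal trust

    truths = [0.0] * n_claims
    for _ in range(iterations):
        # Step 1: truth estimate = weight-normalised average of the claims
        total_w = sum(weights.values())
        truths = [
            sum(weights[s] * reports[s][i] for s in sources) / total_w
            for i in range(n_claims)
        ]
        # Step 2: a source's error is its squared distance from the truths;
        # sources with smaller error receive logarithmically higher weight
        errors = {
            s: sum((reports[s][i] - truths[i]) ** 2
                   for i in range(n_claims)) + 1e-9  # guard against log(inf)
            for s in sources
        }
        total_err = sum(errors.values())
        weights = {s: math.log(total_err / errors[s]) for s in sources}
    return truths, weights
```

With two sources that roughly agree and one outlier, the estimate converges towards the agreeing pair and the outlier’s weight collapses, which is exactly the ‘closer to reliable consensus means truer’ intuition described above.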

Editorial judgment by human editors

To ascertain how reliable certain pieces of news content are, social media platforms could employ human editors who use experience and editorial judgment in combination with fact checking. Human judgment is more reliable because it is better able to handle nuance and is less susceptible to gaming and trolling. Human editors are, however, costly, prone to partiality, and often too slow to keep up with the speed of social media.

Partnerships with reputable fact-checking sites

News sites that use transparent, reliable fact-checking methodology and correct their mistakes could be whitelisted or given more weight. Websites failing independent fact-checks could be penalised or even banned. False articles could also be automatically linked to debunking articles on trusted fact-checking websites such as Snopes or Politifact.

News vetting through crowd sourcing

Crowdsourced vetting would make assessing fake news similar to the Wikipedia approach. Contributors would apply for a ‘verified content checker’ status and check and rank articles. It would be a more democratic way of fact checking and less open to accusations of bias. It could, however, also attract gamers or trolls paid to promote fake news or linkbait.

Fake news flag option for articles shared on Facebook

Similar to Facebook’s ‘abuse/spam’ reports, users could have the option to flag or label content as fake. With enough users labelling a news item as false or inaccurate, other readers would see the ‘fake news’ label before clicking on it or sharing it. This method would also improve the accuracy of ‘related articles’ suggestions. This simple measure might lead to more critical consumption and reduce the spread of fake content. It is, however, also susceptible to gaming and trolling, as users could tag real news with ‘fake’ labels.
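One way such a flagging mechanism could resist gaming is to weight each flag by the flagging user’s reputation, so that a swarm of low-reputation troll accounts cannot trip the label on its own. The sketch below is purely illustrative; the class, the threshold and the reputation scores are assumptions, not Facebook’s actual design.

```python
class FlaggedArticle:
    """Tracks reputation-weighted fake-news flags on one article."""

    def __init__(self, title):
        self.title = title
        self.flag_weight = 0.0

    def flag(self, user_reputation):
        # Reputation is clamped to [0, 1]; low-reputation accounts
        # contribute almost nothing, dampening coordinated campaigns.
        self.flag_weight += max(0.0, min(1.0, user_reputation))

    def label(self, threshold=5.0):
        # Show the warning only once enough trusted users have flagged it.
        return "disputed" if self.flag_weight >= threshold else "ok"
```

Under this scheme, a hundred flags from throwaway accounts with reputation 0.01 carry less weight than six flags from established users, which addresses the trolling weakness noted above without removing the feature.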

Separate ‘share’ buttons for personal updates and news

In order to offer a clear distinction and eliminate confusion, some people have suggested separate ‘share’ buttons for personal updates and actual news.

Colour-coded newsfeeds

Colour coding newsfeeds – a different colour for real news, fake news and satire – would provide an instant visual guide for distinguishing between different types of content. This method would however rely on a person making the distinctions. The potential problem could be that any mistakes, whether genuine or alleged, could open the social media site up to being accused of bias.


Some of the fake news combating ideas still have significant flaws, and there are various downsides to open collaborative projects such as the one by Upworthy’s Eli Pariser. Mark Zuckerberg has stated that with its proposed measures, Facebook will have to walk a fine line between policing newsfeeds and not infringing on personal opinions and free speech. There is clearly a lot of work to be done when it comes to developing and implementing systems that work. For now, the simplest and best way to fight fake news is to take out paid subscriptions with reputable news companies and, when in doubt, do some fact checking of your own before hitting the share button.



