Tackling fake news and misinformation on platforms

The online world is increasingly struggling with misinformation, such as fake news, spreading on digital platforms. False content, whether created and shared intentionally or unintentionally, travels fast on platforms and can reach global audiences almost instantaneously. Pre-screening, monitoring, correcting or otherwise controlling the spread is extremely difficult, and remedial responses often arrive only in time to deal with the consequences.

In this signal post we examine the problem of misinformation in the platform economy and list potential solutions, with forerunner examples. Defining and establishing clear responsibilities through agreements and regulation is one part of the cure, and technological means such as blockchain, reputation systems, algorithms and AI will also be important. Equally essential is supporting and empowering users to be aware of the issue and to practise source criticism, for example by embedding critical thinking skills into educational curricula.

Misinformation – the size of the problem

Fake news, and misinformation in general, is not a new phenomenon, but the online world has made it easy to spread it faster and more widely. Individuals, organisations and governments alike can be the source or target audience of misinformation, and fake content can be created and spread with malicious intent, by accident, or even for entertainment (for example by the news satire organization The Onion).

Digital online platforms are often the place where misinformation is first released and then spread through liking, sharing, information searching, bots, etc. The online environment has not yet adopted means to battle misinformation efficiently, and the risks and concerns involved range from reputation damage to global political crises. The most pessimistic views even warn of an “infocalypse”, a reality-distorting information apocalypse. Others talk about the erosion of civility as a “negative externality”. This view suggests that platform-economy companies could tackle misinformation analogously to how manufacturing companies tackle negative environmental externalities. It has also been suggested that misinformation is a symptom of deep-rooted systemic problems in our information ecosystem, and that such an endemic condition in this complex context cannot be easily fixed.

Solutions – truth, trust and transparency

Remedies for fake news and misinformation are being developed and implemented, even if designing control and repair measures may seem like an impossible mission. Social media platforms are removing fake accounts and materials, and efforts to update traditional journalism values and practices for the platform economy are being initiated. Identification and verification processes are a promising avenue for improving trust, and blockchain, among other technologies, may prove pivotal in their implementation.

Example: The Council for Mass Media in Finland has recently launched a label for responsible journalism, which is intended to help users distinguish fake content and commercials from responsible and trustworthy journalism. The label is meant for both online and traditional media that comply with the guidelines for journalists provided by the council.

Algorithms and technical design in general will also have an important part to play in ensuring that platforms provide a foundation and structure that repels misinformation. Taking on these responsibilities also calls for rethinking business models and strategies as demand for transparency grows. One specific issue is the “filter bubble”, a situation where algorithms selectively expose users to information that reinforces their viewpoint and filter out differing information. Platforms such as Facebook are already adjusting and improving their algorithms and practices regarding, for example, their advertising models.

Example: Digital media company BuzzFeed has launched an “outside your bubble” feature, which specifically gives the reader suggestions of articles providing differing perspectives compared to the piece of news they just read.
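The diversification idea behind such features can be illustrated with a toy sketch. This is not BuzzFeed's or any platform's actual algorithm; the function name, stance labels and scoring are all hypothetical, assumed only for the illustration: a feed ranked purely by engagement score would fill up with like-minded content, so a few slots are reserved for articles from outside the reader's bubble.

```python
# Toy "outside your bubble" diversifier (hypothetical, for illustration only).
# candidates: list of (title, stance, score) tuples, where stance is a crude
# label for the article's viewpoint and score is a relevance/engagement score.

def diversified_feed(candidates, user_stance, size=5, out_of_bubble=2):
    """Return `size` article titles, reserving `out_of_bubble` slots for
    articles whose stance differs from the user's own."""
    same = [c for c in candidates if c[1] == user_stance]
    other = [c for c in candidates if c[1] != user_stance]
    by_score = lambda c: -c[2]  # sort highest-scoring first

    # Take the best differing-stance articles first, then fill the
    # remaining slots with the best like-minded ones.
    picked = sorted(other, key=by_score)[:out_of_bubble]
    picked += sorted(same, key=by_score)[: size - len(picked)]
    return [title for title, _, _ in sorted(picked, key=by_score)]

articles = [
    ("A", "pro", 0.9), ("B", "pro", 0.8), ("C", "con", 0.7),
    ("D", "con", 0.6), ("E", "pro", 0.5), ("F", "con", 0.4),
]
feed = diversified_feed(articles, user_stance="pro")
```

In this sketch a purely score-ranked feed for a "pro" reader would be dominated by like-minded articles, whereas the reserved slots guarantee that the two strongest "con" articles also appear.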

Example: YouTube is planning to address misinformation by adding “information cues” with links to third-party sources to videos covering hoaxes and conspiracy theories. This way the user will automatically receive suggestions for further and possibly differing information on the topic.

Selected articles and websites

BuzzFeed: He Predicted The 2016 Fake News Crisis. Now He’s Worried About An Information Apocalypse.
BuzzFeed: Helping You See Outside Your Bubble
Engadget: Wikipedia had no idea it would become a YouTube fact checker
Financial Times: The tech effect – every reason to think that social media users will become less engaged with the largest platforms
Julkisen sanan neuvosto (Council for Mass Media in Finland): How do you know that a news story is true?
London School of Economics and Political Science: Dealing with the disinformation dilemma: a new agenda for news media
Science: The science of fake news
The Conversation: Social media companies should ditch clickbait, and compete over trustworthiness
The Onion: About The Onion
Wikipedia: Fake news
Wikipedia: Filter bubble

Heidi Auvinen

Research Scientist, VTT Technical Research Centre of Finland Ltd