As online news and social media have proliferated over the last decade, a whole new category of information has entered the popular lexicon: fake news. Online disinformation has come a long way from “On the Internet, nobody knows you're a dog” to a place where a malicious article hatched from a troll farm can sit side-by-side with legitimate, rigorous journalism. Awareness of the issue is widespread, but stopping misinformation requires social media platforms, journalists, fact checkers, and citizens all to take action.
One solution being touted to prevent the spread of disinformation is “media literacy”, but what is it and how effective can it really be?
Media literacy is essentially critical thinking. It may seem redundant to teach media literacy in schools, where critical thinking is presumably taught already, but the UK Commission on Fake News and the Teaching of Critical Literacy Skills, run by the All-Party Parliamentary Group (APPG) on Literacy and the National Literacy Trust, found that only 2% of children have the critical literacy skills they need to tell whether a news story is real or fake.
If media literacy is essential for navigating online media, then it should be a compulsory part of schooling. But tying it to compulsory education limits how far those skills can be taken. In Finland, the fact-checking organisation Faktabaari teaches media literacy and fact-checking in schools, yet its materials are only intended for use with children up to Grade 9 (15-16 years old), so only so much can be taught. Programmes that target even younger demographics, such as the proposed BBC My World being produced by Angelina Jolie, are so broad in scope and aimed at such a young audience that they seem unlikely to make much impact.
If, on the other hand, media literacy is not taught but is voluntary, then it is purely self-selecting and also unlikely to solve the issue. Those interested in learning about and detecting misinformation are probably those who are already least likely to be fooled by it.
While it is encouraging that groups like UNESCO are putting out handbooks to help journalists fight fake news, journalists are not generally the ones most vulnerable to it. They must typically back up their reporting, and they work with copy editors and desk editors who cull information that cannot be trusted.
The core of the issue is that disinformation exploits well-documented psychological mechanisms, such as those described by social identity theory: people’s sense of group belonging, or their feelings of social isolation. People are especially vulnerable on issues where they believe they are knowledgeable but are not.
Studies have questioned how far greater media literacy can counter this kind of bias. A study in the Journal of Experimental Psychology found that repeated statements were easier to process, and were therefore perceived as more truthful than new statements, even when participants already knew the statements were false – the so-called illusory truth effect.
What may be of benefit to the public are broader digital hygiene programmes, such as Facebook’s Digital Literacy Library, which break big issues down into smaller parts (for example, individual privacy settings and peer-to-peer behaviour) and are better suited to public advocacy campaigns.
More viable solutions will come from tackling the issues at a higher level. An NPR poll found that the American public consider misleading information to be the greatest threat to keeping elections safe, but that the fix lies more with the media, tech companies, and the government than with the public.
What can the media do to fight misinformation? Organisations like Reuters are putting out guides on how to spot fake news, and The Guardian now prominently shows an article’s publication date in its social media thumbnails. The paper had already made the dateline more prominent on articles themselves, but found that Facebook users often see only the shared post and never dig deeper into the article itself. The thumbnail change was meant to stop readers from mis-contextualising its reporting.
The line between a news platform and a social media platform can also be unclear. Traditional media platforms often strictly regulate political messaging and advertising, whereas social media platforms face far fewer restrictions. Facebook, the leading social media platform, also runs a journalism project and acts as a gatekeeper to the news its users can access. It will continue to see itself as having a role in journalism, yet it will not moderate political advertising on its platform.
It is clear, especially in markets like the UK, that Facebook – which has vowed not to take down inflammatory or misleading political adverts – wants to see how it can take advantage of the fact that it doesn’t have to play by the same rules as television broadcasters.
Meanwhile, platforms like Twitter and Spotify have banned political advertising to combat fake news. But these policies also hurt smaller political organisations that lack the resources to advertise anywhere but online; social media platforms remain the cheapest way to advertise.
There certainly won’t be a “one size fits all” solution. It isn’t possible simply to ask citizens not to fall victim to misinformation campaigns run by hostile intelligence agencies using manipulative psychological tactics, and there is no easy way to give people greater self-awareness or a better understanding of their own biases. Regional and cultural differences also matter when it comes to free speech and open debate, and governments will need to find solutions that work for their own contexts.
Media literacy is a good start, but it’s not the end of the story.