Misunderstanding Misinformation: why most ‘fake news’ regulation is doomed to failure, by Paul Bernal

The regulation of fake news has been fraught with problems from the outset – and it is likely to remain so for the foreseeable future. There are a number of reasons for this, some connected with the nature of the internet and of social media in particular, some with the political climate around the world, some with the nature of ‘news’ – but most associated with some fundamental misunderstandings about misinformation and about the problem that it represents. The extent to which each of these misunderstandings can be found in different places and under different jurisdictions is one of the key questions regarding regulation around the world. Trying to reconcile these differences is one of the challenges for lawmakers – and indeed for those interested in comparative law. It will be interesting to see how this is reflected in the blog posts in this series.

Fake News isn’t the problem in itself

The misunderstandings come at a number of levels – and they have implications as to the kinds of regulation that might even have a chance of succeeding in addressing the problem. The first and perhaps the most fundamental misunderstanding is about what the problem actually is. That is, the problem is not really the ‘fake news’ itself, but the effect that the fake news creates – the manipulation of people, often political but not necessarily so, that the fake news is used for. Thinking that the problem is the fake news, that people are misinformed about something, means amongst other things that the focus of regulation tends to be on the content of the fake news, rather than the effect that it has – something done, for example, by the Online Harms White Paper in the UK. This kind of focus means that fake news regulation often seems to end up as a game of whac-a-mole with no end in sight and no impact on the real problem.

People don’t want the truth

The second misunderstanding is the assumption made by those pushing regulation that people actually want to find the truth – when in practice, particularly on the internet, they aren’t really interested in that, but rather in backing up their pre-existing views and ‘winning’ arguments with others. There is empirical evidence to show this – as well as the anecdotal evidence of almost anyone who has spent much time in the political fields of Twitter and similar sites. This, amongst other things, seriously undermines the idea of fact-checking and of labelling sources as unreliable or untrustworthy. People care less about whether their sources are factual than about whether they can be used effectively in an argument.

Fake News or Fake Narratives?

The third misunderstanding, closely related to the first and second, is that it’s the fake news that matters. For manipulative purposes – and in particular for politically manipulative purposes – the specific news doesn’t really matter. What matters, rather, is the narrative. Fake news is used primarily as a way of supporting a fake narrative – and that narrative can be supported by things that are not specifically fake. If, for example, you want to support a fake narrative that crime is committed disproportionately by immigrants, you could publish fake stories about particular immigrants committing notably nasty crimes, but you could also seek out real stories about immigrants who have committed crimes and publish them out of context, written in ways that exaggerate their effects and make it look as though all such crimes are committed by immigrants. The specific story would be true, but the narrative woven around it would be fake. Fact-checkers would not be able to identify the story as fake, but the effect in manipulative terms would be to support the fake narrative, which is what the manipulators are aiming for.

It’s not the trolls and bots, it’s the politicians and the media

The fourth misunderstanding – which may be a deliberate misunderstanding, as the consequences of understanding it are so significant that regulators, lawmakers and others may well be afraid to face up to them – is about who creates, uses and spreads fake news. The lazy (or deliberate) assumption is that the problems with fake news come from trolls or bots, or shady operators in dodgy, unregulated countries – when in practice some of the worst and most important perpetrators are very much closer to home, and very much more in public view: our own politicians and our own ‘mainstream’ or ‘traditional’ media. This is true both in terms of the creation of misinformation and in terms of how it is spread and used to manipulate opinion. From Donald Trump to Boris Johnson, the levels and kinds of misinformation created and spread by leading politicians are immense. For example, in the 2019 general election in the UK, 88% of the advertisements by the Conservative Party were found by the disinformation tracking organisation First Draft to have been misleading. Leading politicians are also well versed in misinformation techniques. Jacob Rees-Mogg, for example, has tweeted links to stories while suggesting they say the diametric opposite of what they actually do, on the assumption that people won’t follow the links and find the truth. The mainstream media – and not just the tabloid press – has made a fine art of the fake narrative for many years. As Evelyn Waugh put it in his novel about the press, Scoop:

“I read the newspapers with lively interest. It is seldom that they are absolutely, point blank wrong. That is the popular belief, but those who are in the know can usually discern an embryo of truth, a little grit of fact, like the core of a pearl, round which have been deposited the delicate layers of ornament.”

In terms of the spread of disinformation, it isn’t the anonymous trolls and bots but the big accounts with massive followings, the superspreaders of misinformation. Often these are exactly the same politicians and high-profile journalists that have already been mentioned – they play a double role, both creating what is in effect misinformation and spreading others’ misinformation. If something gets retweeted by one of the big players, it will spread like wildfire. If you are a minor creator of fake news, your ideal method is to find a way to get one of these big players to do the spreading for you.

All this means that focusing on the trolls and bots, or seeking out the secret, evil creators of misinformation is bound to fail – or at the very least be a hugely incomplete solution. Excluding mainstream media from regulation – as, for example, has been contemplated for the UK’s Online Harms White Paper – is similarly doomed.

Fake News isn’t New

It is easy, and lazy, to assume that fake news is something new, a phenomenon of our digital, internet age. The opposite is true: fake news has a long and dishonourable history. In chapter 9 of my book The Internet, Warts and All, I set out some of that history from the Middle Ages – including Vlad the Impaler, one of the early victims of fake news in 15th century Wallachia (now part of Romania) – to the current day. Fake news has existed as long as news has existed – and for very much the same reasons as it exists today: primarily financial gain and political manipulation. What is different now, to the extent that it is different, is the scale and the delivery methods. Historically, fake news has been created and distributed by the best methods available at the time: for Vlad, hand-printed pamphlets illustrated with woodcuts; for Lord Haw Haw and Hanoi Hannah, the radio; for Comical Ali in the Iraq War, television. Now, that means the internet – and particularly social media.

Social media means that it is easier to create fake news than it ever was before – and easier, faster and safer to distribute it to exactly the people who are likely to be influenced by it. Stopping the creation is very difficult – it takes seconds to make and post, you can always create more, and if your account is blocked you can quickly create another. Focusing on the creation – and indeed on the creators – is bound to fail, particularly if you are unwilling, unable or afraid to take on the big players in politics and the media. A game of whac-a-mole at best. Instead, the only way to have a chance of dealing with misinformation is to look at how it is distributed – and in particular how it is targeted at those people most likely to be influenced by it. That means taking on the social media companies.

Facebook is the biggest problem – but solutions are hard!

The key here is addressing social media – and not primarily in terms of its content, but in terms of how information spreads on it, both through automated recommendations and ‘tailoring’ and through networks of friends, followers and people of similar interests. That means addressing the profiling, tailoring and targeting systems of the social media platforms, and in particular Facebook. It is the ability of misinformation – both fake news and fake narratives – to be delivered to exactly the people who will be influenced by it that is the problem. This, however, hits at the very heart of the business models of the social media companies – and may indeed mean that the only way to have an impact on misinformation is actually to break up those companies.

That is a major task – but without it, it is highly unlikely that any of the various forms of regulation proposed will have any real effect. Some measures will work to an extent. Algorithmic accountability, for example, might help to stop some forms of fake news from rising to the top of YouTube recommendations or Google searches, but that only deals with one small part of the issue. Making the mainstream media more accountable, or imposing proper sanctions on politicians for their role in the problem, might also help – but both of these are very slippery slopes in terms of freedom of speech, and leaving such regulation in the hands of governments is even more dangerous. In practice, we will have to accept that misinformation is likely to be with us on a significant scale for the foreseeable future. We can chip away at the edges, but we should not delude ourselves into thinking it is a problem that can really be ‘solved’.

Dr Paul Bernal is an Associate Professor in IT, IP and Media Law at UEA Law School, and author of The Internet, Warts and All: Free Speech, Privacy and Truth and Fakebook: why Facebook makes the fake news problem inevitable.