Concern about the proliferation of disinformation, misinformation, and propaganda has reached the point where many governments are proposing new legislation. But the solutions on offer reflect an inadequate understanding of the problem – and could have negative unintended consequences.
This past June, Germany’s parliament adopted a law that includes a provision for fines of up to €50 million on popular sites like Facebook and YouTube, if they fail to remove ‘obviously illegal’ content, such as hate speech and incitements to violence, within 24 hours. Singapore has announced plans to introduce similar legislation next year to tackle ‘fake news’.
In July, the US Congress approved sweeping sanctions against Russia, partly in response to its alleged sponsorship of disinformation campaigns aiming to influence US elections. Dialogue between the US Congress and Facebook, Twitter, and Google has intensified in the last few weeks, as clear evidence of campaign-ad purchases by Russian entities has emerged.
Such action is vital if we are to break the vicious circle of disinformation and political polarisation that undermines democracies’ ability to function. But while these legislative interventions all target digital platforms, they often fail to account for at least six ways in which today’s disinformation and propaganda differ from yesterday’s.
Opening the floodgates
First, there is the democratisation of information creation and distribution. As Rand Waltzman, formerly of the Defense Advanced Research Projects Agency, recently noted, any individual or group can now communicate with – and thereby influence – large numbers of others online. This has its benefits, but it also carries serious risks – beginning with the loss of journalistic standards of excellence, like those typically enforced within established media organisations. Without traditional institutional media gatekeepers, political discourse is no longer based on a common set of facts.
The second feature of the digital information age – a direct by-product of democratisation – is information socialisation. Rather than receiving our information directly from institutional gatekeepers, who, despite often-flawed execution, were fundamentally committed to meeting editorial standards, today we acquire it via peer-to-peer sharing.
Such peer networks may elevate content based on factors like clicks or engagement among friends, rather than accuracy or importance. Moreover, information that is filtered through networks of friends can result in an echo chamber of news that reinforces one’s own biases (though there is considerable uncertainty about how serious a problem this represents). It also means that people who otherwise might consume news in moderation are being inundated with political polemic and debate, including extreme positions and falsehoods, which heighten the risk of misinforming or polarising wider swaths of the public.
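The ranking dynamic described above can be illustrated with a minimal sketch (the stories and share counts below are hypothetical): a feed ordered purely by engagement will surface a sensational falsehood above accurate but less-shared reporting.

```python
# Hypothetical illustration: a feed ranked by engagement alone.
# Accuracy plays no role in the ordering.
stories = [
    {"headline": "City council passes budget", "accurate": True, "shares": 40},
    {"headline": "Shocking secret they hid!", "accurate": False, "shares": 900},
    {"headline": "Local school wins grant", "accurate": True, "shares": 25},
]

# Peer networks elevate content by shares/clicks, not by accuracy,
# so the inaccurate-but-viral item lands at the top of the feed.
feed = sorted(stories, key=lambda s: s["shares"], reverse=True)

for story in feed:
    print(story["headline"], "| accurate:", story["accurate"])
```

The point of the sketch is simply that the ranking key contains no notion of truth: swap `shares` for any engagement signal and the outcome is the same.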
Sharing, not caring
The third element of today’s information landscape is atomisation – the divorce of individual news stories from brand or source. Previously, readers could easily distinguish between non-credible sources, like the colourful and sensational tabloids in the checkout line at the supermarket, and credible ones, such as longstanding local or national newspapers.
Now, by contrast, an article shared by a friend or family member from The New York Times may not look all that different from one from a conspiracy theorist’s blog. And, as a recent study from the American Press Institute found, the original source of an article matters less to readers than who in their network shares the link.
The fourth element that must inform the fight against disinformation is anonymity in information creation and distribution. Online news often lacks not only a brand, but also a by-line. This obscures potential conflicts of interest, creates plausible deniability for state actors intervening in foreign information environments, and creates fertile ground for bots to thrive.
One 2015 study found that bots – applications that perform automated tasks – generate around 50 per cent of all web traffic, with as many as 50 million Twitter users and 137 million Facebook users exhibiting non-human behaviours. Of course, there are ‘good’ bots that provide, say, customer service or real-time weather updates. But there are also plenty of bad actors ‘gaming’ online information systems to promote extreme views and inaccurate information, lending them the appearance of mainstream popularity and acceptance.
Facebook: the propagandist’s lab
Fifth, today’s information environment is characterised by personalisation. Unlike their print, radio, or even television counterparts, Internet content creators can A/B test (run controlled experiments comparing two versions of a message) and adapt micro-targeted messages in real time.
‘By leveraging automated emotional manipulation alongside swarms of bots, Facebook dark posts (unpublished posts), A/B testing, and fake news networks,’ according to a recent exposé, groups like Cambridge Analytica can create personalised, adaptive, and ultimately addictive propaganda. Donald Trump’s campaign was measuring responses to 40,000-50,000 variants of ads every day, then tailoring and targeting its messaging accordingly.
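The A/B-testing loop behind such campaigns can be sketched in a few lines (everything here is hypothetical: the variant names, click rates, and sample sizes are illustrative, not drawn from any real campaign): show each message variant to a sample of users, measure responses, and keep the better performer.

```python
import random

random.seed(0)  # fixed seed so the simulation is repeatable

def simulate_clicks(true_rate, impressions=1000):
    """Simulate how many of `impressions` users click a given ad variant."""
    return sum(random.random() < true_rate for _ in range(impressions))

# Hypothetical hidden click-through rates for two ad variants.
variants = {"variant_a": 0.02, "variant_b": 0.10}

# Run the 'experiment': expose each variant, count responses.
results = {name: simulate_clicks(rate) for name, rate in variants.items()}

# Keep whichever variant performed better; at scale, a campaign
# repeats this across tens of thousands of variants per day.
winner = max(results, key=results.get)
print(winner, results)
```

A real micro-targeting system would segment audiences and update continuously, but the core mechanic is just this measure-and-select loop run at enormous scale.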
The final element separating today’s information ecosystem from that of the past, as Stanford law professor Nate Persily has observed, is sovereignty. Unlike television, print, and radio, social-media platforms like Facebook or Twitter are self-regulating – and are not very good at it. It was not until mid-September that Facebook even agreed to disclose information about political campaign ads; it still refuses to offer data on other forms of disinformation.
It is this lack of data that is undermining responses to the proliferation of disinformation and propaganda, not to mention the political polarisation and tribalism that they fuel. Facebook is the chief culprit: with an average of 1.32 billion daily active users, its impact is massive, yet the company refuses to give outside researchers access to the information needed to understand the most fundamental questions at the intersection of the Internet and politics. (Twitter does share data with researchers, but it remains an exception.)
We are living in a brave new world of disinformation. As long as only its purveyors have the data we need to understand it, the responses we craft will remain inadequate. And, to the extent that they are poorly targeted, they may even end up doing more harm than good.