Understanding the Impact of Disinformation

A closer look at the problems created by disinformation campaigns

For those looking to mitigate the impact of disinformation and prepare societies for the modern information landscape, a number of challenges emerge in identifying the right problems to solve. Top of the list is the “symptoms vs. root cause” conundrum. To illustrate: the Drug Enforcement Administration (DEA) has for half a century approached the narcotics problem in the United States by targeting the supply side of the equation, specifically the growers, manufacturers, and distributors of controlled substances. While this approach may seem logical on the surface, we would argue that it only addresses the symptoms of a much deeper problem: the conditions that drive the demand for narcotics.

Disinformation draws many parallels to narcotics. Following the DEA’s approach, societies can look to stop the production of disinformation through proactive campaigns against bot networks, troll farms, and fake news websites. They can also try to control distribution through the regulation or censorship of social media and other content creation platforms. These initiatives, however, only address the symptoms. Instead, societies should look to answer the question of why disinformation is in such high demand in the first place.

Another challenge is the desire for headlines over actual impact. To return to the DEA example, the agency has been criticized for prioritizing operations that seize the most money and generate the biggest headlines. In the disinformation context, shutting down a troll farm or a fake news site makes for good publicity, but the impact is questionable at best. When something meeting a market demand gets shut down, an alternative will always emerge to fill the void.

With that in mind, we have identified four problems that focus on the demand side of the equation.

Inherent mistrust of information/sources that contradict one’s perspective

This is perhaps the problem most indicative of a post-truth society. When the pursuit of shared truth becomes a zero-sum game, there is only room for one perspective to be right. Nuance and the idea that two things can be true at the same time become intellectual boondoggles. In this environment, we see an accumulation of information around the poles of a debate. The “for” side, for example, will seek and find information to support its side, and vice versa. Whether the information presented is rooted in fact is beside the point.

The result, by the very definition of “zero sum,” is an environment in which one side cannot trust the information presented by the opposing side. Such information is written off as fake news, patently false, or rooted in conspiracy. Any attempt to fact-check or present evidence to the contrary only serves to further entrench the opinion of the side whose information is being questioned. This produces a dangerous endgame: that two or more alternate realities, each supported by its own shared truth, become so impenetrable that all points of relation are lost. We see this manifest itself in the move towards adversarial political analysis on radio, television, and streaming that prioritizes sparring and “hot takes” over respectful intellectual debate.

An argument can be made that the problem is with the information itself: that because the information made available to people is of questionable integrity, intellectual debate becomes impossible. If people only had access to information rooted in fact, the argument goes, the zero-sum game would magically become the pursuit of shared truth. While this may be true, it trivializes the complexity of any solution that could actually guarantee the integrity of information.

Declining empathy for contradictory perspectives and ideas

Related to the problem of the mistrust of information presented by opposing sides is the lack of empathy for perspectives and ideas that contradict one’s own. This is the emotional aspect of the zero-sum game. Not only do the sides reject each other’s information, they refuse to take the time to understand where the other side is coming from, or why it believes what it believes.

The importance of empathy should not be overlooked. One side can vehemently disagree with another while still seeking to understand the reasoning behind the opposing side’s position. This is the first step on the road to productive discourse. Without empathy, a debate devolves into ad hominem attacks, emotionally charged arguments, and eventually a desire to avoid discussing a particular topic. At a societal level, this is how polarization becomes entrenched.

A lack of empathy also provokes the desire to change the beliefs of the other side. For psychological and social reasons, people find it difficult to change their minds. If changing their mind would jeopardize a person’s standing in a group, for example, they are unlikely to do so. Given that reality, building bridges requires a completely new approach, and that approach starts with empathy.

Misaligned incentives in the attention economy that make disinformation profitable

The attention economy is not a new phenomenon. Content creators and platforms, whether radio, books, newspapers, or television, have always needed to capture people’s attention to earn advertising revenue. With the advent of the internet and social media, that paradigm has not changed. What has changed is the democratization of content creation and attention monetization, the degree to which algorithms control what people see, and the shift from impression-based to click-based advertising pricing.

What this means is that spreading false information has become an accessible, effective, and profitable enterprise in the attention economy. Creators can easily craft an attention-grabbing headline, write engaging content, and share or boost the article widely on social media. As the content generates more clicks, two things happen. The first is an increase in the creator’s share of the revenue from ads embedded in the article. The second is that social media algorithms further amplify the content’s reach based on its popularity, measured by how often people engage with it. And when the content is boosted or sponsored, it generates more revenue for the social media platforms as well.

For people getting their information from social media, the prioritization of popular and revenue-generating content drastically skews what they see. More often than not, they see content that confirms what they already believe, because they are more likely to click on such content than on content offering an alternative perspective. In this way, social media platforms are monetizing confirmation bias. Without a realignment of these incentives, social media will continue to be a questionable source of information.
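To make this feedback loop concrete, the toy simulation below sketches how a ranker that rewards raw engagement ends up amplifying belief-confirming content. Everything in it is an illustrative assumption, not any platform’s actual algorithm: the click probabilities, the flat per-click revenue share, and the rule that ranks items purely by accumulated clicks are all hypothetical.

```python
import random

random.seed(42)

# Illustrative assumptions: users click belief-confirming content far more
# often than belief-challenging content, and creators earn a flat share per click.
P_CLICK_CONFIRMING = 0.30
P_CLICK_CHALLENGING = 0.05
REVENUE_PER_CLICK = 0.01  # hypothetical ad-revenue share, in dollars

items = [{"id": i, "confirming": i % 2 == 0, "clicks": 0} for i in range(20)]

def feed(items, k=5):
    """Engagement-based ranking: surface the k most-clicked items."""
    return sorted(items, key=lambda it: it["clicks"], reverse=True)[:k]

for _ in range(10_000):  # simulated user sessions
    for item in feed(items):
        p = P_CLICK_CONFIRMING if item["confirming"] else P_CLICK_CHALLENGING
        if random.random() < p:
            item["clicks"] += 1  # each click pays the creator AND boosts future rank

confirming = sum(it["clicks"] for it in items if it["confirming"])
challenging = sum(it["clicks"] for it in items if not it["confirming"])
print(f"Confirming content:  {confirming} clicks, ${confirming * REVENUE_PER_CLICK:.2f}")
print(f"Challenging content: {challenging} clicks, ${challenging * REVENUE_PER_CLICK:.2f}")
print("Feed:", ["confirming" if it["confirming"] else "challenging" for it in feed(items)])
```

Even in this crude model, the loop closes quickly: confirming items attract more clicks, the ranker pushes them higher in the feed, and their creators collect roughly nine times the revenue of those offering a contrary view. Nothing about the content’s accuracy ever enters the calculation.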

Widening disinformation awareness gap

Everyone is a victim of disinformation, whether they are aware of it or not. Disinformation forms the basis of people’s arguments, muddies the information landscape, makes people question their own beliefs, destroys relationships, and erodes social cohesion. It also impacts the outcome of elections and referendums. Even those who consider themselves immune are still at its mercy. This fact alone speaks to just how easy it is for trained disinformation operators to wreak havoc on a society.

But disinformation is more than just fake news sites generating clickbait headlines or troll farms pumping out antagonistic replies to tweets. Part of the awareness problem stems from people not having an adequate appreciation of what disinformation campaigns look like and what such campaigns are ultimately trying to achieve, especially over the long term. Seen this way, disinformation is less about the actual information being spread and much more about how a society becomes the perpetrator of its own demise.

The US Department of Homeland Security (DHS) sought to demonstrate this concept through its educational War on Pineapple initiative. Specifically, it wanted to show how targeted disinformation and social-media campaigns create controversy and, ultimately, provoke protest over “hot button” issues in American society. The lesson is clear: people need to better understand and appreciate their own role in disinformation campaigns.


The need for societies and governments to find tangible solutions to mitigate the impact of disinformation is pressing. The challenge is figuring out how to approach a problem so complex and multi-faceted. Attempting to shut down producers or stem the flow of disinformation seems a fool’s errand aimed at securing headlines instead of impact. Expecting social media companies to self-regulate is also a lot to ask, given their obligation to generate profit for shareholders.

At the same time, we should re-examine the success of existing solutions, especially those that focus on the veracity of information. In post-truth societies, societal divides need to be bridged before we can benefit from things like independent fact-checking, gamification, and content moderation. What we should look to do instead is understand why disinformation finds such fertile ground in societies in the first place.