Misinformation

Jing Zeng, Department of Media and Culture Studies, Utrecht University, Netherlands
Scott Babwah Brennen, Center on Technology Policy, University of North Carolina at Chapel Hill, United States

PUBLISHED ON: 09 Nov 2023 DOI: 10.14763/2023.4.1725

Abstract

This article delves into the diverse and complex nature of conceptualising misinformation as an object of research, highlighting the interdisciplinary scholarship in this field that results in varied and sometimes conflicting definitions. While a singular theory of misinformation is neither feasible nor desirable, the article argues for the importance of greater conceptual comprehensiveness in empirical research. Without a comprehensive and comparable definition of misinformation, accurately measuring the problem's scale becomes challenging, potentially leading to underestimation or overhyping of its impact and misguided interventions. Furthermore, addressing the growing demand for countering misinformation in public and policy-making domains necessitates a nuanced understanding of its roots in cultural, sociopolitical, and technological systems. Existing academic discussions on remedies often adopt a Western-centric perspective, overlooking unique power dynamics in non-democratic and non-Western contexts. Therefore, future discussions on countermeasures should prioritise the Global South and other understudied contexts, avoiding one-size-fits-all solutions.
Citation & publishing information
Received: September 22, 2022 Reviewed: March 16, 2023 Published: November 9, 2023
Licence: Creative Commons Attribution 3.0 Germany
Competing interests: The authors have declared that no competing interests exist that have influenced the text.
Keywords: Misinformation, Disinformation, Propaganda, Conspiracy theory
Citation: Zeng, J. & Brennen, S. B. (2023). Misinformation. Internet Policy Review, 12(4). https://doi.org/10.14763/2023.4.1725

This article belongs to Concepts of the digital society, a special section of Internet Policy Review guest-edited by Christian Katzenbach and Thomas Christian Bächle.

Introduction

In academic communities, there is growing interest in and concern over the proliferation of misinformation. In certain cases, misinformation may be no more than a harmless parody or a light-hearted joke. However, as documented in recent research on misinformation during political events and public health crises, fabricated and inaccurate information can have severely detrimental impacts. On the individual level, misinformation can influence attitudes and decision-making (Lee & Jones-Jang, 2022; Loomba et al., 2021); on a societal level, it can undermine policy-making and compromise social well-being and democracy (Benkler, 2019; Karpf, 2019; Thorson et al., 2018). Given its relevance and potential impact, misinformation as a phenomenon has attracted considerable debate in academic, public, and political discussions alike. One consequence of the hype surrounding misinformation across different social sectors is inconsistency and contradiction in defining misinformation as a concept.

One factor contributing to this issue is the presence of numerous related conceptual 'cousins', or 'conceptual predecessors' (Anderson, 2021; Mahl et al., 2022). Some of these related terms are more established (e.g. propaganda, rumour, conspiracy theories) than others (e.g. 'fake news'). Scholarly efforts have been made to delineate the conceptual borders between these terms, such as by analysing their contexts of emergence, content, and function (DiFonzo & Bordia, 2007; Guess & Lyons, 2020; Zeng, 2021). For instance, rumour commonly refers to unverified information that emerges when trustworthy or official information is scarce (Allport & Postman, 1947; DiFonzo & Bordia, 2007); conspiracy theories, on the other hand, are explanations of events or phenomena that refer to the machinations of powerful groups or secret societies (Keeley, 1999; Douglas et al., 2019). Propaganda can be understood as the propagation of information, often biased or misleading in nature, with the purpose of manipulating or mobilising a targeted population (Benkler et al., 2018; Born & Edgington, 2017). It is beyond the scope of this article to provide an 'authoritative' framework for differentiating these terms. Instead, this work acknowledges the interconnections between scholarship around these concepts, and therefore considers not only the specific conceptual development of 'misinformation', but also theoretical work from a broad range of literature that scrutinises the varied manifestations of the misinformation phenomenon.

As Anderson (2021) cautions, current scholarship on misinformation should strive to engage more closely with its conceptual predecessors (e.g. propaganda or conspiracy theories) in order to better articulate its theoretical and real-world relevance. To synthesise and engage with related literature, this concept paper adopts a comprehensive approach to the concept of 'misinformation', using it as an overarching framework that encompasses various manifestations of misinformation phenomena. It is important to note that, as will be discussed in the following sections, understanding misinformation as an umbrella term for potentially false information is itself subject to contestation. Rather than asserting the conceptual superiority of this inclusive understanding, we employ it pragmatically, as it allows for the incorporation of a wider range of scholarship related to misinformation.

Focusing on the key contentious areas, this article first provides an overview of how the term 'misinformation' has been defined in the literature. The subsequent section shifts the focus from defining the concept to the problems associated with misinformation. The conclusion offers a brief discussion of directions for future misinformation research.

Defining misinformation

The concept of misinformation has deep historical roots. Across epochs, from ancient civilisations to the modern digital age, misinformation has consistently shaped human communication, from the distortion of facts in the oral storytelling of pre-print societies (Burkhardt, 2017) to contemporary digital information warfare conducted between states (Karpf, 2019). In academic research, the two World Wars played a pivotal role in shaping early scholarship on the topic. Following World War I, propaganda studies analysed the techniques employed during the war and their societal impact (Bernays, 1928; Lasswell, 1927). The post-Second World War era witnessed increased academic attention to rumours, acknowledging their significant role in shaping public perceptions of and attitudes towards the war effort. This phenomenon garnered particular attention within social psychology, as researchers sought a deeper understanding of the psychological processes involved in the propagation of rumours (Allport & Postman, 1947). Since the late 20th century, the advent of information and communication technologies has greatly propelled contemporary research on misinformation. Within this burgeoning field, scholars have dedicated significant effort to investigating the role of digital communication technologies in shaping the multifaceted landscape of misinformation (e.g. Marres, 2018; Napoli, 2019; Tufekci, 2018).

Despite this enduring history and the rapidly expanding body of literature on misinformation, what misinformation is remains a subject of ongoing scholarly debate. Here, we present examples of how prior literature has conceptualised and described misinformation:

  • Misinformation is a "claim that contradicts or distorts common understandings of verifiable facts" (Guess & Lyons, 2020, p. 10).
  • Misinformation is "unintentionally promulgated, inaccurate information" (Born & Edgington, 2017, p. 4).
  • "Misinformation—false information—is not a kind of information but rather pseudo-information" (Floridi, 2011, p. 104).
  • "Informing does not require truth, and information need not be true; but misinforming requires falsehood, and misinformation must be false" (Fox, 1983, p. 193).
  • Misinformation is "information whose inaccuracy is unintentional" (Jack, 2017, p. 2).
  • Misinformation includes "claims that do not enjoy universal or near-universal consensus as being true at a particular moment in time on the basis of evidence" (Southwell et al., 2017, p. 369).

As these examples illustrate, the most important defining characteristic of misinformation is its falsehood or inaccuracy. At the same time, disagreement emerges over whether misinformation qualifies as information and whether it has to be unintentional. This section provides an overview of the scholarly discussion of these conceptual elements, drawing on literature from information science, philosophy, sociology, and communication science.

Informativeness

To understand what 'misinformation' is, it is important to analyse it in relation to 'information'. In information theory and the philosophy of information, the relationship between falsity and informativeness, and the dichotomy between information and misinformation, remain contentious topics. Related questions include whether misinformation is informative, and whether it should be classified as information at all. In cybernetics and the mathematical theory of communication, the informativeness of a message is measured by entropy, or its ability to reduce uncertainty (Shannon, 1948). Within this framework, misinformation is regarded as noise or an error that occurs during transmission, and as such, it lacks informative value. In philosophy, misinformation is also often disregarded as a valid form of information. For instance, in The Philosophy of Information, Floridi (2013) reasons that information is meaningful, truthful, and valuable for decision-making; on this account, misinformation does not qualify as information at all. This view echoes the theories of information advanced by Dretske (1983) and Grice (1989), both of whom also propose that information must encapsulate truthfulness and informativeness, and that misinformation therefore cannot be considered a form of information.
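To make Shannon's measure concrete, a minimal sketch (added here for illustration, not drawn from the works cited above): the informativeness of a source is quantified by its entropy, the average uncertainty over the messages it can emit,

H(X) = -\sum_{i=1}^{n} p(x_i)\,\log_2 p(x_i),

where p(x_i) is the probability of message x_i. Meaning and truth play no role in this calculation, which is why semantic falsehood falls outside the framework and is treated as transmission noise rather than as information.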

Others have proposed a more inclusive conceptualisation that treats information as alethically neutral, meaning that the definition of information is independent of its truth value. For example, according to Fox (1983), "informing does not require truth, and information need not be true; but misinforming requires falsehood, and misinformation must be false" (p. 193). On this perspective, information is not necessarily truthful, and false or inaccurate information can be informative (Karlova & Fisher, 2013). Scott Lash inverts this, arguing less that false information can be informative and more that even factual information can be irrational, as the contemporary experience of information is one of "out-of-control bytes of information" that accompany "information overloads, misinformation, disinformation and out-of-control information" (2002, p. 2).

Recognising the informativeness of misinformation has significant implications for empirical research. This perspective encourages a shift from scientific realism towards a more human- and culture-centric approach to studying (mis)information (Janich, 2006/2018). Rather than treating informativeness as a quality that exists 'out there' waiting to be measured, it is crucial to understand that meaning and informativeness are woven into the cultural fabric of human communication. As Thorson and colleagues (2018) state, "misinformation arises as a function of systems structure, human fallibility and human information needs" (p. 292). Information without truth value can have significant social functions, such as helping individuals cope in uncertain times and crises (DiFonzo & Bordia, 2007; Douglas et al., 2019). Acknowledging the informativeness of misinformation directs our attention to the "informational-agentic" aspect of misinformation (Anderson, 2021, p. 5). In other words, rather than reducing misinformation to meritless falsehoods, researchers can scrutinise the rationales behind, and impacts of, its transmission. This perspective has become increasingly important for media and communication research. As Giglietto et al. (2019) point out, "focusing on the 'informativeness' of false information allows us to employ journalism studies and literature on information sharing within digital environments to also discuss how the multiple actors of the hybrid media system make judgements and take decisions when exposed to false information" (p. 632).

Intentionality

Another contentious aspect of defining misinformation is intentionality. In recent scholarship, the concept of 'misinformation' has commonly been paired with 'disinformation'. The relationship between these two concepts centres on the intention to deceive. According to widely accepted definitions (Jack, 2017; Southwell et al., 2019), disinformation refers to information that is intended to deceive. With regard to the relationship between disinformation and misinformation, however, two propositions exist. One camp considers misinformation and disinformation to be mutually exclusive, parallel concepts (Karlova & Fisher, 2013; Hernon, 1995; Hameleers et al., 2020). On this account, misinformation is understood as unintentional, or an 'honest mistake' (Chadwick et al., 2018; Hernon, 1995; Hameleers et al., 2020; Fallis, 2009). The other camp proposes an intentionality-neutral definition: misinformation can be, but is not necessarily, unintended. On this view, disinformation is a subset of misinformation (e.g. Guess & Lyons, 2020; Floridi, 2011; Dan et al., 2021; Paquin et al., 2022).

Conceptually, whether intentionality should be part of the definition of misinformation remains debatable. Pragmatically, however, most empirical research on misinformation can benefit from an intentionality-neutral definition (i.e. misinformation does not have to be unintentional). Adhering to a narrow definition of misinformation as solely unintended may inadvertently exclude intriguing cases, because establishing the intentionality of actors who disseminate misinformation is often challenging. There are few instances in which researchers can (relatively confidently) establish a source's intention to falsify information; examples include satirical news websites and propaganda campaigns (see the more detailed discussion below).

Falsity

Despite the disagreements around the definition of misinformation discussed above, one conceptual element that has been widely agreed upon is its falsity. Different fields work with different theoretical frameworks of 'falsehood' and 'inaccuracy' in information. In classical mathematical information theory, for instance, inaccuracy relates to noise during signal transmission (Shannon, 1948). In the social sciences and humanities, falsehood and inaccuracy are mostly discussed at the semantic level. Semantic falsity is related to truth claims, but whether and how truth can be established has long been disputed across disciplines. Focusing on the misinformation literature, two approaches to falsity can be identified; here we describe them as falsity-as-property and falsity-as-process.

The first approach highlights the material aspect of falsity and operationalises it as a property of (mis)information that can be detected and measured. Important frameworks have been proposed to identify and categorise falsehood (Brennen et al., 2020; Cook et al., 2018; Wardle, 2017). Worth noting, however, is that such operationalisation of falsehood requires employing some kind of 'ground truth' as a reference point, which research often establishes by identifying the best available evidence and expert consensus (Nyhan & Reifler, 2010; Vraga & Bode, 2020; Tan, Lee, & Chae, 2015). The feasibility of identifying the best available evidence and expert views varies from topic to topic. For instance, claims under scrutiny in present-day political discussions are seldom beyond challenge. As Kuklinski and colleagues (1998, p. 145) state, factual assertions regarding public policy come into being during the political process rather than existing prior to it. For this reason, contemporary practices of arbitrating the truth of political claims have been criticised for encouraging an oversimplified understanding of complex issues (Uscinski & Butler, 2013). It is important to note that this understanding of political facts should not be misinterpreted as truth relativism or epistemological scepticism; rather, it serves as a reminder to exercise caution when examining political claims in situations where facts are not black and white. Scientific truth claims can also be difficult to establish. As the COVID-19 pandemic showed, during fast-developing events scientific evidence can be scarce and expert consensus evolves. Due to the fluidity of evidence in science (Krause et al., 2022), the assessment of a claim's truth value needs to be situated at a particular moment in time (Southwell et al., 2022; Tan et al., 2015).

Alongside the falsity-as-property perspective, another prominent approach focuses on the process by which misinformation comes into being, or how truth claims are defined. This perspective is particularly promoted by the postmodern critique of truth and knowledge, and Foucault's account of the relationship between truth and power is a case in point. Foucault's (1980) theorisation of the 'regime of truth' illuminates the importance of discourse and power contestation in shaping how and why certain information is legitimised as 'truth' while other information is sanctioned as 'falsehood'. On the one hand, we acknowledge that the postmodernist impossibility of truth, understood as a complete rejection of objective truth, has limited merit for many research agendas in contemporary misinformation studies. Research efforts to develop debunking strategies, detect misinformation, and understand its impacts necessitate a dedication to evaluation and evidence-based analysis in order to distinguish accurate information from falsehoods. On the other hand, we echo Marres' (2018) cautionary note regarding the risks associated with normative demarcation around misinformation and argue that a theoretical lens emphasising the contentious aspects of the falsity/truth dichotomy can be productive. It encourages a non-normative perspective and directs greater attention to the technological and societal contexts in which problematic information emerges. In today's online space, as we will discuss further in the following sections, truth validation and selection are influenced by competing interests among online knowledge communities, as well as by the conflicting subjectivities of human and non-human actors (Marres, 2018). Recent years have also seen a growing trend of instrumentalising misinformation-related terms as rhetorical weapons (Egelhofer & Lecheler, 2019), notably in authoritarian countries where journalistic and scientific expertise serve as political apparatuses and where 'misinformation' is used to prosecute dissidents (Rahimi, 2011; Yadav et al., 2021; Zeng et al., 2017). Problematising misinformation in such contexts requires shifting attention from the face value of 'falsehood' to the power dynamics that permeate the arbitration of truth and falsity.

Defining the problems of misinformation

Above we discussed how scholars have defined misinformation; here we consider how scholars have defined the problems of misinformation. Synthesising a sizable body of literature, we examine some of the ways that scholars have singled out the predominant issues of concern regarding misinformation. By the problem of misinformation, we do not necessarily mean the origins or root causes of misinformation, but rather the dominant factor in what makes misinformation a public problem. For clarity, we group these into three categories: problems of audiences; problems of intermediaries; and problems of producers of misinformation. These three dimensions broadly represent the three key moments in the lifecycle of a piece of misinformation, aligning with the longstanding categories of production, transmission, and reception. They also correspond to the first three variables in Chadwick and Stanyer's (2022) ten-variable typology for studying deception: attributes and actions of deceptive entities; media-technological design factors and their affordances for attitude formation and action; and attributes and actions of the deceived. Arguably, each of the remaining seven variables can be tied back to one of these three, or to the interaction between them: for example, "undermining the social interests of the deceived" is a dimension of audiences, while "credibility of source and message" lies in the interaction between all three.

While a given study may foreground one of these framings of misinformation, there is no reason to think they are mutually exclusive. We broadly follow Phillips and Milner (2021) in calling for a more ecological understanding that recognises the diversity of forces shaping the problems of misinformation. However, categorising some of the key root causes can help us better identify and direct potential interventions.

Misinformation as a problem of audiences

For many, the problems of misinformation are framed as deficiencies or practices of audiences. We recognise at least four different ways that scholars have located the root causes of misinformation in the audience.

Inattention: Recent experimental and observational studies have found that, in many instances, people share and potentially believe falsehoods largely due to inattention or carelessness (see e.g. Pennycook & Rand, 2021). We all encounter and ingest a huge amount of content each day, much of which we do not examine critically; instead, we often rely on simple heuristics to assess it. In a sense, this approach echoes frequent invocations of the 'attention economy' (see e.g. Simon, 1971), an idea that has found much purchase in describing how social media platforms work to monopolise and monetise our attention (Tufekci, 2013). The centrality of inattention to the problem of misinformation suggests a set of solutions geared toward directing users' attention to the facticity of content. For example, many have been considering the potential of 'nudges': simple messages that remind audiences to read or consider content before they share it, or broad exhortations about the importance of truth (Pennycook et al., 2020). Others have suggested adding friction to social media to slow down users' ability to share content (Agapie et al., 2013; Caraban et al., 2019). These 'content-neutral' interventions have been bolstered by the hope that they will be seen as 'non-partisan' and, if codified into law, would not run into serious First Amendment challenges in the US (Social Media NUDGE Act, 2022).

Psychological issues: Beyond inattention, scholars have looked to other psychological factors to explain why people believe and/or share misinformation. Most importantly, people tend to share or believe content that aligns with or confirms existing opinions or beliefs, especially when that content is politically valenced (Flynn, Nyhan, & Reifler, 2017; Osmundsen et al., 2021). Similarly, some have found that content that evokes strong emotions or feelings is more likely to be shared (Valenzuela et al., 2017). Examining belief in conspiracy theories specifically, some have found indications that such belief is correlated with feelings of existential or social powerlessness (Douglas et al., 2019; Uscinski & Parent, 2014), the need for a positive self-image (Cichocka, Marchlewska, & Golec de Zavala, 2016), or the need to "preserve beliefs in the face of uncertainty and contradiction" (Douglas et al., 2019, p. 7). That is, for some, conspiracy theories elide the chaos of the world; there is comfort in believing there is order to things, even if it is maleficent. But perhaps one of the strongest predictors of belief in one conspiracy theory is belief in other conspiracy theories. This has led some to postulate that there is a predisposition to believing conspiracy theories, or a 'conspiracy mindset' (Sutton & Douglas, 2020). Indeed, decades ago, Richard Hofstadter (1962/2012) famously identified the 'paranoid style' in American politics, a framework people have used to make sense of a range of phenomena from Watergate and McCarthyism to the rise of Trump (Hart, 2020).

Epistemology: Beyond psychological dynamics, some have identified epistemological deficiencies as a root cause of misinformation: issues with how users find and assess information, especially online. Some have focused on online search or 'research' behaviours (Tripodi, 2018), or have looked more broadly at verification strategies (Schwarzenegger, 2019; Flintham et al., 2018). Expanding this focus on epistemology to the societal level, others have ascribed the problems of misinformation to a broad national shift in the treatment of evidence and facts. Much of this work draws implicitly or explicitly on Foucauldian 'regimes of truth' (1980) or Jasanoff's 'civic epistemologies' (2004). The implication is that in recent years the US has seen a broad shift both in the discourse around, and the mechanisms through which, truth and falsehood are established in society. This also aligns with the long-recognised tactic of authoritarians to undermine the public's ability to sort truth from falsehood (Arendt, 1951). Relatedly, many have ascribed the spread of misinformation, at least in part, to a broad reduction in trust in institutions. Public opinion surveys have tracked such a decline across institutions for decades, and trust in most institutions is now at historic lows (Pew, 2022). This work recognises that trust plays an essential role in public knowledge production: whether that is trusting the government to provide accurate information, trusting scientists to accurately describe their unique access to and understanding of the natural world, or trusting media to accurately and objectively describe the world.

Partisanship and identity: As noted above, many have observed that audiences are more likely to accept, believe, or share content when it aligns with their political beliefs, even if it is false. Ideological asymmetry, whereby those on the right produce and consume more false content, has been one of the most consistent findings in disinformation studies (Freelon et al., 2020). While some have described this as individuals being 'duped' by political actors, others see it more as a function of partisan identity. Increasingly, scholarship acknowledges that identity is at the heart of how audiences engage with misinformation. For example, Osmundsen, Petersen, and Bor note that "sharing of false news has less to do with ignorance than with partisan political affiliation and the news available to partisans for use in denigrating their opponents" (2021, n.p.). Audiences embrace falsehoods when they are meaningful. Hochschild's (2016) concept of 'deep stories' and its adaptation by Phillips and Milner (2021) as 'deep memetic frames' both describe the persistent underlying narrative infrastructures through which we understand the world. As Phillips and Milner (2021) put it, these infrastructures "shape our realities, and by extension our actions, so thoroughly and so seamlessly that the people peering out from behind them likely have no idea the frames can even exist. This is just how the world is; the epistemological equivalent of breathing" (p. 19, emphasis in original). For many, truth or falsity is less important than the way content ties in or resonates with these infrastructures of meaning. To understand how misinformation interacts with such narrative infrastructures, we must look beyond its facticity, its strict truth or falseness, and instead consider how it can be made useful or meaningful.

Misinformation as a problem of intermediaries

Rather than framing misinformation in terms of audiences, many scholars focus on how it is transmitted, mediated, and/or amplified.

Social media: Acknowledging that misinformation has always existed, many note that the problems we face now are directly facilitated by and tied to online platforms (Benkler et al., 2018; Howard, 2020; Zeng & Schäfer, 2021). Researchers have identified a handful of platform design features that are particularly culpable in facilitating misinformation. (1) First, the incentive structure of social media can be related to the spread of false and extreme content (Vaidhyanathan, 2018). Platforms' like and share features, for instance, reward emotional or sensational content, which reliably attracts engagement regardless of its accuracy. Zuckerberg (2018) himself acknowledged that more extreme content usually has more reach and engagement: as content gets closer to violating community standards, it reliably attracts more engagement. (2) Second, the varied monetisation opportunities afforded by online platforms allow content creators to leverage their influence into financial opportunities. This incentivises profit-driven content creators to build large audiences; some do so by sharing problematic content (Center for Countering Digital Hate, 2021; Mahl et al., 2023). (3) Third, there is concern that the problem of misinformation online is further exacerbated by digital platforms' algorithmic recommendation systems, which can promote false or problematic content to drive engagement and keep users on the platforms (Napoli, 2019; Tufekci, 2018). YouTube's recommendation algorithm is a case in point: despite the platform's attempts to moderate videos spreading misinformation, recent research shows that its recommendation system continues to amplify conspiratorial and pseudoscientific videos (Papadamou et al., 2022; Tang et al., 2021). Platforms' algorithmic curation of content also underpins micro-targeting, the practice of tailoring advertisements to individuals by harnessing personal demographic and behavioural data. Previous studies have raised concerns about the integration of micro-targeting in political misinformation campaigns (Dobber et al., 2019; Ribeiro et al., 2019).

News: Others have pointed to the role that news media have played in allowing or facilitating the spread of false content. Broadly speaking, scholars focusing on journalism in the West have drawn at least three distinct connections between news media and the rise of misinformation. (1) The circulation and advertising revenue of print journalism have declined since at least the 1950s (Nielsen & Fletcher, 2020), a decline further exacerbated by the rise of digital media. Many newsrooms have recently seen dramatic reductions in revenue and have responded with layoffs, closures, and sales (Nielsen, 2016). To cut costs, many digital outlets have limited reporting, asking journalists to spend more time rewriting content from press releases or other outlets rather than doing original reporting (Brennen, 2020). Together, these changes mean that less independent, high-quality news is being produced; misinformation often fills the resulting information or news vacuum (Brennen et al., 2020). (2) Drawing a slightly different connection between the structure of news and misinformation, Benkler and colleagues (2018) demonstrate how news content and framing now regularly flow from small fringe digital outlets to major right-wing news outlets. This has the effect of amplifying or promoting extreme or problematic content, including misinformation. (3) Relatedly, others have considered how reporting practices themselves have encouraged or facilitated misinformation. For example, some have noted that journalists' continued commitment to a form of 'objectivity' grounded in simply presenting 'both sides' of a disagreement leads them to promote problematic and/or unsupported claims, such as in coverage of global warming (Boykoff & Boykoff, 2004). Similarly, Berry and Sobieraj (2013) detail the rise of outrage as a style in news, largely as a result of regulatory change, media consolidation, and the rise of talk radio and cable news. Outrage, aimed at generating an emotional response in the audience, often lends itself to falsehoods.

Misinformation as a problem of producers

Finally, for some, misinformation is best described or framed as a problem of malicious actors intentionally creating false content to gain or consolidate power or money.

Foreign influence campaigns: A great deal of scholarship has tracked how foreign governments spread misinformation through savvy influence operations around the world (Bradshaw, Bailey, & Howard, 2020). Most famously, the Russian government, largely through the Internet Research Agency (IRA), spread false and extreme content before and during the 2016 US Presidential election (Jamieson, 2018). However, other governments, including those of China and Iran, have also spread false content to interfere in other countries' internal politics (Bradshaw, Bailey, & Howard, 2020). While many of these operations are intended to influence elections, there is indication that they are also intended to undermine democratic norms and trust in democratic institutions. For example, in the 2016 election, while the IRA broadly appears to have supported Trump's election, it also worked to amplify and exacerbate existing political, social, and racial divisions (Howard et al., 2019).

Political power: While foreign influence operations have received a great deal of attention, scholars are increasingly considering how domestic actors spread falsehoods in pursuit of political or social power. This most notably includes domestic politicians spreading lies to gain political power, but it can also involve influencers working to build large audiences. Often, these domestic disinformation producers "strategically target and exploit identity-based differences in accord with pre-existing power structures to maintain hegemonic social orders" (Reddi et al., 2021, p. 1).

Financial benefits: Finally, some analyses have focused on the financial benefits of spreading misinformation. These benefits can be either immediate or long-term. For example, for decades a well-organised network of climate deniers, funded by those with ties to the oil and gas industry, has produced and circulated lies to forestall climate regulation (Oreskes & Conway, 2010). Others spread falsehoods in pursuit of more immediate financial gain. Famously, during the 2016 US election, a group of Macedonian teenagers created and spread misinformation to make money through social media monetisation and ad sales (Subramanian, 2017). Anti-vax, alternative-medicine, and lifestyle-based misinformation have also become big business: producers can sell products, seminars, memberships, books, and more (Center for Countering Digital Hate, 2021; Mahl et al., 2023).

Conclusion

This article has illustrated the diverse and complex ways in which 'misinformation' can be conceptualised as a research subject. Given the interdisciplinary nature of the scholarship, the diverse, and even conflicting, conceptualisations of misinformation are hardly surprising. For instance, as previously discussed, some only consider unintentional false claims to be misinformation, while others adopt an intentionality-neutral definition. Furthermore, how falsity, the most central defining element of misinformation, can be operationalised in empirical research remains disputed. Such inconsistency makes transparency particularly important in misinformation research: researchers need to be open and clear about the criteria used to label misinformation, as well as the assumptions made about the state of expert consensus and evidence on the topic under study (Vraga & Bode, 2020).

Furthermore, although a unitary theorisation of misinformation is neither feasible nor desirable, empirical research on misinformation can benefit from greater conceptual precision and comprehensiveness. As numerous scholars have pointed out, misinformation and its related terms are often used interchangeably or vaguely (Cacciatore, 2021; DiFonzo & Bordia, 2007; Mahl et al., 2022). The rapid expansion of scholarship and public interest in the topic requires greater effort to develop a cohesive conceptualisation. Without a comprehensive and comparable definition of misinformation as a foundation, it is challenging to accumulate and reconcile research evidence in order to measure the scale of the problems related to misinformation we face today. The consequence would be either underestimating or overhyping the impacts of misinformation in public discourse, which in turn misguides interventions.

In both the public and policy-making domains, there is a growing demand for countering misinformation. However, as our above discussion suggests, misinformation as a social phenomenon and problem has its roots in broad cultural, sociopolitical and technological systems. For this reason, countermeasures should be based on a nuanced and contextualised understanding of why misinformation arises. Existing academic discussion of remedies to misinformation, to a large extent, takes a Euro-American stance. Non-democratic and non-Western contexts often present unique power dynamics between journalism, politics and science. Against such a background, falsehood cannot always be identified or refuted by approaches discussed in Euro-American countries, such as fact-checking. As mentioned earlier, 'misinformation' or 'fact-checking' themselves can be utilised as discursive tactics to silence dissident voices. In future discussions of countermeasures for misinformation, more importance needs to be attached to the Global South and understudied peripheral contexts. Any attempts to propose one-size-fits-all solutions should be avoided.

References

Agapie, E., Golovchinsky, G., & Qvarfordt, P. (2013). Leading people to longer queries. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 3019–3022. https://doi.org/10.1145/2470654.2481418

Allport, G. W., & Postman, L. J. (1945). The basic psychology of rumor. Transactions of the New York Academy of Sciences, 8(2 Series II), 61–81. https://doi.org/10.1111/j.2164-0947.1945.tb00216.x

Anderson, C. W. (2021). Propaganda, misinformation, and histories of media techniques. Harvard Kennedy School Misinformation Review, 2021(2), 1–7. https://doi.org/10.37016/mr-2020-64

Baptista, J. P., & Gradim, A. (2020). Understanding fake news consumption: A review. Social Sciences, 9(10), Article 185. https://doi.org/10.3390/socsci9100185

Benkler, Y. (2019). Cautionary notes on disinformation and the origins of distrust (pp. 1–11) [Research article]. Social Science Research Council, Mediawell. https://doi.org/10.35650/MD.2004.d.2019

Benkler, Y., Faris, R., & Roberts, H. (2018). Network propaganda: Manipulation, disinformation, and radicalization in American politics. Oxford University Press. https://doi.org/10.1093/oso/9780190923624.001.0001

Bernays, E. L. (1928). Propaganda. Horace Liveright.

Berry, J. M., & Sobieraj, S. (2013). The outrage industry: Political opinion media and the new incivility. Oxford University Press.

Born, K., & Edgington, N. (2017). Analysis of philanthropic opportunities to mitigate the disinformation/propaganda problem [Report]. William and Flora Hewlett Foundation. https://www.hewlett.org/wp-content/uploads/2017/11/Hewlett-Disinformation-Propaganda-Report.pdf

Boykoff, M. T., & Boykoff, J. M. (2004). Balance as bias: Global warming and the US prestige press. Global Environmental Change, 14(2), 125–136. https://doi.org/10.1016/j.gloenvcha.2003.10.001

Brennen, J. S. (2020). Formulating deformation: The flows of formless information. International Journal of Communication, 14, 4578–4598. https://ijoc.org/index.php/ijoc/article/view/12744

Brennen, J. S., Simon, F. M., Howard, P. N., & Nielsen, R. K. (2020). Types, sources, and claims of COVID-19 misinformation [Factsheet]. Reuters Institute for the Study of Journalism. https://www.oxfordmartin.ox.ac.uk/publications/types-sources-and-claims-of-covid-19-misinformation/

Burkhardt, J. M. (2017). History of fake news. Library Technology Reports, 53(8), 5–9. https://journals.ala.org/index.php/ltr/article/view/6497

Cacciatore, M. A. (2021). Misinformation and public opinion of science and health: Approaches, findings, and future directions. Proceedings of the National Academy of Sciences, 118(15), Article e1912437117. https://doi.org/10.1073/pnas.1912437117

Caraban, A., Karapanos, E., Gonçalves, D., & Campos, P. (2019). 23 ways to nudge: A review of technology-mediated nudging in human-computer interaction. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 1–15. https://doi.org/10.1145/3290605.3300733

Center for Countering Digital Hate. (2021). The disinformation dozen: Why platforms must act on twelve leading online anti-vaxxers (pp. 1–40) [Report]. Center for Countering Digital Hate. https://counterhate.com/research/the-disinformation-dozen/

Chadwick, A., & Stanyer, J. (2022). Deception as a bridging concept in the study of disinformation, misinformation, and misperceptions: Toward a holistic framework. Communication Theory, 32(1), 1–24. https://doi.org/10.1093/ct/qtab019

Cook, J., Ellerton, P., & Kinkead, D. (2018). Deconstructing climate misinformation to identify reasoning errors. Environmental Research Letters, 13(2), 1–7. https://doi.org/10.1088/1748-9326/aaa49f

DiFonzo, N., & Bordia, P. (2007). Rumor, gossip and urban legends. Diogenes, 54(1), 19–35. https://doi.org/10.1177/0392192107073433

Dobber, T., Ó Fathaigh, R., & Zuiderveen Borgesius, F. J. (2019). The regulation of online political micro-targeting in Europe. Internet Policy Review, 8(4). https://doi.org/10.14763/2019.4.1440

Douglas, K. M., Uscinski, J. E., Sutton, R. M., Cichocka, A., Nefes, T., Ang, C. S., & Deravi, F. (2019). Understanding conspiracy theories. Political Psychology, 40, 3–35. https://doi.org/10.1111/pops.12568

Egelhofer, J. L., Aaldering, L., Eberl, J. M., Galyga, S., & Lecheler, S. (2020). From novelty to normalization? How journalists use the term ‘fake news’ in their reporting. Journalism Studies, 21(10), 1323–1343. https://doi.org/10.1080/1461670X.2020.1745667

Fletcher, R., & Nielsen, R. K. (2018). Automated serendipity: The effect of using search engines on news repertoire balance and diversity. Digital Journalism, 6(8), 976–989. https://doi.org/10.1080/21670811.2018.1502045

Flynn, D. J., Nyhan, B., & Reifler, J. (2017). The nature and origins of misperceptions: Understanding false and unsupported beliefs about politics. Political Psychology, 38(51), 127–150. https://doi.org/10.1111/pops.12394

Foucault, M. (1980). Power/knowledge: Selected interviews and other writings, 1972-1977 (C. Gordon, Ed.; C. Gordon, L. Marshal, J. Mepham, & K. Sober, Trans.). Pantheon.

Gillespie, T. (2020). Content moderation, AI, and the question of scale. Big Data & Society, 7(2), 1–5. https://doi.org/10.1177/2053951720943234

Hart, R. P. (2020). Donald Trump and the return of the paranoid style. Presidential Studies Quarterly, 50(2), 348–365. https://doi.org/10.1111/psq.12637

Hofstadter, R. (2012). The paranoid style in American politics. Vintage. (Original work published 1962).

Howard, P. N., Woolley, S., & Calo, R. (2018). Algorithms, bots, and political communication in the US 2016 election: The challenge of automated political communication for election law and administration. Journal of Information Technology & Politics, 15(2), 81–93. https://doi.org/10.1080/19331681.2018.1448735

Jack, C. (2017). Lexicon of lies: Terms for problematic information (pp. 1–20) [Guide]. Data & Society Research Institute. https://apo.org.au/sites/default/files/resource-files/2017-08/apo-nid183786.pdf

Janich, P. (2018). What is information? (E. Hayot & L. Pao, Trans.; Vol. 55). University of Minnesota Press. (Original work published 2006).

Jasanoff, S. (Ed.). (2004). States of knowledge: The co-production of science and the social order. Routledge.

Karpf, D. (2019). On digital disinformation and democratic myths [Research article]. Social Science Research Council, Mediawell. https://doi.org/10.35650/MD.2012.d.2019

Keeley, B. L. (1999). Of conspiracy theories. The Journal of Philosophy, 96(3), 109–126. https://doi.org/10.2307/2564659

Social Media NUDGE Act, S. 3608, 117th Cong. (2022). https://www.congress.gov/bill/117th-congress/senate-bill/3608?s=1&r=4

Krause, N. M., Freiling, I., & Scheufele, D. A. (2022). The “infodemic” infodemic: Toward a more nuanced understanding of truth-claims and the need for (not) combatting misinformation. The ANNALS of the American Academy of Political and Social Science, 700(1), 112–123. https://doi.org/10.1177/00027162221086263

Kuklinski, J. H., Quirk, P. J., Schwieder, D. W., & Rich, R. F. (1998). ‘Just the facts, ma’am’: Political facts and public opinion. The Annals of the American Academy of Political and Social Science, 560(1), 143–154. https://doi.org/10.1177/0002716298560001011

Lash, S. (2002). Critique of information. SAGE Publications Ltd. https://doi.org/10.4135/9781446217283

Lasswell, H. D. (1927). The theory of political propaganda. American Political Science Review, 21(3), 627–631. https://doi.org/10.2307/1945515

Lee, S., & Jones-Jang, S. M. (2022). Cynical nonpartisans: The role of misinformation in political cynicism during the 2020 US presidential election. New Media & Society. https://doi.org/10.1177/14614448221116036

Loomba, S., de Figueiredo, A., Piatek, S. J., de Graaf, K., & Larson, H. J. (2021). Measuring the impact of COVID-19 vaccine misinformation on vaccination intent in the UK and USA. Nature Human Behaviour, 5, 337–348. https://doi.org/10.1038/s41562-021-01056-1

Mahl, D., Schäfer, M. S., & Zeng, J. (2022). Conspiracy theories in online environments: An interdisciplinary literature review and agenda for future research. New Media & Society, 2(7), 1781–1801. https://doi.org/10.1177/14614448221075759

Mahl, D., Zeng, J., & Schäfer, M. S. (2023). Conceptualizing platformed conspiracism: Analytical framework and empirical case study of BitChute and Gab. New Media & Society, 1–20. https://doi.org/10.1177/14614448231160457

Marres, N. (2018). Why we can’t have our facts back. Engaging Science, Technology, and Society, 4, 423–443. https://doi.org/10.17351/ests2018.188

Napoli, P. M. (2019). Social media and the public interest: Media regulation in the disinformation age. Columbia University Press.

Nyhan, B., & Reifler, J. (2010). When corrections fail: The persistence of political misperceptions. Political Behavior, 32, 303–330. https://doi.org/10.1007/s11109-010-9112-2

Oreskes, N., & Conway, E. M. (2010). Merchants of doubt: How a handful of scientists obscured the truth on issues from tobacco smoke to climate change. Bloomsbury Publishing.

Osmundsen, M., Bor, A., Vahlstrup, P. B., Bechmann, A., & Petersen, M. B. (2021). Partisan polarization is the primary psychological motivation behind political fake news sharing on Twitter. American Political Science Review, 115(3), 999–1015. https://doi.org/10.1017/S0003055421000290

Papadamou, K., Zannettou, S., Blackburn, J., De Cristofaro, E., Stringhini, G., & Sirivianos, M. (2022). ‘It is just a flu’: Assessing the effect of watch history on YouTube’s pseudoscientific video recommendations. Proceedings of the Sixteenth International AAAI Conference on Web and Social Media, 16, 723–734. https://doi.org/10.1609/icwsm.v16i1.19329

Pariser, E. (2011). The filter bubble: How the new personalized web is changing what we read and how we think. Penguin Books.

Pennycook, G., McPhetres, J., Zhang, Y., Lu, J. G., & Rand, D. G. (2020). Fighting COVID-19 misinformation on social media: Experimental evidence for a scalable accuracy-nudge intervention. Psychological science, 31(7), 770–780. https://doi.org/10.1177/0956797620939054

Pennycook, G., & Rand, D. G. (2021). The psychology of fake news. Trends in Cognitive Sciences, 25(5), 388–402. https://doi.org/10.1016/j.tics.2021.02.007

Phillips, W., & Milner, R. M. (2021). You are here: A field guide for navigating polarized speech, conspiracy theories, and our polluted media landscape. MIT Press. https://doi.org/10.7551/mitpress/12436.001.0001

Rahimi, B. (2011). The agonistic social media: Cyberspace in the formation of dissent and consolidation of state power in postelection Iran. The Communication Review, 14(3), 158–178. https://doi.org/10.1080/10714421.2011.597240

Ribeiro, F. N., Saha, K., Babaei, M., Henrique, L., Messias, J., Benevenuto, F., Goga, O., Gummadi, K. P., & Redmiles, E. M. (2019). On microtargeting socially divisive ads: A case study of Russia-linked ad campaigns on Facebook. Proceedings of the Conference on Fairness, Accountability, and Transparency, 140–149. https://doi.org/10.1145/3287560.3287580

Ross Arguedas, A., Robertson, C. T., Fletcher, R., & Nielsen, R. K. (2022). Echo chambers, filter bubbles, and polarisation: A literature review [Literature review]. Reuters Institute for the Study of Journalism. https://doi.org/10.60625/risj-etxj-7k60

Russell Hochschild, A. (2016). Strangers in their own land: Anger and mourning on the American right. The New York Press.

Shannon, C. E. (1948). A mathematical theory of communication. Bell System Technical Journal, 27(3), 379–423. https://doi.org/10.1002/j.1538-7305.1948.tb01338.x

Simon, H. A., Deutsch, K. W., Shubik, M., & Daddario, E. Q. (1971). Designing organizations for an information-rich world. In M. Greenberger (Ed.), Computers, communications, and the public interest (Vol. 72, pp. 37–72). Johns Hopkins Press.

Southwell, B. G., Brennen, J. S. B., Paquin, R., Boudewyns, V., & Zeng, J. (2022). Defining and measuring scientific misinformation. The ANNALS of the American Academy of Political and Social Science, 700(1), 98–111. https://doi.org/10.1177/00027162221084709

Southwell, B. G., Thorson, E. A., & Sheble, L. (2017, December). The persistence and peril of misinformation. American Scientist, 105(6), 372–375.

Sutton, R. M., & Douglas, K. M. (2020). Conspiracy theories and the conspiracy mindset: Implications for political ideology. Current Opinion in Behavioral Sciences, 34, 118–122. https://doi.org/10.1016/j.cobeha.2020.02.015

Tan, A. S. L., Lee, C., & Chae, J. (2015). Exposure to health (mis)information: Lagged effects on young adults’ health behaviors and potential pathways. Journal of Communication, 65(4), 674–698. https://doi.org/10.1111/jcom.12163

Tang, L., Fujimoto, K., Amith, M. T., Cunningham, R., Costantini, R. A., York, F., Xiong, G., Boom, J. A., & Tao, C. (2021). ‘Down the rabbit hole’ of vaccine misinformation on YouTube: Network exposure study. Journal of Medical Internet Research, 23(1), Article 23262. https://doi.org/10.2196/23262

Thorson, E. A., Sheble, L., & Southwell, B. G. (2018). Conclusion: An agenda for misinformation research. In B. G. Southwell, E. A. Thorson, & L. Sheble (Eds.), Misinformation and mass audiences (pp. 289–294). University of Texas Press.

Tufekci, Z. (2013). “Not this one”: Social movements, the attention economy, and microcelebrity networked activism. American Behavioral Scientist, 57(7), 848–870. https://doi.org/10.1177/0002764213479369

Tufekci, Z. (2018, March 10). YouTube, the great radicalizer. The New York Times. https://www.nytimes.com/2018/03/10/opinion/sunday/youtube-politics-radical.html

Uscinski, J. E., & Butler, R. W. (2013). The epistemology of fact checking. Critical Review, 25(2), 162–180. https://doi.org/10.1080/08913811.2013.843872

Vaidhyanathan, S. (2018). Antisocial media: How Facebook disconnects us and undermines democracy. Oxford University Press. https://doi.org/10.1093/oso/9780190056544.001.0001

Valenzuela, S., Piña, M., & Ramírez, J. (2017). Behavioral effects of framing on social media users: How conflict, economic, human interest, and morality frames drive news sharing. Journal of Communication, 67(5), 803–826. https://doi.org/10.1111/jcom.12325

Vraga, E. K., & Bode, L. (2020). Defining misinformation and understanding its bounded nature: Using expertise and evidence for describing misinformation. Political Communication, 37(1), 136–144. https://doi.org/10.1080/10584609.2020.1716500

Wardle, C. (2017, February 16). Fake news. It’s complicated. First Draft News. https://firstdraftnews.org/articles/fake-news-complicated/

Yadav, K., Erdoğdu, U., Siwakoti, S., Shapiro, J. N., & Wanless, A. (2021). Countries have more than 100 laws on the books to combat misinformation. How well do they work? Bulletin of the Atomic Scientists, 77(3), 124–128. https://doi.org/10.1080/00963402.2021.1912111

Zeng, J. (2021). Theoretical typology of deceptive content (conspiracy theories). DOCA - Database of Variables for Content Analysis, 1(5). https://doi.org/10.34778/5g

Zeng, J., Chan, C. H., & Fu, K. W. (2017). How social media construct ‘truth’ around crisis events: Weibo’s rumor management strategies after the 2015 Tianjin blasts. Policy & Internet, 9(3), 297–320. https://doi.org/10.1002/poi3.155

Zeng, J., & Schäfer, S. M. (2021). Conceptualizing ‘dark platforms’. Covid-19-related conspiracy theories on 8kun and Gab. Digital Journalism, 9(9), 1321–1343. https://doi.org/10.1080/21670811.2021.1938165
