Types (Disinformation)





Keywords: categories, degree of falseness, misleading, manipulation, false connection, satire, junk news


Disinformation can appear in various forms. First, different formats can be manipulated, such as texts, images, and videos. Second, the amount and degree of falseness can vary, from completely fabricated content to decontextualized information to satire that intentionally misleads recipients. The forms and formats of disinformation thus vary along a spectrum rather than falling neatly into the supposedly clear categories of “true” and “false”.

Field of application/theoretical foundation:

Studies on types of disinformation are conducted in various fields, e.g. political communication, journalism studies, and media effects studies. Among other things, these studies identify the most common types of mis- or disinformation during certain events (Brennen, Simon, Howard, & Nielsen, 2020), analyze and categorize the behavior of different types of Twitter accounts (Linvill & Warren, 2020), and investigate the existence of several types of “junk news” in different national media landscapes (Bradshaw, Howard, Kollanyi, & Neudert, 2020; Neudert, Howard, & Kollanyi, 2019).

References/combination with other methods of data collection:

Relatively few studies combine multiple methods. Some identify different types of disinformation via qualitative and quantitative content analyses (Bradshaw et al., 2020; Brennen et al., 2020; Linvill & Warren, 2020; Neudert et al., 2019). Others use surveys to analyze respondents’ concerns about, as well as their exposure to, different types of mis- and disinformation (Fletcher, 2018).

Example studies:

Brennen et al. (2020); Bradshaw et al. (2020); Linvill and Warren (2020)


Information on example studies:

Types of disinformation are defined by the presentation and contextualization of content and sometimes additionally by details about the communicator (e.g. professionalism). Studies either deductively identify different types of disinformation by applying the theoretical framework by Wardle (2019) (Brennen et al., 2020), or additionally build categories inductively based on content analyses (Bradshaw et al., 2020; Linvill & Warren, 2020).


Table 1. Types of mis-/disinformation by Brennen et al. (2020)



Satire or parody

No intention to cause harm, but has potential to fool


False connection

Headlines, visuals or captions don’t support the content

Misleading content

Misleading use of information to frame an issue or individual; facts or information are misrepresented or skewed

False context

Genuine content is shared with false contextual information, e.g. real images which have been taken out of context

Imposter content

Genuine sources, e.g. news outlets or government agencies, are impersonated

Fabricated content

Content is made up and 100% false; designed to deceive and do harm

Manipulated content

Genuine information or imagery is manipulated to deceive, e.g. deepfakes or other kinds of manipulation of audio and/or visuals

Note. The categories are adapted from the theoretical framework by Wardle (2019). The coding instruction was: “To the best of your ability, what type of misinformation is it? (Select one that fits best.)” (Brennen et al., 2020, p. 12). The coders reached an intercoder reliability (Cohen’s kappa) of 0.82.
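The reported reliability can be reproduced from raw coding decisions. As an illustrative sketch (not code from the study; function and variable names are our own), Cohen’s kappa for two coders assigning one category per item can be computed as:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders assigning one nominal label per unit.

    Illustrative sketch only; assumes complete data and at least some
    expected disagreement (p_e < 1).
    """
    n = len(coder_a)
    # Observed agreement: share of units both coders labelled identically.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement: product of the coders' marginal label shares.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)
```

A kappa of 0.82, as reported by Brennen et al. (2020), indicates agreement well above chance.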


Table 2. Criteria for the “junk news” label by Bradshaw et al. (2020)





Professionalism

refers to the information about authors and the organization

“Sources do not employ the standards and best practices of professional journalism, including information about real authors, editors, and owners” (pp. 174-175). “Distinct from other forms of user-generated content and citizen journalism, junk news domains satisfy the professionalism criterion because they purposefully refrain from providing clear information about real authors, editors, publishers, and owners, and they do not publish corrections of debunked information” (p. 176).


- Systematically checked the about pages of domains: contact information, information about ownership and editors, and other information relating to professional standards

- Reviewed whether the sources appeared in third-party fact-checking reports

- Checked whether sources published corrections of fact-checked reporting

Examples: zerohedge.com, conservative-fighters.org, deepstatenation.news


Counterfeit

refers to the layout and design of the domain itself

“(…) [S]ources mimic established news reporting by using certain fonts, having branding, and employing content strategies. (…) Junk news is stylistically disguised as professional news by the inclusion of references to news agencies and credible sources as well as headlines written in a news tone with date, time, and location stamps. In the most extreme cases, outlets will copy logos and counterfeit entire domains” (p. 176).


- Systematically reviewed organizational information about the owner and headquarters by checking sources like Wikipedia, the WHOIS database, and third-party fact-checkers (like Politico or MediaBiasFactCheck)

- Consulted country-specific expert knowledge of the media landscape in the US to identify counterfeiting websites

Examples: politicoinfo.com, NBC.com.co


Style

refers to the content of the domain as a whole

“ (…) [S]tyle is concerned with the literary devices and language used throughout news reporting. (…) Designed to systematically manipulate users for political purposes, junk news sources deploy propaganda techniques to persuade users at an emotional, rather than cognitive, level and employ techniques that include using emotionally driven language with emotive expressions and symbolism, ad hominem attacks, misleading headlines, exaggeration, excessive capitalization, unsafe generalizations, logical fallacies, moving images and lots of pictures or mobilizing memes, and innuendo (Bernays, 1928; Jowette & O’Donnell, 2012; Taylor, 2003). (…) Stylistically, problematic sources will employ propaganda and clickbait techniques to varying degrees. As a result, determining style can be highly complex and context dependent” (p. 177).


- Examined in depth at least five stories on the front page of each news source during the 2016 US presidential campaign and the 2018 State of the Union address

- Checked the headlines of the stories and the content of the articles for literary and visual propaganda devices

- Considered a source stylistically problematic if three of the five stories systematically exhibited elements of propaganda

Examples: 100percentfedup.com, barenakedislam.com, theconservativetribune.com, dangerandplay.com


Credibility

refers to the content of the domain as a whole

“(…) [S]ources rely on false information or conspiracy theories and do not post corrections” (p. 175). “[They] typically report on unsubstantiated claims and rely on conspiratorial and dubious sources. (…) Junk news sources that satisfy the credibility criterion frequently fail to vet their sources, do not consult multiple sources, and do not fact-check” (p. 178).


- Examined at least five front-page stories and reviewed the sources that were cited

- Reviewed pages to see if they included known conspiracy theories on issues such as climate change, vaccination, and “Pizzagate”

- Checked third-party fact-checkers for evidence of debunked stories and conspiracy theories

Examples: infowars.com, endingthefed.com, thegatewaypundit.com, newspunch.com


Bias

refers to the content of the domain as a whole

“(…) [H]yper-partisan media websites and blogs (…) are highly biased, ideologically skewed, and publish opinion pieces as news. Basing their stories on the same events, these sources manage to convey strikingly different impressions of what actually transpired. It is such systematic differences in the mapping from facts to news reports that we call bias. (…) Bias exists on both sides of the political spectrum. Like determining style, determining bias can be highly complex and context dependent” (pp. 177-178).


- Checked third-party sources that systematically evaluate media bias

- If the domain was not evaluated by a third party, examined the ideological leaning of the sources used to support stories appearing on the domain

- Evaluated the labeling of politicians (checking for differences between the treatment of the left and the right)

- Identified bias created through the omission of unfavorable facts or through writing falsely presented as objective

Examples on the right: breitbart.com, dailycaller.com, infowars.com, truthfeed.com

Examples on the left: occupydemocrats.com, addictinginfo.com, bipartisanreport.com

Note. The coders reached an intercoder reliability (Krippendorff’s alpha) of 0.89. The “junk news” label applies to sources that fulfill at least three of the five criteria; it refers to sources that deliberately publish misleading, deceptive, or incorrect information packaged as real news.
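The decision rule described in the note can be sketched as a simple check (a hypothetical illustration, not code from Bradshaw et al.; the criterion names follow the quoted passages above):

```python
def is_junk_news(professionalism, counterfeit, style, credibility, bias):
    """Label a source 'junk news' if it fulfills at least three of the
    five criteria (decision rule described by Bradshaw et al., 2020).

    Each argument is a boolean: does the source satisfy that criterion?
    """
    criteria = [professionalism, counterfeit, style, credibility, bias]
    return sum(criteria) >= 3
```

For example, a source that fails the professionalism, style, and credibility criteria receives the label even if it is neither counterfeit nor systematically biased.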


Table 3. Identified types of IRA-associated Twitter accounts by Linvill and Warren (2020)



Right troll

“Twitter-handles broadcast nativist and right-leaning populist messages. These handles’ themes were distinct from mainstream Republicanism. (…) They rarely broadcast traditionally important Republican themes, such as taxes, abortion, and regulation, but often sent divisive messages about mainstream and moderate Republicans. (…) The overwhelming majority of handles, however, had limited identifying information, with profile pictures typically of attractive, young women” (p. 5).

Hashtags frequently used by these accounts: #MAGA (i.e., “Make America Great Again”), #tcot (i.e., “Top Conservative on Twitter”), #AmericaFirst, and #IslamKills

Left troll

“These handles sent socially liberal messages, with an overwhelming focus on cultural identity. (…) They discussed gender and sexual identity (e.g., #LGBTQ) and religious identity (e.g., #MuslimBan), but primarily focused on racial identity. Just as the Right Troll handles attacked mainstream Republican politicians, Left Troll handles attacked mainstream Democratic politicians, particularly Hillary Clinton. (…) It is worth noting that this account type also included a substantial portion of messages which had no clear political motivation” (p. 6).

Hashtags frequently used by these accounts: #BlackLivesMatter, #PoliceBrutality, and #BlackSkinIsNotACrime


News feed

“These handles overwhelmingly presented themselves as U.S. local news aggregators and had descriptive names (…). These accounts linked to legitimate regional news sources and tweeted about issues of local interest (…). A small number of these handles, (…) tweeted about global issues, often with a pro-Russia perspective” (p. 6).

Hashtags frequently used by these accounts: #news, #sports, and #local

Hashtag gamer

“These handles are dedicated almost entirely to playing hashtag games, a popular word game played on Twitter. Users add a hashtag to a tweet (e.g., #ThingsILearnedFromCartoons) and then answer the implied question. These handles also posted tweets that seemed organizational regarding these games (…). Like some tweets from Left Trolls, it is possible such tweets were employed as a form of camouflage, as a means of accruing followers, or both. Other tweets, however, often using the same hashtag as mundane tweets, were socially divisive (…)” (p. 7).

Hashtags frequently used by these accounts: #ToDoListBeforeChristmas, #ThingsYouCantIgnore, #MustBeBanned, and #2016In4Words


Fearmonger

“These accounts spread disinformation regarding fabricated crisis events, both in the U.S. and abroad. Such events included non-existent outbreaks of Ebola in Atlanta and Salmonella in New York, an explosion at the Columbian Chemicals plant in Louisiana, a phosphorus leak in Idaho, as well as nuclear plant accidents and war crimes perpetrated in Ukraine. (…) These accounts typically tweeted a great deal of innocent, often frivolous content (i.e. song lyrics or lines of poetry) which were potentially automated. With this content these accounts often added popular hashtags such as #love (…) and #rap (…). These accounts changed behavior sporadically to tweet disinformation, and that output was produced using a different Twitter client than the one used to produce the frivolous content. (…) The Fearmonger category was the only category where we observed some inconsistency in account activity. A small number of handles tweeted briefly in a manner consistent with the Right Troll category but switched to tweeting as a Fearmonger or vice-versa” (p. 7).

Hashtags frequently used by these accounts: #Fukushima2015 and #ColumbianChemicals

Note. The categories were identified through qualitative analysis of the content produced and were then refined and explored in more detail via quantitative analysis. The coders reached an intercoder reliability (Krippendorff’s alpha) of 0.92.
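Linvill and Warren (2020) report Krippendorff’s alpha, which, unlike Cohen’s kappa, is computed from a coincidence matrix and generalizes to multiple coders and missing data. A minimal sketch for the simplest case of two coders, nominal data, and no missing values (function and variable names are our own):

```python
from collections import Counter

def krippendorff_alpha_nominal(coder_a, coder_b):
    """Krippendorff's alpha for nominal data, two coders, no missing values.

    Illustrative sketch only; full implementations also handle multiple
    coders, missing data, and other metrics (ordinal, interval, ...).
    """
    # Coincidence matrix: each unit contributes both ordered value pairs.
    o = Counter()
    for a, b in zip(coder_a, coder_b):
        o[(a, b)] += 1
        o[(b, a)] += 1
    n = 2 * len(coder_a)  # total number of coded values
    marginals = Counter()
    for (c, _k), count in o.items():
        marginals[c] += count
    # Observed vs. chance-expected disagreement over unequal value pairs.
    d_o = sum(count for (c, k), count in o.items() if c != k) / n
    d_e = sum(marginals[c] * marginals[k]
              for c in marginals for k in marginals if c != k) / (n * (n - 1))
    return 1.0 if d_e == 0 else 1 - d_o / d_e
```

Perfect agreement yields an alpha of 1.0; values of 0.89 and 0.92, as reported in the studies above, indicate reliability well above the commonly used thresholds.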



Bradshaw, S., Howard, P. N., Kollanyi, B., & Neudert, L.-M. (2020). Sourcing and automation of political news and information over social media in the United States, 2016-2018. Political Communication, 37(2), 173–193.

Brennen, J. S., Simon, F. M., Howard, P. N., & Nielsen, R. K. (2020). Types, sources, and claims of COVID-19 misinformation. Reuters Institute. Retrieved from http://www.primaonline.it/wp-content/uploads/2020/04/COVID-19_reuters.pdf

Fletcher, R. (2018). Misinformation and disinformation unpacked. Reuters Institute. Retrieved from http://www.digitalnewsreport.org/survey/2018/misinformation-and-disinformation-unpacked/

Linvill, D. L., & Warren, P. L. (2020). Troll factories: Manufacturing specialized disinformation on Twitter. Political Communication, 1–21.

Neudert, L.-M., Howard, P., & Kollanyi, B. (2019). Sourcing and automation of political news and information during three European elections. Social Media + Society, 5(3). https://doi.org/10.1177/2056305119863147

Wardle, C. (2019). First Draft's essential guide to understanding information disorder. UK: First Draft News. Retrieved from https://firstdraftnews.org/wp-content/uploads/2019/10/Information_Disorder_Digital_AW.pdf?x76701



How to Cite

Staender, A., & Humprecht, E. (2021). Types (Disinformation). DOCA - Database of Variables for Content Analysis, 1(4). https://doi.org/10.34778/4e


