EC Library Guide on disinformation and misinformation: Selected articles
- (Mis)information, information literacy, and democracy: Paths for pedagogy to foster informed citizenship
Lupien, P. and Rourke, L., Journal of Information Literacy, 15 (3), 2021.
The current political climate is characterized by an alarming pattern of global democratic regression driven by authoritarian populist leaders who deploy vast misinformation campaigns. These offensives succeed when the majority of the population lacks the skills to think critically about information in the political sphere, to identify misinformation, and therefore to fully exercise democratic citizenship. Political science has theorized the link between information and power, and information professionals understand the cognitive decision-making processes involved in processing information, but these two literatures rarely intersect. This paper interrogates the links between information literacy (IL) and the rise of authoritarian populism in order to advance a new transtheoretical model linking political science (which studies power), information science, and critical pedagogy, suggesting new paths for teaching and research. The authors call for a collaborative research and teaching agenda, grounded in a holistic understanding of information as power, that will contribute to a more informed citizenship and promote a more inclusive democracy.
- AI model GPT-3 (dis)informs us better than humans
Spitale, G., Biller-Andorno, N. and Germani, F., Science Advances, 9 (26), 2023.
Artificial intelligence (AI) is changing the way we create and evaluate information, and this is happening during an infodemic that has had marked effects on global health. Here, the authors evaluate whether recruited individuals can distinguish disinformation from accurate information, structured in the form of tweets, and determine whether a tweet is organic or synthetic, i.e., whether it has been written by a Twitter user or by the AI model GPT-3. The results of this preregistered study, including 697 participants, show that GPT-3 is a double-edged sword: in comparison with humans, it can produce accurate information that is easier to understand, but it can also produce more compelling disinformation. The study also shows that humans cannot distinguish between tweets generated by GPT-3 and those written by real Twitter users. Building on these results, the authors reflect on the dangers of AI for disinformation and on how information campaigns can be improved to benefit global health.
- Assessing the consistency of fact-checking in political debates
Lelo, T., Journal of Communication, (August), 2023.
In the scholarly literature on journalism and political communication, there has been an expectation that fact-checkers would play an important role in ensuring democratic accountability, especially during pivotal political moments. This piece scrutinizes the level of agreement between five Brazilian fact-checking groups and the reasons for divergences in their verdicts during the presidential debates of the 2022 campaign. The emphasis is on claims checked by two or more organizations. Through a mixed-methods approach, it shows a widespread lack of consistency among fact-checkers, explained by their conflicting methods and interpretations of candidates' words. This study adds to the existing scholarship by challenging the dominant framework on fact-checking, questioning both its democracy-building role in critical circumstances and the epistemology it relies on to assess the veracity of political discourse. In addition, it introduces a valuable methodology for studying the rationale underlying fact-checking ratings.
- Cultural dimensions of fake news exposure: A cross-national analysis among European Union countries
Arrese, Á., Mass Communication & Society, (October), 2022.
This article analyzes the extent to which certain cultural dimensions explain the intensity with which citizens of different countries perceive the presence of fake news in their daily lives. The research is based on the Flash Eurobarometer survey conducted in 2018 about fake news and disinformation online in 25 European countries, and adopts the Hofstede cultural dimensions as a model of cultural analysis. The study uses multilevel regression analysis to test individual and macro-level indicators that explain variations in perceptions of fake news. The findings reveal a clear, direct relationship between uncertainty avoidance, masculinity, and fake news exposure, as well as an interaction of these cultural dimensions with age, but not with the other individual and media use related variables. These results have theoretical and practical implications, especially from the point of view of the design of public policies to fight disinformation in the European Union (EU).
- Deepfakes and scientific knowledge dissemination
Doss, C., Mondschein, J., Shu, D. et al., Scientific Reports, 13 (August), 2023.
Science misinformation on topics ranging from climate change to vaccines has significant public policy repercussions. Artificial intelligence-based methods of altering videos and photos (deepfakes) lower the barriers to the mass creation and dissemination of realistic, manipulated digital content. The risk of exposure to deepfakes among education stakeholders has increased as learners and educators rely on videos to obtain and share information. This is the first study to examine the vulnerabilities of education stakeholders to science deepfakes and the characteristics that moderate vulnerability. The study focused on climate change and surveyed individuals from five populations spanning students, educators, and the adult public. The sample is nationally representative of three of these populations.
It was found that 27–50% of individuals cannot distinguish authentic videos from deepfakes. All populations exhibit vulnerability to deepfakes which increases with age and trust in information sources but has a mixed relationship with political orientation. Adults and educators exhibit greater vulnerability compared to students, indicating that those providing education are especially susceptible. Vulnerability increases with exposure to potential deepfakes, suggesting that deepfakes become more pernicious without interventions. The results suggest that focusing on the social context in which deepfakes reside is one promising strategy for combatting deepfakes.
- Digital Propaganda: The Power of Influencers
Woolley, S.C., Journal of Democracy, 33 (3), 2022.
Attempts to manipulate public opinion using social media and emerging information communication technologies (ICTs) continue to proliferate internationally. Governments, corporations, extremist groups, and a wide variety of other entities around the globe now commonly use both automated bots and anonymous human "sockpuppet" accounts in efforts to amplify and suppress particular streams of information during elections, security crises, and other pivotal events. They use these same tools to sow disinformation and engage in organized political trolling campaigns. However, the technologies and tactics used in these internet-based "influence operations" are changing.
This essay leverages insights from over 70 interviews with people who both produce and track online manipulation campaigns. It compares emerging trends in digital disinformation and computational propaganda across the globe using qualitative data from 12 countries: Burma, Brazil, Egypt, Eritrea, Ethiopia, India, Indonesia, Mexico, the Philippines, Turkey, Ukraine, and the United States. In essence, internet-borne manipulation efforts are evolving from relatively unsophisticated "inorganic" campaigns pushed by social media bots towards more complex "semi-organic" efforts combining coordinated human users and artificial intelligence software. Additional related trends include the increased coercive political use of social media influencers and of encrypted and private messaging applications.
- The European approach to online disinformation: Geopolitical and regulatory dissonance
Casero-Ripollés, A., Tuñón, J. and Bouza-García, L., Humanities & Social Sciences Communications, 10 (1), 2023.
The COVID-19 health crisis and the invasion of Ukraine have placed disinformation in the focus of European policies. Our aim is to analyze the emerging European policy on counter-disinformation practices and regulations. To do this, we examine developing European Union (EU) strategy, against different forms of fake news, from a multidisciplinary approach that combines Journalism and Geopolitics. Our methodology is based on the critical analysis of documents generated by the EU on disinformation from 2018 to 2022, including reports, communications, statements and other legislative texts. Our findings suggest that the EU’s policy against disinformation is based on two opposing logics that coexist and compete.
The first is securitization, which understands this problem as a threat to democracy that legitimizes ‘exceptional decision-making’ from a hard power perspective. The second is based on the self-regulation and voluntarism of digital platforms with a clear orientation towards soft law and minimal intervention. The recent adoption of the Digital Services Act and the stronger regulation of online platforms do not replace this logic, since this legislation adopts a “co-regulatory framework”. The coexistence of these two logics generates internal contradictions and dissonance that can determine the future of European policies on this important topic and its chances of success.
- Genesis and evolution of EU anti-disinformation policy: Entrepreneurship and political opportunism in the regulation of digital technology
Datzer, V. and Lonardo, L., Journal of European Integration, 45 (5), 2023.
Disinformation, the deliberate spread of false or misleading information, is a worrisome threat for the EU, as its use in recent events such as the Russian invasion of Ukraine in 2022 reveals. This article explores policy formulation in the EU. It asks whether the choice of the European Commission and of the European External Action Service (EEAS) to regulate disinformation can be explained in terms of political opportunism. Using process-tracing analysis supported by interviews with EU officials, the article finds that the European Commission sought to create an opportunity to regulate this matter because it considered it particularly salient, and that, contrary to what the literature on political opportunism might suggest, both the EEAS and the Commission can be considered political entrepreneurs in this domain, because the engagements against disinformation were driven by an external threat perception.
- Health misinformation and freedom of expression: Considerations for policymakers
Marecos, J., Shattock, E., Bartlett, O., et al., Health Economics, Policy and Law, 18 (2), 2023.
Health misinformation, most visibly following the COVID-19 infodemic, is an urgent threat that hinders the success of public health policies. It likely contributed, and will continue to contribute, to avoidable deaths. Policymakers around the world are being pushed to tackle this problem. Legislative acts have been rolled out or announced in many countries and at the European Union level. The goal of this paper is not to review particular legislative initiatives, or to assess the impact and efficacy of measures implemented by digital intermediaries, but to reflect on the high constitutional and ethical stakes involved in tackling health misinformation through speech regulation. The findings suggest that solutions focused on regulating speech are likely to encounter significant constraints, as policymakers grapple with the limitations imposed by freedom of expression and ethical considerations. Solutions focused on empowering individuals, such as media literacy initiatives, fact-checking or credibility labels, are one way to avoid such hurdles.
- How behavioural sciences can promote truth, autonomy and democratic discourse online
Lorenz-Spreen, P., Lewandowsky, S., Sunstein, C.R., et al., Nature Human Behaviour, 4 (11), 2020.
Public opinion is shaped in significant part by online content, spread via social media and curated algorithmically. The current online ecosystem has been designed predominantly to capture user attention rather than to promote deliberate cognition and autonomous choice; information overload, finely tuned personalization and distorted social cues, in turn, pave the way for manipulation and the spread of false information. How can transparency and autonomy be promoted instead, thus fostering the positive potential of the web? Effective web governance informed by behavioural research is critically needed to empower individuals online. The authors identify technologically available yet largely untapped cues that can be harnessed to indicate the epistemic quality of online content, the factors underlying algorithmic decisions and the degree of consensus in online debates. They then map out two classes of behavioural interventions (nudging and boosting) that enlist these cues to redesign online environments for informed and autonomous choice.
- La désinformation, un enjeu sécuritaire majeur pour l’UE et l’Otan: Quelles perspectives pour les relations transatlantiques?
Hoorickx, E., Revue Défense Nationale, 845 (10), 2021.
Disinformation is a reality whose strategic dimension contributes to the weakening and destabilization of democracies, particularly in Europe. The EU and NATO have equipped themselves with tools to combat fake news, but the effort remains insufficient in the face of this complex problem.
- Separating truth from lies: Comparing the effects of news media literacy interventions and fact-checkers in response to political misinformation in the US and Netherlands
Hameleers, M., Information, Communication & Society, 25 (1), 2022.
Although previous research has offered important insights into the consequences of mis- and disinformation and the effectiveness of corrective information, we know markedly less about how different types of corrective information, namely news media literacy interventions and fact-checkers, can be combined to counter different forms of misinformation. Against this backdrop, this paper reports on experiments in the US and the Netherlands (N = 1,091) that exposed people to evidence-based or fact-free anti-immigration misinformation, fact-checkers and/or a media literacy intervention. The main findings indicate that evidence-based misinformation is seen as more accurate than fact-free misinformation, and that the combination of news media literacy interventions and fact-checkers is most effective in lowering issue agreement and the perceived accuracy of misinformation across countries. These findings have important implications for journalism practice and for policymakers who aim to combat mis- and disinformation.
- Sociotechnical governance of misinformation: An Annual Review of Information Science and Technology (ARIST) paper
Sanfilippo, M.R., Zhu, X.A. and Yang, S., Journal of the Association for Information Science and Technology, (October), 2024.
Misinformation is a complex and urgent sociotechnical problem that requires meaningful governance, in addition to technical efforts aimed at detection or classification and intervention or literacy efforts aimed at promoting awareness and identification. This review draws on interdisciplinary literature—spanning information science, computer science, management, law, political science, public policy, journalism, communications, psychology, and sociology—to deliver an adaptable, descriptive governance model synthesized from past scholarship on the governance of misinformation. Crossing disciplines and contexts of study and cases, we characterize: the complexity and impact of misinformation as a governance challenge, what has been managed and governed relative to misinformation, the institutional structure of different governance parameters, and empirically identified sources of success and failure in different governance models.
Our approach to support this review is based on systematic, structured literature review methods to synthesize and compare insights drawn from conceptual, qualitative, and quantitative empirical works published in or translated into English from 1991 to the present. This review contributes a model for misinformation governance research, an agenda for future research, and recommendations for contextually-responsive and holistic governance.
- Sociotechnical imaginaries of algorithmic governance in EU policy on online disinformation and FinTech
Wijermars, M., Makhortykh, M., Pötzsch, H., et al., New Media & Society, 24 (4), 2022.
Datafication and the use of algorithmic systems increasingly blur distinctions between policy fields. In the financial sector, for example, algorithms are used in credit scoring, money has become transactional data sought after by large data-driven companies, while financial technologies (FinTech) are emerging as a locus of information warfare. To grasp the context specificity of algorithmic governance and the assumptions on which its evaluation within different domains is based, the article comparatively studies the sociotechnical imaginaries of algorithmic governance in European Union (EU) policy on online disinformation and FinTech.
The authors find that the sociotechnical imaginaries prevalent in EU policy documents on disinformation and FinTech are highly divergent. While the first can be characterized as an algorithm-facilitated attempt to return to a presupposed status quo (absence of manipulation) without a defined future imaginary, the latter places technological innovation at the centre of realizing a globally competitive Digital Single Market.
- Strategies for combating the scourge of digital disinformation
Pherson, R.H., Mort Ranta, P. and Cannon, C., International Journal of Intelligence and Counterintelligence, 34 (2), 2021.
Partisan political actors and social manipulators are increasingly using social media platforms to reshape popular perceptions for partisan political or social purposes. This process is rendering democratic processes more vulnerable and inhibiting constructive social dialogue. These attacks on liberal institutions, electoral processes, and social norms have come from a variety of sources. This article employs Foresight Analysis techniques to identify key drivers, propose solutions that address the totality of the threat, and evaluate the scope of the challenges involved in implementing each. It identifies the two key drivers most likely to shape the digital environment, and assesses the strengths and weaknesses of each proposed approach, focusing on its ability to minimize the damage done to social norms and democratic institutions.
- Transforming practices of diplomacy: The European External Action Service and digital disinformation
Hedling, E., International Affairs, 97 (3), 2021.
This article explores the transformative role of practices of countering digital disinformation in European Union diplomacy. It argues that an overlooked dimension of the change brought by the rise of digital disinformation is located in the emergence of everyday countering practices. Efforts to counter disinformation have led to the recruitment of new actors with different dispositions and skill sets than those of traditional diplomats and state officials in diplomatic organizations such as the European External Action Service.
Focusing on the countering efforts by the East StratCom Task Force, a unit introduced in 2015, the article argues that the composition of actors, the task force's practices and the reorientation in audience perception it reflected, contributed significantly to institutional transformation. Drawing on 23 interviews with key actors and building on recent advancements in international practice theory, the article shows how change and transformation can be studied in practices that have resulted from digitalization in international politics. The article thus contributes to an increased understanding of the digitalization of diplomacy in which new practices can emerge from both deliberate reflection and experimentation.
- What constitutes disinformation? Disinformation judgment, influence of partisanship, and support for anti-disinformation legislation
Lee, F.L.F., Journalism & Mass Communication Quarterly, (May), 2022.
This study examines people's judgment of what constitutes disinformation, how partisanship shapes such judgment, and how the broadness of disinformation judgment relates to perceptions of the disinformation problem and support for anti-disinformation legislation. Analysis of a Hong Kong survey shows that many citizens are willing to treat a wide range of problematic news materials as disinformation. Partisans tend to treat counter-attitudinal materials as disinformation, but the influence of partisanship can be reduced by the norm of evenhandedness. Moreover, broadness of disinformation judgment, especially anti-government disinformation judgment, relates positively to the perceived severity and impact of disinformation and to support for legislation.
- Last Updated: Oct 7, 2024 4:44 PM
- URL: https://ec-europa-eu.libguides.com/disinformation