
    Towards a Positioning Model for Evaluating the Use and Design of Anti-Disinformation Tools

    With the increasing amounts of mis- and disinformation circulating online, the demand for tools to combat and contain the phenomenon has also increased. The multifaceted nature of the phenomenon requires a set of tools that can respond effectively and deal with the different ways in which disinformation can present itself. In this paper, after consulting independent fact-checkers to create a list, we map the landscape of tools available to combat different typologies of mis- and disinformation on the basis of three levels of analysis: the employment of policy-regulated strategies, the use of co-creation, and the preference for manual or automated processes of detection. We then create a model in which we position the different tools across three axes of analysis, and show how the tools distribute across different market positions.

    A Conceptual Model for Approaching the Design of Anti-disinformation Tools

    With the increasing amounts of mis- and disinformation circulating online, the demand for tools to combat and contain the phenomenon has also increased. The multifaceted nature of the phenomenon requires a set of tools that can respond effectively and deal with the different ways in which disinformation can present itself, such as text, images, and videos, the agents responsible for spreading it, and the various platforms on which incorrect information is prevalent. In this paper, after consulting independent fact-checkers to create a list, we map the landscape of the best-known tools available to combat different typologies of mis- and disinformation on the basis of three levels of analysis: the employment of policy-regulated strategies, the use of co-creation, and the preference for manual or automated processes of detection. We then create a model in which we position the different tools across three axes of analysis, and show how the tools distribute across different market positions. The most crowded positions are characterized by tools that employ automated processes of detection, varying degrees of policy implementation, and low levels of co-creation, but there is an opening for newly developed tools that score high across all three axes. Co-creative efforts to address mis- and disinformation could indeed be an effective solution, catering to the needs of users and responding to the volume and variety of mis- and disinformation spreading online.
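    As a rough illustration of the positioning model described above, the Python sketch below represents each tool as a point on the three axes and discretises each axis into a low/high market position. The axis scale (0.0-1.0), the threshold, and the example tools are illustrative assumptions, not data from the paper.

        # Hypothetical sketch of the three-axis positioning model; scores,
        # threshold, and example entries are assumptions for illustration.
        from dataclasses import dataclass

        @dataclass
        class ToolPosition:
            name: str
            policy_regulation: float  # 0.0-1.0: use of policy-regulated strategies
            co_creation: float        # 0.0-1.0: degree of co-creation
            automation: float         # 0.0 manual ... 1.0 automated detection

        def market_position(tool: ToolPosition, threshold: float = 0.5) -> tuple:
            """Discretise each axis into low/high, giving one of eight positions."""
            level = lambda score: "high" if score >= threshold else "low"
            return (level(tool.policy_regulation),
                    level(tool.co_creation),
                    level(tool.automation))

        tools = [
            ToolPosition("automated-detector", 0.6, 0.2, 0.9),  # hypothetical entry
            ToolPosition("community-checker", 0.4, 0.8, 0.3),   # hypothetical entry
        ]
        for tool in tools:
            print(tool.name, market_position(tool))

    Grouping tools by these discretised positions would surface the crowded region the abstract describes (automated detection with low co-creation) as well as the open position that scores high on all three axes.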

    Understanding the Role of Human Values in the Spread of Misinformation

    Social media platforms are often implicated in the spread of misinformation for encouraging rapid-sharing behaviour without adequate mechanisms for verifying information. To counter this phenomenon, much related research in computer science has focused on developing tools to detect misinformation, to rank fact-check-worthy claims, and to understand their spread patterns, while psychosocial approaches have focused on understanding information literacy, ideology, and partisanship. In this paper, we demonstrate through a survey of nearly 100 people that human values can have a significant influence on the way people perceive and share information. We argue that integrating a values-oriented perspective into computational approaches for handling misinformation could encourage misinformation prevention, and assist in predicting and ranking misinformation.
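    As a rough sketch of how a values-oriented signal might be folded into the prediction and ranking of misinformation, the example below blends a conventional check-worthiness score with a value-alignment score. The weights, feature names, and scores are hypothetical; the paper does not specify such an implementation.

        # Hypothetical sketch: blending a value-alignment signal into a
        # check-worthiness ranking. Weights and feature names are assumptions.
        def rank_claims(claims, w_checkworthy=0.7, w_values=0.3):
            """Rank claims by a weighted blend of two scores in [0, 1]:
            checkworthy (how fact-check-worthy the claim is) and
            value_alignment (how strongly it appeals to readers' values,
            which the survey above suggests influences sharing)."""
            scored = [
                (w_checkworthy * c["checkworthy"] + w_values * c["value_alignment"], c)
                for c in claims
            ]
            return [c for _, c in sorted(scored, key=lambda pair: pair[0], reverse=True)]

        claims = [
            {"text": "claim A", "checkworthy": 0.9, "value_alignment": 0.2},
            {"text": "claim B", "checkworthy": 0.6, "value_alignment": 0.9},
        ]
        for claim in rank_claims(claims):
            print(claim["text"])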

    Social Media Use, Trust and Technology Acceptance: Investigating the Effectiveness of a Co-Created Browser Plugin in Mitigating the Spread of Misinformation on Social Media

    Social media have become online spaces where misinformation abounds and spreads virally in the absence of professional gatekeeping. This information landscape requires everyday citizens, who rely on these technologies to access information, to cede control of information. This work sought to examine whether control of information can be regained by humans with the support of a co-created browser plugin, which integrated credibility labels and nudges and was informed by artificial intelligence models and rule engines. Given the literature on the complexity of information evaluation on social media, we investigated the role of technological, situational, and individual characteristics in “liking” or “sharing” misinformation. We adopted a mixed-methods research design with 80 participants from four European sites, who viewed a curated timeline of credible and non-credible posts on Twitter, with (n=40) or without (n=40) the presence of the plugin. The role of the technological intervention was important: the absence of the plugin strongly correlated with misinformation endorsement (via “liking”). Trust in the technology and technology acceptance were correlated and emerged as important situational characteristics, with participants with higher trust profiles being less likely to share misinformation. Findings on individual characteristics indicated that only social media use was a significant predictor of trusting the plugin. This work extends ongoing research on deterring the spread of misinformation by situating the findings in an authentic social media environment using a co-created technological intervention. It holds implications for how to support a misinformation-resilient citizenry with the use of artificial intelligence-driven tools.
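    As a minimal sketch of how such a plugin might combine a model score with a rule engine to choose a credibility label and nudge, consider the example below. The thresholds, labels, and rule set are assumptions for illustration; the paper does not publish its implementation.

        # Hypothetical sketch of a plugin backend combining an AI credibility
        # score with simple rules; thresholds and labels are assumptions.
        def label_post(model_score: float, source_blacklisted: bool) -> dict:
            """model_score: credibility in [0, 1] from an upstream classifier."""
            if source_blacklisted or model_score < 0.3:
                return {"label": "not credible",
                        "nudge": "This post may contain misinformation; verify before sharing."}
            if model_score < 0.7:
                return {"label": "unverified",
                        "nudge": "Credibility could not be confirmed; read with care."}
            return {"label": "credible", "nudge": None}

        print(label_post(0.2, source_blacklisted=False))
        print(label_post(0.9, source_blacklisted=False))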

    Combating misinformation online: re-imagining social media for policy-making

    Social media have created communication channels between citizens and policymakers but are also susceptible to rampant misinformation. This new context demands new social media policies that can aid policymakers in making evidence-based decisions for combating misinformation online. This paper reports on data collected from policymakers in Austria, Greece, and Sweden, using focus groups and in-depth interviews. Analyses provide insights into challenges and identify four important themes for supporting policy-making for combating misinformation: a) creating a trusted network of experts and collaborators, b) facilitating the validation of online information, c) providing access to visualisations of data at different levels of granularity, and d) increasing the transparency and explainability of flagged misinformative content. These recommendations have implications for rethinking how revised social media policies can contribute to evidence-based decision-making.