
    Addressing contingency in algorithmic (mis)information classification: Toward a responsible machine learning agenda

    Machine learning (ML) enabled classification models are becoming increasingly popular for tackling the sheer volume and speed of online misinformation and other content that could be identified as harmful. In building these models, data scientists need to take a stance on the legitimacy, authoritativeness and objectivity of the sources of "truth" used for model training and testing. This has political, ethical and epistemic implications which are rarely addressed in technical papers. Despite (and due to) their reported high accuracy and performance, ML-driven moderation systems have the potential to shape online public debate and create downstream negative impacts such as undue censorship and the reinforcing of false beliefs. Using collaborative ethnography and theoretical insights from social studies of science and expertise, we offer a critical analysis of the process of building ML models for (mis)information classification: we identify a series of algorithmic contingencies, key moments during model development that could lead to different future outcomes, uncertainty and harmful effects as these tools are deployed by social media platforms. We conclude by offering a tentative path toward reflexive and responsible development of ML tools for moderating misinformation and other harmful content online.
    Comment: Andrés Domínguez Hernández, Richard Owen, Dan Saattrup Nielsen and Ryan McConville. 2023. Addressing contingency in algorithmic (mis)information classification: Toward a responsible machine learning agenda. Accepted at the 2023 ACM Conference on Fairness, Accountability, and Transparency (FAccT '23), June 12-15, 2023, Chicago, United States of America. ACM, New York, NY, USA, 16 pages.
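
    As a concrete illustration of the contingency the authors describe, the sketch below (a hypothetical scikit-learn pipeline with invented toy data, not code from the paper) shows how the "truth" a classifier learns is fixed entirely by whichever labelled source its developers choose:

```python
# A minimal, hypothetical sketch (not the paper's code): the same classifier
# trained against two different sources of "truth" yields different models.
# Toy claims and labels are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

claims = [
    "Vaccine X causes condition Y",
    "City Z reported record rainfall last month",
    "Politician A voted against bill B",
    "Miracle supplement cures disease C overnight",
]
labels_checker_a = [1, 0, 0, 1]  # 1 = misinformation, per fact-checker A
labels_checker_b = [1, 0, 1, 1]  # fact-checker B disagrees on the third claim

# The developer's choice of label source silently fixes what the model
# will treat as "truth" from here on.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(claims, labels_checker_a)
print(model.predict(["Politician A voted against bill B"]))
```

    Retraining with `labels_checker_b` instead of `labels_checker_a` can change the verdict on the contested claim, which is exactly the kind of downstream divergence the paper flags.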

    Human Computer Interaction and Emerging Technologies

    The INTERACT Conferences are an important platform for researchers and practitioners in the field of human-computer interaction (HCI) to showcase their work. They are organised biennially by the International Federation for Information Processing (IFIP) Technical Committee on Human–Computer Interaction (IFIP TC13), an international committee of 30 member national societies and nine Working Groups. INTERACT is truly international in its spirit and has attracted researchers from several countries and cultures. With an emphasis on inclusiveness, it works to lower the barriers that prevent people in developing countries from participating in conferences. As a multidisciplinary field, HCI requires interaction and discussion among diverse people with different interests and backgrounds. The 17th IFIP TC13 International Conference on Human-Computer Interaction (INTERACT 2019) took place during 2-6 September 2019 in Paphos, Cyprus. The conference was held at the Coral Beach Hotel Resort, and was co-sponsored by the Cyprus University of Technology and Tallinn University, in cooperation with ACM and ACM SIGCHI. This volume contains the Adjunct Proceedings to the 17th INTERACT Conference, comprising a series of selected papers from workshops, the Student Design Consortium and the Doctoral Consortium. The volume follows the INTERACT conference tradition of submitting adjunct papers after the main publication deadline, to be published by a University Press with a connection to the conference itself. In this case, both the Adjunct Proceedings Chair of the conference, Dr Usashi Chatterjee, and the lead Editor of this volume, Dr Fernando Loizides, work at Cardiff University, which is the home of Cardiff University Press.

    FACTS-ON: Fighting Against Counterfeit Truths in Online Social Networks: fake news, misinformation and disinformation

    The rapid evolution of online social networks (OSNs) presents a significant challenge in identifying and mitigating false information, which includes fake news, disinformation, and misinformation. This complexity is amplified in digital environments where information is quickly disseminated, requiring sophisticated strategies to differentiate genuine content from false content. One of the primary challenges in automatically detecting false information is its realistic presentation, often closely resembling verifiable facts. This poses considerable challenges for artificial intelligence (AI) systems, necessitating additional data from external sources, such as third-party verifications, to effectively discern the truth. Consequently, there is a continuous technological evolution to counter the growing sophistication of false information, challenging and advancing the capabilities of AI.

    In response to these challenges, my dissertation introduces the FACTS-ON framework (Fighting Against Counterfeit Truths in Online Social Networks), a comprehensive and systematic approach to combating false information in OSNs. FACTS-ON integrates a series of advanced systems, each building upon the capabilities of its predecessor to enhance the overall strategy for detecting and mitigating false information. I begin by introducing the FACTS-ON framework, which sets the foundation for my solution, and then detail each system within it. EXMULF (Explainable Multimodal Content-based Fake News Detection) focuses on analyzing both text and images in online content using advanced multimodal techniques, coupled with explainable AI to provide transparent and understandable assessments of false information. Building upon EXMULF's foundation, MythXpose (Multimodal Content and Social Context-based System for Explainable False Information Detection with Personality Prediction) adds a layer of social context analysis by predicting the personality traits of OSN users, enhancing detection and early-intervention strategies against false information. ExFake (Explainable False Information Detection Based on Content, Context, and External Evidence) further expands the framework, combining content analysis with insights from social context and external evidence. It leverages data from reputable fact-checking organizations and official social accounts, ensuring a more comprehensive and reliable approach to the detection of false information. ExFake's methodology not only evaluates the content of online posts but also considers the broader context and corroborates information with external, credible sources, thereby offering a well-rounded and robust solution for combating false information in online social networks. Completing the framework, AFCC (Automated Fact-checkers Consensus and Credibility) addresses the heterogeneity of ratings from various fact-checking organizations. It standardizes these ratings and assesses the credibility of the sources, providing a unified and trustworthy assessment of information. Each system within the FACTS-ON framework is rigorously evaluated to demonstrate its effectiveness in combating false information on OSNs. This dissertation details the development, implementation, and comprehensive evaluation of these systems, highlighting their collective contribution to the field of false information detection. The research not only showcases current capabilities in addressing false information but also sets the stage for future advancements in this critical area of study.
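
    The abstract does not give AFCC's implementation, but a minimal sketch of the general idea it describes, standardizing heterogeneous fact-checker ratings onto one scale and weighting them by per-source credibility, might look like this (all names, scales, and weights below are illustrative assumptions, not values from the thesis):

```python
# Hedged sketch of a rating-consensus scheme in the spirit of AFCC:
# map each fact-checker's native rating scale onto [0, 1], then combine
# verdicts for a claim using assumed per-source credibility weights.

RATING_SCALES = {
    "checker_a": {"true": 1.0, "mostly true": 0.75, "half true": 0.5,
                  "mostly false": 0.25, "false": 0.0},
    "checker_b": {"accurate": 1.0, "misleading": 0.33, "fabricated": 0.0},
}

CREDIBILITY = {"checker_a": 0.9, "checker_b": 0.6}  # assumed source weights

def consensus_score(verdicts):
    """Credibility-weighted mean of standardized ratings for one claim."""
    num = sum(CREDIBILITY[s] * RATING_SCALES[s][r] for s, r in verdicts.items())
    den = sum(CREDIBILITY[s] for s in verdicts)
    return num / den

# Two organizations rate the same claim on incompatible scales:
print(f"{consensus_score({'checker_a': 'mostly false', 'checker_b': 'misleading'}):.2f}")
# -> 0.28, a single unified truthfulness score in [0, 1]
```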

    Nurturing a Digital Learning Environment for Adults 55+

    Being digitally competent means having competences in all areas of DigComp: information and data literacy, communication and collaboration, digital content creation, safety, and problem-solving. More than other demographic categories, adults 55+ have a wide range of levels of digitalization. Depending on their level of competences, individuals may join self-administered online courses to improve their skills, or they may need guidance from adult educators. Taking the above situation into consideration, and aiming to address adult learners regardless of their initial skill levels, the proposed educational programme is carefully designed for both self-administered and educator-led training. It comprises five innovative courses that can be taught separately or integrated into a complex programme delivered by adult education organizations. These courses are the result of the ERASMUS+ project “Digital Facilitator for Adults 55+”. Chapter 1 introduces the methodology for designing attractive and engaging educational materials for improving adults’ digital skills. The methodology clarifies the inputs, the development process and the expected results. A thorough explanation of the five phases of the 5E instructional strategy is presented to help adult educators build a sequence of coherent and engaging learning stages. With this approach, learners are supported to think, work, gather ideas, identify their own skill levels and needs, analyse their progress, and communicate with others under the guidance of educators. Following up on the proposed methodology, in Chapter 2 researchers from Formative Footprint (Spain), TEAM4Excellence (Romania), Voluntariat Pentru Viata (Romania) and Saricam Halk Egitimi Merkezi (Turkey) developed five course modules in line with DigComp, the Digital Competence Framework for Citizens. These modules address the competence areas of information and data literacy, communication and collaboration, digital content creation, safety, and problem-solving. Each course module comprises digital textbooks, videos, interactive activities and means for evaluation developed using the 5E instructional model. Understanding that accessibility is one of the main components of lifelong learning education, Chapter 3 of the manual provides an overview of the integration of the educational materials, tools, instruments, video tutorials and the DIFA55+ web app into the digital educational ecosystem. Finally, the authors formulate recommendations for usability and transferability that go beyond individuals, ensuring that educational materials are user-friendly and effective while making it easier to apply successful pedagogical approaches in other complementary educational contexts or projects. Grant Agreement 2021-1-RO01-KA220-ADU-000035297, Digital Facilitator for Adults 55+.

    An analysis on the media literacy efforts of Finland, Sweden, and Norway

    Disinformation and harmful narratives are a threat to democracy and national security. Understanding how other countries promote media literacy, and thereby strengthen their cognitive resiliency, is important. Finland, Sweden, and Norway have a reputation for excellent media literacy. This study intends to shed light on the promotion of media literacy in these three Nordic countries, in order to create a better understanding of how they implement robust and systemic media literacy efforts that are accepted by the population. To build this understanding, this study answers the research question: “What are the similarities and differences between the media literacy programs in Finland, Sweden, and Norway?” To answer it, a qualitative case-study method was selected for this thesis. This study compares and evaluates the following aspects for each country: historical background; policy; roles, responsibilities and implementation; financing; and evaluations. Through the evaluation of these aspects, this study identified motivations, methods, and other factors that may have contributed to the excellent media literacy of these three countries. Through the analysis and discussion of focus areas, this study discovered how media literacy is promoted to foster participation in and acceptance of media literacy efforts. This study found that nationally driven promotion of media literacy, with a national policy/strategy together with the incorporation of media literacy in the national curriculum, stands out as a key factor in the widespread implementation of media literacy efforts. Government documents evaluated in this study also revealed a strong emphasis on cross-governmental and cross-sectoral cooperation to promote media literacy. These three countries have demonstrated that a steadfast commitment to coordinate, collaborate, and implement quality media literacy education is paramount to shaping a well-functioning democracy and building a resilient population.

    In Crowd Veritas: Leveraging Human Intelligence To Fight Misinformation

    The spread of online misinformation has important effects on the stability of democracy. The sheer size of digital content on the web and social media, and the ability to immediately access and share it, has made it difficult to perform timely fact-checking at scale. Truthfulness judgments are usually made by experts, such as journalists for political statements. A different approach is to rely on a (non-expert) crowd of human judges to perform fact-checking. This leads to the following research question: can such human judges detect and objectively categorize online (mis)information? Several extensive crowdsourcing studies are performed to answer it. Thousands of truthfulness judgments over two datasets are collected by recruiting a crowd of workers from crowdsourcing platforms, and the expert judgments are compared with the crowd ones. The results allow for concluding that the workers are indeed able to do so. There is a limited understanding of the factors that influence worker participation in longitudinal studies across different crowdsourcing marketplaces. A large-scale survey aimed at understanding how these studies are performed using crowdsourcing is run across multiple platforms. The answers collected are analyzed from both a quantitative and a qualitative point of view. A list of recommendations for task requesters to conduct these studies effectively is provided, together with a list of best practices for crowdsourcing platforms. Truthfulness is a subtle matter: statements can be merely biased, imprecise, or wrong, and a unidimensional truth scale cannot account for such differences. The crowd workers are therefore asked to judge seven different dimensions of truthfulness selected based on existing literature. The newly collected crowdsourced judgments show that the workers are indeed reliable when compared to an expert-provided gold standard. Cognitive biases are human processes that often help minimize the cost of making mistakes but keep assessors away from an objective judgment of information. A review of the cognitive biases which might manifest during the fact-checking process is presented, together with a list of countermeasures that can be adopted. An exploratory study on the previously collected dataset is then performed. The findings are used to formulate hypotheses concerning which individual characteristics of statements or judges, and which cognitive biases, may affect crowd workers' truthfulness judgments. The findings suggest that crowd workers' degree of belief in science has an impact, that they generally overestimate truthfulness, and that their judgments are indeed affected by various cognitive biases. Automated fact-checking systems to combat misinformation spreading exist; however, their complexity usually makes them opaque to the end user, making it difficult to foster trust in the system. The E-BART model is introduced with the hope of making progress on this front. E-BART can provide a truthfulness prediction for a statement and jointly generate a human-readable explanation. An extensive human evaluation of the impact of explanations generated by the model is conducted, showing that the explanations increase the human ability to spot misinformation. The whole set of data collected and analyzed in this thesis is publicly released to the research community at: https://doi.org/10.17605/OSF.IO/JR6VC.

    The spread of online misinformation has important effects on the stability of democracy. The information that is consumed every day influences human decision-making processes. The sheer size of digital content on the web and social media, and the ability to immediately access and share it, has made it difficult to perform timely fact-checking at scale. Indeed, fact-checking is a complex process that involves several activities. A long-term goal is to build a so-called human-in-the-loop system that copes with (mis)information by measuring truthfulness in real time (e.g., as items appear on social media, news outlets, and so on) using a combination of crowd-powered data, human intelligence, and machine learning techniques. In recent years, crowdsourcing has become a popular method for collecting reliable truthfulness judgments in order to scale up and help study the manual fact-checking effort. This thesis first investigates whether human judges can detect and objectively categorize online (mis)information and which environment allows obtaining the best results. Then, the impact of cognitive biases on human assessors while judging information truthfulness is addressed: a categorization of cognitive biases is proposed, together with countermeasures to combat their effects and a bias-aware judgment pipeline for fact-checking. Lastly, an approach that predicts information truthfulness and, at the same time, generates a natural language explanation supporting the prediction is proposed. The machine-generated explanations are evaluated to understand whether they help human assessors better judge the truthfulness of information items. A collaborative process between systems, crowd workers, and expert fact-checkers would provide a scalable and decentralized hybrid mechanism to cope with the increasing volume of online misinformation.
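
    As a toy illustration of the crowd-versus-expert comparison the thesis performs at a much larger scale (the data and rating scale below are invented, not the released dataset), one can aggregate per-statement crowd judgments and correlate them with expert gold labels:

```python
# Invented toy data: aggregate crowd truthfulness judgments per statement
# and correlate the aggregates with expert gold labels, the basic
# crowd-versus-expert analysis described in the abstract.
from statistics import median
from scipy.stats import spearmanr

# Judgments on an assumed 0-5 truthfulness scale, several workers per statement.
crowd = {
    "s1": [4, 5, 4],
    "s2": [1, 0, 2],
    "s3": [3, 3, 2],
}
expert = {"s1": 5, "s2": 0, "s3": 3}

aggregated = {sid: median(js) for sid, js in crowd.items()}  # per-statement aggregate
rho, p = spearmanr([aggregated[s] for s in expert], list(expert.values()))
print(f"crowd vs. expert rank correlation: rho={rho:.2f} (p={p:.2f})")
```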

    Networks and trust: systems for understanding and supporting internet security

    This dissertation takes a systems-level view of the multitude of existing trust management systems to make sense of when, where, and how (or, in some cases, if) each is best utilized. Trust is a belief by one person that by transacting with another person (or organization) within a specific context, a positive outcome will result. Trust serves as a heuristic that enables us to simplify the dozens of decisions we make each day about whom we will transact with. In today's hyperconnected world, in which for many people the bulk of their daily transactions related to business, entertainment, news, and even critical services like healthcare take place online, we tend to rely even more on heuristics like trust to help us simplify complex decisions. Thus, trust plays a critical role in online transactions. For this reason, over the past several decades researchers have developed a plethora of trust metrics and trust management systems for use in online systems. These systems have been most frequently applied to improve recommender systems and reputation systems. They have been designed for and applied to varied online systems including peer-to-peer (P2P) file-sharing networks, e-commerce platforms, online social networks, messaging and communication networks, sensor networks, distributed computing networks, and others. However, comparatively little research has examined the effects on individuals, organizations, or society of the presence or absence of trust in online sociotechnical systems. Using these existing trust metrics and trust management systems, we design a set of experiments to benchmark their performance; these systems rely heavily on network analysis methods. Drawing on the experiments' results, we propose a heuristic decision-making framework for selecting a trust management system for use in online systems. In this dissertation we also investigate several related but distinct aspects of trust in online sociotechnical systems. Using network/graph analysis methods, we examine how trust (or its absence) affects the performance of online networks in terms of security and quality of service. We explore the structure and behavior of online networks including Twitter, GitHub, and Reddit through the lens of trust. We find that higher levels of trust within a network are associated with more spread of misinformation (a form of cybersecurity threat, according to the US CISA) on Twitter. We also find that higher levels of trust in open source developer networks on GitHub are associated with more frequent incidences of cybersecurity vulnerabilities. Using our experimental and empirical findings described above, we apply the Systems Engineering Process to design and prototype a trust management tool for use on Reddit, which we dub Coni the Trust Moderating Bot. Coni is, to the best of our knowledge, the first trust management tool designed specifically for the Reddit platform. Through our work with Coni, we develop and present a blueprint for constructing a Reddit trust tool which not only measures trust levels, but can use these trust levels to take actions on Reddit to improve the quality of submissions within the community (a subreddit).
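
    As a rough illustration of the network-analysis style of trust metric benchmarked in work like this (a generic sketch with invented users and weights, not Coni's actual method), one can model "who trusts whom" as a weighted directed graph and rank users by propagated trust:

```python
# Generic sketch of a graph-based trust metric: treat trust relations as a
# weighted directed graph and rank users by propagated trust. Users and
# weights are invented toy data.
import networkx as nx

G = nx.DiGraph()
# Edge u -> v with weight w: user u extends trust w to user v.
G.add_weighted_edges_from([
    ("alice", "bob", 0.9),
    ("bob", "carol", 0.7),
    ("alice", "carol", 0.4),
    ("carol", "alice", 0.8),
])

# PageRank over trust edges: a user scores highly when trusted users trust them.
scores = nx.pagerank(G, weight="weight")
for user, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{user}: {score:.3f}")
```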