261 research outputs found

    Machine Learning Explanations to Prevent Overtrust in Fake News Detection

    Combating fake news and misinformation propagation is a challenging task in the post-truth era. News feed and search algorithms can unintentionally lead to large-scale propagation of false and fabricated information, with users exposed to algorithmically selected false content. Our research investigates the effects of an Explainable AI assistant embedded in news review platforms on combating the propagation of fake news. We design a news reviewing and sharing interface, create a dataset of news stories, and train four interpretable fake news detection algorithms to study the effects of algorithmic transparency on end users. We present evaluation results and analysis from multiple controlled crowdsourced studies. For a deeper understanding of Explainable AI systems, we discuss the interactions between user engagement, mental models, trust, and performance measures in the process of explaining. The study results indicate that explanations helped participants build appropriate mental models of the intelligent assistants in different conditions and adjust their trust to account for model limitations.
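
    The abstract does not name the four interpretable detectors used in the study; as a rough, hedged illustration of what an interpretable detector and its explanations can look like, the sketch below uses a TF-IDF plus logistic regression pipeline (scikit-learn) whose per-word weights can be surfaced to end users. The toy texts and labels are hypothetical.

        # Illustrative sketch only: the study's four interpretable detectors are
        # not specified here, so TF-IDF + logistic regression stands in for them.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression

        # Hypothetical toy data; the study builds its own dataset of news stories.
        texts = ["scientists confirm vaccine efficacy in large trial",
                 "miracle cure hidden from the public by doctors"]
        labels = [0, 1]  # 0 = real, 1 = fake

        vectorizer = TfidfVectorizer()
        X = vectorizer.fit_transform(texts)
        clf = LogisticRegression().fit(X, labels)

        # Words with the largest positive weights push predictions toward "fake";
        # listing them is one simple, human-readable form of explanation.
        terms = vectorizer.get_feature_names_out()
        top = sorted(zip(terms, clf.coef_[0]), key=lambda t: t[1], reverse=True)[:5]
        print(top)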

    Detection of Truthful, Semi-Truthful, False and Other News with Arbitrary Topics Using BERT-Based Models

    Easy and uncontrolled access to the Internet provokes the wide propagation of false information, which circulates freely online. Researchers usually address the problem of fake news detection (FND) within the framework of a known topic and binary classification. In this paper we study the ability of BERT-based models to detect fake news in a news flow with unknown topics and four categories: true, semi-true, false, and other. The object of consideration is the CheckThat! Lab dataset proposed for the CLEF-2022 conference. The subjects of consideration are the SBERT, RoBERTa, and mBERT models. To improve classification quality we use two methods: adding a known dataset (LIAR) and combining several classes (true + semi-true, false + semi-true). The results outperform previously reported ones, although the state of the art in the FND area is still far from practical application.
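
    As a minimal sketch of the four-category setup described above, the snippet below instantiates a BERT-family model with a four-way classification head using the Hugging Face transformers library. The "roberta-base" checkpoint, label order, and example claim are assumptions; the paper's exact SBERT/RoBERTa/mBERT configurations, CheckThat! preprocessing, and fine-tuning procedure are not reproduced.

        # Minimal sketch, assuming the Hugging Face transformers library; the
        # checkpoint, labels, and example claim are stand-ins, not the paper's setup.
        import torch
        from transformers import AutoTokenizer, AutoModelForSequenceClassification

        LABELS = ["true", "semi-true", "false", "other"]
        MODEL = "roberta-base"  # stand-in; the paper also evaluates SBERT and mBERT

        tokenizer = AutoTokenizer.from_pretrained(MODEL)
        # Untrained 4-way head; fine-tuning on CheckThat!/LIAR data would follow.
        model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=len(LABELS))

        # Hypothetical claim; real inputs would come from the CheckThat! Lab data.
        inputs = tokenizer("New law bans all cars from city centres.", return_tensors="pt")
        with torch.no_grad():
            logits = model(**inputs).logits
        print(LABELS[int(logits.argmax(dim=-1))])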

    A Network Topology Approach to Bot Classification

    Automated social agents, or bots, are increasingly becoming a problem on social media platforms. There is a growing body of literature and multiple tools to aid in the detection of such agents on online social networking platforms. We propose that the social network topology of a user is sufficient to determine whether the user is an automated agent or a human. To test this, we use a publicly available dataset of Twitter users labelled as either automated social agents or humans. Using an unsupervised machine learning approach, we obtain a detection accuracy of 70%.
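
    The abstract does not specify which topology features or which unsupervised algorithm were used; as a hedged sketch of the general idea, the code below computes two per-user features (degree and clustering coefficient) on a synthetic follower graph and clusters users into two groups with k-means. The graph, the features, and the choice of algorithm are illustrative assumptions only.

        # Hedged sketch of unsupervised bot detection from network topology alone;
        # the graph, features, and clustering algorithm are illustrative assumptions.
        import networkx as nx
        import numpy as np
        from sklearn.cluster import KMeans

        # Synthetic stand-in graph; the study uses a labelled public Twitter dataset.
        G = nx.barabasi_albert_graph(n=200, m=3, seed=42)

        # Two simple topology features per user: degree and clustering coefficient.
        features = np.array([[G.degree(u), nx.clustering(G, u)] for u in G.nodes()])

        # Partition users into two clusters (putatively bots vs. humans) without labels.
        clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
        print(np.bincount(clusters))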

    A Multidisciplinary Design and Evaluation Framework for Explainable AI Systems

    Nowadays, algorithms analyze user data and affect the decision-making process for millions of people on matters like employment, insurance and loan rates, and even criminal justice. However, these algorithms that serve critical roles in many industries have their own biases that can result in discrimination and unfair decision-making. Explainable Artificial Intelligence (XAI) systems can be a path toward predictable and accountable AI by explaining AI decision-making processes to end users, thereby increasing user awareness and helping to prevent bias and discrimination. The broad spectrum of research on XAI, including the design of interpretable models, explainable user interfaces, and human-subject studies of XAI systems, spans disciplines such as machine learning, human-computer interaction (HCI), and visual analytics. The mismatch in how scholars in these disciplines define, design, and evaluate XAI may slow down the overall advance of end-to-end XAI systems. My research aims to bring together knowledge about the design and evaluation of XAI systems across multiple disciplines to further support the key benefits of algorithmic transparency and interpretability. To this end, I propose a comprehensive design and evaluation framework for XAI systems with step-by-step guidelines to pair different design goals with their evaluation methods for iterative system design cycles in multidisciplinary teams. This dissertation presents a comprehensive XAI design and evaluation framework to provide guidance for different design goals and evaluation approaches in XAI systems. After a thorough review of XAI research in the fields of machine learning, visualization, and HCI, I present a categorization of XAI design goals and evaluation methods and show a mapping between design goals for different XAI user groups and their evaluation methods. From my findings, I present a design and evaluation framework for XAI systems (Objective 1) to address the relations between different system design needs. The framework provides recommendations for different goals and ready-to-use tables of evaluation methods for XAI systems. The importance of this framework lies in providing guidance for researchers on different aspects of XAI system design in multidisciplinary team efforts. Then, I demonstrate and validate the proposed framework (Objective 2) through one end-to-end XAI system case study and two examples that analyze previous XAI systems in terms of the framework. I present two contributions to my XAI design and evaluation framework that improve evaluation methods for XAI systems.