
    Overview of the CLEF-2019 CheckThat! Lab: Automatic Identification and Verification of Claims. Task 2: Evidence and Factuality

    We present an overview of Task 2 of the second edition of the CheckThat! Lab at CLEF 2019. Task 2 asked (A) to rank a given set of Web pages with respect to a check-worthy claim based on their usefulness for fact-checking that claim, (B) to classify these same Web pages according to their degree of usefulness for fact-checking the target claim, (C) to identify useful passages from these pages, and (D) to use the useful pages to predict the claim's factuality. Task 2 at CheckThat! provided a full evaluation framework, consisting of data in Arabic (gathered and annotated from scratch) and evaluation based on normalized discounted cumulative gain (nDCG) for ranking and F1 for classification. Four teams submitted runs. The most successful approach to subtask A used learning-to-rank, while different classifiers were used in the other subtasks. We release to the research community all datasets from the lab as well as the evaluation scripts, which should enable further research in the important task of evidence-based automatic claim verification.
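
    Subtask A is scored with normalized discounted cumulative gain. The snippet below is a minimal sketch of nDCG@k over graded usefulness labels; the label values and cut-off are illustrative assumptions, not the lab's official scorer.

```python
# Minimal nDCG@k sketch over graded usefulness labels (e.g., 2 = very useful,
# 1 = useful, 0 = not useful). Illustrative only, not the official CheckThat! scorer.
import math

def dcg(relevances):
    return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances))

def ndcg(ranked_relevances, k=10):
    ideal = sorted(ranked_relevances, reverse=True)
    ideal_dcg = dcg(ideal[:k])
    return dcg(ranked_relevances[:k]) / ideal_dcg if ideal_dcg > 0 else 0.0

# Gold usefulness labels of six retrieved pages, in the order the system ranked them.
print(ndcg([2, 0, 1, 2, 0, 0], k=5))  # ~0.89
```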

    Overview of the CLEF-2018 CheckThat! Lab on Automatic Identification and Verification of Political Claims. Task 2: Factuality

    We present an overview of the CLEF-2018 CheckThat! Lab on Automatic Identification and Verification of Political Claims, with focus on Task 2: Factuality. The task asked to assess whether a given check-worthy claim made by a politician in the context of a debate/speech is factually true, half-true, or false. In terms of data, we focused on debates from the 2016 US Presidential Campaign, as well as on some speeches during and after the campaign (we also provided translations in Arabic), and we relied on comments and factuality judgments from factcheck.org and snopes.com, which we further refined manually. A total of 30 teams registered to participate in the lab, and five of them actually submitted runs. The most successful approaches used by the participants relied on the automatic retrieval of evidence from the Web. Similarities and other relationships between the claim and the retrieved documents were used as input to classifiers in order to make a decision. The best-performing official submissions achieved a mean absolute error of 0.705 and 0.658 for the English and for the Arabic test sets, respectively. This leaves plenty of room for further improvement, and thus we release all datasets and the scoring scripts, which should enable further research in fact-checking.
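
    Task 2 is evaluated with mean absolute error over the ordinal true/half-true/false scale. A minimal sketch of that metric follows; the numeric encoding of the labels is an assumption made for illustration, not necessarily the lab's official one.

```python
# Mean absolute error over ordinal factuality labels.
# The encoding (false=0, half-true=1, true=2) is an illustrative assumption.
LABEL_TO_SCORE = {"false": 0, "half-true": 1, "true": 2}

def mean_absolute_error(gold_labels, predicted_labels):
    diffs = [abs(LABEL_TO_SCORE[g] - LABEL_TO_SCORE[p])
             for g, p in zip(gold_labels, predicted_labels)]
    return sum(diffs) / len(diffs)

gold = ["true", "false", "half-true", "false"]
pred = ["half-true", "false", "true", "true"]
print(mean_absolute_error(gold, pred))  # 1.0
```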

    Challenging others when posting misinformation: a UK vs. Arab cross-cultural comparison on the perception of negative consequences and injunctive norms

    This study investigates the factors influencing the willingness to challenge misinformation on social media across two cultural contexts, the United Kingdom (UK) and Arab countries. A total of 462 participants completed an online survey (250 UK, 212 Arabs). The analysis revealed that three types of negative consequences (relationship cost, negative impact on the person being challenged, and futility), as well as injunctive norms, influence the willingness to challenge misinformation. Cross-cultural comparisons using t-tests showed significant differences between the UK and the Arab countries in all factors except injunctive norms. Multiple regression analyses identified differences between the UK and Arab participants concerning which of the factors predicted the willingness to challenge misinformation. The findings suggest that participants’ self-reported injunctive norms play a significant role in shaping their willingness to engage in corrective actions across both cultural contexts. Moreover, for UK participants, their reports of how others perceive the negative impact on the person being challenged, together with injunctive norms, were significant predictors, while for Arab participants, only the perceived relationship costs emerged as a significant predictor. This study has important implications for policymakers and social media platforms in developing culturally sensitive interventions encouraging users to correct misinformation.
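
    The cross-cultural comparisons in the study are independent-samples t-tests. A minimal sketch of such a test, with made-up factor scores standing in for the survey data:

```python
# Independent-samples t-test comparing one factor (perceived relationship cost)
# between two groups. The scores below are invented for illustration only.
from scipy import stats

uk_scores = [3.2, 4.1, 2.8, 3.9, 4.4, 3.0]
arab_scores = [4.0, 4.6, 3.8, 4.9, 4.2, 4.5]

result = stats.ttest_ind(uk_scores, arab_scores)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
```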

    Overview of the CLEF-2018 CheckThat! Lab on Automatic Identification and Verification of Political Claims

    We present an overview of the CLEF-2018 CheckThat! Lab on Automatic Identification and Verification of Political Claims. In its first year, the lab featured two tasks. Task 1 asked to predict which (potential) claims in a political debate should be prioritized for fact-checking; in particular, given a debate or a political speech, the goal was to produce a ranked list of its sentences based on their worthiness for fact-checking. Task 2 asked to assess whether a given check-worthy claim made by a politician in the context of a debate/speech is factually true, half-true, or false. We offered both tasks in English and in Arabic. In terms of data, for both tasks, we focused on debates from the 2016 US Presidential Campaign, as well as on some speeches during and after the campaign (we also provided translations in Arabic), and we relied on comments and factuality judgments from factcheck.org and snopes.com, which we further refined manually. A total of 30 teams registered to participate in the lab, and nine of them actually submitted runs. The evaluation results show that the most successful approaches used various neural networks (esp. for Task 1) and evidence retrieval from the Web (esp. for Task 2). We release all datasets, the evaluation scripts, and the submissions by the participants, which should enable further research in both check-worthiness estimation and automatic claim verification.
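
    For Task 1, systems had to turn a debate into a ranked list of sentences. The toy sketch below illustrates only that output format; its heuristic scorer (digits and quantity words as weak signals of checkable claims) stands in for the neural models participants actually used.

```python
# Toy check-worthiness ranker: score each sentence and sort descending.
# The heuristic is a placeholder, not any participant's actual system.
import re

CHECKABLE_HINTS = {"million", "billion", "percent", "increase", "decrease", "most", "largest"}

def check_worthiness_score(sentence):
    tokens = [tok.strip(".,!?").lower() for tok in sentence.split()]
    has_number = bool(re.search(r"\d", sentence))
    hint_hits = sum(1 for tok in tokens if tok in CHECKABLE_HINTS)
    return 2 * has_number + hint_hits

def rank_sentences(sentences):
    scored = [(check_worthiness_score(s), idx, s) for idx, s in enumerate(sentences)]
    return sorted(scored, reverse=True)

debate = [
    "Thank you all for being here tonight.",
    "Unemployment fell by 3 percent under my administration.",
    "We have the largest trade deficit in history.",
]
for score, idx, sentence in rank_sentences(debate):
    print(score, idx, sentence)
```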

    Overview of the CLEF-2018 CheckThat! Lab on Automatic Identification and Verification of Political Claims. Task 1: Check-Worthiness

    We present an overview of the CLEF-2018 CheckThat! Lab on Automatic Identification and Verification of Political Claims, with focus on Task 1: Check-Worthiness. The task asked to predict which claims in a political debate should be prioritized for fact-checking. In particular, given a debate or a political speech, the goal was to produce a ranked list of its sentences based on their worthiness for fact-checking. We offered the task in both English and Arabic, based on debates from the 2016 US Presidential Campaign, as well as on some speeches during and after the campaign. A total of 30 teams registered to participate in the lab, and seven teams actually submitted systems for Task 1. The most successful approaches used by the participants relied on recurrent and multi-layer neural networks, as well as on combinations of distributional representations, on matching claims' vocabulary against lexicons, and on measures of syntactic dependency. The best systems achieved a mean average precision of 0.18 and 0.15 on the English and on the Arabic test datasets, respectively. This leaves plenty of room for further improvement, and thus we release all datasets and the scoring scripts, which should enable further research in check-worthiness estimation.
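
    Task 1 is scored with mean average precision over the ranked sentence lists. A minimal sketch of MAP over binary check-worthiness labels, with made-up data rather than the official scorer:

```python
# Mean average precision over ranked binary labels (1 = check-worthy).
def average_precision(ranked_labels):
    hits, precision_sum = 0, 0.0
    for rank, label in enumerate(ranked_labels, start=1):
        if label:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / hits if hits else 0.0

def mean_average_precision(ranked_label_lists):
    return sum(average_precision(labels) for labels in ranked_label_lists) / len(ranked_label_lists)

# One label list per debate, ordered by the system's ranking.
print(mean_average_precision([[1, 0, 1, 0], [0, 1, 0, 0]]))  # ~0.67
```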

    NewsClaims: A New Benchmark for Claim Detection from News with Attribute Knowledge

    Claim detection and verification are crucial for news understanding and have emerged as promising technologies for mitigating news misinformation. However, most existing work has focused on claim sentence analysis while overlooking crucial background attributes (e.g., claimer, claim objects). In this work, we present NewsClaims, a new benchmark for knowledge-aware claim detection in the news domain. We redefine the claim detection problem to include extraction of additional background attributes related to each claim and release 889 claims annotated over 143 news articles. NewsClaims aims to benchmark claim detection systems in emerging scenarios, comprising unseen topics with little or no training data. To this end, we provide a comprehensive evaluation of zero-shot and prompt-based baselines for NewsClaims.
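
    As an illustration of what a zero-shot baseline for claim detection can look like, the sketch below uses an off-the-shelf NLI-based zero-shot classifier to decide whether a sentence asserts a factual claim. The model choice and label phrasing are assumptions; this is not the paper's actual baseline code.

```python
# Zero-shot claim detection via an NLI-based zero-shot classification pipeline.
# Model and candidate labels are illustrative choices, not NewsClaims' baselines.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

sentence = "The new vaccine reduces transmission by 90 percent, the minister said."
labels = ["makes a factual claim", "does not make a factual claim"]

result = classifier(sentence, candidate_labels=labels)
print(result["labels"][0], round(result["scores"][0], 3))
```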

    Why do we not stand up to misinformation? Factors influencing the likelihood of challenging misinformation on social media and the role of demographics

    This study investigates the barriers to challenging others who post misinformation on social media platforms. We conducted a survey amongst UK Facebook users (143 (57.2 %) women, 104 (41.6 %) men) to assess the extent to which the barriers to correcting others, as identified in the literature across disciplines, apply to correcting misinformation on social media. We also group the barriers into factors and explore demographic differences amongst them. It has been suggested that users are generally hesitant to challenge misinformation, and indeed most of our participants (58.8 %) were reluctant to do so. We also identified moderating roles of age and gender in the likelihood of challenging misinformation: older people were more likely to challenge misinformation than young adults, while men demonstrated a slightly greater likelihood to challenge than women. The 20 barriers influencing the decision to challenge misinformation were then grouped into four main factors: social concerns, effort/interest considerations, prosocial intents, and content-related factors. We found that, controlling for age and gender, “social concerns” and “effort/interest considerations” have a significant impact on the likelihood to challenge. The four identified factors were also analysed in terms of demographic differences. Men ranked “effort/interest considerations” higher than women, while women placed higher importance on “content-related factors”. Moreover, older individuals were found to be more resilient to “social concerns”, and the influence of educational background was most prominent in the ranking of “content-related factors”. Our findings provide important insights for the design of future interventions aimed at encouraging the challenging of misinformation on social media platforms, highlighting the need for tailored, demographically sensitive approaches.
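
    The regression reported here boils down to modelling the likelihood of challenging misinformation from the four factors while controlling for age and gender. A minimal sketch of that model, with entirely made-up survey responses:

```python
# OLS regression: likelihood of challenging ~ four barrier factors + age + gender.
# The data frame below is invented purely to make the sketch runnable.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "likelihood":      [2, 4, 3, 5, 1, 4, 2, 3, 5, 2],
    "social_concerns": [4, 2, 3, 1, 5, 2, 4, 3, 1, 4],
    "effort_interest": [3, 2, 3, 1, 4, 2, 4, 3, 2, 4],
    "prosocial":       [3, 4, 3, 5, 2, 4, 3, 3, 5, 2],
    "content_related": [2, 3, 3, 4, 2, 4, 3, 3, 4, 2],
    "age":             [25, 62, 34, 58, 21, 45, 30, 40, 55, 28],
    "gender":          ["f", "m", "f", "m", "f", "m", "f", "m", "f", "m"],
})

model = smf.ols(
    "likelihood ~ social_concerns + effort_interest + prosocial"
    " + content_related + age + C(gender)",
    data=df,
).fit()
print(model.params)
```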

    SentiBench - a benchmark comparison of state-of-the-practice sentiment analysis methods

    In the last few years, thousands of scientific papers have investigated sentiment analysis, several startups that measure opinions on real data have emerged, and a number of innovative products related to this theme have been developed. There are multiple methods for measuring sentiment, including lexicon-based and supervised machine learning methods. Despite the vast interest in the theme and the wide popularity of some methods, it is unclear which one is better for identifying the polarity (i.e., positive or negative) of a message. Accordingly, there is a strong need to conduct a thorough apples-to-apples comparison of sentiment analysis methods, as they are used in practice, across multiple datasets originating from different data sources. Such a comparison is key to understanding the potential limitations, advantages, and disadvantages of popular methods. This article aims at filling this gap by presenting a benchmark comparison of twenty-four popular sentiment analysis methods (which we call the state-of-the-practice methods). Our evaluation is based on a benchmark of eighteen labeled datasets, covering messages posted on social networks, movie and product reviews, as well as opinions and comments in news articles. Our results highlight the extent to which the prediction performance of these methods varies considerably across datasets. Aiming at boosting the development of this research area, we release the code of the methods and the datasets used in this article, deploying them in a benchmark system that provides an open API for accessing and comparing sentence-level sentiment analysis methods.
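
    To make the contrast with supervised methods concrete, here is a tiny, self-contained sketch of a lexicon-based polarity method of the kind the benchmark compares. The word lists and negation handling are toy assumptions; real methods such as VADER or SentiWordNet are far richer.

```python
# Toy lexicon-based polarity classifier: count positive/negative words,
# flipping the sign after a simple negation word. Illustrative only.
POSITIVE = {"good", "great", "love", "excellent", "happy"}
NEGATIVE = {"bad", "terrible", "hate", "awful", "sad"}
NEGATIONS = {"not", "never", "no"}

def polarity(message):
    tokens = [tok.strip(".,!?").lower() for tok in message.split()]
    score = 0
    for i, word in enumerate(tokens):
        hit = 1 if word in POSITIVE else -1 if word in NEGATIVE else 0
        if hit and i > 0 and tokens[i - 1] in NEGATIONS:
            hit = -hit  # "not bad" counts as mildly positive
        score += hit
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(polarity("I love this movie, it is not bad at all!"))  # positive
```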