109 research outputs found

    RumourEval 2019: Determining rumour veracity and support for rumours

    This is the proposal for RumourEval-2019, which will run in early 2019 as part of that year's SemEval event. Since the first RumourEval shared task in 2017, interest in automated claim validation has greatly increased, as the dangers of "fake news" have become a mainstream concern. Yet automated support for rumour checking remains in its infancy, so it is important that a shared task in this area continues to provide a focus for effort, which is likely to increase. We therefore propose a continuation in which the veracity of further rumours is determined and, in support of this goal as previously, tweets discussing them are classified according to the stance they take towards the rumour. The scope is extended compared with the first RumourEval: the dataset is substantially expanded to include Reddit as well as Twitter data, and additional languages are also included.

    Measuring what counts: the case of rumour stance classification

    Stance classification can be a powerful tool for understanding whether, and which, users believe in online rumours. The task aims to automatically predict the stance of replies towards a given rumour, namely support, deny, question, or comment. Numerous methods have been proposed and their performance compared in the RumourEval shared tasks in 2017 and 2019. Results demonstrated that this is a challenging problem, since naturally occurring rumour stance data is highly imbalanced. This paper specifically questions the evaluation metrics used in these shared tasks. We re-evaluate the systems submitted to the two RumourEval tasks and show that the two widely adopted metrics – accuracy and macro-F1 – are not robust for the four-class imbalanced task of rumour stance classification, as they wrongly favour systems whose accuracy is highly skewed towards the majority class. To overcome this problem, we propose new evaluation metrics for rumour stance detection that are not only robust to imbalanced data but also assign higher scores to systems capable of recognising the two most informative minority classes (support and deny).
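    The skew the abstract describes is easy to reproduce. Below is a minimal sketch (not the paper's own evaluation code) that computes accuracy and macro-F1 from scratch on a hypothetical SDQC-style label distribution, where comments dominate and support/deny are rare; a trivial majority-class baseline scores high accuracy while ignoring the minority classes entirely.

    ```python
    def accuracy(gold, pred):
        """Fraction of exactly matching labels."""
        return sum(g == p for g, p in zip(gold, pred)) / len(gold)

    def macro_f1(gold, pred, labels):
        """Unweighted mean of per-class F1 scores (F1 = 0 for empty classes)."""
        f1s = []
        for c in labels:
            tp = sum(g == c and p == c for g, p in zip(gold, pred))
            fp = sum(g != c and p == c for g, p in zip(gold, pred))
            fn = sum(g == c and p != c for g, p in zip(gold, pred))
            prec = tp / (tp + fp) if tp + fp else 0.0
            rec = tp / (tp + fn) if tp + fn else 0.0
            f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
        return sum(f1s) / len(f1s)

    # Hypothetical class distribution mimicking the SDQC imbalance:
    gold = ["comment"] * 100 + ["query"] * 10 + ["support"] * 5 + ["deny"] * 5
    pred = ["comment"] * len(gold)   # trivial majority-class baseline

    print(round(accuracy(gold, pred), 3))   # 0.833
    print(round(macro_f1(gold, pred,
                         ["support", "deny", "query", "comment"]), 3))  # 0.227
    ```

    A baseline that never predicts support or deny still reaches 83% accuracy here, which is the distortion the paper's proposed metrics are designed to penalise.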

    Rumour stance and veracity classification in social media conversations

    Social media platforms are popular as sources of news, often delivering updates faster than traditional news outlets. The absence of verification of posted information leads to wide proliferation of misinformation, and the propagation of such false information can have far-reaching consequences for society. Traditional manual verification by fact-checking professionals does not scale to the amount of misinformation being spread, so there is a need for an automated verification tool to assist the process of rumour resolution. In this thesis we address the problem of rumour verification in social media conversations from a machine learning perspective. Rumours that attract a lot of scepticism, in the form of questions and denials among the responses, are more likely to be proven false later (Zhao et al., 2015). We therefore explore how crowd wisdom, in the form of the stance of responses towards a rumour, can contribute to an automated rumour verification system. We study ways of determining the stance of each response in a conversation automatically, focusing on the importance of incorporating conversation structure into stance classification models and on identifying characteristics of supporting, denying, questioning and commenting posts. We then propose several models for rumour veracity classification that incorporate different feature sets, including the stance of the responses, attempting to find the set that leads to the most accurate models across several datasets. We view the rumour resolution process as a sequence of tasks: rumour detection, tracking, stance classification and, finally, rumour verification. We study relations between the tasks in the rumour verification pipeline through a joint learning approach, showing its benefits compared with single-task learning. Finally, we address the issue of transparency of model decisions by incorporating uncertainty estimation methods into rumour verification models. We conclude by pointing to directions for future research.
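    The thesis's core idea, that the stance distribution over replies is a signal for veracity, can be illustrated with a toy sketch. Everything here is hypothetical (the function names, the feature set, and the threshold rule are illustrative, not the thesis's actual models); it only shows how reply stances might be aggregated into features and mapped to the true/false/unverified veracity labels, loosely following the Zhao et al. (2015) observation that heavy scepticism predicts falsehood.

    ```python
    from collections import Counter

    def stance_features(stances):
        """Fraction of each SDQC stance among the replies (hypothetical feature set)."""
        counts = Counter(stances)
        n = len(stances)
        return [counts[s] / n for s in ("support", "deny", "query", "comment")]

    def verify(stances, scepticism_threshold=0.3):
        """Toy veracity rule: heavy scepticism (denials + questions) -> likely false."""
        support, deny, query, _ = stance_features(stances)
        if deny + query > scepticism_threshold:
            return "false"
        return "true" if support > deny else "unverified"

    print(verify(["deny", "query", "deny", "comment"]))  # "false"
    print(verify(["support", "support", "comment", "comment"]))  # "true"
    ```

    A real system would of course replace the hand-set rule with a learned classifier (and, per the thesis, learn the stance and veracity tasks jointly), but the aggregation step sketches how the stance of the crowd feeds the verification decision.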