
    Exploiting context for rumour detection in social media

    Tools that are able to detect unverified information posted on social media during a news event can help to avoid the spread of rumours that turn out to be false. In this paper we compare a novel approach using Conditional Random Fields, which learns from the sequential dynamics of social media posts, with the current state-of-the-art rumour detection system, as well as other baselines. In contrast to existing work, our classifier does not need to observe tweets querying the stance of a post to deem it a rumour but, instead, exploits context learned during the event. Our classifier achieves improved precision and recall over the state-of-the-art classifier that relies on querying tweets, and also outperforms our best baseline. Moreover, the results provide evidence for the generalisability of our classifier.
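
    As a rough illustration of the sequential formulation described above, the sketch below treats the time-ordered stream of posts around a news event as a sequence and trains a linear-chain CRF to tag each post as rumour or non-rumour. The feature functions and the sklearn-crfsuite setup are illustrative assumptions, not the authors' actual feature set or implementation.

```python
# Minimal sketch: sequential rumour labelling with a linear-chain CRF.
# Assumes sklearn-crfsuite is installed; the features are illustrative only.
import sklearn_crfsuite

def post_features(post, prev_post):
    """Simple hand-crafted features for one post in an event timeline."""
    feats = {
        "has_question_mark": "?" in post["text"],
        "word_count": len(post["text"].split()),
        "retweet_bucket": min(post.get("retweets", 0), 1000) // 100,
        "user_verified": post.get("verified", False),
    }
    if prev_post is not None:
        # Context from the preceding post in the same event (the sequential signal).
        feats["prev_has_question_mark"] = "?" in prev_post["text"]
    return feats

def event_to_sequence(posts):
    """Convert one event's time-ordered posts into a CRF feature sequence."""
    return [post_features(p, posts[i - 1] if i > 0 else None)
            for i, p in enumerate(posts)]

def train_crf(events, labels):
    """events: list of post lists (one per event); labels: parallel lists of 'rumour'/'non-rumour' tags."""
    X = [event_to_sequence(posts) for posts in events]
    crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
    crf.fit(X, labels)
    return crf
```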

    Event Based Rumor Detection on Social Media for Digital Forensics and Information Security

    Advances in information technology such as social networking are, on one hand, a powerful source of news and information and, on the other, have posed new challenges for those policing cybercrime. Cybercriminals and terrorists spread rumors, that is, untrue or even malicious information, on social networks, which can cause massive panic and social unrest in our communities. The rumor detection problem on social networks has attracted considerable attention in recent years. Different types of rumors have different characteristics and require different techniques and approaches to detect. In this paper, we propose an efficient approach to detect event-based rumors on social media such as Twitter. Experiments illustrate that our event-based rumor detection method obtains a significant improvement over previous work.

    Context-Aware Message-Level Rumour Detection with Weak Supervision

    Social media has become the main source of all sorts of information, beyond being a communication medium. Its intrinsic nature can allow a continuous and massive flow of misinformation to have a severe impact worldwide. In particular, rumours emerge unexpectedly and spread quickly. It is challenging to track down their origins and stop their propagation. One of the most promising solutions is to identify rumour-mongering messages as early as possible, which is commonly referred to as "Early Rumour Detection (ERD)". This dissertation focuses on researching ERD on social media by exploiting weak supervision and contextual information. Weak supervision is a branch of ML where noisy and less precise sources (e.g. data patterns) are leveraged to compensate for limited high-quality labelled data (Ratner et al., 2017). This is intended to reduce the cost and increase the efficiency of the hand-labelling of large-scale data. This thesis aims to study whether identifying rumours before they go viral is possible and to develop an architecture for ERD at the individual post level. To this end, it first explores major bottlenecks of current ERD. It also uncovers a research gap between system design and its applications in the real world, which has received less attention from the ERD research community. One bottleneck is limited labelled data. Weakly supervised methods to augment limited labelled training data for ERD are introduced. The other bottleneck is enormous amounts of noisy data. A framework unifying burst detection based on temporal signals and burst summarisation is investigated to identify potential rumours (i.e. input to rumour detection models) by filtering out uninformative messages. Finally, a novel method which jointly learns rumour sources and their contexts (i.e. conversational threads) for ERD is proposed. An extensive evaluation setting for ERD systems is also introduced.
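
    The burst detection step mentioned in this abstract can be pictured, in simplified form, as flagging time windows whose posting volume rises well above the stream's average. The sketch below is an assumed, minimal version of that idea (fixed-width windows, a mean-plus-k-standard-deviations threshold); it is not the thesis's actual framework, which also includes burst summarisation.

```python
# Simplified burst detection over post timestamps: bucket posts into fixed
# windows and flag windows whose volume exceeds mean + k * std of the stream.
# Window size and threshold are illustrative assumptions.
from collections import Counter
import statistics

def detect_bursts(timestamps, window_seconds=300, k=2.0):
    """timestamps: iterable of POSIX times. Returns start times of bursty windows."""
    counts = Counter(int(t) // window_seconds for t in timestamps)
    volumes = list(counts.values())
    if len(volumes) < 2:
        return []
    threshold = statistics.fmean(volumes) + k * statistics.pstdev(volumes)
    return sorted(w * window_seconds for w, v in counts.items() if v > threshold)

# Posts falling in a flagged window would then be passed on (after summarisation)
# as candidate inputs to the rumour detection model.
```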

    Tackling Social Value Tasks with Multilingual NLP

    In recent years, deep learning applications have shown promise in tackling social value tasks such as hate speech and misinformation in social media. Neural networks provide an efficient automated solution that has replaced hand-engineered systems. Existing studies that have explored building resources, e.g. datasets, models, and NLP solutions, have achieved strong performance. However, most of these systems are limited to providing solutions only in English, neglecting the bulk of hateful content and misinformation that is generated in other languages, particularly so-called low-resource languages that have little labeled or unlabeled language data for training machine learning models (e.g. Turkish). This limitation is due to the lack of large collections of labeled or unlabeled corpora or manually crafted linguistic resources sufficient for building NLP systems in these languages. In this thesis, we set out to explore solutions for low-resource languages to mitigate the language gap in NLP systems for social value tasks. This thesis studies two tasks. First, we show that developing an automated classifier that captures hate speech and its nuances in a low-resource language variety with limited data is extremely challenging. To tackle this, we propose HateMAML, a model-agnostic meta-learning-based framework that effectively performs hate speech detection in low-resource languages. The proposed method uses a self-supervision strategy to overcome the limitation of data scarcity and produces a better pre-trained model for fast adaptation to an unseen target language. Second, this thesis aims to address the research gaps in rumour detection by proposing a modification of the standard Transformer and building on a multilingual pre-trained language model to perform rumour detection in multiple languages. Specifically, our proposed model MUSCAT prioritizes the source claims in multilingual conversation threads with co-attention transformers. Both methods can be seen as incorporating efficient transfer learning to mitigate issues in training models with small data. The findings yield accurate and efficient transfer learning models for low-resource languages. The results show that our proposed approaches outperform the state-of-the-art baselines in the cross-domain multilingual transfer setting. We also conduct ablation studies to analyze the characteristics of the proposed solutions and provide an empirical analysis outlining the challenges of data collection and of performing detection tasks in multiple languages.
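
    To make the meta-learning idea behind a framework like HateMAML concrete, the sketch below shows a generic first-order MAML-style training step in PyTorch: adapt a copy of a classifier on a small support set from one language, evaluate the adapted copy on a query set, and fold the resulting gradients back into the shared initialisation. This is an assumed illustration of model-agnostic meta-learning in general, not the HateMAML algorithm, its self-supervision strategy, or MUSCAT.

```python
# Generic first-order MAML-style meta-training step (sketch, not HateMAML itself).
# `model` is any classifier (e.g. a head over multilingual encoder features);
# each task is a (support, query) pair of labelled batches from one language.
import copy
import torch
import torch.nn.functional as F

def fomaml_step(model, meta_optimizer, tasks, inner_lr=1e-3, inner_steps=1):
    meta_optimizer.zero_grad()
    for (x_s, y_s), (x_q, y_q) in tasks:
        learner = copy.deepcopy(model)                 # task-specific copy of the shared init
        inner_opt = torch.optim.SGD(learner.parameters(), lr=inner_lr)
        for _ in range(inner_steps):                   # inner loop: adapt on the support set
            inner_opt.zero_grad()
            F.cross_entropy(learner(x_s), y_s).backward()
            inner_opt.step()
        query_loss = F.cross_entropy(learner(x_q), y_q)  # outer loss on the adapted copy
        grads = torch.autograd.grad(query_loss, list(learner.parameters()))
        # First-order approximation: treat the adapted model's gradients as
        # gradients for the shared initialisation and accumulate them.
        for p, g in zip(model.parameters(), grads):
            p.grad = g if p.grad is None else p.grad + g
    meta_optimizer.step()
```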

    Interpretable rumor detection in microblogs by attending to user interactions

    We address rumor detection by learning to differentiate between the community's response to real and fake claims in microblogs. Existing state-of-the-art models are tree models that operate over conversational trees. However, in social media, a user posting a reply might be replying to the entire thread rather than to a specific user. We propose a post-level attention model (PLAN) to model long-distance interactions between tweets with the multi-head attention mechanism in a transformer network. We investigated variants of this model: (1) a structure-aware self-attention model (StA-PLAN) that incorporates tree structure information in the transformer network, and (2) a hierarchical token- and post-level attention model (StA-HiTPLAN) that learns a sentence representation with token-level self-attention. To the best of our knowledge, we are the first to evaluate our models on both the PHEME data set and the Twitter15 and Twitter16 data sets. We show that our best models outperform current state-of-the-art models on both data sets. Moreover, the attention mechanism allows us to explain rumor detection predictions at both the token level and the post level.
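
    A rough picture of the post-level attention idea: encode each post in a thread as a fixed-size vector, let a transformer encoder attend across all posts (so a reply can interact with any other post, not just its parent), then pool and classify the whole thread. The sketch below assumes pre-computed post embeddings and standard PyTorch modules; it is not the released PLAN implementation and omits the structure-aware and token-level variants.

```python
# Post-level self-attention over a thread of post embeddings (PLAN-style sketch).
# Assumes posts are already encoded into d_model-sized vectors; hyperparameters
# are illustrative, not the paper's.
import torch
import torch.nn as nn

class PostLevelAttentionClassifier(nn.Module):
    def __init__(self, d_model=256, nhead=4, num_layers=2, num_classes=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead,
                                           dim_feedforward=4 * d_model,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.classifier = nn.Linear(d_model, num_classes)

    def forward(self, post_embeddings, padding_mask=None):
        # post_embeddings: (batch, num_posts, d_model); padding_mask: (batch, num_posts), True = pad
        h = self.encoder(post_embeddings, src_key_padding_mask=padding_mask)
        thread_repr = h.mean(dim=1)          # simple mean pooling over posts
        return self.classifier(thread_repr)

# Usage: logits = PostLevelAttentionClassifier()(torch.randn(8, 20, 256))
```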

    False News On Social Media: A Data-Driven Survey

    In the past few years, the research community has dedicated growing interest to the issue of false news circulating on social networks. The widespread attention on detecting and characterizing false news has been motivated by the considerable real-world backlash this threat has caused. As a matter of fact, social media platforms exhibit peculiar characteristics, with respect to traditional news outlets, which have been particularly favorable to the proliferation of deceptive information. They also present unique challenges for all kinds of potential interventions on the subject. As this issue becomes of global concern, it is also gaining more attention in academia. The aim of this survey is to offer a comprehensive study of the recent advances in the detection, characterization and mitigation of false news propagating on social media, as well as the challenges and open questions that await future research in the field. We take a data-driven approach, focusing on a classification of the features used in each study to characterize false information and on the datasets used for training classification methods. At the end of the survey, we highlight emerging approaches that look most promising for addressing false news.