    Fully Automated Fact Checking Using External Sources

    Given the constantly growing proliferation of false claims online in recent years, there has also been growing research interest in automatically distinguishing false rumors from factually true claims. Here, we propose a general-purpose framework for fully automatic fact checking using external sources, tapping the potential of the entire Web as a knowledge source to confirm or reject a claim. Our framework uses a deep neural network with LSTM text encoding to combine semantic kernels with task-specific embeddings that encode a claim together with potentially relevant text fragments from the Web, taking source reliability into account. The evaluation results show good performance on two different tasks and datasets: (i) rumor detection and (ii) fact checking of answers to questions in community question answering forums. Comment: RANLP-201
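The core idea of scoring a claim against Web evidence while weighting by source reliability can be illustrated with a deliberately simple stand-in: a bag-of-words cosine-similarity scorer rather than the paper's LSTM-based neural model. All snippets, reliability values, and the threshold below are hypothetical illustrations, not from the paper.

```python
from collections import Counter
import math

def bow(text):
    """Lowercased bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def check_claim(claim, evidence, threshold=0.3):
    """Score a claim against (snippet, source_reliability) pairs.

    Reliability-weighted average of claim/snippet similarities;
    the threshold is an arbitrary illustrative cutoff.
    """
    c = bow(claim)
    score = sum(r * cosine(c, bow(snippet)) for snippet, r in evidence)
    score /= sum(r for _, r in evidence) or 1.0
    return ("supported" if score >= threshold else "unverified", score)
```

A reliable, closely matching snippet dominates the weighted score, so a claim echoed by a trusted source is marked "supported" while an unrelated claim stays "unverified"; the paper's neural encoder plays the role of this similarity function, but learned end to end.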

    False News On Social Media: A Data-Driven Survey

    In the past few years, the research community has dedicated growing interest to the issue of false news circulating on social networks. The widespread attention to detecting and characterizing false news has been motivated by the considerable real-world backlash this threat has produced. As a matter of fact, social media platforms exhibit peculiar characteristics, with respect to traditional news outlets, which have been particularly favorable to the proliferation of deceptive information. They also present unique challenges for all kinds of potential interventions on the subject. As this issue becomes of global concern, it is also gaining more attention in academia. The aim of this survey is to offer a comprehensive study of the recent advances in the detection, characterization and mitigation of false news that propagates on social media, as well as the challenges and open questions that await future research in the field. We use a data-driven approach, focusing on a classification of the features that each study uses to characterize false information and on the datasets used for training classification methods. At the end of the survey, we highlight emerging approaches that look most promising for addressing false news.

    Data centric trust evaluation and prediction framework for IOT

    © 2017 ITU. Applying trust principles in the internet of things (IoT) has made it possible to provide more trustworthy services among the corresponding stakeholders. The most common method of assessing trust in IoT applications is to estimate the trust level of the end entities (entity-centric) relative to the trustor. In these systems, the trust level of the data is assumed to be the same as the trust level of the data source. However, most IoT-based systems are data-centric and operate in dynamic environments, which require immediate actions without waiting for a trust report from end entities. We address this challenge by extending our previous proposals on trust establishment for entities, based on their reputation, experience and knowledge, to trust estimation for data items [1-3]. First, we present a hybrid trust framework for evaluating both data trust and entity trust, which can serve as a basis for standardization in a future data-driven society. The modules of the proposed framework, including data trust metric extraction, data trust aggregation, evaluation and prediction, are elaborated. Finally, a possible design model is described to implement the proposed ideas.
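The aggregation and prediction modules described above can be sketched minimally: a weighted combination of per-metric trust scores (the metric names and weights below are hypothetical, not taken from the paper) plus an exponentially smoothed forecast of a data item's next trust value.

```python
def aggregate_data_trust(metrics, weights):
    """Weighted average of per-metric trust scores, each in [0, 1].

    `metrics` maps metric name -> observed score;
    `weights` maps metric name -> relative importance.
    """
    total_w = sum(weights[m] for m in metrics)
    return sum(weights[m] * v for m, v in metrics.items()) / total_w

def predict_trust(history, alpha=0.5):
    """Exponentially smoothed forecast of the next trust value.

    `alpha` controls how strongly recent observations dominate;
    0.5 is an arbitrary illustrative choice.
    """
    s = history[0]
    for x in history[1:]:
        s = alpha * x + (1 - alpha) * s
    return s
```

For example, aggregating reputation 0.8, experience 0.6 and knowledge 1.0 with weights 0.5/0.3/0.2 yields an overall data trust of 0.78; the paper's framework would plug its own extracted metrics and learned weights into this slot.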

    Credibility Evaluation of User-generated Content using Novel Multinomial Classification Technique

    Awareness of the features of the internet, easy access to data via mobile devices, and affordable data plans have caused a great deal of traffic on the internet. Digitization has brought many opportunities as well as challenges. Important advantages of digitization include paperless transactions and transparency in payments, while data privacy, fake news, and cyber-attacks are among the evolving challenges. The extensive use of social media networks and e-commerce websites has produced a large amount of user-generated information, misinformation, and disinformation on the Internet. The quality of information depends upon various stages of its life cycle, such as generation, medium of propagation, and consumption. Since the content is user-generated, information needs a quality assessment before consumption. The loss of information also needs to be examined using machine learning approaches, as the volume of content is extremely large. This research work focuses on novel multinomial classification techniques (based on the multinoulli distribution) to determine the quality of the information in given content. A single algorithm with some processing is not sufficient to evaluate information content, and various approaches are necessary to evaluate the quality of content. We propose a novel approach to calculate the bias, for which the machine learning model will be fitted appropriately to classify the content correctly. As an empirical study, the Rotten Tomatoes movie review dataset is used to apply the classification techniques. The accuracy of the system is evaluated using the ROC curve, confusion matrix, and MAP.
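Multinomial classification of text, as used in the study above, can be illustrated with a toy multinomial naive Bayes classifier with Laplace smoothing; this is a generic sketch on invented training sentences, not the paper's actual model or the Rotten Tomatoes data.

```python
import math
from collections import Counter, defaultdict

class MultinomialNB:
    """Minimal multinomial naive Bayes with add-one (Laplace) smoothing."""

    def fit(self, texts, labels):
        self.class_counts = Counter(labels)          # documents per class
        self.word_counts = defaultdict(Counter)      # class -> word frequencies
        self.vocab = set()
        for text, y in zip(texts, labels):
            for w in text.lower().split():
                self.word_counts[y][w] += 1
                self.vocab.add(w)
        return self

    def predict(self, text):
        n = sum(self.class_counts.values())
        best, best_lp = None, -math.inf
        for y, cy in self.class_counts.items():
            lp = math.log(cy / n)                    # log prior
            total = sum(self.word_counts[y].values()) + len(self.vocab)
            for w in text.lower().split():
                # Smoothed per-class word likelihood
                lp += math.log((self.word_counts[y][w] + 1) / total)
            if lp > best_lp:
                best, best_lp = y, lp
        return best
```

Trained on a handful of labeled reviews, the classifier assigns a new review to the class whose smoothed word distribution makes it most probable; the study's pipeline would sit on top of a model of this family, with its bias-calculation step adjusting how the model is fitted.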

    Semantic web learning technology design: addressing pedagogical challenges and precarious futures

    Semantic web technologies have the potential to extend and transform teaching and learning, particularly in those educational settings in which learners are encouraged to engage with ‘authentic’ data from multiple sources. In the course of the ‘Ensemble’ project, teachers and learners in different disciplinary contexts in UK Higher Education worked with educational researchers and technologists to explore the potential of such technologies through participatory design and rapid prototyping. These activities exposed some of the barriers to the development and adoption of emergent learning technologies, but also highlighted the wide range of factors, not all of them technological or pedagogical, that might contribute to enthusiasm for and adoption of such technologies. This suggests that the scope and purpose of research and design activities may need to be broadened, and the paper concludes with a discussion of how the tradition of operaismo or ‘workers’ enquiry’ may help to frame such activities. This is particularly relevant in a period when both educational institutions and the working environments for which learners are being prepared are becoming increasingly fractured, and some measure of ‘precarity’ is increasingly the norm.