
    Fact Checking in Community Forums

    Community Question Answering (cQA) forums are very popular nowadays, as they represent effective means for communities around particular topics to share information. Unfortunately, this information is not always factual. Thus, here we explore a new dimension in the context of cQA, which has been ignored so far: checking the veracity of answers to particular questions in cQA forums. As this is a new problem, we create a specialized dataset for it. We further propose a novel multi-faceted model, which captures information from the answer content (what is said and how), from the author profile (who says it), from the rest of the community forum (where it is said), and from external authoritative sources of information (external support). Evaluation results show a MAP value of 86.54, which is 21 absolute points above the baseline.
    Comment: AAAI-2018; Fact-Checking; Veracity; Community Question Answering; Neural Networks; Distributed Representation
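    For readers who want a concrete picture of how such a multi-faceted model might combine its signal groups, the following minimal sketch (not the authors' code; all feature names, dimensions, and the toy data are assumptions) concatenates content, author, forum, and external-support feature blocks into one representation and trains a single classifier on it.

```python
# Minimal sketch (not the authors' model): combine the four signal groups the
# abstract names -- answer content, author profile, forum context, and external
# support -- as feature blocks fed to one classifier. Feature names, dimensions,
# and the toy data are all hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200  # toy number of question-answer pairs

content_feats = rng.normal(size=(n, 8))   # what is said and how (e.g. text embeddings)
author_feats = rng.normal(size=(n, 4))    # who says it (e.g. reputation, activity)
forum_feats = rng.normal(size=(n, 4))     # where it is said (thread-level context)
external_feats = rng.normal(size=(n, 6))  # external support (similarity to Web evidence)
labels = rng.integers(0, 2, size=n)       # 1 = factual answer, 0 = not factual

# The multi-faceted idea reduced to its simplest form: one joint representation.
X = np.hstack([content_feats, author_feats, forum_feats, external_feats])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print("toy training accuracy:", clf.score(X, labels))
```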

    Fully Automated Fact Checking Using External Sources

    Given the constantly growing proliferation of false claims online in recent years, there has also been growing research interest in automatically distinguishing false rumors from factually true claims. Here, we propose a general-purpose framework for fully automatic fact checking using external sources, tapping the potential of the entire Web as a knowledge source to confirm or reject a claim. Our framework uses a deep neural network with LSTM text encoding to combine semantic kernels with task-specific embeddings that encode a claim together with pieces of potentially relevant text fragments from the Web, taking source reliability into account. The evaluation results show good performance on two different tasks and datasets: (i) rumor detection and (ii) fact checking of the answers to a question in community question answering forums.
    Comment: RANLP-2017
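    As a rough illustration of the kind of architecture the abstract describes, the sketch below is an assumption-laden toy model, not the paper's released implementation: the class name ClaimEvidenceScorer, the dimensions, and the single scalar reliability feature are all invented here. It encodes a claim and a Web snippet with a shared LSTM, appends a source-reliability score, and predicts whether the snippet supports or refutes the claim.

```python
# Illustrative sketch only (assumed architecture, not the paper's code): encode a
# claim and a Web snippet with a shared LSTM, append a source-reliability score,
# and predict whether the snippet supports the claim.
import torch
import torch.nn as nn

class ClaimEvidenceScorer(nn.Module):
    def __init__(self, vocab_size=10_000, emb_dim=100, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        # claim encoding + snippet encoding + 1 reliability feature -> verdict
        self.out = nn.Linear(2 * hidden + 1, 2)

    def encode(self, token_ids):
        _, (h, _) = self.lstm(self.emb(token_ids))
        return h[-1]                      # last hidden state as the text encoding

    def forward(self, claim_ids, snippet_ids, reliability):
        z = torch.cat([self.encode(claim_ids),
                       self.encode(snippet_ids),
                       reliability.unsqueeze(1)], dim=1)
        return self.out(z)                # logits for {refutes, supports}

# Toy forward pass with random token ids and a per-source reliability score.
model = ClaimEvidenceScorer()
claim = torch.randint(0, 10_000, (4, 12))
snippet = torch.randint(0, 10_000, (4, 40))
reliability = torch.rand(4)
print(model(claim, snippet, reliability).shape)  # torch.Size([4, 2])
```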

    Journalistic interventions: The structural factors affecting the global emergence of fact-checking

    Since the emergence of FactCheck.org in the United States in 2003, fact-checking interventions have expanded both domestically and globally. The Duke Reporter’s Lab identified nearly 100 active initiatives around the world in 2016. Building on previous exploratory work by Amazeen, this research utilizes the framework of critical juncture theory to examine why fact-checking interventions are spreading globally at this point in time. With fact-checking seen as a professional reform movement in the journalistic community, historical research on reform movements suggests several possible factors influencing the emergence of fact-checking, such as a decline in journalism, easy access to technology for the masses, and socio-political strife. This study offers empirical support that fact-checking may be understood as a democracy-building tool that emerges where democratic institutions are perceived to be weak or are under threat, and it examines similarities between the growth of fact-checking interventions and previous consumer reform movements. As politics increasingly adopts strategies orchestrated by marketing and advertising consultants and agencies – exemplified in the Brexit referendum – political fact-checking may benefit from examining the path of consumer reform movements. For, before fact-checking can be effective at informing individuals, it must first establish itself within a structural environment.

    Automated Fact Checking in the News Room

    Fact checking is an essential task in journalism; its importance has been highlighted by recently increased concerns about misinformation and efforts to combat it. In this paper, we present an automated fact-checking platform which, given a claim, retrieves relevant textual evidence from a document collection, predicts whether each piece of evidence supports or refutes the claim, and returns a final verdict. We describe the architecture of the system and the user interface, focusing on the choices made to improve its user-friendliness and transparency. We conduct a user study of the fact-checking platform in a journalistic setting: we integrate it with a collection of news articles and provide an evaluation of the platform using feedback from journalists in their workflow. We found that the predictions of our platform were correct 58% of the time, and 59% of the returned evidence was relevant.
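    The retrieve-classify-aggregate pipeline described above can be sketched schematically as follows; the retrieval and stance-classification steps are crude stubs standing in for a real search index and a trained SUPPORTS/REFUTES model, and all function names and the toy documents are hypothetical.

```python
# Schematic pipeline in the spirit of the platform described above: retrieve
# evidence, classify each piece as supporting or refuting, aggregate a verdict.
from collections import Counter

def retrieve_evidence(claim, documents, k=5):
    """Naive retrieval stub: rank documents by word overlap with the claim."""
    claim_words = set(claim.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(claim_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def classify_stance(claim, evidence):
    """Stub stance classifier: in practice a trained SUPPORTS/REFUTES model."""
    return "SUPPORTS" if "confirmed" in evidence.lower() else "REFUTES"

def fact_check(claim, documents):
    evidence = retrieve_evidence(claim, documents)
    stances = [classify_stance(claim, e) for e in evidence]
    verdict = Counter(stances).most_common(1)[0][0]   # simple majority vote
    return verdict, list(zip(evidence, stances))

docs = ["Officials confirmed the bridge reopened on Monday.",
        "Reports deny that the bridge was ever closed."]
print(fact_check("The bridge reopened on Monday", docs))
```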

    Computational fact checking from knowledge networks

    Traditional fact checking by expert journalists cannot keep up with the enormous volume of information that is now generated online. Computational fact checking may significantly enhance our ability to evaluate the veracity of dubious information. Here we show that the complexities of human fact checking can be approximated quite well by finding the shortest path between concept nodes under properly defined semantic proximity metrics on knowledge graphs. Framed as a network problem, this approach is feasible with efficient computational techniques. We evaluate this approach by examining tens of thousands of claims related to history, entertainment, geography, and biographical information, using a public knowledge graph extracted from Wikipedia. Statements independently known to be true consistently receive higher support via our method than do false ones. These findings represent a significant step toward scalable computational fact-checking methods that may one day mitigate the spread of harmful misinformation.
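    The shortest-path idea can be illustrated on a toy knowledge graph: score a (subject, object) pair by the cheapest path between the two nodes, penalizing paths that route through very generic, high-degree nodes. The weighting in the sketch below is one simple choice made for illustration, not necessarily the exact semantic proximity metric used in the paper, and the toy graph is invented.

```python
# Hedged sketch of shortest-path fact checking on a tiny toy knowledge graph.
# The node-degree penalty is an assumed, simplified proximity metric.
import math
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("Barack Obama", "Honolulu"), ("Honolulu", "Hawaii"),
    ("Barack Obama", "United States"), ("Hawaii", "United States"),
    ("Canberra", "Australia"), ("Australia", "United States"),
])

def semantic_proximity(graph, subject, obj):
    """Higher is better: penalize paths that route through hub nodes."""
    def step_cost(u, v, _edge_data):
        return math.log(1 + graph.degree(v))     # cost of stepping onto node v
    try:
        cost = nx.shortest_path_length(graph, subject, obj, weight=step_cost)
    except nx.NetworkXNoPath:
        return 0.0
    return 1.0 / (1.0 + cost)

print(semantic_proximity(G, "Barack Obama", "Hawaii"))     # close in the graph, higher
print(semantic_proximity(G, "Barack Obama", "Canberra"))   # roundabout path, lower
```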

    Practitioner perceptions: critical junctures and the global emergence and challenges of fact-checking

    Since 2003 and the emergence of FactCheck.org in the United States, fact-checking has expanded both domestically and internationally. As of February 2016, the Duke Reporter’s Lab identified nearly 100 active initiatives around the world. This research explores why fact-checking is spreading globally at this point in time. With fact-checking seen as a professional reform movement in the journalistic community (Graves, 2016), historical research on reform movements suggests several possible factors influencing the emergence of fact-checking, including a decline in journalism, easy access to technology for the masses, and socio-political strife (McChesney, 2007; Pickard, 2015; Stole, 2006). Using a phenomenological approach, two focus groups were conducted among fact-checkers during the 2015 Global Fact-checking Summit in London, England. Participants shared rich experiences about the conditions and contexts surrounding the emergence of, and the challenges facing, their organizations. Ultimately, as the purpose of this research is to help future fact-checkers around the world become aware of the circumstances under which fact-checking is most likely to emerge and thrive (or fail), recommendations from current global practitioners are offered.
    Accepted manuscript

    Finding Streams in Knowledge Graphs to Support Fact Checking

    The volume and velocity of information generated online prevent current journalistic practices from fact-checking claims at the same rate. Computational approaches for fact checking may be the key to helping mitigate the risks of massive misinformation spread. Such approaches can be designed not only to be scalable and effective at assessing the veracity of dubious claims, but also to boost a human fact checker's productivity by surfacing relevant facts and patterns to aid their analysis. To this end, we present a novel, unsupervised, network-flow based approach to determine the truthfulness of a statement of fact expressed in the form of a (subject, predicate, object) triple. We view a knowledge graph of background information about real-world entities as a flow network, and knowledge as a fluid, abstract commodity. We show that computational fact checking of such a triple then amounts to finding a "knowledge stream" that emanates from the subject node and flows toward the object node through paths connecting them. Evaluation on a range of real-world and hand-crafted datasets of facts related to entertainment, business, sports, geography, and more reveals that this network-flow model can be very effective in discerning true statements from false ones, outperforming existing algorithms on many test cases. Moreover, the model is expressive in its ability to automatically discover several useful path patterns and surface relevant facts that may help a human fact checker corroborate or refute a claim.
    Comment: Extended version of the paper in the proceedings of ICDM 2017
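    A minimal illustration of the flow-network view follows; the toy graph and the capacity choice are assumptions, not the authors' implementation. The knowledge graph is treated as a flow network whose edge capacities discount edges touching generic hub nodes, and the maximum flow between subject and object is taken as the strength of the "knowledge stream" supporting the pair.

```python
# Minimal sketch, not the authors' implementation: knowledge graph as a flow
# network. Edge capacities (an assumed choice) penalize high-degree hub nodes;
# the "knowledge stream" for (subject, object) is the max flow between them.
import math
import networkx as nx

KG = nx.Graph()
KG.add_edges_from([
    ("Rome", "Italy"), ("Italy", "Europe"), ("Rome", "Tiber"),
    ("Tiber", "Italy"), ("Paris", "France"), ("France", "Europe"),
])

# Standard reduction: copy each undirected edge in both directions for max-flow.
G = KG.to_directed()
for u, v in G.edges():
    G[u][v]["capacity"] = 1.0 / math.log(2 + max(KG.degree(u), KG.degree(v)))

flow_true, _ = nx.maximum_flow(G, "Rome", "Italy")     # well-supported pair
flow_false, _ = nx.maximum_flow(G, "Rome", "France")   # weakly-supported pair
print(f"knowledge stream Rome -> Italy:  {flow_true:.3f}")
print(f"knowledge stream Rome -> France: {flow_false:.3f}")
```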

    Revisiting the epistemology of fact-checking

    Joseph E. Uscinski and Ryden W. Butler (2013) argue that fact-checking should be condemned to the dustbin of history because the methods fact-checkers use to select statements, consider evidence, and render judgment fail to stand up to the rigors of scientific inquiry and threaten to stifle political debate. However, the premises upon which they build their arguments are flawed. By sampling from multiple “fact-checking agencies” that do not practice fact-checking on a regular basis in a consistent manner, they perpetuate the selection effects they criticize and thus undermine their own position. Furthermore, not only do their arguments suffer from overgeneralization, but they also fail to offer empirical quantification to support some of their anecdotal criticisms. This rejoinder offers a study demonstrating a high level of consistency in fact-checking and argues that, as long as unambiguous practices of deception continue, fact-checking has an important role to play in the United States and around the world.

    Estimating Fact-checking's Effects: Evidence From a Long-term Experiment During Campaign 2014

    This study reports the first experimental estimates of the longitudinal effects of exposure to fact-checking. We also conduct a comprehensive panel study of attitudes toward fact-checking and how they change during a campaign. Our results are generally encouraging. The public has very positive views of fact-checking and, when randomly exposed to it, comes to view the format even more favorably. Moreover, randomized exposure to fact-checks helps people become better informed, substantially increasing knowledge of the issues under discussion. We also document several important challenges facing fact-checkers, however. Most notably, interest in the format is skewed towards more educated and informed members of the public. Republicans also have less favorable views of the practice than Democrats. Continued growth of the medium will depend on broadening its appeal to these groups.

    Hoaxy: A Platform for Tracking Online Misinformation

    Massive amounts of misinformation have been observed to spread in an uncontrolled fashion across social media. Examples include rumors, hoaxes, fake news, and conspiracy theories. At the same time, several journalistic organizations devote significant effort to high-quality fact checking of online claims. The resulting information cascades contain instances of both accurate and inaccurate information, unfold over multiple time scales, and often reach audiences of considerable size. All these factors pose challenges for the study of the social dynamics of online news sharing. Here we introduce Hoaxy, a platform for the collection, detection, and analysis of online misinformation and its related fact-checking efforts. We discuss the design of the platform and present a preliminary analysis of a sample of public tweets containing both fake news and fact checking. We find that, in the aggregate, the sharing of fact-checking content typically lags that of misinformation by 10-20 hours. Moreover, the spread of fake news is dominated by very active users, while fact checking is a more grass-roots activity. With the increasing risks connected to massive online misinformation, social news observatories have the potential to help researchers, journalists, and the general public understand the dynamics of real and fake news sharing.
    Comment: 6 pages, 6 figures, submitted to the Third Workshop on Social News on the Web
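    The reported 10-20 hour lag suggests a simple measurement one could reproduce on such data. The sketch below uses toy timestamps rather than Hoaxy's code or data, and compares, per story, when misinformation tweets first appear with when the matching fact-check is first shared.

```python
# Toy lag measurement (invented data, not Hoaxy's): per story, compare the first
# tweet sharing the claim with the first tweet sharing the fact-check.
import pandas as pd

tweets = pd.DataFrame({
    "story": ["a", "a", "a", "b", "b", "b"],
    "kind":  ["claim", "claim", "factcheck", "claim", "factcheck", "factcheck"],
    "time":  pd.to_datetime([
        "2016-03-01 02:00", "2016-03-01 05:00", "2016-03-01 18:00",
        "2016-03-02 09:00", "2016-03-03 01:00", "2016-03-03 04:00",
    ]),
})

first_seen = tweets.groupby(["story", "kind"])["time"].min().unstack()
lag_hours = (first_seen["factcheck"] - first_seen["claim"]).dt.total_seconds() / 3600
print(lag_hours)                      # per-story lag of fact-checking behind the claim
print("median lag (h):", lag_hours.median())
```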