
    Distilling Information Reliability and Source Trustworthiness from Digital Traces

    Online knowledge repositories typically rely on their users or dedicated editors to evaluate the reliability of their content. These evaluations can be viewed as noisy measurements of both information reliability and information source trustworthiness. Can we leverage these noisy, often biased evaluations to distill a robust, unbiased and interpretable measure of both notions? In this paper, we argue that the temporal traces left by these noisy evaluations give cues about the reliability of the information and the trustworthiness of the sources. We then propose a temporal point process modeling framework that links these temporal traces to robust, unbiased and interpretable notions of information reliability and source trustworthiness. Furthermore, we develop an efficient convex optimization procedure to learn the parameters of the model from historical traces. Experiments on real-world data gathered from Wikipedia and Stack Overflow show that our modeling framework accurately predicts evaluation events, provides an interpretable measure of information reliability and source trustworthiness, and yields interesting insights about real-world events. (Comment: Accepted at the 26th World Wide Web Conference, WWW '17.)
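    To make the idea of distilling reliability and trustworthiness from noisy evaluations concrete, the sketch below fits per-source and per-item scores by convex maximum likelihood on a toy set of binary evaluations. It deliberately ignores the temporal point process component of the paper and uses a plain logistic model instead; the event data, variable names, and regularization strength are all illustrative assumptions, not the authors' actual formulation.

        # Hypothetical sketch: distill per-source trustworthiness and per-item
        # reliability scores from noisy binary evaluations via convex optimization.
        # This is NOT the paper's temporal point process model; it is a static
        # logistic stand-in: P(positive evaluation) = sigmoid(t_source + r_item).
        import numpy as np
        from scipy.optimize import minimize
        from scipy.special import expit

        # Toy trace: (source_id, item_id, evaluation) triples, evaluation in {0, 1}.
        events = [(0, 0, 1), (0, 1, 1), (1, 0, 0), (1, 2, 1), (2, 1, 0), (2, 2, 1)]
        n_sources, n_items = 3, 3

        def neg_log_likelihood(params):
            t, r = params[:n_sources], params[n_sources:]   # trustworthiness, reliability
            nll = 0.0
            for s, i, y in events:
                p = expit(t[s] + r[i])                      # probability of a positive evaluation
                nll -= y * np.log(p) + (1 - y) * np.log(1 - p)
            return nll + 0.1 * np.sum(params ** 2)          # L2 term keeps the problem strictly convex

        result = minimize(neg_log_likelihood, np.zeros(n_sources + n_items), method="L-BFGS-B")
        trust, reliability = result.x[:n_sources], result.x[n_sources:]
        print("source trustworthiness:", np.round(trust, 2))
        print("item reliability:", np.round(reliability, 2))

    The regularized negative log-likelihood is convex in the score vector, so any local minimizer is global, which is the property the paper's learning procedure also relies on.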

    Trust Building Mechanisms in Online Health Communities and Their Impact on Information Adoption and Close Relationship Formation

    This study methodologically replicates Fan and Lederman’s (2018) work on trust formation mechanisms in online health communities (OHCs). Social capital theory sets the framework for the research. In the OHC context, it is the content contributors’ task to demonstrate trustworthiness by establishing the credibility of their previous postings. In contrast, under traditional social capital theory it is the recipients, rather than the contributors, who must initially perceive trustworthiness. We adopted the model, hypotheses, measurement, and statistical methods from the original study. Three of the nine hypotheses in our replication are not consistent with the original results; the inconsistencies primarily lie in the antecedents of two types of trust. We discuss possible explanations for these discrepancies and suggest additional data and statistical tests to validate our replication results.

    POISED: Spotting Twitter Spam Off the Beaten Paths

    Cybercriminals have found in online social networks a propitious medium to spread spam and malicious content. Existing techniques for detecting spam include predicting the trustworthiness of accounts and analyzing the content of these messages. However, advanced attackers can still successfully evade these defenses. Online social networks bring people who have personal connections or share common interests together to form communities. In this paper, we first show that users within a networked community share some topics of interest. Moreover, content shared on these social networks tends to propagate according to the interests of people: dissemination paths emerge in which some communities post similar messages, based on the interests of those communities. Spam and other malicious content, on the other hand, follow different spreading patterns. Following this insight, we present POISED, a system that leverages the differences in propagation between benign and malicious messages on social networks to identify spam and other unwanted content. We test our system on a dataset of 1.3M tweets collected from 64K users, and we show that our approach is effective in detecting malicious messages, reaching 91% precision and 93% recall. We also show that POISED's detection is more comprehensive than previous systems by comparing it against three state-of-the-art spam detection systems proposed in prior work; POISED significantly outperforms each of them. Moreover, through simulations, we show that POISED is effective in the early detection of spam messages and resilient against two well-known adversarial machine learning attacks.
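    The core intuition, that benign messages spread along community interests while spam spreads indiscriminately, can be illustrated with a small classifier over propagation features. The sketch below is a toy stand-in, not the POISED pipeline: the community histograms, labels, and model choice are assumptions made purely for illustration.

        # Hypothetical sketch of propagation-based spam detection in the spirit of POISED:
        # each message is represented by the communities it reached, and a classifier
        # learns that benign messages concentrate in interest-aligned communities while
        # spam spreads indiscriminately. Features and model are illustrative only.
        from sklearn.feature_extraction import DictVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import precision_score, recall_score

        # Toy data: per message, a histogram of shares per community, plus a spam label.
        messages = [
            ({"sports": 9, "music": 1}, 0),                          # benign: one interest community
            ({"music": 8, "movies": 2}, 0),
            ({"sports": 3, "music": 3, "movies": 3, "news": 3}, 1),  # spam: spreads everywhere
            ({"news": 2, "sports": 2, "music": 2, "movies": 2}, 1),
        ]
        features, labels = zip(*messages)

        X = DictVectorizer(sparse=False).fit_transform(features)
        clf = LogisticRegression().fit(X, labels)

        pred = clf.predict(X)
        print("precision:", precision_score(labels, pred))
        print("recall:", recall_score(labels, pred))

    In a realistic setting the features would come from observed dissemination paths across detected communities, and precision and recall would be measured on held-out messages rather than on the training set as in this toy example.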

    Categorizing Young Facebook Users Based On Their Differential Preference of Social Media Heuristics: A Q-Methodology Approach

    Background: Social media have become an integral part of modern society by providing platforms for users to create and exchange news, ideas, and information. The increasing use of social media has raised concerns about the reliability of the shared information, particularly information generated by anonymous users. Although prior studies have confirmed the important roles of heuristics and cues in users’ evaluation of information trustworthiness, there has been, to our knowledge, no research that categorizes Facebook users based on their approaches to evaluating information credibility. Method: We employed Q-methodology to extract insights from 55 young Vietnamese users and to categorize them into groups based on the distinct sets of heuristics they used to evaluate the trustworthiness of online information on Facebook. Results: We identified four distinct groups of young Facebook users based on how they evaluate online information trustworthiness. When evaluating trustworthiness on Facebook, these groups assigned different priorities to the characteristics of the online content, its original source, and the sharers or aggregators. We named these groups: (1) the balanced analyst, (2) the critical analyst, (3) the source analyst, and (4) the social network analyst. Conclusion: The findings contribute to the information processing literature. Moreover, marketing practitioners who aim to disseminate information effectively on social networks should take these user groups’ perspectives into consideration.
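    Q-methodology groups respondents by factoring the people rather than the variables: respondents' Q-sorts are correlated, factors are extracted and rotated, and each respondent is assigned to the factor on which they load most strongly. The sketch below illustrates that grouping step on random toy Q-sorts; the data, factor count, and rotation settings are assumptions and do not reproduce the study's actual extraction.

        # Hypothetical sketch of the Q-methodology grouping step (requires scikit-learn >= 0.24
        # for varimax rotation). Toy data only; not the study's actual Q-sorts or settings.
        import numpy as np
        from sklearn.decomposition import FactorAnalysis

        rng = np.random.default_rng(0)
        n_statements, n_respondents, n_factors = 30, 55, 4

        # Toy Q-sort matrix: rows are heuristic statements, columns are respondents'
        # rankings of how much they rely on each heuristic (e.g., -4 .. +4).
        q_sorts = rng.integers(-4, 5, size=(n_statements, n_respondents)).astype(float)

        # In Q-methodology persons, not variables, are factored: treating statements as
        # observations and respondents as variables yields per-respondent factor loadings.
        fa = FactorAnalysis(n_components=n_factors, rotation="varimax").fit(q_sorts)
        loadings = fa.components_.T                      # shape: (n_respondents, n_factors)

        # Assign each respondent to the factor with the largest absolute loading.
        groups = np.argmax(np.abs(loadings), axis=1)
        for f in range(n_factors):
            print(f"factor {f + 1}: {np.sum(groups == f)} respondents")

    With real Q-sorts, the resulting factors correspond to shared evaluation perspectives, which is how groups such as the "source analyst" or "social network analyst" emerge from the loadings.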