
    Trollslayer: Crowdsourcing and Characterization of Abusive Birds in Twitter

    As of today, abuse is a pressing issue for participants and administrators of Online Social Networks (OSN). Abuse on Twitter can stem from arguments aimed at influencing the outcome of a political election, from the use of bots to automatically spread misinformation, and, generally speaking, from activities that deny, disrupt, degrade, or deceive other participants and/or the network. Given the difficulty of finding and accessing a large enough sample of abuse ground truth on the Twitter platform, we built and deployed a custom crawler that we use to judiciously collect a new dataset from Twitter with the aim of characterizing the nature of abusive users, a.k.a. abusive birds, in the wild. We provide a comprehensive set of features based on users' attributes as well as social-graph metadata. The former includes metadata about the account itself, while the latter is computed from the social graph between the sender and the receiver of each message. Attribute-based features are useful for characterizing user accounts in an OSN, while graph-based features can reveal the dynamics of information dissemination across the network. In particular, we derive the Jaccard index as a key feature to reveal the benign or malicious nature of directed messages in Twitter. To the best of our knowledge, we are the first to propose such a similarity metric to characterize abuse in Twitter.
    Comment: SNAMS 201
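    The abstract does not spell out how the Jaccard index is derived, so the following is only a minimal sketch, assuming the feature measures the overlap between the sender's and the receiver's neighbor sets (e.g., followers/followees) for each directed message; the function names and example sets are illustrative, not taken from the paper.

```python
from typing import Set


def jaccard_index(a: Set[str], b: Set[str]) -> float:
    """Jaccard similarity |A ∩ B| / |A ∪ B| between two sets of user IDs."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)


def message_similarity_feature(sender_neighbors: Set[str],
                               receiver_neighbors: Set[str]) -> float:
    """Graph-based feature for a directed message: overlap between the
    sender's and the receiver's social circles. Under this reading, a low
    overlap could indicate a message sent to an unrelated account."""
    return jaccard_index(sender_neighbors, receiver_neighbors)


# Hypothetical neighbor sets for a sender and a receiver.
sender = {"u1", "u2", "u3"}
receiver = {"u3", "u4", "u5", "u6"}
print(message_similarity_feature(sender, receiver))  # 1/6 ≈ 0.167
```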

    Discouraging Abusive Behavior in Privacy-Preserving Online Social Networking Applications

    In this position paper we present the challenge of detecting abuse in a modern Online Social Network (OSN) while balancing data utility and privacy, with the goal of limiting the amount of sensitive user information processed during data collection, extraction, and analysis. While we are working with public-domain data available in a contemporary OSN, our goal is to design a thorough method for future alternative OSN designs that both protects users' sensitive information and discourages abuse. In this summary, we present initial results for detecting abusive behavior on Twitter. We plan to further investigate the impact of reducing input metadata on the quality of abuse detection; a rough sketch of such a study is shown below. In addition, we will consider defeating Byzantine behavior by opponents in the system.
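    The metadata-reduction study is only announced here, so the sketch below is one plausible setup rather than the authors' pipeline: it retrains the same classifier on progressively smaller feature groups and compares detection quality. The feature names, the random-forest model, and the pandas/scikit-learn tooling are all assumptions for illustration.

```python
# Sketch of a metadata-ablation experiment: train one classifier on
# progressively smaller (less privacy-sensitive) feature groups and
# compare detection quality. Feature names and the model choice are
# hypothetical, not taken from the paper.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

FEATURE_GROUPS = {
    "full":            ["account_age", "followers", "friends", "jaccard", "tweet_rate"],
    "no_social_graph": ["account_age", "followers", "friends", "tweet_rate"],
    "account_only":    ["account_age", "tweet_rate"],
}


def compare_feature_groups(df: pd.DataFrame, labels: pd.Series) -> None:
    """Report cross-validated F1 for each feature subset, quantifying how
    much detection quality is lost when sensitive metadata is dropped."""
    for name, cols in FEATURE_GROUPS.items():
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        scores = cross_val_score(clf, df[cols], labels, cv=5, scoring="f1")
        print(f"{name:>15}: mean F1 = {scores.mean():.3f}")
```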