2 research outputs found

    Building a Task Blacklist for Online Social Systems

    Hiding inside the mutually beneficial model of online crowdsourcing are malicious campaigns that aim to manipulate search results or leave fake reviews on the web. Crowdsourced manipulation reduces the quality and trustworthiness of online social media, threatening the security of cyberspace as a whole. To mitigate this problem, we developed a classification model that filters out malicious campaigns from nearly 450,000 campaigns on popular crowdsourcing platforms. We then published the resulting blacklist on a website, so that parties adversely affected by malicious campaigns, such as targeted website owners, legitimate workers, and owners of the crowdsourcing platforms, can use it as a tool to identify and moderate potentially malicious campaigns on the web.
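
    As a rough illustration of the kind of campaign filtering described above, the sketch below trains a text classifier on labeled campaign descriptions and collects the ones predicted as malicious into a blacklist. The example campaigns, the TF-IDF features, and the logistic-regression classifier are hypothetical stand-ins, not the authors' actual pipeline.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        # Hypothetical labeled campaign descriptions (1 = malicious, 0 = legitimate).
        campaigns = [
            "Post a 5-star review for our app on the store",
            "Transcribe this 10-minute audio clip into English text",
            "Create 50 accounts and upvote the linked forum thread",
            "Label these street photos as containing cars or not",
        ]
        labels = [1, 0, 1, 0]

        # TF-IDF features feeding a linear classifier; any supervised
        # text classifier could stand in here.
        model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
        model.fit(campaigns, labels)

        # New, unlabeled campaigns: those predicted malicious go on the blacklist.
        new_campaigns = [
            "Write fake positive reviews for a restaurant",
            "Tag objects that appear in these images",
        ]
        blacklist = [c for c, y in zip(new_campaigns, model.predict(new_campaigns)) if y == 1]
        print(blacklist)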

    Signed Latent Factors for Spamming Activity Detection

    Due to the increasing trend of performing spamming activities (e.g., Web spam, deceptive reviews, fake followers) on various online platforms to gain undeserved benefits, spam detection has emerged as a hot research issue. Previous attempts to combat spam mainly employ features related to metadata, user behaviors, or relational ties. These works have made considerable progress in understanding and filtering spamming campaigns, yet the problem remains far from fully solved. Almost all the proposed features focus on a limited number of observed attributes or explainable phenomena, making it difficult for existing methods to achieve further improvement. To broaden the vision of solving the spam problem and to address long-standing challenges (class imbalance and graph incompleteness) in the spam detection area, we propose a new approach that utilizes signed latent factors to filter fraudulent activities. In this scenario, the spam-contaminated relational datasets of multiple online applications are interpreted as a unified signed network. Two competitive and highly dissimilar latent factor mining (LFM) models are designed, based on multi-relational likelihood estimation (LFM-MRLE) and signed pairwise ranking (LFM-SPR), respectively. We then explore how to apply the mined latent factors to spam detection tasks. Experiments on real-world datasets from different kinds of Web applications (social media and a Web forum) indicate that the LFM models outperform state-of-the-art baselines in detecting spamming activities. By specifically manipulating the experimental data, we further validate the effectiveness of our methods in dealing with the incompleteness and imbalance challenges.
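
    As a rough numpy sketch of the signed-pairwise-ranking idea, the snippet below learns latent factors over a synthetic signed network so that, for each source node, positively linked targets score higher than negatively linked ones (a BPR-style objective). The graph, hyperparameters, and update rule are illustrative assumptions, not the paper's exact LFM-SPR formulation.

        import numpy as np
        from collections import defaultdict

        rng = np.random.default_rng(0)
        n_nodes, dim, lr, reg, steps = 100, 16, 0.05, 0.01, 5000

        U = 0.1 * rng.standard_normal((n_nodes, dim))  # source-side latent factors
        V = 0.1 * rng.standard_normal((n_nodes, dim))  # target-side latent factors

        # Synthetic signed edges grouped by source node; sign +1 = benign tie,
        # sign -1 = spam-related tie (purely illustrative data).
        pos_by_src, neg_by_src = defaultdict(list), defaultdict(list)
        for _ in range(2000):
            s, t = rng.integers(n_nodes), rng.integers(n_nodes)
            (pos_by_src if rng.random() < 0.7 else neg_by_src)[s].append(t)

        # Sources with both positive and negative links yield ranking triples.
        sources = [s for s in pos_by_src if s in neg_by_src]

        def sigmoid(x):
            return 1.0 / (1.0 + np.exp(-x))

        for _ in range(steps):
            s = sources[rng.integers(len(sources))]
            p = pos_by_src[s][rng.integers(len(pos_by_src[s]))]  # positively linked target
            n = neg_by_src[s][rng.integers(len(neg_by_src[s]))]  # negatively linked target
            diff = U[s] @ (V[p] - V[n])   # pairwise score difference
            g = sigmoid(-diff)            # gradient weight of the ranking loss
            u_s = U[s].copy()             # snapshot before sequential updates
            U[s] += lr * (g * (V[p] - V[n]) - reg * U[s])
            V[p] += lr * (g * u_s - reg * V[p])
            V[n] += lr * (-g * u_s - reg * V[n])

        # The learned factors U and V could then feed a downstream spam classifier.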