3 research outputs found

    Disruption and Deception in Crowdsourcing: Towards a Crowdsourcing Risk Framework

    While crowdsourcing has become increasingly popular among organizations, it has also become increasingly susceptible to unethical and malicious activities. This paper discusses recent examples of disruptive and deceptive efforts on crowdsourcing sites, which impacted the confidentiality, integrity, and availability of the crowdsourcing efforts’ services, stakeholders, and data. From these examples, we derive an organizing framework of risk types associated with disruption and deception in crowdsourcing, based on commonalities among incidents. The framework includes prank activities, the intentional placement of false information, hacking attempts, DDoS attacks, botnet attacks, privacy violation attempts, and data breaches. Finally, we discuss example controls that can assist in identifying and mitigating disruption and deception risks in crowdsourcing.

    Video annotation by crowd workers with privacy-preserving local disclosure

    Advancements in computer vision are still not reliable enough for detecting video content involving humans and their actions. Microtask crowdsourcing on task markets such as Amazon Mechanical Turk and Upwork can bring humans into the loop. However, engaging crowd workers to annotate non-public video footage risks revealing the identities of people in the video who may have a right to anonymity. This thesis demonstrates how we can engage untrusted crowd workers to detect behaviors and objects while robustly concealing the identities of all faces. We developed a web-based system that presents obfuscated videos to crowd workers and provides them with a mechanism to test their hypotheses about what behaviors and/or objects might be present in the videos. Our system, called Fovea, works by initially applying a heavy median blur to the videos. This guarantees privacy but impedes recognition of other content of interest. As part of this thesis, an algorithm was developed to calculate the radius of a safe-to-reveal region around a pixel. It was implemented in an interactive system that allows workers watching the blurred videos to selectively reveal small regions by clicking. We compared two approaches for local disclosure of information, foveated mode and keyhole mode, together with a non-interactive blur-only mode as a control. The results showed that both modes led to superior recognition of actions while keeping the odds of correct face recognition close to those of the control.
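    To make the blur-then-selectively-reveal idea above concrete, here is a minimal illustrative sketch in Python using OpenCV. It is not the Fovea implementation: the kernel size, the fixed reveal radius, and the function names are assumptions for demonstration, and the thesis's per-pixel safe-to-reveal radius algorithm is not reproduced here.

```python
# Illustrative sketch only, not the Fovea system described above.
# Assumes OpenCV (cv2) and NumPy; kernel size, radius, and names are hypothetical.
import cv2
import numpy as np


def blur_frame(frame: np.ndarray, kernel: int = 31) -> np.ndarray:
    """Apply a heavy median blur to a whole frame (the privacy baseline)."""
    return cv2.medianBlur(frame, kernel)  # kernel must be an odd integer


def reveal_region(original: np.ndarray, blurred: np.ndarray,
                  center: tuple[int, int], radius: int) -> np.ndarray:
    """Copy a circular patch from the original frame onto the blurred frame,
    emulating a worker clicking a point to locally disclose content."""
    mask = np.zeros(original.shape[:2], dtype=np.uint8)
    cv2.circle(mask, center, radius, 255, thickness=-1)  # filled disc
    out = blurred.copy()
    out[mask == 255] = original[mask == 255]
    return out


# Hypothetical usage with a single frame and a single click location:
# frame = cv2.imread("frame_0001.png")
# blurred = blur_frame(frame)
# shown = reveal_region(frame, blurred, center=(320, 240), radius=40)
```

    In the system described in the abstract, the reveal radius would be computed per pixel by the safe-to-reveal algorithm rather than fixed, so that disclosed regions never expose identifiable faces.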

    Online Social Deception and Its Countermeasures for Trustworthy Cyberspace: A Survey

    We are living in an era when online communication over social network services (SNSs) has become an indispensable part of people's everyday lives. As a consequence, online social deception (OSD) in SNSs has emerged as a serious threat in cyberspace, particularly for users vulnerable to such cyberattacks. Cyber attackers have exploited the sophisticated features of SNSs to carry out harmful OSD activities, such as financial fraud, privacy threats, and sexual/labor exploitation. Therefore, it is critical to understand OSD and develop effective countermeasures against it for building trustworthy SNSs. In this paper, we conducted an extensive survey covering (i) the multidisciplinary concepts of social deception; (ii) types of OSD attacks and their unique characteristics compared to other social network attacks and cybercrimes; (iii) comprehensive defense mechanisms embracing prevention, detection, and response (or mitigation) against OSD attacks, along with their pros and cons; (iv) datasets/metrics used for validation and verification; and (v) legal and ethical concerns related to OSD research. Based on this survey, we provide insights into the effectiveness of countermeasures and lessons from the existing literature. We conclude this survey with an in-depth discussion of the limitations of the state of the art and recommend future research directions in this area.
    Comment: 35 pages, 8 figures, submitted to ACM Computing Surveys