
    Disruption and Deception in Crowdsourcing: Towards a Crowdsourcing Risk Framework

    While crowdsourcing has become increasingly popular among organizations, it has also become increasingly susceptible to unethical and malicious activities. This paper discusses recent examples of disruptive and deceptive efforts on crowdsourcing sites that affected the confidentiality, integrity, and availability of the crowdsourcing efforts’ services, stakeholders, and data. From these examples, we derive an organizing framework of risk types associated with disruption and deception in crowdsourcing, based on commonalities among incidents. The framework includes prank activities, the intentional placement of false information, hacking attempts, DDoS attacks, botnet attacks, privacy violation attempts, and data breaches. Finally, we discuss example controls that can assist in identifying and mitigating disruption and deception risks in crowdsourcing.

    Friendly Hackers to the Rescue: How Organizations Perceive Crowdsourced Vulnerability Discovery

    In recent years, crowdsourcing has increasingly been used for the discovery of vulnerabilities in software. While some organizations have extensively used crowdsourced vulnerability discovery, others have been very hesitant to embrace this method. In this paper, we report the results of a qualitative study that reveals organizational concerns and fears in relation to crowdsourced vulnerability discovery. The study is based on 36 key informant interviews with various organizations. It reveals a set of pre-adoption fears (i.e., lacking managerial expertise, low-quality submissions, distrust in security professionals, cost escalation, and lack of motivation of security professionals) as well as the post-adoption issues actually experienced. The study also identifies countermeasures that adopting organizations have used to mitigate fears and minimize issues. Implications for research and practice are discussed.

    SafeguaRDP: an Architecture for Mediated Control of Desktop Applications by Untrusted Crowd Workers

    The future of crowdsourcing depends on improving usability for the requesters who post jobs. Allowing workers to perform tasks directly on a requester’s computer could help, by obviating the need to adapt data into formats that a crowdsourcing platform can handle. However, granting remote access to a requester’s desktop would also pose the risk that workers might steal information or take malicious actions. This thesis presents SafeguaRDP, an architecture designed to enable future services in which a mediated and redacted desktop is shown to a remote worker. The redaction does not modify any files on the desktop; it only changes what is shown to the worker. This thesis contributes the rationale behind the architecture as well as a threat analysis based on accepted software security principles. To place the SafeguaRDP architecture in context, a set of eight vignettes illustrates potential applications that would be possible with future services based on it. With these applications, the transfer of digital labor may someday become as simple as the transfer of data today, all while preserving the privacy of the hiring party.

    Speeching: Mobile Crowdsourced Speech Assessment to Support Self-Monitoring and Management for People with Parkinson's

    We present Speeching, a mobile application that uses crowdsourcing to support the self-monitoring and management of speech and voice issues for people with Parkinson's (PwP). The application allows participants to audio-record short voice tasks, which are then rated and assessed by crowd workers. Speeching then feeds these results back to provide users with examples of how they were perceived by listeners unconnected to them (and thus unaccustomed to their speech patterns). We conducted our study in two phases. First, we assessed the feasibility of utilising the crowd to provide ratings of speech and voice that are comparable to those of experts. We then conducted a trial to evaluate how the provision of feedback through Speeching was valued by PwP. Our study highlights how applications like Speeching open up new opportunities for self-monitoring in digital health and wellbeing, and provide a means for those without regular access to clinical assessment services to practice, and get meaningful feedback on, their speech.
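    The feasibility question in the first phase, whether aggregated crowd ratings track expert judgments, can be sketched with a simple correlation check. The data and function below are hypothetical illustrations (not the paper's actual analysis or dataset): each clip gets several crowd ratings, whose mean is compared against a single expert rating via Pearson correlation.

    ```python
    from statistics import mean

    def crowd_expert_agreement(crowd_ratings, expert_ratings):
        """Pearson correlation between mean crowd rating per clip and the expert rating."""
        crowd_means = [mean(rs) for rs in crowd_ratings]
        mx, my = mean(crowd_means), mean(expert_ratings)
        cov = sum((x - mx) * (y - my) for x, y in zip(crowd_means, expert_ratings))
        var_x = sum((x - mx) ** 2 for x in crowd_means)
        var_y = sum((y - my) ** 2 for y in expert_ratings)
        return cov / (var_x ** 0.5 * var_y ** 0.5)

    # Hypothetical data: per-clip crowd ratings (scale 1-5) and one expert rating per clip.
    crowd = [[4, 5, 4], [2, 2, 3], [5, 4, 5], [1, 2, 1]]
    expert = [4.5, 2.0, 5.0, 1.5]
    r = crowd_expert_agreement(crowd, expert)
    ```

    A correlation near 1 would support using crowd workers as a stand-in for scarce clinical assessment; in practice one would also check absolute agreement (e.g., mean error), since high correlation alone tolerates systematic bias.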

    Crowdcloud: Cloud of the Crowd

    The ever-increasing utilisation of crowdsourcing across domains, its popularity as a method of accessing free or inexpensive labour, services, and innovation, and its ability to deliver fast solutions make it an attractive opportunity for both non-profit and for-profit organisations, while it also appeals to members of the crowd. In particular, many cloud-based projects have benefited from crowdsourcing their resource needs, and they rely on the crowd and the resources it provides, either for free or for a nominal fee. However, current cloud platforms either provide services to the crowd or request services from it. Moreover, cloud services generally involve a legally binding contract between cloud service providers and cloud service clients. In this paper, the opportunities for applying crowdsourcing principles in the cloud in a new fashion are reviewed by proposing the idea of crowdcloud. Crowdcloud simply refers to the availability of cloud infrastructure, cloud platform, and cloud software services to the crowd, by the crowd, with or without a legally binding contract. This paper discusses the differences between crowdcloud and similar existing notions. Then, a functional architecture is proposed for crowdcloud and its constituents. Some of the advantages of crowdcloud, along with its potential issues and how to circumvent or minimise them, are also reviewed and discussed.

    Deep Neural Networks for Bot Detection

    The problem of detecting bots, automated social media accounts governed by software but disguised as human users, has strong implications. For example, bots have been used to sway political elections by distorting online discourse, to manipulate the stock market, and to push anti-vaccine conspiracy theories that caused health epidemics. Most techniques proposed to date detect bots at the account level, by processing large amounts of social media posts and leveraging information from network structure, temporal dynamics, sentiment analysis, and so on. In this paper, we propose a deep neural network based on a contextual long short-term memory (LSTM) architecture that exploits both content and metadata to detect bots at the tweet level: contextual features are extracted from user metadata and fed as auxiliary input to LSTM deep nets processing the tweet text. A further contribution is a technique based on synthetic minority oversampling that generates a large labeled dataset, suitable for training deep nets, from a minimal amount of labeled data (roughly 3,000 examples of sophisticated Twitter bots). We demonstrate that, from just a single tweet, our architecture can achieve high classification accuracy (AUC > 96%) in separating bots from humans. We apply the same architecture to account-level bot detection, achieving nearly perfect classification accuracy (AUC > 99%). Our system outperforms the previous state of the art while leveraging a small and interpretable set of features and requiring minimal training data.
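    The synthetic minority oversampling idea mentioned above (SMOTE-style interpolation between a minority-class point and one of its nearest neighbours) can be sketched in a few lines. This is a generic illustration in pure Python, not the authors' implementation; the feature vectors are hypothetical.

    ```python
    import random

    def smote(minority, n_synthetic, k=3, seed=0):
        """Synthetic minority oversampling: interpolate each sampled point
        toward one of its k nearest minority-class neighbours."""
        rng = random.Random(seed)
        synthetic = []
        for _ in range(n_synthetic):
            x = rng.choice(minority)
            # k nearest neighbours of x within the minority class (excluding x itself)
            neighbours = sorted(
                (p for p in minority if p is not x),
                key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)),
            )[:k]
            nb = rng.choice(neighbours)
            gap = rng.random()  # interpolation factor in [0, 1)
            synthetic.append(tuple(a + gap * (b - a) for a, b in zip(x, nb)))
        return synthetic

    # Hypothetical 2-D feature vectors for the rare (bot) class.
    bots = [(0.1, 0.9), (0.2, 0.8), (0.15, 0.85), (0.3, 0.7)]
    augmented = bots + smote(bots, n_synthetic=20)
    ```

    Because each synthetic point lies on a segment between two real minority points, the augmented set stays inside the minority class's convex hull, which is what makes the technique safer than naive duplication for training deep nets on few labeled bots.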

    Crowd and AI Powered Manipulation: Characterization and Detection

    User reviews are ubiquitous. They power online review aggregators that influence our everyday decisions: what products to purchase (e.g., Amazon), which movies to watch (e.g., Netflix, HBO, Hulu), which restaurants to patronize (e.g., Yelp), and which hotels to book (e.g., TripAdvisor, Airbnb). In addition, policy makers rely on online commenting platforms like Regulations.gov and FCC.gov as a means for citizens to voice their opinions on public policy issues. However, showcasing the opinions of fellow users has a dark side, as these reviews and comments are vulnerable to manipulation. And as advances in AI continue, fake reviews generated by AI agents rather than users pose even more scalable and dangerous manipulation attacks. These attacks on online discourse can sway product ratings, manipulate opinions and the perceived support of key issues, and degrade our trust in online platforms. Previous efforts have mainly focused on highly visible anomalous behaviors captured by statistical modeling or clustering algorithms. While detecting such anomalous behaviors helps improve the reliability of online interactions, it misses subtle and difficult-to-detect behaviors. This research investigates two major thrusts centered on manipulation strategies. In the first thrust, we study crowd-based manipulation strategies in which crowds of paid workers organize to spread fake reviews. In the second thrust, we explore AI-based manipulation strategies, where crowd workers are replaced by scalable, and potentially undetectable, generative models of fake reviews. In particular, a key aspect of this work is to address the gap in previous anomaly-detection efforts where ground-truth data is missing (and hence evaluation is challenging). In addition, this work studies the capabilities and impact of model-based attacks as the next generation of online threats.
We propose inter-related methods for collecting evidence of these attacks and create new countermeasures for defending against them. The performance of the proposed methods is compared against other state-of-the-art approaches in the literature. We find that although crowd campaigns do not show obvious anomalous behavior, they can be detected given a careful formulation of their behaviors. And although model-generated fake reviews may appear legitimate on the surface, we find that they do not completely mimic the underlying distribution of human-written reviews, so we can leverage this signal to detect them.
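    The final observation, that generated reviews fail to fully match the distribution of human-written text, can be illustrated with a minimal distributional test. The sketch below is a toy illustration of the general idea, not the dissertation's method: it scores a review by the KL divergence between its smoothed word distribution and a reference distribution built from human reviews, so text that concentrates mass on off-distribution words scores higher. The corpora are hypothetical.

    ```python
    import math
    from collections import Counter

    def kl_divergence(counts, reference_counts, eps=1e-6):
        """KL divergence between a review's smoothed word distribution and a
        reference distribution built from human-written reviews."""
        vocab = set(counts) | set(reference_counts)
        total = sum(counts.values()) + eps * len(vocab)
        ref_total = sum(reference_counts.values()) + eps * len(vocab)
        div = 0.0
        for w in vocab:
            p = (counts[w] + eps) / total       # review's probability of word w
            q = (reference_counts[w] + eps) / ref_total  # reference probability
            div += p * math.log(p / q)
        return div

    # Hypothetical corpora: a reference built from human reviews, then two test reviews.
    human = ["great product works well", "solid build quality", "works well for the price"]
    reference = Counter(w for r in human for w in r.split())

    human_score = kl_divergence(Counter("works well great quality".split()), reference)
    generated_score = kl_divergence(Counter("best best best best best".split()), reference)
    ```

    A repetitive, off-distribution review diverges far more from the reference than an in-distribution one; real detectors would of course use far richer features (n-grams, language-model scores) over large corpora.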

    Software-based dialogue systems: Survey, taxonomy and challenges

    The use of natural language interfaces in human-computer interaction is under intense study in both dedicated scientific and industrial research. The latest contributions in the field, including deep learning approaches such as recurrent neural networks, the potential of context-aware strategies, and user-centred design approaches, have brought the community's attention back to software-based dialogue systems, generally known as conversational agents or chatbots. Nonetheless, given the novelty of the field, a generic, context-independent overview of the current state of research on conversational agents covering all the research perspectives involved is missing. Motivated by this context, this paper reports a survey of the current state of research on conversational agents through a systematic literature review of secondary studies. The conducted research is designed to develop an exhaustive perspective through a clear presentation of the aggregated knowledge published in recent literature across a variety of domains, research focuses, and contexts. As a result, this research proposes a holistic taxonomy of the different dimensions involved in the conversational agents field, which is expected to help researchers and to lay the groundwork for future research in the field of natural language interfaces.
    Acknowledgements: With the support of the Secretariat for Universities and Research of the Ministry of Business and Knowledge of the Government of Catalonia and the European Social Fund. The corresponding author gratefully acknowledges the Universitat Politècnica de Catalunya and Banco Santander for the financial support of his predoctoral grant FPI-UPC. This paper has been funded by the Spanish Ministerio de Ciencia e Innovación under project / funding scheme PID2020-117191RB-I00 / AEI/10.13039/501100011033.