
    Capturing the Forest or the Trees: Designing for Granularity in Data Crowdsourcing

    Crowdsourcing is a method of completing a task by engaging a large group of heterogeneous contributors. Data crowdsourcing is the crowdsourcing of data collection. In this paper, we demonstrate how data crowdsourcing projects can be differentiated along five dimensions: (1) the extent to which tasks are well-defined; (2) the duration of the task; (3) the type of value generated by the consumers of crowdsourced data; (4) the variety of contribution allowed when completing the task; and (5) the relative value of each contribution. We argue that the quality of information created by a crowd depends on the granularity of the contributions that contributors are able to make. Finally, we propose a set of principles for designing crowdsourcing systems that align the granularity of contributions with project objectives.

    Structuring Time through Participation in Micro-task Crowdsourcing: A Time Allocation Perspective

    Small payments in micro-task crowdsourcing markets appear unreasonable compared with remuneration for regular work in the workplace, yet hundreds of thousands of micro-tasks are completed each day, frequently by highly educated individuals. To explain this perplexing anomaly, we investigate individuals' continuous participation in micro-task crowdsourcing from a time allocation perspective. Drawing upon the theory of the allocation of time, we argue that the relative advantage of micro-task crowdsourcing over alternative activities and its reservation wage affect intent to continue and expected wage, respectively, which in turn affect intent to increase the participation level. Based on previous research on time structure, we propose time structure as another indicator of continuous participation in micro-task crowdsourcing. More importantly, the negative moderating effect of time structure is conjectured to be a salient driver of continuous participation in micro-task crowdsourcing. IT-enabled time structuring thus helps individuals fill dead time with micro-tasking online in spite of low payments.

    Crowdsourced network measurements: Benefits and best practices

    Network measurements are of high importance both for the operation of networks and for the design and evaluation of new management mechanisms. Therefore, several approaches exist for running network measurements, ranging from analyzing live traffic traces from campus or Internet Service Provider (ISP) networks to performing active measurements on distributed testbeds, e.g., PlanetLab, or involving volunteers. However, each method falls short, offering only a partial view of the network. For instance, the scope of passive traffic traces is limited to an ISP's network and its customers' habits, whereas active measurements might be biased by the population or node locations involved. To complement these techniques, we propose using (commercial) crowdsourcing platforms for network measurements. They permit a controllable, diverse and realistic view of the Internet and provide better control than measurements with voluntary participants. In this study, we compare crowdsourcing with traditional measurement techniques, describe possible pitfalls and limitations, and present best practices to overcome these issues. The contribution of this paper is a guideline for researchers on when and how to exploit crowdsourcing for network measurements.

    How well did I do? The effect of feedback on affective commitment in the context of microwork

    Crowdwork is a relatively new form of platform-mediated, paid online work that creates different types of relationships between the parties involved. This paper focuses on the crowdworker-requester relationship and investigates how the option of receiving feedback affects the affective commitment of microworkers. An online vignette experiment (N = 145) was conducted on a German crowdworking platform. We found that integrating feedback options into the task description positively influences affective commitment toward the requester as well as perceived requester attractiveness.

    Considering Human Aspects on Strategies for Designing and Managing Distributed Human Computation

    A human computation system can be viewed as a distributed system in which the processors are humans, called workers. Such systems harness the cognitive power of a group of workers connected to the Internet to execute relatively simple tasks whose solutions, once aggregated, solve a problem that systems equipped only with machines could not solve satisfactorily. Examples of such systems are Amazon Mechanical Turk and the Zooniverse platform. A human computation application comprises a group of tasks, each of which can be performed by one worker; tasks might have dependencies among each other. In this study, we propose a theoretical framework to analyze this type of application from a distributed systems point of view. Our framework is built on three dimensions that represent different perspectives from which human computation applications can be approached: quality-of-service requirements, design and management strategies, and human aspects. Using this framework, we review human computation from the perspective of programmers seeking to improve the design of human computation applications and of managers seeking to increase the effectiveness of human computation infrastructures in running such applications. In doing so, besides integrating and organizing what has been done in this direction, we also put into perspective the fact that the human aspects of the workers in such systems introduce new challenges in terms of, for example, task assignment, dependency management, and fault prevention and tolerance. We discuss how these challenges relate to distributed systems and other areas of knowledge.

    Game Theory Based Privacy Protection for Context-Aware Services

    In the era of context-aware services, users enjoy remarkable services based on data collected from a multitude of users. To receive these services, they risk leaking private information to adversaries who may eavesdrop on the data and/or to an untrusted service platform that sells off its data. Malicious adversaries may use leaked information to violate users' privacy in unpredictable ways. To protect users' privacy, many algorithms have been proposed that protect sensitive information by adding noise, thus causing a loss of context-aware service quality. Game theory has been utilized as a powerful tool to balance the trade-off between privacy protection level and service quality. However, most existing schemes fail to depict the mutual relationship between any two of the parties involved: user, platform, and adversary. They also overlook the interaction occurring between multiple users, as well as the interaction between any two attributes. To address these issues, this dissertation first proposes a three-party game framework to formulate the mutual interaction between the three parties and to study the optimal privacy protection level for context-aware services, thus optimizing service quality. Next, the dissertation extends the framework to a multi-user scenario and proposes a two-layer three-party game framework. This makes the proposed framework more realistic by further exploring the interaction not only between different parties but also between users. Finally, we analyze the impact of long-term time-series data and the active actions of the platform and adversary. To achieve this objective, we design a three-party Stackelberg game model to help the user decide whether to update information and the granularity of the updated information.
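The leader-follower structure of such a Stackelberg privacy game can be illustrated with a minimal sketch. Everything below is a hypothetical toy model: the payoff functions, the discretized strategy grids, and the parameter values are illustrative assumptions, not taken from the dissertation. A user (leader) commits to a noise level; an adversary (follower) observes it and best-responds with an inference effort.

```python
# Toy Stackelberg privacy game: user (leader) picks noise, adversary
# (follower) picks effort. Payoffs below are hypothetical illustrations.

def adversary_payoff(noise, effort):
    # Inference gain shrinks with noise; effort is quadratically costly.
    return effort * (1.0 - noise) - 0.5 * effort ** 2

def user_payoff(noise, effort):
    # Service quality degrades with noise; privacy loss grows with the
    # adversary's effective inference (effort scaled by 1 - noise).
    return -0.8 * noise - effort * (1.0 - noise)

levels = [i / 10 for i in range(11)]  # discretized strategy grid

def best_response(noise):
    # Follower: optimal effort given the observed noise level.
    return max(levels, key=lambda e: adversary_payoff(noise, e))

# Leader: the user anticipates the follower's best response.
best_noise = max(levels, key=lambda n: user_payoff(n, best_response(n)))
print(best_noise, best_response(best_noise))  # 0.6 0.4
```

The sequential structure is the point of the sketch: the leader optimizes against the follower's anticipated best response rather than against a fixed opponent strategy, which is what distinguishes a Stackelberg game from a simultaneous-move game.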

    The Analysis of Big Data on Cities and Regions - Some Computational and Statistical Challenges

    Big Data on cities and regions bring new opportunities and challenges to data analysts and city planners. On the one hand, they hold great promise for combining increasingly detailed data on each citizen with critical infrastructures to plan, govern and manage cities and regions, improve their sustainability, optimize processes and maximize the provision of public and private services. On the other hand, the massive sample size, high dimensionality and geo-temporal character of Big Data introduce unique computational and statistical challenges. This chapter provides an overview of the salient characteristics of Big Data and how these features drive a paradigm change in data management and analysis, as well as in the computing environment. Series: Working Papers in Regional Science

    On the Unique Features and Benefits of On-Demand Distribution Models

    To close the gap between current distribution operations and today's customer expectations, firms need to think differently about how resources are acquired, managed and allocated to fulfill customer requests. Rather than optimizing planned resource capacity acquired through ownership or long-term partnerships, this work focuses on a specific supply-side innovation: on-demand distribution platforms. On-demand distribution systems move, store, and fulfill goods by matching autonomous suppliers' resources (warehouse space, fulfillment capacity, truck space, delivery services) to requests on demand. On-demand warehousing systems can provide resource elasticity by allowing capacity decisions to be made at a finer granularity (at the pallet level) and with shorter commitment (monthly versus yearly) than construct or lease options. However, such systems are inherently more complex than traditional systems and have different cost and operational structures (e.g., higher variable costs but little or no fixed costs). New decision-supporting models are needed to capture these trade-offs.
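The fixed-versus-variable cost trade-off described above can be sketched as a simple breakeven calculation. All cost figures below are hypothetical placeholders chosen only to illustrate the structure, not data from the paper:

```python
# Breakeven sketch: traditional lease (high fixed, low variable cost)
# vs. on-demand warehousing (no fixed cost, higher per-pallet rate).
# All cost parameters are hypothetical illustrations.

LEASE_FIXED = 5000.0          # monthly lease commitment
LEASE_PER_PALLET = 2.0        # marginal handling cost under the lease
ON_DEMAND_PER_PALLET = 12.0   # pay-as-you-go rate per pallet-month

def lease_cost(pallets):
    return LEASE_FIXED + LEASE_PER_PALLET * pallets

def on_demand_cost(pallets):
    return ON_DEMAND_PER_PALLET * pallets

# Breakeven volume: fixed cost / per-pallet rate difference.
breakeven = LEASE_FIXED / (ON_DEMAND_PER_PALLET - LEASE_PER_PALLET)
print(breakeven)  # 500.0
```

Below the breakeven volume the pay-as-you-go rate dominates and on-demand capacity is cheaper; above it the fixed lease amortizes over enough pallets to win. This is the elasticity trade-off the abstract refers to: finer-grained, shorter-commitment capacity is valuable precisely when demand is low or volatile.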