11 research outputs found

    Fairness and Transparency in Crowdsourcing

    Despite the success of crowdsourcing, the question of ethics has not yet been addressed in its entirety. Existing efforts have studied fairness in worker compensation and in helping requesters detect malevolent workers. In this paper, we propose fairness axioms that generalize existing work and pave the way to studying fairness in task assignment, task completion, and worker compensation. Transparency, on the other hand, has been addressed through the development of plug-ins and forums that track workers' performance and rate requesters. Similarly to fairness, we define transparency axioms and advocate addressing it in a holistic manner by providing declarative specifications. We also discuss how fairness and transparency could be enforced and evaluated in a crowdsourcing platform.
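    As an illustration of what a declarative fairness specification could look like, here is a minimal, hypothetical Python sketch; the names (FairnessSpec, min_hourly_wage, check_assignments) are invented for this example and are not taken from the paper.

        from dataclasses import dataclass

        # Hypothetical declarative specification: a requester states fairness
        # constraints once, and the platform checks every assignment against them.
        @dataclass
        class FairnessSpec:
            min_hourly_wage: float       # lower bound on effective pay
            max_tasks_per_worker: int    # spread work across the crowd

        def check_assignments(spec, assignments):
            """Return workers whose assignments violate the specification.

            `assignments` maps a worker id to a list of (reward, minutes) pairs.
            """
            violations = {}
            for worker, tasks in assignments.items():
                total_reward = sum(reward for reward, _ in tasks)
                total_hours = sum(minutes for _, minutes in tasks) / 60
                hourly = total_reward / total_hours if total_hours else 0.0
                if hourly < spec.min_hourly_wage or len(tasks) > spec.max_tasks_per_worker:
                    violations[worker] = hourly
            return violations

        # Example: w1 earns 3.0/hour, below the declared 8.0/hour minimum.
        spec = FairnessSpec(min_hourly_wage=8.0, max_tasks_per_worker=50)
        print(check_assignments(spec, {"w1": [(0.50, 10)], "w2": [(2.00, 10)]}))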

    SuperSQL


    Towards a Gamified Equivalent Mutants Detection Platform

    10th IEEE International Conference on Software Testing, Verification and Validation (ICST), Tokyo, Japan, 13-17 March 2017. This poster presents a gamified system for equivalent mutant detection. The system can be used as a standalone tool by developers and testing teams alike, but we plan to deploy it on a crowdsourcing platform to evaluate the various parameters involved in detecting equivalent mutants, such as expertise (coding and testing), familiarity with the code base, complexity of the code and tests, and the measured likelihood of equivalent mutants. Funding: Science Foundation Ireland.
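    For context, an equivalent mutant is a syntactic change to a program that does not alter its observable behaviour, so no test can "kill" it. A minimal, hypothetical Python illustration (these functions are invented for this example and are not from the poster):

        # Original function under test.
        def is_positive(x):
            return x > 0

        # Mutant obtained by rewriting the comparison: for integer inputs,
        # `x >= 1` behaves exactly like `x > 0`, so no integer test case can
        # distinguish the two -- an equivalent mutant.
        def is_positive_mutant(x):
            return x >= 1

        # Both versions agree on every integer input, which is why detecting
        # such mutants needs human judgement (or a crowd) rather than more tests.
        assert all(is_positive(x) == is_positive_mutant(x) for x in range(-1000, 1000))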

    Task Composition in Crowdsourcing

    Crowdsourcing has gained popularity in a variety of domains as an increasing number of jobs are "taskified" and completed independently by a set of workers. A central process in crowdsourcing is the mechanism through which workers find tasks. On popular platforms such as Amazon Mechanical Turk, tasks can be sorted by dimensions such as creation date or reward amount. Research efforts on task assignment have focused on a requester-centric approach whereby tasks are proposed to workers in order to maximize overall task throughput, result quality, and cost. In this paper, we advocate complementing that with a worker-centric approach to task assignment, and examine the problem of producing, for each worker, a personalized summary of tasks that preserves overall task throughput. We formalize task composition for workers as an optimization problem that finds a representative set of k valid and relevant Composite Tasks (CTs). Validity enforces that a composite task complies with the task arrival rate and satisfies the worker's expected wage. Relevance imposes that tasks match the worker's qualifications. We show empirically that workers' experience is greatly improved, owing to task homogeneity within each CT and to the match between CTs and workers' skills. As a result, task throughput is improved.
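    To make the formulation concrete, here is a minimal, hypothetical Python sketch of one composition step, using a greedy stand-in for the paper's optimization; the class names, fields, and scoring below are invented for illustration and are not the authors' actual algorithm.

        from dataclasses import dataclass

        @dataclass
        class Task:
            tid: str
            reward: float    # payment for completing the task
            minutes: float   # estimated completion time
            skill: str       # qualification the task requires

        @dataclass
        class Worker:
            skills: set           # qualifications the worker holds
            expected_wage: float  # minimum acceptable hourly wage

        def compose_tasks(worker, tasks, k=5):
            """Build one Composite Task: keep tasks that are relevant (skill
            match) and valid (meet the expected wage), return the k best-paying."""
            candidates = [
                t for t in tasks
                if t.skill in worker.skills                              # relevance
                and t.reward / (t.minutes / 60) >= worker.expected_wage  # validity
            ]
            candidates.sort(key=lambda t: t.reward, reverse=True)
            return candidates[:k]

        worker = Worker(skills={"translation"}, expected_wage=9.0)
        tasks = [Task("t1", 1.50, 6, "translation"),
                 Task("t2", 0.20, 6, "translation"),
                 Task("t3", 3.00, 10, "labeling")]
        print([t.tid for t in compose_tasks(worker, tasks)])  # -> ['t1']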