From Task Classification Towards Similarity Measures for Recommendation in Crowdsourcing Systems
Task selection in micro-task markets can be supported by recommender systems
that help individuals find appropriate tasks. Previous work showed that when
selecting a micro-task, semantic aspects, such as the required action and the
comprehensibility, are rated as more important than factual aspects, such as
the payment or the required completion time. This work lays a foundation for
creating such similarity measures: we show that an automatic classification
based on task descriptions is possible, and we propose similarity measures to
cluster micro-tasks according to semantic aspects.
Comment: Work in Progress Paper at HCOMP 201
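The abstract does not spell out the proposed measures. As a minimal sketch of one common text-based approach that such work typically builds on, the following computes TF-IDF weights over task descriptions and compares them by cosine similarity (the example tasks and function names are hypothetical, not taken from the paper):

```python
import math
from collections import Counter

def tf_idf_vectors(docs):
    """Compute a sparse TF-IDF weight vector (dict) for each tokenized document."""
    n = len(docs)
    # document frequency: in how many documents each term appears
    df = Counter(term for doc in docs for term in set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse vectors represented as dicts."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    norm = math.sqrt(sum(w * w for w in u.values())) * \
           math.sqrt(sum(w * w for w in v.values()))
    return dot / norm if norm else 0.0

tasks = [
    "transcribe the audio clip into text".split(),
    "transcribe this short audio recording".split(),
    "label each image with the object shown".split(),
]
vecs = tf_idf_vectors(tasks)
# the two transcription tasks should be more similar to each other
print(cosine(vecs[0], vecs[1]) > cosine(vecs[0], vecs[2]))  # → True
```

Pairwise similarities like these can then feed any standard clustering algorithm to group micro-tasks by their descriptions.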
Considering Human Aspects on Strategies for Designing and Managing Distributed Human Computation
A human computation system can be viewed as a distributed system in which the
processors are humans, called workers. Such systems harness the cognitive power
of a group of workers connected to the Internet to execute relatively simple
tasks, whose solutions, once grouped, solve a problem that systems equipped
with only machines could not solve satisfactorily. Examples of such systems are
Amazon Mechanical Turk and the Zooniverse platform. A human computation
application comprises a group of tasks, each of which can be performed by one
worker. Tasks might have dependencies on one another. In this study, we
propose a theoretical framework to analyze this type of application from a
distributed systems point of view. Our framework is built on three
dimensions that represent different perspectives in which human computation
applications can be approached: quality-of-service requirements, design and
management strategies, and human aspects. Using this framework, we review
human computation from the perspective of programmers seeking to improve the
design of human computation applications and managers seeking to increase the
effectiveness of human computation infrastructures in running such
applications. In doing so, besides integrating and organizing prior work in
this direction, we also highlight that the human aspects of the workers in
such systems introduce new challenges in terms of,
for example, task assignment, dependency management, and fault prevention and
tolerance. We discuss how they are related to distributed systems and other
areas of knowledge.
Comment: 3 figures, 1 table
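The framework itself is conceptual, but the dependency-management dimension it names can be illustrated concretely. A toy sketch (all names and the wave-based policy are hypothetical, not the paper's method): release tasks to workers in waves, where a task becomes available only after all tasks it depends on are completed.

```python
def schedule(tasks, deps):
    """Group tasks into waves: a task is released to workers only after
    all of its prerequisite tasks are done.

    tasks: iterable of task ids
    deps:  dict mapping a task id to the set of task ids it depends on
    Returns a list of waves; tasks within one wave can be assigned to
    workers in parallel.
    """
    remaining = {t: set(deps.get(t, ())) for t in tasks}
    waves = []
    while remaining:
        # tasks with no unmet prerequisites are ready now
        ready = sorted(t for t, d in remaining.items() if not d)
        if not ready:
            raise ValueError("cyclic dependency among tasks")
        waves.append(ready)
        for t in ready:
            del remaining[t]
        for d in remaining.values():
            d.difference_update(ready)
    return waves

# three transcription fragments feed one aggregation task
print(schedule(["t1", "t2", "t3", "merge"], {"merge": {"t1", "t2", "t3"}}))
# → [['t1', 't2', 't3'], ['merge']]
```

Human aspects complicate this picture: unlike machine processors, workers may abandon a released task, which is where the fault prevention and tolerance challenges mentioned above come in.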
Task Recommendation in Crowdsourcing Platforms
Task distribution platforms, such as micro-task markets, project assignment portals, and job search engines, support the assignment of tasks to workers.
Public crowdsourcing platforms support the assignment of tasks in micro-task markets, helping task requesters complete their tasks and allowing workers to earn money.
Enterprise crowdsourcing platforms provide a marketplace within enterprises for the internal placement of tasks from employers to employees.
Both types of task distribution platform rely mostly on the workers' own selection capabilities or provide only simple filtering steps to reduce the number of tasks a worker can choose from.
This self-selection mechanism unfortunately allows tasks to be performed by under- or over-qualified workers.
Supporting workers with a task recommender system helps to remedy these deficits of existing task distribution platforms.
In this thesis, the requirements towards task recommendation in task distribution platforms are gathered with a focus on the worker's perspective, the design of appropriate assignment strategies is described, and innovative methods to recommend tasks based on their textual descriptions are provided.
Different viewpoints are taken into account by analyzing the domains of micro-tasks, project assignments, and job postings.
The requirements of enterprise crowdsourcing platforms are compiled based on the literature and a qualitative study, providing a conceptual design of task assignment strategies.
The demands of workers and their perception of task similarity on public crowdsourcing platforms are identified, leading to the design and implementation of additional methods to determine the similarity of micro-tasks.
The textual descriptions of micro-tasks, projects, and job postings are analyzed in order to provide innovative methods for task recommendation in these domains.
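The thesis's actual recommendation methods are not given in the abstract. As a rough sketch of the general idea of description-based task recommendation (all task texts and function names are hypothetical): rank open tasks by their word-overlap similarity to tasks the worker completed before.

```python
def jaccard(a, b):
    """Word-overlap (Jaccard) similarity between two task descriptions."""
    a, b = set(a.lower().split()), set(b.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(open_tasks, completed, top_k=2):
    """Rank open tasks by their best similarity to any completed task."""
    scored = [(max(jaccard(t, c) for c in completed), t) for t in open_tasks]
    return [t for _, t in sorted(scored, reverse=True)[:top_k]]

completed = ["transcribe an audio interview"]
open_tasks = [
    "transcribe a short podcast audio",
    "tag photos of street signs",
    "translate a German sentence",
]
print(recommend(open_tasks, completed, top_k=1))
# → ['transcribe a short podcast audio']
```

A production recommender would weight terms (e.g. TF-IDF) and fold in worker qualifications rather than rely on raw word overlap, but the ranking structure is the same.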