Considering Human Aspects on Strategies for Designing and Managing Distributed Human Computation
A human computation system can be viewed as a distributed system in which the
processors are humans, called workers. Such systems harness the cognitive power
of a group of workers connected to the Internet to execute relatively simple
tasks, whose solutions, once grouped, solve a problem that systems equipped
with only machines could not solve satisfactorily. Examples of such systems are
Amazon Mechanical Turk and the Zooniverse platform. A human computation
application comprises a group of tasks, each of which can be performed by one
worker. Tasks may have dependencies on one another. In this study, we
propose a theoretical framework to analyze this type of application from a
distributed systems point of view. Our framework is established on three
dimensions that represent different perspectives in which human computation
applications can be approached: quality-of-service requirements, design and
management strategies, and human aspects. By using this framework, we review
human computation from the perspective of programmers seeking to improve the
design of human computation applications and managers seeking to increase the
effectiveness of human computation infrastructures in running such
applications. In doing so, besides integrating and organizing what has been
done in this direction, we also put into perspective the fact that the human
aspects of the workers in such systems introduce new challenges in terms of,
for example, task assignment, dependency management, and fault prevention and
tolerance. We discuss how these challenges relate to distributed systems and
other areas of knowledge.
Comment: 3 figures, 1 table
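The task model the abstract describes (an application as a group of tasks with dependencies, each task performed by one worker) can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's framework: it treats the tasks as a DAG and hands ready tasks to workers round-robin.

```python
from collections import deque

def assign_tasks(tasks, deps, workers):
    """Greedy dependency-aware assignment (illustrative sketch only).

    tasks   -- list of task ids
    deps    -- dict: task -> set of tasks it depends on
    workers -- list of worker ids, cycled round-robin
    Returns (task, worker) pairs in a dependency-respecting order.
    """
    indegree = {t: len(deps.get(t, set())) for t in tasks}
    dependents = {t: [] for t in tasks}
    for t, ds in deps.items():
        for d in ds:
            dependents[d].append(t)
    ready = deque(t for t in tasks if indegree[t] == 0)
    schedule = []
    i = 0
    while ready:
        t = ready.popleft()
        schedule.append((t, workers[i % len(workers)]))
        i += 1
        for u in dependents[t]:          # release tasks whose dependencies are done
            indegree[u] -= 1
            if indegree[u] == 0:
                ready.append(u)
    return schedule

# Hypothetical example: t2 and t3 depend on t1; t4 depends on t2 and t3.
plan = assign_tasks(
    ["t1", "t2", "t3", "t4"],
    {"t2": {"t1"}, "t3": {"t1"}, "t4": {"t2", "t3"}},
    ["alice", "bob"],
)
```

A real human computation platform would also have to handle the human aspects the abstract highlights: workers who abandon tasks (fault tolerance) or return wrong answers (quality control), which a purely machine-oriented scheduler like this ignores.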
On the Complexity of Mining Itemsets from the Crowd Using Taxonomies
We study the problem of frequent itemset mining in domains where data is not
recorded in a conventional database but only exists in human knowledge. We
provide examples of such scenarios, and present a crowdsourcing model for them.
The model uses the crowd as an oracle to find out whether an itemset is
frequent or not, and relies on a known taxonomy of the item domain to guide the
search for frequent itemsets. In the spirit of data mining with oracles, we
analyze the complexity of this problem in terms of (i) crowd complexity, which
measures the number of crowd questions required to identify the frequent
itemsets; and (ii) computational complexity, which measures the computational
effort required to choose the questions. We provide lower and upper complexity
bounds in terms of the size and structure of the input taxonomy, as well as the
size of a concise description of the output itemsets. We also provide
constructive algorithms that achieve the upper bounds, and consider more
efficient variants for practical situations.
Comment: 18 pages, 2 figures. To be published at ICDT'13. Added missing
acknowledgement
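The crowd-as-oracle idea can be sketched in a few lines. This is a simplified illustration, not the paper's algorithm: the taxonomy, the oracle, and all names are invented, and it exploits only the basic monotonicity that a node can be frequent only if its parent is, so the children of infrequent nodes are never asked about. The question counter corresponds to the abstract's notion of crowd complexity.

```python
def frequent_nodes(taxonomy, root, is_frequent):
    """Taxonomy-guided search for frequent items (toy sketch).

    taxonomy    -- dict: node -> list of child nodes
    root        -- the most general node
    is_frequent -- crowd oracle: node -> bool
    Returns (frequent nodes found, number of oracle questions asked).
    """
    frequent, questions = [], 0
    stack = [root]
    while stack:
        node = stack.pop()
        questions += 1                   # one crowd question per node visited
        if is_frequent(node):
            frequent.append(node)
            # descend only under frequent nodes; infrequent subtrees are pruned
            stack.extend(taxonomy.get(node, []))
    return frequent, questions

# Hypothetical toy taxonomy and simulated crowd answers.
tax = {"food": ["fruit", "dairy"], "fruit": ["apple", "pear"]}
truth = {"food", "fruit", "apple"}
freq, asked = frequent_nodes(tax, "food", lambda n: n in truth)
```

The pruning is what ties crowd complexity to the structure of the taxonomy: the deeper an infrequent node sits, the more questions its pruned subtree saves.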
Outsourcing labour to the cloud
Various forms of outsourcing to the online population are establishing themselves as cheap, effective methods of getting work done. These have revolutionised the traditional methods for innovation and have contributed to the enrichment of the concept of 'open innovation'. To date, the literature concerning this emerging topic has been spread across a diverse range of media, disciplines and academic journals. This paper attempts for the first time to survey the emerging phenomenon of open outsourcing of work to the internet using 'cloud computing'. The paper describes the volunteer origins and recent commercialisation of this business service. It then surveys the current platforms, applications and academic literature. Based on this, a generic classification for crowdsourcing tasks and a number of performance metrics are proposed. After discussing strengths and limitations, the paper concludes with an agenda for academic research in this new area.
An Abstract Formal Basis for Digital Crowds
Crowdsourcing, together with its related approaches, has become very popular
in recent years. All crowdsourcing processes involve the participation of a
digital crowd, a large number of people that access a single Internet platform
or shared service. In this paper we explore the possibility of applying formal
methods, typically used for the verification of software and hardware systems,
in analysing the behaviour of a digital crowd. More precisely, we provide a
formal description language for specifying digital crowds. We represent digital
crowds in which the agents do not directly communicate with each other. We
further show how this specification can provide the basis for sophisticated
formal methods, in particular formal verification.
Comment: 32 pages, 4 figures
Incentive Mechanisms for Participatory Sensing: Survey and Research Challenges
Participatory sensing is a powerful paradigm which takes advantage of
smartphones to collect and analyze data beyond the scale of what was previously
possible. Given that participatory sensing systems rely completely on the
users' willingness to submit up-to-date and accurate information, it is
paramount to effectively incentivize users' active and reliable participation.
In this paper, we survey existing literature on incentive mechanisms for
participatory sensing systems. In particular, we present a taxonomy of existing
incentive mechanisms for participatory sensing systems, which are subsequently
discussed in depth by comparing and contrasting different approaches. Finally,
we discuss an agenda of open research challenges in incentivizing users in
participatory sensing.
Comment: Updated version, 4/25/201