Considering Human Aspects on Strategies for Designing and Managing Distributed Human Computation
A human computation system can be viewed as a distributed system in which the
processors are humans, called workers. Such systems harness the cognitive power
of a group of workers connected to the Internet to execute relatively simple
tasks, whose solutions, once grouped, solve a problem that systems equipped
with only machines could not solve satisfactorily. Examples of such systems are
Amazon Mechanical Turk and the Zooniverse platform. A human computation
application comprises a group of tasks, each of which can be performed by one
worker. Tasks may have dependencies on one another. In this study, we
propose a theoretical framework to analyze this type of application from a
distributed systems point of view. Our framework is established on three
dimensions that represent different perspectives in which human computation
applications can be approached: quality-of-service requirements, design and
management strategies, and human aspects. By using this framework, we review
human computation from the perspective of programmers seeking to improve the
design of human computation applications and managers seeking to increase the
effectiveness of human computation infrastructures in running such
applications. In doing so, besides integrating and organizing what has been
done in this direction, we also put into perspective the fact that the human
aspects of the workers in such systems introduce new challenges in terms of,
for example, task assignment, dependency management, and fault prevention and
tolerance. We discuss how they are related to distributed systems and other
areas of knowledge.
Comment: 3 figures, 1 table
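The abstract's notion of tasks with dependencies dispatched to workers can be pictured with a small scheduling sketch (the names and data structures here are illustrative assumptions, not the paper's framework): tasks whose prerequisites are complete are offered to workers in parallel "waves".

```python
# Hypothetical sketch of dependency management in a human computation
# application: tasks become available to workers only once all of their
# prerequisite tasks are done (structure invented for illustration).

def schedule(tasks, deps):
    """tasks: list of task ids; deps: dict task -> set of prerequisites.
    Returns a list of 'waves' of tasks that can be offered to workers
    in parallel, respecting dependencies."""
    remaining = {t: set(deps.get(t, ())) for t in tasks}
    waves = []
    while remaining:
        ready = [t for t, d in remaining.items() if not d]
        if not ready:
            raise ValueError("cyclic dependency among tasks")
        waves.append(ready)
        for t in ready:
            del remaining[t]
        for d in remaining.values():
            d.difference_update(ready)
    return waves

# Example: t3 depends on t1 and t2, which are independent of each other.
print(schedule(["t1", "t2", "t3"], {"t3": {"t1", "t2"}}))
# → [['t1', 't2'], ['t3']]
```

A real platform would also handle the human aspects the paper stresses, e.g. reassigning a task when a worker abandons it, which this sketch omits.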
Human Computation and Convergence
Humans are the most effective integrators and producers of information,
directly and through the use of information-processing inventions. As these
inventions become increasingly sophisticated, the substantive role of humans in
processing information will tend toward capabilities that derive from our most
complex cognitive processes, e.g., abstraction, creativity, and applied world
knowledge. Through the advancement of human computation - methods that leverage
the respective strengths of humans and machines in distributed
information-processing systems - formerly discrete processes will combine
synergistically into increasingly integrated and complex information processing
systems. These new, collective systems will exhibit an unprecedented degree of
predictive accuracy in modeling physical and techno-social processes, and may
ultimately coalesce into a single unified predictive organism, with the
capacity to address society's most wicked problems and achieve planetary
homeostasis.
Comment: Pre-publication draft of chapter. 24 pages, 3 figures; added
references to pages 1 and 3, and corrected typos
Modeling crowdsourcing as collective problem solving
Crowdsourcing is a process of accumulating ideas, thoughts, or information
from many independent participants, with the aim of finding the best solution
for a given challenge. Modern information technologies allow a massive number
of subjects to be involved in a more or less spontaneous way. Still, the full
potential of crowdsourcing is yet to be reached. We introduce a modeling
framework through which we study the effectiveness of crowdsourcing in relation
to the level of collectivism in facing the problem. Our findings reveal an
intricate relationship between the number of participants and the difficulty of
the problem, indicating the optimal size of the crowdsourced group. We discuss
our results in the context of modern utilization of crowdsourcing.
Comment: 19 pages, 3 figures
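The trade-off the abstract points to, an optimal crowd size for a given problem, can be illustrated with a toy model (the success probability q and per-participant cost c are my assumptions, not the paper's framework): adding participants raises the chance that someone solves the problem, but each participant adds a coordination cost, so net value peaks at a finite group size.

```python
# Toy model (assumptions mine, not the paper's): each of n independent
# participants solves the challenge with probability q, and each adds a
# fixed coordination cost c. Net value = 1 - (1 - q)**n - c*n.

def net_value(n, q, c):
    return 1 - (1 - q) ** n - c * n

def optimal_size(q, c, n_max=1000):
    # brute-force search for the crowd size with the highest net value
    return max(range(1, n_max + 1), key=lambda n: net_value(n, q, c))

print(optimal_size(0.05, 0.005))  # a finite optimum: neither 1 nor n_max
```

The optimum sits where the marginal chance of a new solver, q(1-q)^n, drops below the marginal cost c, echoing the paper's point that crowd size should match problem difficulty.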
On Cognitive Preferences and the Plausibility of Rule-based Models
It is conventional wisdom in machine learning and data mining that logical
models such as rule sets are more interpretable than other models, and that
among such rule-based models, simpler models are more interpretable than more
complex ones. In this position paper, we question this latter assumption by
focusing on one particular aspect of interpretability, namely the plausibility
of models. Roughly speaking, we equate the plausibility of a model with the
likelihood that a user accepts it as an explanation for a prediction. In
particular, we argue that, all other things being equal, longer explanations
may be more convincing than shorter ones, and that the predominant bias for
shorter models, which is typically necessary for learning powerful
discriminative models, may not be suitable when it comes to user acceptance of
the learned models. To that end, we first recapitulate evidence for and against
this postulate, and then report the results of an evaluation in a
crowd-sourcing study based on about 3,000 judgments. The results do not reveal
a strong preference for simple rules, whereas we can observe a weak preference
for longer rules in some domains. We then relate these results to well-known
cognitive biases such as the conjunction fallacy, the representativeness
heuristic, or the recognition heuristic, and investigate their relation to
rule length and plausibility.
Comment: V4: Another rewrite of section on interpretability to clarify focus
on plausibility and relation to interpretability, comprehensibility, and
justifiability
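The contrast between short and long rules can be made concrete with a toy sketch (the data and rules are invented for illustration): a rule is a conjunction of attribute tests, and adding conditions makes it more specific, covering fewer cases, even though, per the paper's argument, the longer explanation may read as more plausible to users.

```python
# Illustrative sketch (not from the paper): a rule-based model represented
# as a conjunction of attribute tests over toy examples.

def matches(rule, example):
    # a rule fires only if every one of its conditions holds
    return all(example.get(attr) == val for attr, val in rule)

animals = [
    {"legs": 4, "fur": True,  "barks": True},   # dog
    {"legs": 4, "fur": True,  "barks": False},  # cat
    {"legs": 2, "fur": False, "barks": False},  # bird
]

short_rule = [("legs", 4)]                                  # one condition
long_rule  = [("legs", 4), ("fur", True), ("barks", True)]  # three conditions

coverage_short = sum(matches(short_rule, a) for a in animals)
coverage_long  = sum(matches(long_rule, a) for a in animals)
print(coverage_short, coverage_long)  # the longer rule covers fewer cases
```

This specificity/coverage trade-off is why learners are biased toward short rules; the paper's point is that user acceptance may pull in the opposite direction.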
A data-driven game theoretic strategy for developers in software crowdsourcing: a case study
Crowdsourcing has the advantages of being cost-effective and saving time, a typical embodiment of collective wisdom and of collaborative development by community workers. However, this development paradigm of software crowdsourcing has not been widely used. One important reason is that requesters have limited knowledge of crowd workers’ professional skills and qualities. Another is that crowd workers in a competition may not receive an appropriate reward, which affects their motivation. To address this problem, this paper proposes a method of maximizing reward based on workers’ crowdsourcing ability, so that they can choose tasks according to their own abilities and obtain appropriate bonuses. Our method comprises two steps. First, it puts forward a method to evaluate crowd workers’ ability, and then analyzes the intensity of competition for tasks at Topcoder.com, an open-community crowdsourcing platform, on the basis of that ability. Second, following dynamic programming ideas, it builds game models under complete information for different cases, offering a reward-maximization strategy for workers by solving for a mixed-strategy Nash equilibrium. This paper employs crowdsourcing data from Topcoder.com to carry out experiments. The experimental results show that the distribution of workers’ crowdsourcing ability is uneven and can, to some extent, indicate the degree of activity of crowdsourcing tasks. Meanwhile, according to the reward-maximization strategy, a crowd worker can obtain the theoretical maximum reward.
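The mixed-strategy equilibrium mentioned in the abstract can be illustrated with a deliberately simplified two-worker, two-task game (this payoff structure is my assumption, not the paper's model): if both workers pick the same task, each expects half its reward; otherwise the lone worker takes it all. The symmetric equilibrium mixing probability then follows from the indifference condition.

```python
# Toy two-worker, two-task game (illustrative, not the paper's model).
# Rewards r1 >= r2. Expected payoff of picking task 1 when the opponent
# picks task 1 with probability p:  r1 * (1 - p/2).
# Expected payoff of picking task 2: r2 * (1 + p) / 2.
# Indifference gives  p = (2*r1 - r2) / (r1 + r2).

def mixed_equilibrium(r1, r2):
    """Probability of choosing task 1 in the symmetric mixed-strategy
    Nash equilibrium, clamped to [0, 1] when one task dominates."""
    p = (2 * r1 - r2) / (r1 + r2)
    return max(0.0, min(1.0, p))

print(round(mixed_equilibrium(200, 150), 3))  # → 0.714
print(mixed_equilibrium(300, 100))            # → 1.0 (task 1 dominates)
```

Note how a much richer task is contested with certainty, while comparable rewards spread workers across tasks, a flavor of the competition-intensity effect the abstract describes.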