Time-Sensitive Bayesian Information Aggregation for Crowdsourcing Systems
Crowdsourcing systems commonly face the problem of aggregating multiple
judgments provided by potentially unreliable workers. In addition, several
aspects of the design of efficient crowdsourcing processes, such as defining
workers' bonuses, fair prices and time limits for tasks, involve knowledge
of the likely duration of the task at hand. Bringing this together, in this
work we introduce a new time-sensitive Bayesian aggregation method that
simultaneously estimates a task's duration and obtains reliable aggregations of
crowdsourced judgments. Our method, called BCCTime, builds on the key insight
that the time taken by a worker to perform a task is an important indicator of
the likely quality of the produced judgment. To capture this, BCCTime uses
latent variables to represent the uncertainty about the workers' completion
time, the tasks' duration and the workers' accuracy. To relate the quality of a
judgment to the time a worker spends on a task, our model assumes that each
task is completed within a latent time window in which all workers with a
propensity to genuinely attempt the labeling task (i.e., non-spammers) are
expected to submit their judgments. In contrast, workers with a lower
propensity to valid labeling, such as spammers, bots or lazy labelers, are
assumed to perform tasks considerably faster or slower than the time required
by normal workers. Specifically, we use efficient message-passing Bayesian
inference to learn approximate posterior probabilities of (i) the confusion
matrix of each worker, (ii) the propensity to valid labeling of each worker,
(iii) the unbiased duration of each task and (iv) the true label of each task.
Using two real-world public datasets for entity linking tasks, we show that
BCCTime produces up to 11% more accurate classifications and up to 100% more
informative estimates of a task's duration compared to state-of-the-art
methods.
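The time-window idea at the heart of BCCTime can be illustrated with a deliberately simplified sketch: estimate a plausible completion-time window for a task and down-weight judgments submitted outside it before voting. The window heuristic, weights and function name below are illustrative assumptions; the paper itself infers these quantities with message-passing Bayesian inference rather than fixed rules.

```python
from collections import defaultdict
from statistics import median

def aggregate_with_time_window(judgments, factor=3.0, outlier_weight=0.2):
    """Aggregate crowdsourced labels for one task, down-weighting workers
    whose completion times fall outside a plausible time window.

    judgments: list of (worker_id, label, seconds_taken) tuples.
    The window [median/factor, median*factor] is a crude stand-in for the
    latent time window of genuine workers; BCCTime learns it instead.
    """
    m = median(t for _, _, t in judgments)
    lo, hi = m / factor, m * factor
    votes = defaultdict(float)
    for _, label, t in judgments:
        # Suspiciously fast or slow submissions (spammers, bots,
        # lazy labelers) get a reduced vote weight.
        weight = 1.0 if lo <= t <= hi else outlier_weight
        votes[label] += weight
    return max(votes, key=votes.get)

# Two very fast "B" votes are down-weighted, so "A" wins despite 2 vs 3.
print(aggregate_with_time_window([
    ("w1", "A", 30), ("w2", "A", 33),
    ("w3", "B", 2), ("w4", "B", 3), ("w5", "B", 31),
]))  # → A
```

Here a plain majority vote would return "B"; the time weighting flips the outcome because two of the "B" judgments arrived implausibly fast.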
Integrating multiple criteria decision analysis in participatory forest planning
Forest planning in a participatory context often involves multiple stakeholders with conflicting interests. A promising approach for handling these complex situations is to integrate participatory planning and multiple criteria decision analysis (MCDA). The objective of this paper is to analyze strengths and weaknesses of such an integrated approach, focusing on how the use of MCDA has influenced the participatory process. The paper outlines a model for a participatory MCDA process with five steps: stakeholder analysis, structuring of the decision problem, generation of alternatives, elicitation of preferences, and ranking of alternatives. This model was applied in a case study of a planning process for the urban forest in Lycksele, Sweden. In interviews with stakeholders, criteria for four different social groups were identified. Stakeholders also identified specific areas important to them and explained what activities the areas were used for and the forest management they wished for there. Existing forest data were combined with information from the interviews to create a map in which the urban forest was divided into zones of different management classes. Three alternative strategic forest plans were produced based on the zonal map. The stakeholders stated their preferences individually using the Analytic Hierarchy Process in inquiry forms, and a ranking of alternatives and a consistency ratio were determined for each stakeholder. Rankings of alternatives were aggregated: first for each social group using the arithmetic mean, and then an overall aggregated ranking was calculated from the group rankings using the weighted arithmetic mean. The participatory MCDA process in Lycksele is assessed against five social goals: incorporating public values into decisions, improving the substantive quality of decisions, resolving conflict among competing interests, building trust in institutions, and educating and informing the public.
The results and assessment of the case study support the integration of participatory planning and MCDA as a viable option for handling complex forest-management situations. Key issues related to the MCDA methodology that need to be explored further were identified: 1) the handling of place-specific criteria, 2) the development of alternatives, 3) the aggregation of individual preferences into a common preference, and 4) the application and evaluation of the integrated approach in real case studies.
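The two-step aggregation described in the abstract (arithmetic mean within each social group, then a weighted arithmetic mean across groups) can be sketched as follows. The data layout and names are illustrative assumptions; the AHP-derived priority scores for each stakeholder are taken as given inputs.

```python
def aggregate_rankings(group_scores, group_weights):
    """Two-step aggregation of alternative scores.

    group_scores: {group: [{alternative: score} per stakeholder]}
                  (e.g. AHP priority vectors, one dict per stakeholder).
    group_weights: {group: weight}, weights summing to 1.
    Returns {alternative: overall aggregated score}.
    """
    # Step 1: arithmetic mean within each social group.
    group_means = {}
    for group, stakeholders in group_scores.items():
        alternatives = stakeholders[0].keys()
        group_means[group] = {
            a: sum(s[a] for s in stakeholders) / len(stakeholders)
            for a in alternatives
        }
    # Step 2: weighted arithmetic mean across groups.
    overall = {}
    for group, means in group_means.items():
        w = group_weights[group]
        for a, v in means.items():
            overall[a] = overall.get(a, 0.0) + w * v
    return overall
```

For example, with two hypothetical groups scoring three plans P1–P3, the overall ranking follows from the group weights rather than from a simple pooled mean of all stakeholders.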
How about building a transport model of the world?
The paper provides a specification, created by the recently completed BLUEPRINT project, for a world transport network model. The model should be able to make predictions (up to 100 years into the future) of transport flows throughout the world and hence of the global climate-changing emissions arising from transport. Furthermore, the model should: cover both passenger and freight traffic; feature all modes of transport (road, rail, non-motorised, water, air and pipeline); and represent both local and long-distance traffic. The paper describes how the model will be structured as the combination of a global model (distinguishing between approximately 30 different geographic regions of the world) and a number of regional and sub-regional models. Wherever feasible, existing regional models, or at least simplified versions of such models, will be used in this system. The overall modelling system should be owned jointly by an international network of world transport modellers, offering easy entry to other modellers who subscribe to the underlying spirit of the network. The paper recognises the scientific complexities associated with the uncertainties of predicting 100 years into the future and with the likely differences in modelling philosophy between the (already existing) regional models that might be used in the modelling system. To tackle these complexities, the paper defines a number of philosophy-of-science reference points, at the core of which is the distinction between objectivity and subjectivity. The paper finishes with a number of suggestions for next steps in building the model.
Flow-based reputation: more than just ranking
Recent years have seen growing interest in collaborative systems, such as
electronic marketplaces and P2P file-sharing systems, in which people interact
with one another. Such systems, however, are subject to security
and operational risks because of their open and distributed nature. Reputation
systems provide a mechanism to reduce such risks by building trust
relationships among entities and identifying malicious entities. A popular
reputation model is the so-called flow-based model. Most existing reputation
systems based on such a model provide only a ranking, without absolute
reputation values; this makes it difficult to determine whether entities are
actually trustworthy or untrustworthy. In addition, those systems ignore a
significant part of the available information; as a consequence, reputation
values may not be accurate. In this paper, we present a flow-based reputation
metric that gives absolute values instead of merely a ranking. Our metric makes
use of all the available information. We study, both analytically and
numerically, the properties of the proposed metric and the effect of attacks on
reputation values.
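A generic member of the flow-based family can be sketched as a power iteration over a row-normalised trust matrix (PageRank-style): reputation "flows" from each entity to the entities it trusts. This is an illustration of the general model the abstract refers to, not the paper's own metric, which additionally yields absolute reputation values and exploits all the available information.

```python
def flow_reputation(trust, alpha=0.85, tol=1e-10, max_iter=1000):
    """PageRank-style flow-based reputation (illustrative sketch).

    trust: n x n matrix where trust[i][j] is the trust entity i
    places in entity j. Returns a reputation vector summing to 1.
    """
    n = len(trust)
    # Row-normalise; entities that trust no one spread flow uniformly.
    P = []
    for row in trust:
        s = sum(row)
        P.append([v / s for v in row] if s > 0 else [1.0 / n] * n)
    r = [1.0 / n] * n
    for _ in range(max_iter):
        # Each entity's reputation is the trust-weighted flow it
        # receives, damped by alpha to guarantee convergence.
        r_next = [
            alpha * sum(r[i] * P[i][j] for i in range(n)) + (1 - alpha) / n
            for j in range(n)
        ]
        if sum(abs(a - b) for a, b in zip(r_next, r)) < tol:
            break
        r = r_next
    return r
```

Note that the output is a normalised vector, so on its own it supports only a ranking; turning such flows into absolute trustworthiness values is precisely the gap the paper addresses.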
Considering Human Aspects on Strategies for Designing and Managing Distributed Human Computation
A human computation system can be viewed as a distributed system in which the
processors are humans, called workers. Such systems harness the cognitive power
of a group of workers connected to the Internet to execute relatively simple
tasks, whose solutions, once grouped, solve a problem that systems equipped
with only machines could not solve satisfactorily. Examples of such systems are
Amazon Mechanical Turk and the Zooniverse platform. A human computation
application comprises a group of tasks, each of which can be performed by one
worker. Tasks may depend on one another. In this study, we
propose a theoretical framework for analyzing this type of application from a
distributed systems point of view. Our framework is established on three
dimensions that represent different perspectives in which human computation
applications can be approached: quality-of-service requirements, design and
management strategies, and human aspects. Using this framework, we review
human computation from the perspective of programmers seeking to improve the
design of human computation applications and managers seeking to increase the
effectiveness of human computation infrastructures in running such
applications. In doing so, besides integrating and organizing what has been
done in this direction, we also put into perspective the fact that the human
aspects of the workers in such systems introduce new challenges in terms of,
for example, task assignment, dependency management, and fault prevention and
tolerance. We discuss how they are related to distributed systems and other
areas of knowledge.
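One of the challenges the abstract names, dependency management, can be illustrated with a minimal sketch: tasks are released to workers only once their prerequisite tasks are complete, which amounts to ordering the task graph topologically (Kahn's algorithm). The data layout is an assumption for illustration, not part of the paper's framework.

```python
from collections import deque

def ready_task_order(deps):
    """Order tasks so each can be assigned to a worker only after all
    of its dependencies are done (Kahn's topological sort).

    deps: {task: set of prerequisite tasks}; every prerequisite must
    itself appear as a key. Raises on cyclic dependencies.
    """
    indegree = {t: len(d) for t, d in deps.items()}
    dependents = {t: [] for t in deps}
    for task, prereqs in deps.items():
        for p in prereqs:
            dependents[p].append(task)
    queue = deque(t for t, k in indegree.items() if k == 0)
    order = []
    while queue:
        task = queue.popleft()
        order.append(task)  # task is now safe to hand to a worker
        for nxt in dependents[task]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                queue.append(nxt)
    if len(order) != len(deps):
        raise ValueError("cyclic dependencies among tasks")
    return order
```

With human workers, unlike machine processors, release order also interacts with fault tolerance: a slow or abandoned prerequisite blocks every dependent task, which is one reason the paper treats these aspects jointly.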