Whom to Ask? Jury Selection for Decision Making Tasks on Micro-blog Services
It is common for people to seek knowledge on micro-blog services by
asking others decision-making questions. In this paper, we study the Jury
Selection Problem (JSP), which applies crowdsourcing to decision-making tasks on
micro-blog services. Specifically, the problem is to enroll a subset of the crowd
under a limited budget whose aggregated wisdom, via a Majority Voting scheme, has
the lowest probability of producing a wrong answer (the Jury Error Rate, JER).
Because individual error rates vary across the crowd, computing JER is
non-trivial. First, we state JER explicitly as the probability that the
number of wrong jurors exceeds half the jury size. To avoid the
exponential cost of computing JER naively, we propose two efficient
algorithms and an effective bounding technique. Furthermore, we study the Jury
Selection Problem under two crowdsourcing models: one for altruistic
users (AltrM) and one for incentive-requiring users (PayM), who require
extra payment when enrolled in a task. For the AltrM model, we prove the
monotonicity of JER in individual error rates and propose an efficient exact
algorithm for JSP. For the PayM model, we prove that JSP is NP-hard
and propose an efficient greedy-based heuristic algorithm. Finally, we conduct
a series of experiments to investigate the traits of JSP, and validate the
efficiency and effectiveness of our proposed algorithms on both synthetic and
real micro-blog data.
Comment: VLDB201
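The JER described above is the upper tail of a Poisson binomial distribution over the jurors' heterogeneous error rates. As a minimal sketch (not the paper's proposed algorithms, and with a hypothetical function name), it can be computed exactly by dynamic programming rather than by exponential enumeration:

```python
def jury_error_rate(error_rates):
    """Jury Error Rate: probability that a strict majority of jurors
    answer wrongly, i.e. the upper tail of a Poisson binomial
    distribution, computed by O(n^2) dynamic programming."""
    n = len(error_rates)
    dist = [1.0]  # dist[k] = P(exactly k jurors wrong so far)
    for p in error_rates:
        nxt = [0.0] * (len(dist) + 1)
        for k, prob in enumerate(dist):
            nxt[k] += prob * (1.0 - p)  # this juror answers correctly
            nxt[k + 1] += prob * p      # this juror answers wrongly
        dist = nxt
    return sum(dist[n // 2 + 1 :])  # more than half the jurors wrong
```

For example, a jury of five jurors, each with error rate 0.3, has a JER of about 0.163, illustrating how majority voting drives the collective error below any individual's.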
Optimum Statistical Estimation with Strategic Data Sources
We propose an optimal mechanism for providing monetary incentives to the data
sources of a statistical estimator such as linear regression, so that
high-quality data is provided at low cost, in the sense that the sum of payments and
estimation error is minimized. The mechanism applies to a broad range of
estimators, including linear and polynomial regression, kernel regression, and,
under some additional assumptions, ridge regression. It also generalizes to
several objectives, including minimizing estimation error subject to budget
constraints. Besides our concrete results for regression problems, we
contribute a mechanism design framework through which to design and analyze
statistical estimators whose examples are supplied by workers who incur a cost
for labeling those examples.
Engineering Crowdsourced Stream Processing Systems
A crowdsourced stream processing (CSP) system is a system that incorporates
crowdsourced tasks in the processing of a data stream. This can be seen as
enabling crowdsourcing work to be applied on a sample of large-scale data at
high speed, or equivalently, enabling stream processing to employ human
intelligence. It also leads to a substantial expansion of the capabilities of
data processing systems. Engineering a CSP system requires the combination of
human and machine computation elements. From a general systems theory
perspective, this means taking into account inherited as well as emerging
properties from both these elements. In this paper, we position CSP systems
within a broader taxonomy, outline a series of design principles and evaluation
metrics, present an extensible framework for their design, and describe several
design patterns. We showcase the capabilities of CSP systems by performing a
case study that applies our proposed framework to the design and analysis of a
real system (AIDR) that classifies social media messages during time-critical
crisis events. Results show that compared to a pure stream processing system,
AIDR can achieve a higher data classification accuracy, while compared to a
pure crowdsourcing solution, the system makes better use of human workers by
requiring much less manual work effort.
A-posteriori provenance-enabled linking of publications and datasets via crowdsourcing
This paper aims to share with the digital library community different opportunities to leverage crowdsourcing for a-posteriori capturing of dataset citation graphs. We describe a practical approach, which exploits one possible crowdsourcing technique to collect these graphs from domain experts and proposes their publication as Linked Data using the W3C PROV standard. Based on our findings from a study we ran during the USEWOD 2014 workshop, we propose a semi-automatic approach that generates metadata by leveraging information extraction as an additional step to crowdsourcing, to generate high-quality data citation graphs. Furthermore, we consider the design implications on our crowdsourcing approach when non-expert participants are involved in the process.
On Cognitive Preferences and the Plausibility of Rule-based Models
It is conventional wisdom in machine learning and data mining that logical
models such as rule sets are more interpretable than other models, and that
among such rule-based models, simpler models are more interpretable than more
complex ones. In this position paper, we question this latter assumption by
focusing on one particular aspect of interpretability, namely the plausibility
of models. Roughly speaking, we equate the plausibility of a model with the
likeliness that a user accepts it as an explanation for a prediction. In
particular, we argue that, all other things being equal, longer explanations
may be more convincing than shorter ones, and that the predominant bias for
shorter models, which is typically necessary for learning powerful
discriminative models, may not be suitable when it comes to user acceptance of
the learned models. To that end, we first recapitulate evidence for and against
this postulate, and then report the results of an evaluation in a
crowd-sourcing study based on about 3,000 judgments. The results do not reveal
a strong preference for simple rules, whereas we can observe a weak preference
for longer rules in some domains. We then relate these results to well-known
cognitive biases such as the conjunction fallacy, the representativeness heuristic,
or the recognition heuristic, and investigate their relation to rule length and
plausibility.
Comment: V4: Another rewrite of section on interpretability to clarify focus
on plausibility and relation to interpretability, comprehensibility, and
justifiability