Learning to Predict the Wisdom of Crowds
The problem of "approximating the crowd" is that of estimating the crowd's
majority opinion by querying only a subset of it. Algorithms that approximate
the crowd can intelligently stretch a limited budget for a crowdsourcing task.
We present an algorithm, "CrowdSense," that works in an online fashion to
dynamically sample subsets of labelers based on an exploration/exploitation
criterion. The algorithm produces a weighted combination of a subset of the
labelers' votes that approximates the crowd's opinion.
Comment: Presented at Collective Intelligence conference, 2012 (arXiv:1204.2991)
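The core idea of approximating the crowd can be illustrated with a minimal sketch: query labelers in decreasing order of weight and stop as soon as the unqueried labelers can no longer flip the weighted majority. This is an illustrative simplification, not the exact CrowdSense sampling rule (which also updates weights online via an exploration/exploitation criterion); the function names and the early-stopping test are assumptions.

```python
def approximate_crowd(labelers, item, weights):
    """Estimate the crowd's weighted-majority label on `item` by querying
    only a prefix of the labelers (votes in {-1, +1}).

    Illustrative sketch: labelers are queried in decreasing weight order,
    and querying stops once the remaining (unqueried) weight cannot
    change the sign of the running weighted vote.
    """
    order = sorted(range(len(labelers)), key=lambda i: weights[i], reverse=True)
    score = 0.0                 # running weighted sum of votes
    remaining = sum(weights)    # total weight not yet queried
    queried = 0
    for i in order:
        score += weights[i] * labelers[i](item)
        remaining -= weights[i]
        queried += 1
        if abs(score) > remaining:   # outcome already decided
            break
    return (1 if score >= 0 else -1), queried
```

With three labelers of weights 3, 2, and 1, two agreeing queries already decide the vote, so the third labeler is never paid for.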
Sinkhorn Divergence of Topological Signature Estimates for Time Series Classification
Distinguishing between classes of time series sampled from dynamic systems is
a common challenge in systems and control engineering, for example in the
context of health monitoring, fault detection, and quality control. The
challenge is increased when no underlying model of a system is known,
measurement noise is present, and long signals need to be interpreted. In this
paper we address these issues with a new nonparametric classifier based on
topological signatures. Our model learns classes as weighted kernel density
estimates (KDEs) over persistent homology diagrams and predicts new trajectory
labels using Sinkhorn divergences on the space of diagram KDEs to quantify
proximity. We show that this approach accurately discriminates between states
of chaotic systems that are close in parameter space, and its performance is
robust to noise.
Comment: 9 pages, 4 figures, 2018 17th International Conference on Machine Learning and Applications
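The Sinkhorn divergence used to compare diagram KDEs can be sketched for discrete measures: compute the entropic optimal-transport cost via Sinkhorn's matrix-scaling iterations, then debias it so that identical measures have divergence zero. This is a generic sketch for weighted point clouds, not the paper's pipeline over persistence diagrams; the regularization strength `eps` and iteration count are assumed values.

```python
import numpy as np

def sinkhorn_cost(x, y, a, b, eps=0.1, n_iter=200):
    """Entropic OT cost between weighted point clouds (x, a) and (y, b),
    with squared Euclidean ground cost, via Sinkhorn iterations."""
    C = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    K = np.exp(-C / eps)            # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):         # alternate marginal scalings
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :] # entropic transport plan
    return np.sum(P * C)

def sinkhorn_divergence(x, y, a, b, eps=0.1):
    """Debiased Sinkhorn divergence: vanishes when the measures coincide."""
    return (sinkhorn_cost(x, y, a, b, eps)
            - 0.5 * sinkhorn_cost(x, x, a, a, eps)
            - 0.5 * sinkhorn_cost(y, y, b, b, eps))
```

The debiasing terms make the quantity a proper discrepancy, which is what allows it to quantify proximity between class KDEs.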
Decentralized learning with budgeted network load using Gaussian copulas and classifier ensembles
We examine a network of learners which address the same classification task
but must learn from different data sets. The learners cannot share data but
instead share their models. Models are shared only once so as to limit
the network load. We introduce DELCO (standing for Decentralized Ensemble
Learning with COpulas), a new approach for aggregating the predictions of
the classifiers trained by each learner. The proposed method aggregates the
base classifiers using a probabilistic model relying on Gaussian copulas.
Experiments on ensembles of logistic regressors demonstrate competitive accuracy
and increased robustness when the base classifiers are dependent. A companion Python
implementation can be downloaded at https://github.com/john-klein/DELC
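The general recipe of copula-based aggregation can be sketched as follows: model each base classifier's score distribution per class, couple the scores with a Gaussian copula to capture their dependence, and predict the class maximizing the resulting joint likelihood. This is a generic Gaussian-copula aggregation sketch under assumed marginals and correlation matrices, not the DELCO method as published.

```python
import numpy as np
from scipy.stats import norm

def gaussian_copula_logdensity(u, R):
    """Log-density of a Gaussian copula with correlation matrix R,
    evaluated at uniform marginals u in (0, 1)."""
    z = norm.ppf(np.clip(u, 1e-6, 1 - 1e-6))
    sign, logdet = np.linalg.slogdet(R)
    Rinv = np.linalg.inv(R)
    return -0.5 * logdet - 0.5 * z @ (Rinv - np.eye(len(z))) @ z

def aggregate(scores, marg_cdfs, marg_logpdfs, copulas, priors):
    """Pick the class maximizing prior * joint density of the base
    classifiers' scores, where the joint density is the product of the
    per-class marginals coupled by a Gaussian copula.

    marg_cdfs[k][j] / marg_logpdfs[k][j]: CDF / log-PDF of classifier j's
    score under class k (hypothetical fitted marginals)."""
    best, best_ll = None, -np.inf
    for k, R in copulas.items():
        u = np.array([marg_cdfs[k][j](s) for j, s in enumerate(scores)])
        ll = (np.log(priors[k])
              + gaussian_copula_logdensity(u, R)
              + sum(marg_logpdfs[k][j](s) for j, s in enumerate(scores)))
        if ll > best_ll:
            best, best_ll = k, ll
    return best
```

With identity correlation matrices the copula term vanishes and the rule reduces to a naive-Bayes product of marginals; a non-trivial R is what buys robustness against dependent classifiers.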
Modeling a Sensor to Improve its Efficacy
Robots rely on sensors to provide them with information about their
surroundings. However, high-quality sensors can be extremely expensive and
cost-prohibitive. Thus, many robotic systems must make do with lower-quality
sensors. Here we demonstrate via a case study how modeling a sensor can improve
its efficacy when employed within a Bayesian inferential framework. As a test
bed we employ a robotic arm that is designed to autonomously take its own
measurements using an inexpensive LEGO light sensor to estimate the position
and radius of a white circle on a black field. The light sensor integrates the
light arriving from a spatially distributed region within its field of view
weighted by its Spatial Sensitivity Function (SSF). We demonstrate that by
incorporating an accurate model of the light sensor SSF into the likelihood
function of a Bayesian inference engine, an autonomous system can make improved
inferences about its surroundings. The method presented here is data-based,
fairly general, and designed with plug-and-play use in mind so that it could be
applied to similar problems.
Comment: 18 pages, 8 figures, submitted to the special issue of "Sensors for Robotics"
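The idea of folding the SSF into the likelihood can be sketched directly: predict each sensor reading as the scene brightness (1 on the white circle, 0 on the black field) weighted by a discretized SSF, and score candidate circle parameters with a Gaussian likelihood. The SSF sample offsets, noise level, and grid search below are assumed stand-ins for the paper's measured SSF model and inference engine.

```python
import numpy as np

def expected_reading(sensor_xy, circle, ssf_offsets, ssf_weights):
    """Predicted sensor output: scene brightness (1 inside the white
    circle, 0 outside) integrated against a discretized SSF, given as
    sample offsets around the sensor position with weights."""
    cx, cy, r = circle
    pts = sensor_xy + ssf_offsets   # scene points the sensor 'sees'
    inside = ((pts[:, 0] - cx) ** 2 + (pts[:, 1] - cy) ** 2) <= r ** 2
    return np.sum(ssf_weights * inside)

def log_posterior(circle, readings, positions, ssf_offsets, ssf_weights,
                  sigma=0.05):
    """Gaussian log-likelihood of the readings given candidate circle
    parameters (flat prior over the candidate grid)."""
    pred = np.array([expected_reading(p, circle, ssf_offsets, ssf_weights)
                     for p in positions])
    return -0.5 * np.sum((readings - pred) ** 2) / sigma ** 2
```

Because the forward model averages the scene over the SSF footprint, partially-covered readings near the circle's edge carry information about the boundary location that a point-sensor model would throw away.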