Committee-Based Sample Selection for Probabilistic Classifiers
In many real-world learning tasks, it is expensive to acquire a sufficient
number of labeled examples for training. This paper investigates methods for
reducing annotation cost by 'sample selection'. In this approach, during
training the learning program examines many unlabeled examples and selects for
labeling only those that are most informative at each stage. This avoids
redundantly labeling examples that contribute little new information. Our work
builds on previous research on Query By Committee, extending the
committee-based paradigm to the context of probabilistic classification. We
describe a family of empirical methods for committee-based sample selection in
probabilistic classification models, which evaluate the informativeness of an
example by measuring the degree of disagreement between several model variants.
These variants (the committee) are drawn randomly from a probability
distribution conditioned by the training set labeled so far. The method was
applied to the real-world natural language processing task of stochastic
part-of-speech tagging. We find that all variants of the method achieve a
significant reduction in annotation cost, although their computational
efficiency differs. In particular, the simplest variant, a two member committee
with no parameters to tune, gives excellent results. We also show that sample
selection yields a significant reduction in the size of the model used by the
tagger.
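As a rough illustration of the committee-based selection idea, the sketch below draws k model variants from a Dirichlet posterior over the label counts seen so far and scores each unlabeled example by the vote entropy of the variants' predicted classes. The function name, the counts layout, and the +1 smoothing are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def select_informative(counts, candidates, k=2, seed=0):
    """Committee-based sample selection (sketch, not the paper's code).

    counts: array (n_features, n_classes) of label counts observed so far.
    candidates: feature indices of unlabeled examples to score.
    Each committee member's class distribution is drawn from a Dirichlet
    posterior over the counts; an example's informativeness is the vote
    entropy of the k members' predicted classes.
    """
    rng = np.random.default_rng(seed)
    n_classes = counts.shape[1]
    scores = []
    for f in candidates:
        # Draw k model variants from the posterior and record their votes.
        votes = [int(np.argmax(rng.dirichlet(counts[f] + 1.0)))
                 for _ in range(k)]
        freq = np.bincount(votes, minlength=n_classes) / k
        nz = freq[freq > 0]
        # Vote entropy: zero when the committee is unanimous.
        scores.append(float(-(nz * np.log(nz)).sum()))
    return scores
```

Examples with the highest scores would be sent for labeling; with k = 2 and no parameters to tune, the score reduces to "do the two members disagree?", matching the simplest variant described above.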
Efficient Version-Space Reduction for Visual Tracking
Discriminative trackers employ a classification approach to separate the
target from its background. To cope with variations of the target's shape and
appearance, the classifier is updated online with different samples of the
target and the background. Sample selection, labeling, and classifier updating
are prone to various sources of error that cause the tracker to drift. We
introduce an efficient version-space shrinking strategy to reduce labeling
errors, and enhance the sampling strategy by measuring the tracker's
uncertainty about the samples. The proposed tracker utilizes an ensemble of
classifiers that represent different hypotheses about the target, diversifies
them using boosting to provide larger and more consistent coverage of the
version space, and tunes the classifiers' weights for voting. The proposed
system adjusts the model update rate by promoting co-training of the
short-memory ensemble with a long-memory oracle. The proposed tracker
outperformed state-of-the-art trackers on different sequences bearing various
tracking challenges.
Comment: CRV'17 Conference
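The weighted-voting-with-uncertainty ingredient can be sketched for a single sample: ensemble members cast ±1 votes, and the tracker is most uncertain when the weighted votes nearly cancel. This is a generic sketch under assumed conventions, not the paper's implementation.

```python
import numpy as np

def ensemble_vote(weights, predictions):
    """Weighted vote of ensemble members on one sample (sketch).

    weights: per-classifier voting weights (e.g. tuned by boosting).
    predictions: each member's +1 (target) / -1 (background) vote.
    Returns the ensemble label and an uncertainty in [0, 1] that is
    high when the weighted votes nearly cancel out.
    """
    margin = float(np.dot(weights, predictions))
    label = 1 if margin >= 0 else -1
    uncertainty = 1.0 - abs(margin) / float(np.sum(np.abs(weights)))
    return label, uncertainty
```

Samples with high uncertainty are the ones a version-space strategy would treat cautiously rather than labeling and feeding back into the update.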
Decentralized learning with budgeted network load using Gaussian copulas and classifier ensembles
We examine a network of learners which address the same classification task
but must learn from different data sets. The learners cannot share data but
instead share their models. Models are shared only once so as to limit the
network load. We introduce DELCO (standing for Decentralized Ensemble
Learning with COpulas), a new approach for aggregating the predictions of
the classifiers trained by each learner. The proposed method aggregates the
base classifiers using a probabilistic model relying on Gaussian copulas.
Experiments on logistic regression ensembles demonstrate competitive accuracy
and increased robustness in the case of dependent classifiers. A companion
Python implementation can be downloaded at https://github.com/john-klein/DELC
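The dependence-modeling ingredient, a Gaussian copula density, can be written down directly. The sketch below evaluates the textbook formula at a point of uniform marginals given an assumed correlation matrix; it is a generic illustration of the ingredient, not code from the DELCO repository.

```python
import numpy as np
from statistics import NormalDist

def gaussian_copula_density(u, corr):
    """Gaussian copula density c(u) = |R|^(-1/2) exp(-z'(R^-1 - I)z / 2),
    where z = Phi^-1(u) maps the uniform marginals through the standard
    normal quantile function. `corr` is the correlation matrix R that
    would capture dependence between the base classifiers' outputs.
    """
    z = np.array([NormalDist().inv_cdf(ui) for ui in u])
    inv = np.linalg.inv(corr)
    quad = z @ (inv - np.eye(len(z))) @ z
    return float(np.linalg.det(corr) ** -0.5 * np.exp(-0.5 * quad))
```

With an identity correlation matrix the density is identically 1 (independent classifiers); a positive off-diagonal correlation inflates the density at concordant outputs, which is how dependence between classifiers enters the aggregation.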
Advances in Hyperspectral Image Classification: Earth monitoring with statistical learning methods
Hyperspectral images show similar statistical properties to natural grayscale
or color photographic images. However, the classification of hyperspectral
images is more challenging because of the very high dimensionality of the
pixels and the small number of labeled examples typically available for
learning. These peculiarities lead to particular signal processing problems,
mainly characterized by indetermination and complex manifolds. The framework of
statistical learning has gained popularity in the last decade. New methods have
been presented to account for the spatial homogeneity of images, to include
user's interaction via active learning, to take advantage of the manifold
structure with semisupervised learning, to extract and encode invariances, or
to adapt classifiers and image representations to unseen yet similar scenes.
This tutorial reviews the main advances for hyperspectral remote sensing image
classification through illustrative examples.
Comment: IEEE Signal Processing Magazine, 201
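One active-learning heuristic commonly used in this setting, margin sampling, takes only a few lines: query the pixels whose two most probable classes are hardest to separate. The function name and interface below are illustrative, not a specific method from the tutorial.

```python
import numpy as np

def margin_sampling(proba, n_queries):
    """Pick the pixels with the smallest margin between the top two
    class probabilities (generic margin-sampling sketch).

    proba: array (n_pixels, n_classes) of predicted class probabilities.
    Returns the indices of the n_queries most ambiguous pixels.
    """
    sorted_p = np.sort(proba, axis=1)
    # Margin between the most and second-most probable class per pixel.
    margins = sorted_p[:, -1] - sorted_p[:, -2]
    return np.argsort(margins)[:n_queries]
```

The selected pixels are the ones a user would be asked to label next, directly targeting the small-labeled-sample problem described above.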
Active Collaborative Ensemble Tracking
A discriminative ensemble tracker employs multiple classifiers, each of which
casts a vote on all of the obtained samples. The votes are then aggregated in
an attempt to localize the target object. Such a method relies on collective
competence and the diversity of the ensemble to approach the target/non-target
classification task from different views. However, by updating the whole
ensemble with a shared set of samples and their final labels, such diversity
is lost or reduced to the diversity provided by the underlying features or
internal classifiers' dynamics. Additionally, the classifiers do not exchange
information with each other while striving to serve the collective goal, i.e.,
better classification. In this study, we propose an active collaborative
information exchange scheme for ensemble tracking. This not only orchestrates
the different classifiers towards a common goal but also provides an
intelligent update mechanism that preserves the diversity of the classifiers
and mitigates the shortcomings of each with the others. The data exchange is
optimized with regard to an ensemble uncertainty utility function, and the
ensemble is updated via co-training. The evaluations demonstrate promising
results realized by the proposed algorithm for real-world online tracking.
Comment: AVSS 2017 Submission
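A minimal sketch of such a collaborative exchange, assuming binary target/background scores in [0, 1]: for each sample, the most confident member supplies a pseudo-label, and only sufficiently confident labels are exchanged to update the others. The names and the threshold are illustrative, not taken from the paper.

```python
import numpy as np

def cotraining_exchange(scores, threshold=0.8):
    """Confidence-gated label exchange between ensemble members (sketch).

    scores: array (n_classifiers, n_samples), each entry the member's
    estimated probability that the sample is the target.
    Returns the pseudo-labels proposed by the most confident member per
    sample, and a mask of which labels are confident enough to exchange.
    """
    cols = np.arange(scores.shape[1])
    # The member whose score is farthest from 0.5 acts as the teacher.
    teacher = np.argmax(np.abs(scores - 0.5), axis=0)
    teacher_scores = scores[teacher, cols]
    confidence = np.abs(teacher_scores - 0.5) * 2.0
    labels = (teacher_scores >= 0.5).astype(int)
    keep = confidence >= threshold  # exchange only confident pseudo-labels
    return labels, keep
```

Gating the exchange on confidence is what keeps one member's mistake on an ambiguous sample from propagating to the rest of the ensemble during co-training.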