Classifier Subset Selection to construct multi-classifiers by means of estimation of distribution algorithms
This paper proposes a novel approach to selecting the individual classifiers that take part in a Multiple-Classifier System. Individual classifier selection is a key step in the development of multi-classifiers, and several works have shown the benefits of fusing complementary classifiers. Nevertheless, the selection of the base classifiers to be used is still an open question, and different approaches have been proposed in the literature. This work selects the appropriate single classifiers by means of an evolutionary algorithm, an estimation of distribution algorithm. Base classifiers chosen from different classifier families are used as candidates in order to obtain variability in the classifications given. Experimental results carried out with 20 databases from the UCI Repository show how adequate the proposed approach is; the Stacked Generalization multi-classifier was selected to perform the experimental comparisons. The work described in this paper was partially conducted within the Basque Government Research Team grant and the University of the Basque Country UPV/EHU under grant UFI11/45 (BAILab).
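The selection idea described above can be sketched with a UMDA-style estimation of distribution algorithm over binary selection masks. All data, weights, and the majority-vote fitness below are illustrative stand-ins; the paper's actual fitness would be the Stacked Generalization accuracy of the selected subset on held-out data.

```python
import random

random.seed(0)

# Toy stand-in: N_CLASSIFIERS candidate base classifiers with simulated
# 0/1 predictions of varying quality on N_SAMPLES labeled examples.
N_CLASSIFIERS = 8
N_SAMPLES = 50
truth = [random.randint(0, 1) for _ in range(N_SAMPLES)]
preds = [[t if random.random() > 0.2 + 0.05 * c else 1 - t for t in truth]
         for c in range(N_CLASSIFIERS)]

def fitness(mask):
    """Majority-vote accuracy of the base classifiers selected by mask."""
    chosen = [preds[i] for i, m in enumerate(mask) if m]
    if not chosen:
        return 0.0
    votes = [1 if sum(col) * 2 > len(chosen) else 0 for col in zip(*chosen)]
    return sum(v == t for v, t in zip(votes, truth)) / N_SAMPLES

# UMDA-style EDA: estimate a per-classifier selection probability from the
# fittest half of the population, then sample the next population from it.
probs = [0.5] * N_CLASSIFIERS
best = [1] * N_CLASSIFIERS          # start from "select everything"
for generation in range(20):
    pop = [[1 if random.random() < p else 0 for p in probs] for _ in range(30)]
    pop.sort(key=fitness, reverse=True)
    best = max(best, pop[0], key=fitness)
    elite = pop[:15]
    probs = [min(0.95, max(0.05, sum(ind[i] for ind in elite) / len(elite)))
             for i in range(N_CLASSIFIERS)]

print(best, round(fitness(best), 2))
```

Clamping the marginals to [0.05, 0.95] keeps the distribution from collapsing prematurely, a standard EDA precaution.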
A review of associative classification mining
Associative classification mining is a promising approach in data mining that utilizes
association rule discovery techniques to construct classification systems, also known as
associative classifiers. In the last few years, a number of associative classification algorithms
have been proposed, e.g., CPAR, CMAR, MCAR and MMAC. These algorithms
employ several different rule discovery, rule ranking, rule pruning, rule prediction and rule
evaluation methods. This paper focuses on surveying and comparing the state-of-the-art
associative classification techniques with regard to the above criteria. Finally, future
directions in associative classification, such as incremental learning and mining
low-quality data sets, are also highlighted in this paper.
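As a concrete illustration of the rule-discovery, ranking, and prediction steps these algorithms share, here is a minimal CBA-style sketch. The toy transactions and thresholds are chosen for illustration only; algorithms like CPAR or MCAR use more sophisticated rule generation and pruning.

```python
from itertools import combinations

# Toy training data: each row is (set of attribute=value items, class label).
data = [
    ({"outlook=sunny", "windy=no"}, "play"),
    ({"outlook=sunny", "windy=yes"}, "no_play"),
    ({"outlook=rain", "windy=no"}, "play"),
    ({"outlook=rain", "windy=yes"}, "no_play"),
    ({"outlook=sunny", "windy=no"}, "play"),
]

def mine_rules(data, min_support=0.2, min_confidence=0.8, max_len=2):
    """Enumerate itemset -> class rules meeting support/confidence thresholds."""
    n = len(data)
    items = sorted({i for row, _ in data for i in row})
    rules = []
    for k in range(1, max_len + 1):
        for itemset in combinations(items, k):
            s = set(itemset)
            covered = [label for row, label in data if s <= row]
            if len(covered) / n < min_support:
                continue
            for label in set(covered):
                conf = covered.count(label) / len(covered)
                if conf >= min_confidence:
                    rules.append((s, label, conf, len(covered) / n))
    # Rank by confidence, then support, then shorter antecedent (CBA-style).
    rules.sort(key=lambda r: (-r[2], -r[3], len(r[0])))
    return rules

def classify(rules, row, default="play"):
    """Predict with the highest-ranked rule whose antecedent matches."""
    for antecedent, label, _, _ in rules:
        if antecedent <= row:
            return label
    return default

rules = mine_rules(data)
print(classify(rules, {"outlook=rain", "windy=yes"}))  # → no_play
```

The ranking tie-break (confidence, then support, then rule length) is the usual convention in this family; swapping in a different ranking is exactly the kind of design choice the surveyed algorithms differ on.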
Asymmetric Pruning for Learning Cascade Detectors
Cascade classifiers are one of the most important contributions to real-time
object detection. Nonetheless, there are many challenging problems arising in
training cascade detectors. One common issue is that the node classifier is
trained with a symmetric learning objective. Having a low misclassification error rate
does not guarantee an optimal node learning goal in cascade classifiers, i.e.,
an extremely high detection rate with a moderate false positive rate. In this
work, we present a new approach to train an effective node classifier in a
cascade detector. The algorithm is based on two key observations: 1) Redundant
weak classifiers can be safely discarded; 2) The final detector should satisfy
the asymmetric learning objective of the cascade architecture. To achieve this,
we separate the classifier training into two steps: finding a pool of
discriminative weak classifiers/features and training the final classifier by
pruning weak classifiers which contribute little to the asymmetric learning
criterion (asymmetric classifier construction). Our model reduction approach
helps accelerate the learning time while achieving the pre-determined learning
objective. Experimental results on both face and car data sets verify the
effectiveness of the proposed algorithm. On the FDDB face data sets, our
approach achieves the state-of-the-art performance, which demonstrates the
advantage of our approach. Comment: 14 pages
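A toy sketch of the pruning step, not the paper's exact algorithm: fix the node threshold so a detection-rate target is met, then greedily discard the weak classifiers whose removal increases the false positive rate the least. All weights and vote probabilities below are simulated.

```python
import random

random.seed(1)

# Simulated boosting pool: weights[j] and 0/1 votes of each weak
# classifier on positive (object) and negative (background) samples.
N_POS, N_NEG, N_WEAK = 200, 200, 20
weights = [random.uniform(0.1, 1.0) for _ in range(N_WEAK)]

def vote(is_pos):
    # A weak classifier fires with prob 0.7 on positives, 0.4 on negatives.
    return 1 if random.random() < (0.7 if is_pos else 0.4) else 0

pos = [[vote(True) for _ in range(N_WEAK)] for _ in range(N_POS)]
neg = [[vote(False) for _ in range(N_WEAK)] for _ in range(N_NEG)]

def false_positive_rate(active, d_target=0.99):
    """Choose the node threshold keeping >= d_target of positives; return FPR."""
    score = lambda x: sum(weights[j] * x[j] for j in active)
    pos_scores = sorted((score(x) for x in pos), reverse=True)
    thr = pos_scores[int(d_target * N_POS) - 1]
    return sum(score(x) >= thr for x in neg) / N_NEG

# Greedy asymmetric pruning: repeatedly discard the weak classifier whose
# removal hurts the false positive rate least; the asymmetric node goal
# (very high detection rate) is enforced by re-choosing the threshold.
active = list(range(N_WEAK))
while len(active) > 8:
    drop = min(active,
               key=lambda j: false_positive_rate([k for k in active if k != j]))
    active.remove(drop)
print(len(active), round(false_positive_rate(active), 3))
```

The asymmetry lives in `false_positive_rate`: the detection-rate constraint is hard (via the threshold), so pruning decisions are scored only on the remaining degree of freedom, the false positive rate.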
Efficient Diverse Ensemble for Discriminative Co-Tracking
Ensemble discriminative tracking utilizes a committee of classifiers to
label data samples, which are, in turn, used for retraining the tracker to
localize the target using the collective knowledge of the committee. Committee
members could vary in their features, memory update schemes, or training data;
however, it is inevitable to have committee members that excessively agree
because of large overlaps in their version space. To remove this redundancy and
have an effective ensemble learning, it is critical for the committee to
include consistent hypotheses that differ from one-another, covering the
version space with minimum overlaps. In this study, we propose an online
ensemble tracker that directly generates a diverse committee by generating an
efficient set of artificial training data. The artificial data is sampled from the
empirical distribution of the samples taken from both target and background,
while the process is governed by query-by-committee to shrink the overlap
between classifiers. The experimental results demonstrate that the proposed
scheme outperforms conventional ensemble trackers on public benchmarks. Comment: CVPR 2018 Submission
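The query-by-committee step can be sketched as follows: resample artificial data from the empirical pool and keep the samples with the highest committee vote entropy. This is a 1-D toy with fixed threshold classifiers; the tracker itself uses boosted members over image features.

```python
import math
import random

random.seed(2)

# A committee of simple threshold classifiers on a 1-D feature,
# standing in for the tracker's ensemble members.
committee = [lambda x, t=t: int(x > t) for t in (0.3, 0.4, 0.5, 0.6, 0.7)]

# Empirical pool drawn from target (around 0.8) and background (around 0.2).
pool = ([random.gauss(0.8, 0.2) for _ in range(100)]
        + [random.gauss(0.2, 0.2) for _ in range(100)])

def vote_entropy(x):
    """Query-by-committee disagreement: entropy of the committee's votes."""
    p = sum(c(x) for c in committee) / len(committee)
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

# Artificial training set: resample the empirical pool with jitter and keep
# the samples the committee disagrees on most -- retraining on these shrinks
# the overlap between the members' version spaces.
artificial = [x + random.gauss(0, 0.05) for x in random.choices(pool, k=500)]
selected = sorted(artificial, key=vote_entropy, reverse=True)[:50]
print(round(min(selected), 2), round(max(selected), 2))
```

High vote entropy concentrates the artificial samples near the members' decision boundaries, which is where labeling them forces the hypotheses apart.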
A Review of Codebook Models in Patch-Based Visual Object Recognition
The codebook model-based approach, while ignoring any structural aspect in vision, nonetheless provides state-of-the-art performance on current datasets. The key role of a visual codebook is to provide a way to map the low-level features into a fixed-length vector in histogram space to which standard classifiers can be directly applied. The discriminative power of such a visual codebook determines the quality of the codebook model, whereas the size of the codebook controls the complexity of the model. Thus, the construction of a codebook is an important step, which is usually done by cluster analysis. However, clustering is a process that retains regions of high density in a distribution, and it follows that the resulting codebook need not have discriminant properties. Clustering is also recognised as a computational bottleneck of such systems. In our recent work, we proposed a resource-allocating codebook for constructing a discriminant codebook in a one-pass design procedure that slightly outperforms more traditional approaches at drastically reduced computing times. In this review we survey several approaches that have been proposed over the last decade, covering their use of feature detectors, descriptors, codebook construction schemes, choice of classifiers in recognising objects, and the datasets used in evaluating the proposed methods.
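A minimal sketch of the codebook pipeline described above, using plain k-means over toy 2-D descriptors (stand-ins for real local features such as SIFT): cluster the training descriptors into codewords, then encode each image as a fixed-length histogram of nearest-codeword assignments.

```python
import random

random.seed(3)

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(pts):
    return tuple(sum(v) / len(pts) for v in zip(*pts))

def kmeans(points, k, iters=20):
    """Plain k-means over descriptors; each centre becomes a codeword."""
    centres = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda j: dist2(p, centres[j]))
            clusters[j].append(p)
        centres = [mean(c) if c else centres[j] for j, c in enumerate(clusters)]
    return centres

def encode(descriptors, codebook):
    """Map an image's local descriptors to a fixed-length, normalized histogram."""
    hist = [0] * len(codebook)
    for d in descriptors:
        hist[min(range(len(codebook)), key=lambda j: dist2(d, codebook[j]))] += 1
    total = sum(hist)
    return [h / total for h in hist]

# Toy local descriptors pooled from many training images (three visual "words").
train = [(random.gauss(mx, 0.3), random.gauss(my, 0.3))
         for mx, my in [(0, 0), (3, 0), (0, 3)] for _ in range(60)]
codebook = kmeans(train, k=3)

# Encoding one image's descriptors yields a vector a standard classifier can use.
image_descriptors = [(random.gauss(0, 0.3), random.gauss(0, 0.3)) for _ in range(30)]
h = encode(image_descriptors, codebook)
print([round(v, 2) for v in h])
```

The clustering/discriminability tension the review points out is visible here: `kmeans` only tracks density, so nothing guarantees the resulting codewords separate object classes well.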
Efficient Version-Space Reduction for Visual Tracking
Discriminative trackers employ a classification approach to separate the
target from its background. To cope with variations of the target shape and
appearance, the classifier is updated online with different samples of the
target and the background. Sample selection, labeling, and updating the
classifier are prone to various sources of errors that drift the tracker. We
introduce the use of an efficient version space shrinking strategy to reduce
the labeling errors and enhance the tracker's sampling strategy by measuring the
uncertainty of the tracker about the samples. The proposed tracker utilizes an
ensemble of classifiers that represent different hypotheses about the target,
diversifies them using boosting to provide a larger and more consistent coverage
of the version space, and tunes the classifiers' weights in voting. The proposed
system adjusts the model update rate by promoting the co-training of the
short-memory ensemble with a long-memory oracle. The proposed tracker
outperformed state-of-the-art trackers on different sequences bearing various
tracking challenges. Comment: CRV'17 Conference
A review of domain adaptation without target labels
Domain adaptation has become a prominent problem setting in machine learning
and related fields. This review asks the question: how can a classifier learn
from a source domain and generalize to a target domain? We present a
categorization of approaches, divided into what we refer to as sample-based,
feature-based and inference-based methods. Sample-based methods focus on
weighting individual observations during training based on their importance to
the target domain. Feature-based methods revolve around mapping, projecting,
and representing features such that a source classifier performs well on the
target domain, while inference-based methods incorporate adaptation into the
parameter estimation procedure, for instance through constraints on the
optimization procedure. Additionally, we review a number of conditions that
allow for formulating bounds on the cross-domain generalization error. Our
categorization highlights recurring ideas and raises questions important to
further research. Comment: 20 pages, 5 figures
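The sample-based family can be sketched with a discriminative density-ratio estimate: train a logistic classifier to distinguish target from source inputs, then use its odds as importance weights on the source examples. This is a 1-D synthetic sketch; practical methods include kernel mean matching or KLIEP.

```python
import math
import random

random.seed(4)

# Unlabeled source and target samples under covariate shift: same labeling
# function is assumed, but the input density is shifted. All data is synthetic.
source = [random.gauss(0.0, 1.0) for _ in range(500)]
target = [random.gauss(1.0, 1.0) for _ in range(500)]

# Train a logistic discriminator P(target | x) by batch gradient descent; then
#   w(x) = p_T(x) / p_S(x)  ≈  P(target | x) / P(source | x)   (equal set sizes).
w0, w1 = 0.0, 0.0   # bias and slope
data = [(x, 1) for x in target] + [(x, 0) for x in source]
for _ in range(200):
    g0 = g1 = 0.0
    for x, y in data:
        p = 1.0 / (1.0 + math.exp(-(w0 + w1 * x)))
        g0 += p - y
        g1 += (p - y) * x
    w0 -= 0.1 * g0 / len(data)
    w1 -= 0.1 * g1 / len(data)

def importance(x):
    """Importance weight for a source example at x (odds of the discriminator)."""
    p = 1.0 / (1.0 + math.exp(-(w0 + w1 * x)))
    return p / (1.0 - p)

# Source points lying where the target density is high get up-weighted.
print(round(importance(1.5), 2), round(importance(-1.5), 2))
```

A source classifier would then minimize the `importance(x)`-weighted training loss, which is the core of the sample-based methods reviewed above.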