Dissimilarity-based Ensembles for Multiple Instance Learning
In multiple instance learning, objects are sets (bags) of feature vectors
(instances) rather than individual feature vectors. In this paper we address
the problem of how these bags can best be represented. Two standard approaches
are to use (dis)similarities between bags and prototype bags, or between bags
and prototype instances. The first approach results in a relatively
low-dimensional representation determined by the number of training bags, while
the second approach results in a relatively high-dimensional representation,
determined by the total number of instances in the training set. In this paper
a third, intermediate approach is proposed, which links the two approaches and
combines their strengths. Our classifier is inspired by a random subspace
ensemble, and considers subspaces of the dissimilarity space, defined by
subsets of instances, as prototypes. We provide guidelines for using such an
ensemble, and show state-of-the-art performances on a range of multiple
instance learning problems.
Comment: Submitted to IEEE Transactions on Neural Networks and Learning
Systems, Special Issue on Learning in Non-(geo)metric Space
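To make the intermediate approach concrete, here is a minimal, hypothetical sketch in plain NumPy (toy 2-D data; a nearest-class-mean base classifier stands in for whatever base learner the paper actually uses): each ensemble member picks a random subset of training instances as prototypes, represents every bag by its minimum distances to those prototypes (a random subspace of the full instance-based dissimilarity space), and the members vote.

```python
import numpy as np

rng = np.random.default_rng(0)

def bag_to_dissim(bag, prototypes):
    # Represent a bag by its minimum Euclidean distance to each prototype instance.
    d = np.linalg.norm(bag[:, None, :] - prototypes[None, :, :], axis=2)
    return d.min(axis=0)

# Toy MIL data: positive bags hold instances near (5, 5), negative bags near (0, 0).
def make_bag(positive):
    center = np.array([5.0, 5.0]) if positive else np.array([0.0, 0.0])
    return center + rng.normal(size=(int(rng.integers(3, 7)), 2))

bags = [make_bag(i % 2 == 0) for i in range(40)]
labels = np.array([1 if i % 2 == 0 else 0 for i in range(40)])
instances = np.vstack(bags)   # all training instances are candidate prototypes
test_bag = make_bag(True)     # an unseen positive bag

# Random subspace ensemble: each member uses a random subset of prototype
# instances, i.e. a random subspace of the instance-based dissimilarity space.
n_members, subset_size = 15, 10
test_votes = 0
for _ in range(n_members):
    idx = rng.choice(len(instances), size=subset_size, replace=False)
    protos = instances[idx]
    X = np.array([bag_to_dissim(b, protos) for b in bags])
    # Base classifier: nearest class mean in the dissimilarity subspace.
    mu_pos, mu_neg = X[labels == 1].mean(axis=0), X[labels == 0].mean(axis=0)
    x = bag_to_dissim(test_bag, protos)
    test_votes += int(np.linalg.norm(x - mu_pos) < np.linalg.norm(x - mu_neg))

prediction = int(test_votes > n_members / 2)
print(prediction)  # majority vote; expect 1 for the positive test bag
```

The subset size is the knob the abstract alludes to: small subsets give the low-dimensional bag-prototype flavor, the full instance set recovers the high-dimensional instance-prototype representation.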
Temporal Model Adaptation for Person Re-Identification
Person re-identification is an open and challenging problem in computer
vision. The majority of efforts have been spent either to design the best
feature representation or to learn the optimal matching metric. Most approaches
have neglected the problem of adapting the selected features or the learned
model over time. To address such a problem, we propose a temporal model
adaptation scheme with human in the loop. We first introduce a
similarity-dissimilarity learning method which can be trained in an incremental
fashion by means of a stochastic alternating direction method of multipliers
optimization procedure. Then, to achieve temporal adaptation with limited human
effort, we exploit a graph-based approach to present the user only the most
informative probe-gallery matches that should be used to update the model.
Results on three datasets have shown that our approach performs on par with, or
even better than, state-of-the-art approaches while reducing the manual
pairwise labeling effort by about 80%.
Supervised Classification: Quite a Brief Overview
The original problem of supervised classification considers the task of
automatically assigning objects to their respective classes on the basis of
numerical measurements derived from these objects. Classifiers are the tools
that implement the actual functional mapping from these measurements---also
called features or inputs---to the so-called class label---or output. The
fields of pattern recognition and machine learning study ways of constructing
such classifiers. The main idea behind supervised methods is that of learning
from examples: given a number of example input-output relations, to what extent
can the general mapping be learned that takes any new and unseen feature vector
to its correct class? This chapter provides a basic introduction to the
underlying ideas of how to formulate a supervised classification problem. In
addition, it provides an overview of some specific classification techniques,
delves into the issues of object representation and classifier evaluation, and
(very) briefly covers some variations on the basic supervised classification
task that may also be of interest to the practitioner.
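The "learning from examples" idea above can be sketched in a few lines of NumPy. This is an illustrative toy, not any specific method from the chapter: a nearest-class-mean classifier estimated from labeled input-output pairs and applied to a new, unseen feature vector.

```python
import numpy as np

rng = np.random.default_rng(1)

# Example input-output pairs: feature vectors drawn around two class-specific means.
X_train = np.vstack([rng.normal(0.0, 1.0, (50, 2)), rng.normal(4.0, 1.0, (50, 2))])
y_train = np.array([0] * 50 + [1] * 50)

# "Learning from examples": estimate one mean per class from the training data.
means = np.array([X_train[y_train == c].mean(axis=0) for c in (0, 1)])

def classify(x):
    # Map a new, unseen feature vector to the label of the nearest class mean.
    return int(np.argmin(np.linalg.norm(means - x, axis=1)))

print(classify(np.array([4.2, 3.8])))  # → 1 (closer to class 1's mean)
```

The general mapping learned here is only as good as the representativeness of the examples, which is exactly the evaluation question the chapter takes up.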
Revisiting the Dissimilarity Representation in the Context of Regression
In machine learning, a natural way to represent an instance is by using a feature vector.
However, several studies have shown that this representation may not accurately characterize an object.
For classification problems, the dissimilarity paradigm has been proposed as an alternative to the standard
feature-based approach. Encoding each object by pairwise dissimilarities has been demonstrated to improve
the data quality because it mitigates some complexities such as class overlap, small disjuncts, and low
sample size. However, its suitability and performance when applied to regression problems have not been fully
explored. This study redefines the dissimilarity representation for regression. To this end, we have carried
out an extensive experimental evaluation on 34 datasets using two linear regression models. The results show
that the dissimilarity approach decreases the prediction error of both the traditional linear regression and the
linear model with elastic net regularization, and it also reduces the complexity of most regression datasets.
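A minimal sketch of the dissimilarity representation for regression, under illustrative assumptions (toy 1-D data, Euclidean distances, the first 20 training objects as prototypes, ordinary least squares rather than the paper's two specific models):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy regression problem where the target depends nonlinearly on the feature.
X = rng.uniform(-3.0, 3.0, (100, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=100)

# Dissimilarity representation: encode each object by its Euclidean distances
# to a set of prototype objects (here, the first 20 training objects).
prototypes = X[:20]
D = np.linalg.norm(X[:, None, :] - prototypes[None, :, :], axis=2)

def fit_mse(A, y):
    # Ordinary least squares with a bias column; returns the training MSE.
    A = np.hstack([A, np.ones((len(A), 1))])
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.mean((A @ w - y) ** 2))

mse_feature = fit_mse(X, y)   # linear model on the raw feature
mse_dissim = fit_mse(D, y)    # same linear model on the dissimilarities
print(mse_dissim < mse_feature)  # → True
```

The point of the sketch: a linear model on the raw feature cannot fit the nonlinearity, while the same linear model on distances-to-prototypes can, which is one intuition for why the dissimilarity encoding helps linear regressors.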
Feature-based time-series analysis
This work presents an introduction to feature-based time-series analysis. The
time series as a data type is first described, along with an overview of the
interdisciplinary time-series analysis literature. I then summarize the range
of feature-based representations for time series that have been developed to
aid interpretable insights into time-series structure. Particular emphasis is
given to emerging research that facilitates wide comparison of feature-based
representations that allow us to understand the properties of a time-series
dataset that make it suited to a particular feature-based representation or
analysis algorithm. The future of time-series analysis is likely to embrace
approaches that exploit machine learning methods to partially automate human
learning to aid understanding of the complex dynamical patterns in the time
series we measure from the world.
Comment: 28 pages, 9 figures
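As a flavor of what "feature-based representation" means here, this hypothetical snippet reduces a univariate time series to a handful of interpretable summary statistics (these particular features are illustrative, not a specific feature set from the paper):

```python
import numpy as np

def ts_features(x):
    # A few simple, interpretable features of a univariate time series.
    x = np.asarray(x, dtype=float)
    return {
        "mean": float(x.mean()),
        "std": float(x.std()),
        # Lag-1 autocorrelation: how strongly each value predicts the next.
        "lag1_autocorr": float(np.corrcoef(x[:-1], x[1:])[0, 1]),
        # Average absolute step size, a crude measure of signal roughness.
        "mean_abs_change": float(np.abs(np.diff(x)).mean()),
    }

t = np.linspace(0, 4 * np.pi, 200)
feats = ts_features(np.sin(t))  # a smooth periodic series
print({k: round(v, 3) for k, v in feats.items()})
```

A dataset of time series then becomes an ordinary feature matrix (one row per series), which is what makes the wide cross-comparison of representations described above possible.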
Mapping microarray gene expression data into dissimilarity spaces for tumor classification
Microarray gene expression data sets usually contain a large number of genes, but a small
number of samples. In this article, we present a two-stage classification model by combining
feature selection with the dissimilarity-based representation paradigm. In the preprocessing
stage, the ReliefF algorithm is used to generate a subset with a number of top-ranked
genes; in the learning/classification stage, the samples represented by the previously
selected genes are mapped into a dissimilarity space, which is then used to construct
a classifier capable of separating the classes more easily than a feature-based model. The
ultimate aim of this paper is not to find the best subset of genes, but to analyze the performance
of the dissimilarity-based models by means of a comprehensive collection of experiments
for the classification of microarray gene expression data. To this end, we compare
the classification results of an artificial neural network, a support vector machine and
Fisher's linear discriminant classifier built on the feature (gene) space with those on the
dissimilarity space when varying the number of genes selected by ReliefF, using eight different
microarray databases. The results show that the dissimilarity-based classifiers systematically
outperform the feature-based models. In addition, classification through the
proposed representation appears to be more robust (i.e. less sensitive to the number of
genes) than that with the conventional feature-based representation.
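The two-stage pipeline can be sketched as follows. This is a simplified stand-in, not the paper's setup: the data are synthetic, a mean-difference ranking replaces ReliefF, and a nearest-class-mean rule replaces the neural network, SVM, and Fisher classifiers; only the structure (select genes, then map to a dissimilarity space, then classify) mirrors the description above.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "microarray-like" data: many genes, few samples, only 10 informative genes.
n_samples, n_genes = 40, 500
X = rng.normal(size=(n_samples, n_genes))
y = np.array([0, 1] * (n_samples // 2))
X[y == 1, :10] += 2.0  # class signal lives in the first 10 genes

# Stage 1 -- feature selection (mean-difference ranking as a stand-in for ReliefF).
scores = np.abs(X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0))
Xs = X[:, np.argsort(scores)[::-1][:20]]  # keep the 20 top-ranked genes

# Stage 2 -- map the selected-gene vectors into a dissimilarity space:
# each sample is represented by its Euclidean distances to all training samples.
D = np.linalg.norm(Xs[:, None, :] - Xs[None, :, :], axis=2)

# Nearest class mean in the dissimilarity space (training-set accuracy only).
mu0, mu1 = D[y == 0].mean(axis=0), D[y == 1].mean(axis=0)
pred = (np.linalg.norm(D - mu1, axis=1) < np.linalg.norm(D - mu0, axis=1)).astype(int)
print((pred == y).mean())
```

Note how the dissimilarity step caps the dimensionality at the number of samples (40 here) regardless of how many genes survive selection, one reason such representations suit large-p, small-n data.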