Dissimilarity-based Ensembles for Multiple Instance Learning
In multiple instance learning, objects are sets (bags) of feature vectors
(instances) rather than individual feature vectors. In this paper we address
the problem of how these bags can best be represented. Two standard approaches
are to use (dis)similarities between bags and prototype bags, or between bags
and prototype instances. The first approach results in a relatively
low-dimensional representation determined by the number of training bags, while
the second approach results in a relatively high-dimensional representation,
determined by the total number of instances in the training set. In this paper
a third, intermediate approach is proposed, which links the two approaches and
combines their strengths. Our classifier is inspired by a random subspace
ensemble, and considers subspaces of the dissimilarity space, defined by
subsets of instances, as prototypes. We provide guidelines for using such an
ensemble, and show state-of-the-art performance on a range of multiple
instance learning problems.
Comment: Submitted to IEEE Transactions on Neural Networks and Learning Systems, Special Issue on Learning in Non-(geo)metric Spaces
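The core idea above — representing each bag by its dissimilarities to a random subset of training instances, and combining several such subspace classifiers into an ensemble — can be sketched as follows. This is a minimal illustration, not the paper's exact method: the minimum instance-to-prototype Euclidean distance used as the bag dissimilarity, the nearest-class-mean base classifier, and all function names are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def bag_to_disspace(bag, prototypes):
    """Represent a bag by its minimum instance-to-prototype distances
    (one common MIL bag dissimilarity; an assumption of this sketch)."""
    d = np.linalg.norm(bag[:, None, :] - prototypes[None, :, :], axis=2)
    return d.min(axis=0)  # one coordinate per prototype instance

def train_ensemble(bags, labels, instances, n_members=7, subset_size=5):
    """Each ensemble member sees a random instance subset, i.e. a random
    subspace of the full instance-based dissimilarity space."""
    labels = np.asarray(labels)
    members = []
    for _ in range(n_members):
        idx = rng.choice(len(instances), size=subset_size, replace=False)
        X = np.array([bag_to_disspace(b, instances[idx]) for b in bags])
        # Nearest-class-mean base classifier in this dissimilarity subspace
        means = {c: X[labels == c].mean(axis=0) for c in np.unique(labels)}
        members.append((idx, means))
    return members

def predict(members, instances, bag):
    """Majority vote over the ensemble members."""
    votes = []
    for idx, means in members:
        x = bag_to_disspace(bag, instances[idx])
        votes.append(min(means, key=lambda c: np.linalg.norm(x - means[c])))
    vals, counts = np.unique(votes, return_counts=True)
    return vals[counts.argmax()]
```

Each member is cheap (subset_size dissimilarity coordinates instead of one per training instance), which is what makes this an intermediate point between the bag-prototype and instance-prototype representations.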
Constructing a Non-Negative Low Rank and Sparse Graph with Data-Adaptive Features
This paper aims at constructing a good graph for discovering intrinsic data
structures in a semi-supervised learning setting. Firstly, we propose to build
a non-negative low-rank and sparse (referred to as NNLRS) graph for the given
data representation. Specifically, the weights of edges in the graph are
obtained by seeking a nonnegative low-rank and sparse matrix that represents
each data sample as a linear combination of others. The so-obtained NNLRS-graph
can capture both the global mixture of subspaces structure (by the low
rankness) and the locally linear structure (by the sparseness) of the data,
hence is both generative and discriminative. Secondly, as good features are
extremely important for constructing a good graph, we propose to learn the data
embedding matrix and construct the graph jointly within one framework,
termed NNLRS with embedded features (NNLRS-EF). Extensive
experiments on three publicly available datasets demonstrate that the proposed
method outperforms state-of-the-art graph construction methods by a large
margin in both semi-supervised classification and discriminative analysis,
which verifies the effectiveness of our proposed method.
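The NNLRS-graph idea — edge weights from a coefficient matrix that is simultaneously low-rank, sparse, and nonnegative — can be illustrated with a crude proximal-gradient sketch. This is not the paper's actual solver (which enforces the self-representation exactly, typically via an ALM scheme): here the constraint is relaxed to a Frobenius reconstruction term, and the two proximal operators are applied sequentially, which is only an approximation. All parameter names and values are assumptions.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: proximal operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def nnlrs_graph(X, lam_lr=0.1, lam_sp=0.1, n_iter=200):
    """Sketch of  min_Z 0.5||X - XZ||_F^2 + lam_lr||Z||_* + lam_sp||Z||_1
    subject to Z >= 0, where columns of X are data samples.
    The exact-constraint ALM solver of the paper is replaced by this
    relaxed proximal-gradient loop (an assumption of the sketch)."""
    n = X.shape[1]
    step = 1.0 / (np.linalg.norm(X, 2) ** 2)   # 1 / Lipschitz constant
    Z = np.zeros((n, n))
    for _ in range(n_iter):
        Z = Z - step * (X.T @ (X @ Z - X))      # gradient of the smooth term
        Z = svt(Z, step * lam_lr)               # low-rank prox
        Z = np.maximum(Z - step * lam_sp, 0.0)  # sparse prox + nonnegativity
    return (Z + Z.T) / 2                        # symmetrize into graph weights
```

On data drawn from a union of subspaces, the resulting weight matrix is block-diagonal up to noise: the low-rank term groups samples from the same subspace, while the sparse term keeps each sample's neighborhood locally small.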
Distributed Low-rank Subspace Segmentation
Vision problems ranging from image clustering to motion segmentation to
semi-supervised learning can naturally be framed as subspace segmentation
problems, in which one aims to recover multiple low-dimensional subspaces from
noisy and corrupted input data. Low-Rank Representation (LRR), a convex
formulation of the subspace segmentation problem, is provably and empirically
accurate on small problems but does not scale to the massive sizes of modern
vision datasets. Moreover, past work aimed at scaling up low-rank matrix
factorization is not applicable to LRR given its non-decomposable constraints.
In this work, we propose a novel divide-and-conquer algorithm for large-scale
subspace segmentation that can cope with LRR's non-decomposable constraints and
maintains LRR's strong recovery guarantees. This has immediate implications for
the scalability of subspace segmentation, which we demonstrate on a benchmark
face recognition dataset and in simulations. We then introduce novel
applications of LRR-based subspace segmentation to large-scale semi-supervised
learning for multimedia event detection, concept detection, and image tagging.
In each case, we obtain state-of-the-art results and order-of-magnitude
speed-ups.
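The divide step can be sketched as follows: partition the columns of the data matrix into chunks, solve a small LRR-style subproblem per chunk against the shared dictionary, and stitch the coefficient blocks back together. This is only an illustration of the divide-and-conquer structure, not the paper's algorithm: the exact constraint is relaxed to a Frobenius term, the subproblems are solved by a simple proximal-gradient loop, and the paper's careful combination step (a column-projection ensemble) is replaced by naive concatenation.

```python
import numpy as np

def solve_lrr_chunk(X, Xi, lam=0.05, n_iter=300):
    """Sketch of  min_Z 0.5||Xi - X Z||_F^2 + lam||Z||_*  for one column
    chunk Xi, with the full data X as dictionary (a relaxation of LRR's
    exact self-representation constraint; an assumption of this sketch)."""
    step = 1.0 / (np.linalg.norm(X, 2) ** 2)
    Z = np.zeros((X.shape[1], Xi.shape[1]))
    for _ in range(n_iter):
        Z = Z - step * (X.T @ (X @ Z - Xi))     # gradient step
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        Z = U @ np.diag(np.maximum(s - step * lam, 0.0)) @ Vt  # nuclear prox
    return Z

def dfc_lrr(X, n_chunks=2):
    """Divide-and-conquer: the chunks are independent, so in a distributed
    setting each would run on a separate worker. The real algorithm adds a
    column-projection combination step, omitted here."""
    chunks = np.array_split(np.arange(X.shape[1]), n_chunks)
    return np.hstack([solve_lrr_chunk(X, X[:, idx]) for idx in chunks])
```

The point of the construction is that each subproblem touches only a column slice of the data, so the expensive SVDs operate on tall thin matrices rather than the full coefficient matrix.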