Learning to Approximate a Bregman Divergence
Bregman divergences generalize measures such as the squared Euclidean
distance and the KL divergence, and arise throughout many areas of machine
learning. In this paper, we focus on the problem of approximating an arbitrary
Bregman divergence from supervision, and we provide a well-principled approach
to analyzing such approximations. We develop a formulation and algorithm for
learning arbitrary Bregman divergences based on approximating their underlying
convex generating function via a piecewise linear function. We provide
theoretical approximation bounds using our parameterization and show that the
generalization error for metric learning using our framework
matches the known generalization error in the strictly less general Mahalanobis
metric learning setting. We further demonstrate empirically that our method
performs well in comparison to existing metric learning methods, particularly
for clustering and ranking problems.
Comment: 19 pages, 4 figures
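To make the parameterization above concrete, here is a minimal numerical sketch of a piecewise-linear (max-affine) convex generator and the Bregman divergence it induces. The function names and the random slopes/offsets A, b are illustrative stand-ins for quantities that would be learned from supervision; this is not the authors' algorithm.

```python
import numpy as np

# Minimal sketch, assuming a max-affine (piecewise linear) generator
#     phi(x) = max_k (a_k . x + b_k),
# plugged into the Bregman divergence definition
#     D_phi(x, y) = phi(x) - phi(y) - <g(y), x - y>,
# where g(y) is a subgradient of phi at y (the slope of the active piece).

def max_affine(x, A, b):
    """phi(x) = max_k (A[k] . x + b[k]); returns the value and active index."""
    vals = A @ x + b
    k = int(np.argmax(vals))
    return vals[k], k

def bregman_divergence(x, y, A, b):
    """Bregman divergence induced by the max-affine generator (A, b)."""
    phi_x, _ = max_affine(x, A, b)
    phi_y, k = max_affine(y, A, b)
    grad_y = A[k]  # subgradient of phi at y
    return phi_x - phi_y - grad_y @ (x - y)

# Illustrative stand-ins for the slopes/offsets that the paper learns from
# supervision; the divergence is always non-negative by convexity of phi.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 3))
b = rng.standard_normal(8)
x, y = rng.standard_normal(3), rng.standard_normal(3)
print(bregman_divergence(x, y, A, b))
```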
Hierarchical Metric Learning for Optical Remote Sensing Scene Categorization
We address the problem of scene classification from optical remote sensing
(RS) images based on the paradigm of hierarchical metric learning. Ideally,
supervised metric learning strategies learn a projection from a set of training
data points to the class label space so as to minimize intra-class variance
while maximizing inter-class separability. However, standard metric learning
techniques do not incorporate the class interaction information in learning the
transformation matrix, which is often considered to be a bottleneck while
dealing with fine-grained visual categories. As a remedy, we propose to
organize the classes in a hierarchical fashion by exploring their visual
similarities and subsequently learn separate distance metric transformations
for the classes present at the non-leaf nodes of the tree. We employ an
iterative max-margin clustering strategy to obtain the hierarchical
organization of the classes. Experimental results obtained on the large-scale
NWPU-RESISC45 and the popular UC-Merced datasets demonstrate the efficacy of
the proposed hierarchical metric learning based RS scene recognition strategy
in comparison to the standard approaches.
Comment: Undergoing revision in GRS
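As a rough illustration of attaching a separate learned metric to each non-leaf node of the class hierarchy, the sketch below evaluates Mahalanobis-style distances through node-specific linear transformations. The hierarchy, node names, and random matrices are hypothetical placeholders, not the quantities learned in the paper.

```python
import numpy as np

# Sketch only: each non-leaf node v of the class hierarchy gets its own
# transformation L_v, and distances at that node are measured as
#     d_v(x, y) = || L_v x - L_v y ||_2,
# i.e. a Mahalanobis metric with M_v = L_v^T L_v.

def node_distance(x, y, L_v):
    """Distance under the transformation attached to hierarchy node v."""
    return np.linalg.norm(L_v @ x - L_v @ y)

rng = np.random.default_rng(0)
d = 128
L_root = rng.standard_normal((32, d))                    # separates coarse groups
L_children = {"group_a": rng.standard_normal((32, d)),   # separates fine-grained
              "group_b": rng.standard_normal((32, d))}   # classes within a group

x, y = rng.standard_normal(d), rng.standard_normal(d)
print(node_distance(x, y, L_root))
print(node_distance(x, y, L_children["group_a"]))
```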
RandomBoost: Simplified Multi-class Boosting through Randomization
We propose a novel boosting approach to multi-class classification problems
in which, in essence, the multiple classes are distinguished by a set of random
projection matrices. The approach uses random projections to alleviate the
proliferation of binary classifiers typically required to perform multi-class
classification. The result is a multi-class classifier with a single
vector-valued parameter, irrespective of the number of classes involved. Two
variants of this approach are proposed. The first method randomly projects the
original data into new spaces, while the second method randomly projects the
outputs of learned weak classifiers. These methods are not only conceptually
simple but also effective and easy to implement. A series of experiments on
synthetic, machine learning and visual recognition data sets demonstrate that
our proposed methods compare favorably to existing multi-class boosting
algorithms in terms of both the convergence rate and classification accuracy.
Comment: 15 pages
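The first variant can be caricatured as follows: each class gets its own fixed random projection of the input, and a single shared parameter vector scores the projected features. This is a hedged sketch of that idea only, not the authors' boosting procedure; all names and dimensions are illustrative.

```python
import numpy as np

# Sketch of the "project the original data" variant: one random projection
# per class, a single vector-valued parameter w shared across classes, and
# the predicted class is the one whose projection scores highest.

rng = np.random.default_rng(0)
n_classes, d, p = 4, 20, 10
P = rng.standard_normal((n_classes, p, d))  # one random projection per class
w = rng.standard_normal(p)                  # the single vector-valued parameter

def predict(x):
    scores = np.array([w @ (P[c] @ x) for c in range(n_classes)])
    return int(np.argmax(scores))

x = rng.standard_normal(d)
print(predict(x))
```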
Socially Constrained Structural Learning for Groups Detection in Crowd
Modern crowd theories agree that collective behavior is the result of the
underlying interactions among small groups of individuals. In this work, we
propose a novel algorithm for detecting social groups in crowds by means of a
Correlation Clustering procedure on people trajectories. The affinity between
crowd members is learned through an online formulation of the Structural SVM
framework and a set of specifically designed features characterizing both their
physical and social identity, inspired by Proxemic theory, Granger causality,
DTW and Heat-maps. To adhere to sociological observations, we introduce a loss
function (G-MITRE) able to deal with the complexity of evaluating group
detection performance. We show that our algorithm achieves state-of-the-art
results when relying on both ground truth trajectories and tracklets previously
extracted by available detector/tracker systems.
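For readers unfamiliar with Correlation Clustering, the sketch below partitions trajectories from a matrix of signed pairwise affinities using a simple random-pivot heuristic. The learned Structural SVM affinities are replaced here by a hand-made matrix, so this illustrates only the clustering step, not the paper's learning component.

```python
import numpy as np

# Random-pivot heuristic for correlation clustering: W[i, j] > 0 means
# trajectories i and j are believed to belong to the same social group;
# repeatedly pick a pivot and absorb every unassigned member with positive
# affinity to it into the pivot's group.

def pivot_correlation_clustering(W, seed=0):
    rng = np.random.default_rng(seed)
    unassigned = list(range(W.shape[0]))
    groups = []
    while unassigned:
        pivot = unassigned.pop(int(rng.integers(len(unassigned))))
        groups.append([pivot] + [j for j in unassigned if W[pivot, j] > 0])
        unassigned = [j for j in unassigned if W[pivot, j] <= 0]
    return groups

# Hand-made affinities for five pedestrians (positive = same group).
W = np.array([[ 0.,  2.,  2., -1., -1.],
              [ 2.,  0.,  1., -1., -1.],
              [ 2.,  1.,  0., -2., -1.],
              [-1., -1., -2.,  0.,  3.],
              [-1., -1., -1.,  3.,  0.]])
print(pivot_correlation_clustering(W))  # e.g. [[0, 1, 2], [3, 4]]
```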
Minimum Density Hyperplanes
Associating distinct groups of objects (clusters) with contiguous regions of
high probability density (high-density clusters) is central to many
statistical and machine learning approaches to the classification of unlabelled
data. We propose a novel hyperplane classifier for clustering and
semi-supervised classification which is motivated by this objective. The
proposed minimum density hyperplane minimises the integral of the empirical
probability density function along it, thereby avoiding intersection with high
density clusters. We show that the minimum density and the maximum margin
hyperplanes are asymptotically equivalent, thus linking this approach to
maximum margin clustering and semi-supervised support vector classifiers. We
propose a projection pursuit formulation of the associated optimisation problem
which allows us to find minimum density hyperplanes efficiently in practice,
and evaluate its performance on a range of benchmark datasets. The proposed
approach is found to be very competitive with state-of-the-art methods for
clustering and semi-supervised classification.
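The criterion can be made concrete with a small sketch: for a Gaussian kernel density estimate, the integral of the empirical density along a hyperplane with unit normal v and offset b reduces to the one-dimensional KDE of the projected data evaluated at b, which is the quantity a projection pursuit search over (v, b) can minimise. The code below assumes this Gaussian-KDE setting and is not the paper's optimiser; the data and bandwidth are illustrative.

```python
import numpy as np

# For an isotropic Gaussian KDE with bandwidth h, integrating the density
# over the hyperplane {x : v.x = b} (v a unit normal) gives the 1-D Gaussian
# KDE of the projections v.x_i evaluated at b.

def hyperplane_density(X, v, b, h=0.5):
    """Gaussian KDE on X, integrated along the hyperplane v.x = b."""
    v = v / np.linalg.norm(v)
    z = (b - X @ v) / h                     # offsets of projections from b
    return np.mean(np.exp(-0.5 * z**2)) / (h * np.sqrt(2 * np.pi))

# Two well-separated clusters: the integrated density is lowest for a
# hyperplane passing between them, which is the hyperplane we want.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.5, (100, 2)), rng.normal(2, 0.5, (100, 2))])
v = np.array([1.0, 1.0])
print(hyperplane_density(X, v, b=0.0))    # low: passes between the clusters
print(hyperplane_density(X, v, b=-2.8))   # higher: cuts through a cluster
```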
Large-Margin Determinantal Point Processes
Determinantal point processes (DPPs) offer a powerful approach to modeling
diversity in many applications where the goal is to select a diverse subset. We
study the problem of learning the parameters (the kernel matrix) of a DPP from
labeled training data. We make two contributions. First, we show how to
reparameterize a DPP's kernel matrix with multiple kernel functions, thus
enhancing modeling flexibility. Second, we propose a novel parameter estimation
technique based on the principle of large margin separation. In contrast to the
state-of-the-art method of maximum likelihood estimation, our large-margin loss
function explicitly models errors in selecting the target subsets, and it can
be customized to trade off different types of errors (precision vs. recall).
Extensive empirical studies validate our contributions, including applications
on challenging document and video summarization, where flexibility in modeling
the kernel matrix and balancing different errors is indispensable.
Comment: 15 pages
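As background for what the learned kernel matrix parameterises, the sketch below evaluates the standard L-ensemble DPP probability of a subset; the toy kernel is hypothetical and the paper's large-margin estimation itself is not shown.

```python
import numpy as np

# Under an L-ensemble DPP with kernel L over N items, the probability of
# selecting subset Y is
#     P(Y) = det(L_Y) / det(L + I),
# where L_Y is the principal submatrix of L indexed by Y.

def dpp_log_prob(L, Y):
    """Log-probability of subset Y under the DPP with kernel L."""
    _, logdet_Y = np.linalg.slogdet(L[np.ix_(Y, Y)])
    _, logdet_Z = np.linalg.slogdet(L + np.eye(L.shape[0]))
    return logdet_Y - logdet_Z

# Toy kernel: items 0 and 1 are near-duplicates (large off-diagonal entry),
# so subsets containing both are penalised; this is how a DPP favours
# diverse selections such as non-redundant summary sentences or key frames.
L = np.array([[1.0, 0.9, 0.1],
              [0.9, 1.0, 0.1],
              [0.1, 0.1, 1.0]])
print(np.exp(dpp_log_prob(L, [0, 1])))  # redundant pair: low probability
print(np.exp(dpp_log_prob(L, [0, 2])))  # diverse pair: higher probability
```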