Efficient Clustering on Riemannian Manifolds: A Kernelised Random Projection Approach
Reformulating computer vision problems over Riemannian manifolds has
demonstrated superior performance in various computer vision applications. This
is because visual data often possesses a special structure, lying on a
lower-dimensional manifold embedded in a higher-dimensional space. However, since these
manifolds belong to non-Euclidean topological spaces, exploiting their
structures is computationally expensive, especially when one considers the
clustering analysis of massive amounts of data. To this end, we propose an
efficient framework to address the clustering problem on Riemannian manifolds.
This framework implements random projections for manifold points via kernel
space, which can preserve the geometric structure of the original space, but is
computationally efficient. Here, we introduce three methods that follow our
framework. We then validate our framework on several computer vision
applications by comparing against popular clustering methods on Riemannian
manifolds. Experimental results demonstrate that our framework maintains the
performance of the clustering whilst massively reducing computational
complexity, by over two orders of magnitude in some cases.
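The core idea above, random projections applied to manifold points through a kernel space, can be illustrated with a minimal sketch. This is not the paper's actual algorithm; it assumes an RBF kernel and Gaussian random directions purely for illustration, with all names (`rbf_kernel`, `kernelised_random_projection`) hypothetical:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise RBF kernel matrix between rows of X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernelised_random_projection(K, dim, seed=0):
    # Project the n points described by kernel matrix K (n x n) onto
    # `dim` random directions expressed in the kernel feature space.
    rng = np.random.default_rng(seed)
    R = rng.standard_normal((K.shape[0], dim)) / np.sqrt(dim)
    return K @ R  # n x dim low-dimensional embedding

# Two well-separated point clouds standing in for manifold-valued data.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (20, 5)),
               rng.normal(3.0, 0.1, (20, 5))])
Z = kernelised_random_projection(rbf_kernel(X, X), dim=4)
print(Z.shape)  # (40, 4)
```

Clustering (e.g. k-means) can then run on the low-dimensional `Z` instead of repeatedly evaluating expensive manifold distances.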
Statistically Motivated Second Order Pooling
Second-order pooling, a.k.a. bilinear pooling, has proven effective for deep
learning based visual recognition. However, the resulting second-order networks
yield a final representation that is orders of magnitude larger than that of
standard, first-order ones, making them memory-intensive and cumbersome to
deploy. Here, we introduce a general, parametric compression strategy that can
produce more compact representations than existing compression techniques, yet
outperform both compressed and uncompressed second-order models. Our approach
is motivated by a statistical analysis of the network's activations, relying on
operations that lead to a Gaussian-distributed final representation, as
inherently used by first-order deep networks. As evidenced by our experiments,
this lets us outperform the state-of-the-art first-order and second-order
models on several benchmark recognition datasets.
Comment: Accepted to ECCV 2018. Camera-ready version. 14 pages, 5 figures, 3 tables.
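The size blow-up that motivates the compression strategy above is easy to see in a sketch of plain second-order pooling (not the paper's compressed variant; the feature shapes here are illustrative assumptions):

```python
import numpy as np

def second_order_pool(F):
    # F: (n_locations, d) convolutional features.
    # Returns the flattened d x d outer-product (covariance-like) matrix.
    n, d = F.shape
    return (F.T @ F / n).reshape(-1)

rng = np.random.default_rng(0)
F = rng.standard_normal((49, 256))          # e.g. a 7x7 spatial grid, 256 channels
first_order = F.mean(axis=0)                # standard average pooling: 256-dim
second_order = second_order_pool(F)         # bilinear pooling: 256*256 = 65536-dim
print(first_order.size, second_order.size)  # 256 65536
```

The second-order representation is d times larger than the first-order one (here 256x), which is exactly the memory burden the proposed parametric compression targets.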
Beyond Gauss: Image-Set Matching on the Riemannian Manifold of PDFs
State-of-the-art image-set matching techniques typically implicitly model
each image-set with a Gaussian distribution. Here, we propose to go beyond
these representations and model image-sets as probability distribution
functions (PDFs) using kernel density estimators. To compare and match
image-sets, we exploit Csiszar f-divergences, which bear strong connections to
the geodesic distance defined on the space of PDFs, i.e., the statistical
manifold. Furthermore, we introduce valid positive definite kernels on the
statistical manifolds, which let us make use of more powerful classification
schemes to match image-sets. Finally, we introduce a supervised dimensionality
reduction technique that learns a latent space where f-divergences reflect the
class labels of the data. Our experiments on diverse problems, such as
video-based face recognition and dynamic texture classification, evidence the
benefits of our approach over the state-of-the-art image-set matching methods.
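The building blocks named above, kernel density estimators and Csiszar f-divergences, can be sketched in one dimension. This is only an illustration of the ingredients, not the paper's method: it uses a Gaussian KDE on a grid and the KL divergence (one instance of an f-divergence), with all parameter choices assumed:

```python
import numpy as np

def kde(samples, xs, h=0.3):
    # Gaussian kernel density estimate of `samples`, evaluated at grid `xs`.
    diffs = (xs[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * diffs ** 2).sum(1) / (len(samples) * h * np.sqrt(2 * np.pi))

def kl_divergence(p, q, dx):
    # Discretised KL(p || q), an instance of a Csiszar f-divergence.
    eps = 1e-12
    return np.sum(p * np.log((p + eps) / (q + eps))) * dx

xs = np.linspace(-6, 6, 600)
dx = xs[1] - xs[0]
rng = np.random.default_rng(0)
set_a = rng.normal(0.0, 1.0, 200)   # "image-set" A (1-D features for illustration)
set_b = rng.normal(2.0, 1.0, 200)   # "image-set" B, shifted distribution
p, q = kde(set_a, xs), kde(set_b, xs)
print(kl_divergence(p, q, dx) > kl_divergence(p, p, dx))  # True
```

Matching then amounts to comparing sets by divergence: a set is closest to the candidate whose estimated PDF minimises the divergence.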
A Survey on Metric Learning for Feature Vectors and Structured Data
The need for appropriate ways to measure the distance or similarity between
data is ubiquitous in machine learning, pattern recognition and data mining,
but handcrafting such good metrics for specific problems is generally
difficult. This has led to the emergence of metric learning, which aims at
automatically learning a metric from data and has attracted a lot of interest
in machine learning and related fields for the past ten years. This survey
paper proposes a systematic review of the metric learning literature,
highlighting the pros and cons of each approach. We pay particular attention to
Mahalanobis distance metric learning, a well-studied and successful framework,
but additionally present a wide range of methods that have recently emerged as
powerful alternatives, including nonlinear metric learning, similarity learning
and local metric learning. Recent trends and extensions, such as
semi-supervised metric learning, metric learning for histogram data and the
derivation of generalization guarantees, are also covered. Finally, this survey
addresses metric learning for structured data, in particular edit distance
learning, and attempts to give an overview of the remaining challenges in
metric learning for the years to come.
Comment: Technical report, 59 pages. Changes in v2: fixed typos and improved presentation. Changes in v3: fixed typos. Changes in v4: fixed typos and added a new method.
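The Mahalanobis framework highlighted above can be stated in a few lines: one learns a positive semi-definite matrix M, which is equivalent to learning a linear map L with M = L^T L. The sketch below (with a randomly chosen L standing in for a learned one) only verifies this equivalence, not any particular learning algorithm:

```python
import numpy as np

def mahalanobis_sq(x, y, M):
    # Squared Mahalanobis distance (x - y)^T M (x - y) under PSD matrix M.
    d = x - y
    return float(d @ M @ d)

rng = np.random.default_rng(0)
L = rng.standard_normal((2, 4))      # hypothetical learned projection (learned in practice)
M = L.T @ L                          # guarantees M is positive semi-definite
x, y = rng.standard_normal(4), rng.standard_normal(4)

lhs = mahalanobis_sq(x, y, M)
rhs = float(np.sum((L @ x - L @ y) ** 2))
print(np.isclose(lhs, rhs))  # True: the two formulations agree
```

This equivalence is why Mahalanobis metric learning is often described as learning a linear embedding followed by the ordinary Euclidean distance.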
Orientation covariant aggregation of local descriptors with embeddings
Image search systems based on local descriptors typically achieve orientation
invariance by aligning the patches on their dominant orientations. Albeit
successful, this choice introduces too much invariance because it does not
guarantee that the patches are rotated consistently. This paper introduces an
aggregation strategy of local descriptors that achieves this covariance
property by jointly encoding the angle in the aggregation stage in a continuous
manner. It is combined with an efficient monomial embedding to provide a
codebook-free method to aggregate local descriptors into a single vector
representation. Our strategy is also compatible with, and employed alongside, several
popular encoding methods, in particular bag-of-words, VLAD and the Fisher
vector. Our geometric-aware aggregation strategy is effective for image search,
as shown by experiments performed on standard benchmarks for image and
particular object retrieval, namely Holidays and Oxford Buildings.
Comment: European Conference on Computer Vision (2014).
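For context on the encodings mentioned above, here is a minimal sketch of standard VLAD aggregation (the baseline the paper builds on, without its angle-covariant encoding; the codebook and descriptor sizes are illustrative assumptions):

```python
import numpy as np

def vlad(descriptors, codebook):
    # VLAD: for each visual word, sum the residuals (descriptor minus its
    # nearest codeword), then concatenate and L2-normalise.
    k, d = codebook.shape
    assign = np.argmin(((descriptors[:, None, :] - codebook[None]) ** 2).sum(-1), axis=1)
    v = np.zeros((k, d))
    for i, c in enumerate(assign):
        v[c] += descriptors[i] - codebook[c]
    v = v.reshape(-1)
    return v / (np.linalg.norm(v) + 1e-12)

rng = np.random.default_rng(0)
descs = rng.standard_normal((50, 8))     # 50 local descriptors of dimension 8
codebook = rng.standard_normal((4, 8))   # 4 visual words (k-means centroids in practice)
rep = vlad(descs, codebook)
print(rep.shape)  # (32,)
```

The proposed method departs from this baseline by additionally encoding each descriptor's orientation angle during aggregation, making the final vector covariant with patch rotation rather than fully invariant.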