Sparse Coding on Symmetric Positive Definite Manifolds using Bregman Divergences
This paper introduces sparse coding and dictionary learning for Symmetric
Positive Definite (SPD) matrices, which are often used in machine learning,
computer vision, and related areas. Unlike traditional sparse coding schemes
that work in vector spaces, we show how an SPD matrix can be described by a
sparse combination of dictionary atoms that are themselves SPD matrices. We
propose to perform sparse coding by embedding the space of SPD matrices into
Hilbert spaces through two types of Bregman matrix divergences. This leads not
only to an efficient way of performing sparse coding, but also to an online,
iterative scheme for dictionary learning. We apply the proposed
methods to several computer vision tasks where images are represented by region
covariance matrices. Our proposed algorithms outperform state-of-the-art
methods on a wide range of classification tasks, including face recognition,
action recognition, material classification, and texture categorization.
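One of the Bregman matrix divergences commonly used on the SPD manifold is the Stein (Jensen-Bregman log-det) divergence. The sketch below is illustrative and not taken from the paper; the function name and toy matrices are assumptions.

```python
import numpy as np

def stein_divergence(X, Y):
    """Stein (Jensen-Bregman log-det) divergence between SPD matrices:
    S(X, Y) = log det((X + Y)/2) - (1/2) log det(X) - (1/2) log det(Y)."""
    _, ld_mid = np.linalg.slogdet((X + Y) / 2.0)
    _, ld_x = np.linalg.slogdet(X)
    _, ld_y = np.linalg.slogdet(Y)
    return ld_mid - 0.5 * (ld_x + ld_y)

# Toy SPD matrices built as A A^T + I (guaranteed positive definite)
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
X = A @ A.T + np.eye(3)
Y = B @ B.T + np.eye(3)

print(stein_divergence(X, X))       # 0.0 for identical matrices
print(stein_divergence(X, Y) >= 0)  # the divergence is non-negative
```

The divergence is symmetric in its arguments and vanishes only when X = Y, which is what makes it usable as a dissimilarity measure between region covariance matrices.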
A Survey on Metric Learning for Feature Vectors and Structured Data
The need for appropriate ways to measure the distance or similarity between
data is ubiquitous in machine learning, pattern recognition and data mining,
but handcrafting such good metrics for specific problems is generally
difficult. This has led to the emergence of metric learning, which aims at
automatically learning a metric from data and has attracted a lot of interest
in machine learning and related fields for the past ten years. This survey
paper proposes a systematic review of the metric learning literature,
highlighting the pros and cons of each approach. We pay particular attention to
Mahalanobis distance metric learning, a well-studied and successful framework,
but additionally present a wide range of methods that have recently emerged as
powerful alternatives, including nonlinear metric learning, similarity learning
and local metric learning. Recent trends and extensions, such as
semi-supervised metric learning, metric learning for histogram data and the
derivation of generalization guarantees, are also covered. Finally, this survey
addresses metric learning for structured data, in particular edit distance
learning, and attempts to give an overview of the remaining challenges in
metric learning for the years to come.
Comment: Technical report, 59 pages. Changes in v2: fixed typos and improved
presentation. Changes in v3: fixed typos. Changes in v4: fixed typos and new
method.
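The Mahalanobis framework highlighted by the survey learns a positive semidefinite matrix M, often parameterized as M = L^T L, so that d(x, y) = sqrt((x - y)^T M (x - y)) = ||L(x - y)||. A minimal sketch (names and values are illustrative, not from the survey):

```python
import numpy as np

def mahalanobis(x, y, L):
    """Mahalanobis distance with M = L^T L, i.e. the Euclidean distance
    after mapping points through the learned linear transformation L."""
    diff = L @ (x - y)
    return float(np.sqrt(diff @ diff))

x = np.array([1.0, 2.0])
y = np.array([3.0, 0.0])
L = np.eye(2)  # M = I recovers the plain Euclidean distance
print(mahalanobis(x, y, L))  # sqrt(8) ~ 2.8284
```

Metric learning methods then fit L (or M directly) from side information such as must-link/cannot-link pairs, rather than handcrafting it.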
A Riemannian Primal-dual Algorithm Based on Proximal Operator and its Application in Metric Learning
In this paper, we consider optimizing a smooth, convex, lower semicontinuous
function on a Riemannian manifold subject to constraints. To solve the problem,
we first convert it to a dual problem and then propose a general primal-dual
algorithm that optimizes the primal and dual variables iteratively. In each
iteration, we employ a proximal operator to search for the optimal solution in
the primal space. We prove convergence of the proposed algorithm and show its
non-asymptotic convergence rate. By utilizing the proposed primal-dual
optimization technique, we propose a novel metric learning algorithm which
learns an optimal feature transformation matrix in the Riemannian space of
positive definite matrices. Preliminary experimental results on an optimal fund
selection problem in fund of funds (FOF) management for quantitative investment
showed its efficacy.
Comment: 8 pages, 2 figures, published as a conference paper at the 2019
International Joint Conference on Neural Networks (IJCNN).
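The primal update above relies on a proximal operator. A standard concrete instance (shown for intuition; not necessarily the operator used in the paper) is the prox of the scaled l1 norm, which has the closed-form soft-thresholding solution:

```python
import numpy as np

def prox_l1(v, t):
    """Proximal operator of t * ||.||_1:
    argmin_x  t*||x||_1 + (1/2)*||x - v||^2,
    solved elementwise by soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

v = np.array([3.0, -0.5, 1.2])
print(prox_l1(v, 1.0))  # shrinks each entry toward zero by t, clipping at 0
```

In a primal-dual scheme of this kind, each primal iteration evaluates such an operator on the current point; the closed form is what makes the per-iteration cost cheap.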