Riemannian Natural Gradient Methods
This paper studies large-scale optimization problems on Riemannian manifolds
whose objective function is a finite sum of negative log-probability losses.
Such problems arise in various machine learning and signal processing
applications. By introducing the notion of the Fisher information matrix in the
manifold setting, we propose a novel Riemannian natural gradient method, which
can be viewed as a natural extension of the natural gradient method from the
Euclidean setting to the manifold setting. We establish the almost-sure global
convergence of our proposed method under standard assumptions. Moreover, we
show that if the loss function satisfies certain convexity and smoothness
conditions and the input-output map satisfies a Riemannian Jacobian stability
condition, then our proposed method enjoys a local linear rate of convergence,
and even a quadratic rate when the Riemannian Jacobian of the input-output map
is Lipschitz continuous. We then prove that the Riemannian Jacobian stability
condition is satisfied with high probability by a two-layer fully connected
neural network with batch normalization, provided that the width of the network
is sufficiently large. This demonstrates the practical relevance
of our convergence rate result. Numerical experiments on applications arising
from machine learning demonstrate the advantages of the proposed method over
state-of-the-art ones.
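For intuition, a minimal sketch of one step of such a Riemannian natural
gradient method, in illustrative notation rather than the paper's exact
formulation: assume a finite-sum loss $f$, a retraction $R_x$, and a Fisher
information matrix $F(x_k)$ acting on the tangent space at $x_k$. Then
\[
x_{k+1} = R_{x_k}\!\bigl(-\eta_k\, F(x_k)^{-1} \operatorname{grad} f(x_k)\bigr),
\qquad
f(x) = \frac{1}{N}\sum_{i=1}^{N} -\log p\bigl(y_i \mid \Phi(x; a_i)\bigr),
\]
where $\operatorname{grad} f(x_k)$ is the Riemannian gradient, $\eta_k$ a step
size, and $\Phi$ the input-output map; with the identity retraction in the
Euclidean setting this reduces to the classical natural gradient update.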
Manifold Optimization Over the Set of Doubly Stochastic Matrices: A Second-Order Geometry
Convex optimization is a well-established research area with applications in
almost all fields. Over the decades, multiple approaches have been proposed to
solve convex programs. The development of interior-point methods allowed
solving a more general set of convex programs known as semi-definite programs
and second-order cone programs. However, these methods are prohibitively slow
in high dimensions, i.e., they suffer from the curse of dimensionality. On the
other hand, optimization algorithms on manifolds have shown great ability to
find solutions to nonconvex problems in reasonable time. This paper solves a
subset of convex programs using a different approach. The main idea behind Riemannian
optimization is to view the constrained optimization problem as an
unconstrained one over a restricted search space. The paper introduces three
manifolds to solve convex programs under particular box constraints. The
manifolds, called the doubly stochastic, the symmetric, and the definite
multinomial manifolds, generalize the simplex, also known as the multinomial
manifold. The proposed manifolds and algorithms are well suited to convex
programs in which the variable of interest is a multidimensional probability
distribution. Theoretical analysis and simulation results attest to the
efficiency of the proposed method over state-of-the-art ones. In particular,
they reveal that the proposed framework outperforms conventional generic and
specialized solvers, especially in high dimensions.
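As a rough illustration of the search spaces involved (the notation is a
sketch, not the paper's exact definitions), the doubly stochastic multinomial
manifold collects the entrywise-positive matrices with unit row and column
sums, typically endowed with the Fisher information metric; the symmetric and
definite variants add symmetry and positive-definiteness requirements:
\[
\mathbb{DP}_n = \bigl\{ X \in \mathbb{R}^{n \times n} : X_{ij} > 0,\;
X\mathbf{1} = \mathbf{1},\; X^{\top}\mathbf{1} = \mathbf{1} \bigr\},
\qquad
\langle U, V \rangle_X = \sum_{i,j} \frac{U_{ij} V_{ij}}{X_{ij}},
\]
where $U$ and $V$ are tangent vectors at $X$. Optimizing over such a manifold
replaces the equality and positivity constraints of the convex program with an
unconstrained search over the manifold itself.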
A Riemannian Primal-dual Algorithm Based on Proximal Operator and its Application in Metric Learning
In this paper, we consider optimizing a smooth, convex, lower semicontinuous
function on a Riemannian manifold with constraints. To solve the problem, we first
convert it to a dual problem and then propose a general primal-dual algorithm
to optimize the primal and dual variables iteratively. In each optimization
iteration, we employ a proximal operator to search for an optimal solution in the
primal space. We prove convergence of the proposed algorithm and show its
non-asymptotic convergence rate. By utilizing the proposed primal-dual
optimization technique, we propose a novel metric learning algorithm which
learns an optimal feature transformation matrix in the Riemannian space of
positive definite matrices. Preliminary experimental results on an optimal fund
selection problem in fund of funds (FOF) management for quantitative investment
showed its efficacy.
Comment: 8 pages, 2 figures; published as a conference paper at the 2019
International Joint Conference on Neural Networks (IJCNN).
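As a hedged sketch of the kind of iteration such a primal-dual scheme performs
for a constrained problem $\min_{x \in \mathcal{M}} f(x)$ subject to
$g(x) \le 0$ on a manifold $\mathcal{M}$ (illustrative notation only, not the
paper's exact algorithm):
\[
x_{k+1} = \operatorname*{arg\,min}_{x \in \mathcal{M}} \; f(x)
+ \langle \lambda_k, g(x) \rangle + \frac{1}{2\tau}\, d^2(x, x_k),
\qquad
\lambda_{k+1} = \bigl[\lambda_k + \sigma\, g(x_{k+1})\bigr]_{+},
\]
where $d$ is the Riemannian distance, $\tau$ and $\sigma$ are step sizes, and
$\lambda$ is the dual variable. The primal update is a proximal step with
respect to the Riemannian distance; in the metric learning application,
$\mathcal{M}$ would be the manifold of symmetric positive definite matrices and
$x$ the feature transformation matrix.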
International Conference on Continuous Optimization (ICCOPT) 2019 Conference Book
The Sixth International Conference on Continuous Optimization took place on the campus of the Technical University of Berlin, August 3-8, 2019. The ICCOPT is a flagship conference of the Mathematical Optimization Society (MOS), organized every three years. ICCOPT 2019 was hosted by the Weierstrass Institute for Applied Analysis and Stochastics (WIAS) Berlin. It included a Summer School and a Conference with a series of plenary and semi-plenary talks, organized and contributed sessions, and poster sessions.
This book comprises the full conference program. It contains, in particular, the scientific program, both in survey form and in full detail, as well as information on the social program, the venue, special meetings, and more.