Stochastic gradient descent on Riemannian manifolds
Stochastic gradient descent is a simple approach to find the local minima of
a cost function whose evaluations are corrupted by noise. In this paper, we
develop a procedure extending stochastic gradient descent algorithms to the
case where the function is defined on a Riemannian manifold. We prove that, as
in the Euclidean case, the gradient descent algorithm converges to a critical
point of the cost function. The algorithm has numerous potential applications,
and is illustrated here by four examples. In particular a novel gossip
algorithm on the set of covariance matrices is derived and tested numerically.
Comment: A slightly shorter version has been published in IEEE Transactions on Automatic Control.
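
To make the update rule concrete, here is a minimal sketch of Riemannian SGD on the unit sphere, one of the simplest Riemannian manifolds: the noisy Euclidean gradient is projected onto the tangent space and the iterate is mapped back to the manifold by a normalization retraction. The least-squares cost, the decreasing step size, and all names below are illustrative assumptions, not taken from the paper.

```python
# Sketch: Riemannian SGD on the unit sphere S^{n-1}, assuming a noisy
# least-squares cost f(x) = E[(a^T x - b)^2]. Cost and names are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 5
x_true = rng.normal(size=n)
x_true /= np.linalg.norm(x_true)

def stochastic_grad(x):
    """Euclidean gradient of one noisy sample of the cost."""
    a = rng.normal(size=n)
    b = a @ x_true + 0.01 * rng.normal()   # noisy evaluation
    return 2.0 * (a @ x - b) * a

def riemannian_sgd(x0, steps=2000):
    x = x0 / np.linalg.norm(x0)
    for t in range(1, steps + 1):
        g = stochastic_grad(x)
        rg = g - (x @ g) * x               # project onto the tangent space at x
        x = x - (1.0 / t) * rg             # decreasing step size
        x /= np.linalg.norm(x)             # retraction back onto the sphere
    return x

x_hat = riemannian_sgd(rng.normal(size=n))
print(np.linalg.norm(x_hat - x_true))      # should be small
```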
Riemannian Natural Gradient Methods
This paper studies large-scale optimization problems on Riemannian manifolds
whose objective function is a finite sum of negative log-probability losses.
Such problems arise in various machine learning and signal processing
applications. By introducing the notion of Fisher information matrix in the
manifold setting, we propose a novel Riemannian natural gradient method, which
can be viewed as a natural extension of the natural gradient method from the
Euclidean setting to the manifold setting. We establish the almost-sure global
convergence of our proposed method under standard assumptions. Moreover, we
show that if the loss function satisfies certain convexity and smoothness
conditions and the input-output map satisfies a Riemannian Jacobian stability
condition, then our proposed method enjoys a local linear -- or, under the
Lipschitz continuity of the Riemannian Jacobian of the input-output map, even
quadratic -- rate of convergence. We then prove that the Riemannian Jacobian
stability condition will be satisfied by a two-layer fully connected neural
network with batch normalization with high probability, provided that the width
of the network is sufficiently large. This demonstrates the practical relevance
of our convergence rate result. Numerical experiments on applications arising
from machine learning demonstrate the advantages of the proposed method over
state-of-the-art ones.
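
The following schematic step illustrates the natural-gradient idea in the manifold setting: per-sample gradients are projected onto the tangent space, a damped empirical Fisher matrix is assembled from them, and the preconditioned direction is followed and retracted. The unit sphere, the empirical-Fisher approximation, the damping, and all names are illustrative assumptions rather than the paper's exact construction.

```python
# Sketch of one Riemannian natural-gradient step on the unit sphere, using a
# damped empirical Fisher matrix as a stand-in preconditioner (an assumption).
import numpy as np

def tangent_project(x, v):
    """Project v onto the tangent space of the sphere at x."""
    return v - (x @ v) * x

def natural_gradient_step(x, sample_grads, step=1.0, damping=1e-3):
    rgrads = np.array([tangent_project(x, g) for g in sample_grads])
    grad = rgrads.mean(axis=0)
    # Empirical Fisher: averaged outer products of per-sample Riemannian
    # gradients, damped so the linear system is well posed.
    fisher = rgrads.T @ rgrads / len(rgrads) + damping * np.eye(len(x))
    direction = np.linalg.solve(fisher, grad)
    x_new = x - step * tangent_project(x, direction)
    return x_new / np.linalg.norm(x_new)   # retraction by normalization

# Toy usage with stand-in per-sample gradients:
rng = np.random.default_rng(1)
x = rng.normal(size=4); x /= np.linalg.norm(x)
x = natural_gradient_step(x, rng.normal(size=(32, 4)))
```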
Decentralized projected Riemannian gradient method for smooth optimization on compact submanifolds
We consider the problem of decentralized nonconvex optimization over a
compact submanifold, where each local agent's objective function defined by the
local dataset is smooth. Leveraging the powerful tool of proximal smoothness,
we establish local linear convergence of the projected gradient descent method
with unit step size for solving the consensus problem over the compact
manifold. This serves as the basis for analyzing decentralized algorithms on
manifolds. Then, we propose two decentralized methods, namely the decentralized
projected Riemannian gradient descent (DPRGD) and the decentralized projected
Riemannian gradient tracking (DPRGT) methods. We establish their convergence
rates of $\mathcal{O}(1/\sqrt{K})$ and $\mathcal{O}(1/K)$, respectively, to
reach a stationary point. To the best of our knowledge, DPRGT is the first
decentralized algorithm to achieve exact convergence for solving decentralized
optimization over a compact manifold. The key ingredients in the proof are the
Lipschitz-type inequalities of the projection operator on the compact manifold
and smooth functions on the manifold, which could be of independent interest.
Finally, we demonstrate the effectiveness of our proposed methods compared to
state-of-the-art ones through numerical experiments on eigenvalue problems and
low-rank matrix completion.
Comment: 32 pages.
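
The flavor of the DPRGD update can be sketched in a few lines: agents mix their iterates through a doubly stochastic weight matrix, take a local Riemannian gradient step, and project back onto the manifold. The unit sphere as the compact submanifold, the fixed step size, and the function names below are illustrative assumptions, not the paper's setting.

```python
# Sketch of one DPRGD-style round on the unit sphere (an assumed toy manifold).
import numpy as np

def dprgd_round(X, W, local_grad, alpha=0.1):
    """X: (agents, n) iterates on the sphere; W: doubly stochastic mixing
    matrix; local_grad(i, x): agent i's Euclidean gradient at x."""
    mixed = W @ X                                # consensus (mixing) step
    X_new = np.empty_like(X)
    for i, x in enumerate(mixed):
        x = x / np.linalg.norm(x)                # project back onto the sphere
        g = local_grad(i, x)
        rg = g - (x @ g) * x                     # Riemannian gradient at x
        y = x - alpha * rg                       # local gradient step
        X_new[i] = y / np.linalg.norm(y)         # projection onto the manifold
    return X_new
```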
Distributed feedback-aided subspace concurrent opportunistic communications
This paper deals with the distributed subspace agreement problem for opportunistic communications in time division duplex (TDD) distributed networks. Since scenario-adapted opportunistic transmission schemes rely on locally sampled observations of the wireless environment, the degrees of freedom (DoF) sensed as available may differ from node to node. Transmitting information without agreeing on a common active subspace may incur a performance loss due to noise enhancement, energy loss, and inter-system interference. In this context, we propose two subspace concurrence schemes, with and without side information about neighboring users' DoF.