Weakly-nonlocal Symplectic Structures, Whitham method, and weakly-nonlocal Symplectic Structures of Hydrodynamic Type
We consider a special type of field-theoretic symplectic structures called
weakly nonlocal. Structures of this type are, in particular, very common for
integrable systems such as KdV or NLS. We introduce here a special class of
weakly nonlocal symplectic structures, which we call weakly nonlocal
symplectic structures of hydrodynamic type. We then investigate the connection
of such structures with the Whitham averaging method and propose a procedure
for "averaging" weakly nonlocal symplectic structures. The averaging procedure
yields a weakly nonlocal symplectic structure of hydrodynamic type for the
corresponding Whitham system. The procedure also gives the "action variables"
corresponding to the wave numbers of $m$-phase solutions of the initial
system, which provide additional conservation laws for the Whitham system.
Comment: 64 pages, LaTeX
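For orientation, a sketch of the general weakly nonlocal ansatz, written here in the spirit of Maltsev-Novikov weakly nonlocal operators (the precise hydrodynamic-type specialization is given in the paper itself):

\[
\Omega_{ij}(x,y) \,=\, \sum_{k \ge 0} \omega^{(k)}_{ij}(u, u_x, \dots)\, \delta^{(k)}(x-y) \,+\, \sum_{s} \kappa_s\, q^{(s)}_i(x)\, \nu(x-y)\, q^{(s)}_j(y),
\qquad \nu(x-y) = \tfrac{1}{2}\,\operatorname{sgn}(x-y),
\]

with a finite local part (a sum of delta-function derivatives) and a nonlocal tail built from densities $q^{(s)}$ depending on finitely many $x$-derivatives of the field $u$.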
Riemannian stochastic approximation algorithms
We examine a wide class of stochastic approximation algorithms for solving
(stochastic) nonlinear problems on Riemannian manifolds. Such algorithms arise
naturally in the study of Riemannian optimization, game theory and optimal
transport, but their behavior is much less understood compared to the Euclidean
case because of the lack of a global linear structure on the manifold. We
overcome this difficulty by introducing a suitable Fermi coordinate frame which
allows us to map the asymptotic behavior of the Riemannian Robbins-Monro (RRM)
algorithms under study to that of an associated deterministic dynamical system.
In so doing, we provide a general template of almost sure convergence results
that mirrors and extends the existing theory for Euclidean Robbins-Monro
schemes, despite the significant complications that arise due to the curvature
and topology of the underlying manifold. We showcase the flexibility of the
proposed framework by applying it to a range of retraction-based variants of
the popular optimistic / extra-gradient methods for solving minimization
problems and games, and we provide a unified treatment of their convergence.
Comment: 33 pages, 2 figures; a one-page abstract of this paper was presented
at COLT 202
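As a concrete illustration of such a scheme (a minimal sketch on the unit sphere only, not the paper's general framework; the objective, noise level, and step-size schedule below are illustrative assumptions):

```python
import numpy as np

def tangent_project(x, g):
    """Project an ambient vector g onto the tangent space of the unit sphere at x."""
    return g - np.dot(g, x) * x

def sphere_retraction(x, v):
    """Metric-projection retraction on the unit sphere: R_x(v) = (x + v) / ||x + v||."""
    y = x + v
    return y / np.linalg.norm(y)

def riemannian_robbins_monro(grad_oracle, x0, n_steps=20000, gamma0=1.0):
    """Riemannian Robbins-Monro iteration x_{n+1} = R_{x_n}(-gamma_n * v_n),
    where v_n is a noisy estimate of the Riemannian gradient at x_n."""
    x = x0 / np.linalg.norm(x0)
    for n in range(1, n_steps + 1):
        gamma = gamma0 / n                      # step sizes: sum = inf, sum of squares < inf
        v = tangent_project(x, grad_oracle(x))  # noisy tangent-space direction
        x = sphere_retraction(x, -gamma * v)
    return x

# Toy run: minimize f(x) = -<a, x> over the sphere (minimizer a/||a||)
# with a noisy Euclidean gradient oracle.
rng = np.random.default_rng(0)
a = np.array([3.0, 0.0, 4.0])
oracle = lambda x: -a + 0.1 * rng.standard_normal(3)
print(riemannian_robbins_monro(oracle, rng.standard_normal(3)))  # approx [0.6, 0.0, 0.8]
```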
Means and Averaging on Riemannian Manifolds
Processing of manifold-valued data has received considerable attention in recent years. Standard data processing methods are not adequate for such data. Among the many related data processing tasks, finding means or averages of manifold-valued data is a basic and important one. Although means on Riemannian manifolds have a long history, there are still many unanswered theoretical questions about them, some of which we try to answer. We focus on two classes of means: the Riemannian mean and the recursive-iterative means. The Riemannian mean is defined as the solution(s) of a minimization problem, while the recursive-iterative means are defined through the notion of Mean-Invariance (MI) in a recursive and iterative process.

We give a new existence and uniqueness result for the Riemannian mean. Its significant consequence is that the local and global definitions of the Riemannian mean coincide under an uncompromised condition guaranteeing uniqueness of the local mean. We also study smoothness, isometry compatibility, convexity, and noise sensitivity properties of the mean. In particular, we argue that positive sectional curvature of a manifold can cause high sensitivity to noise for the mean, which might lead to a non-averaging behavior of that mean. We show that the mean on a manifold of positive curvature can still have an averaging property in a weak sense.

We introduce the notion of MI and study a large class of recursive-iterative means. MI means are related to an interesting class of dynamical systems that can find Riemannian convex combinations. A special class of MI means called the pairwise mean, which through an iterative scheme called Perimeter Shrinkage is related to cyclic pursuit on manifolds, is also studied.

Finally, we derive results specific to the special orthogonal group and the Grassmannian manifold, as these manifolds appear naturally in many applications. We identify norm-induced Finsler balls of appropriate radius in these manifolds as domains for existence and uniqueness of the studied means, and we introduce efficient numerical methods to perform the related calculations in these manifolds.
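To make the first of these two notions concrete, here is a minimal sketch of the Riemannian (Karcher/Frechet) mean on the unit sphere only; the thesis treats general manifolds, SO(n), and the Grassmannian, and the data below are illustrative:

```python
import numpy as np

def sphere_log(x, y):
    """Riemannian logarithm on the unit sphere: the tangent vector at x
    whose exponential reaches y; its norm is the geodesic distance."""
    d = y - np.dot(x, y) * x                      # tangent component of y at x
    nd = np.linalg.norm(d)
    theta = np.arccos(np.clip(np.dot(x, y), -1.0, 1.0))
    return np.zeros_like(x) if nd < 1e-12 else (theta / nd) * d

def sphere_exp(x, v):
    """Riemannian exponential on the unit sphere: follow the great circle from x along v."""
    nv = np.linalg.norm(v)
    return x if nv < 1e-12 else np.cos(nv) * x + np.sin(nv) * (v / nv)

def riemannian_mean(points, n_iter=100, step=1.0):
    """Intrinsic gradient descent for argmin_x sum_i d(x, x_i)^2:
    repeatedly step along the average of the logarithm maps."""
    x = points[0] / np.linalg.norm(points[0])
    for _ in range(n_iter):
        v = np.mean([sphere_log(x, p) for p in points], axis=0)
        x = sphere_exp(x, step * v)
    return x

# Example: mean of three nearby points on S^2.
pts = [np.array(p, dtype=float) for p in ([1, 0.1, 0.0], [1, 0.0, 0.1], [1, -0.1, -0.1])]
pts = [p / np.linalg.norm(p) for p in pts]
print(riemannian_mean(pts))
```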
The Role of Riemannian Manifolds in Computer Vision: From Coding to Deep Metric Learning
A wide range of tasks in computer vision and machine learning
benefit from representations of data that are compact yet
discriminative, informative, and robust to imperfect measurements.
Two notable such representations are Region Covariance
Descriptors (RCovDs) and linear subspaces, which are naturally
analyzed through the manifold of Symmetric Positive Definite
(SPD) matrices and the Grassmann manifold, respectively, two
widely used types of Riemannian manifolds in computer vision.
As our first objective, we examine image- and video-based
recognition applications where the local descriptors have the
aforementioned Riemannian structures, namely the SPD or linear
subspace structure. We first provide a way to compute a
Riemannian version of the conventional Vector of Locally
Aggregated Descriptors (VLAD), using the geodesic distance of the
underlying manifold as the nearness measure. Next, by taking a
closer look at the resulting codes, we formulate a new concept
which we name Local Difference Vectors (LDVs). LDVs enable us to
elegantly extend our Riemannian coding techniques to any
arbitrary metric, as well as to provide intrinsic solutions to
Riemannian sparse coding and its variants when local structured
descriptors are considered.
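As one simple illustration of such a coding step, here is a sketch of VLAD over SPD-valued descriptors under the log-Euclidean metric; this is one intrinsic variant, not necessarily the exact construction of the thesis, and the LDV machinery is not reproduced:

```python
import numpy as np
from scipy.linalg import logm

def spd_logmap(X):
    """Matrix logarithm, mapping an SPD matrix into the (flat) log-Euclidean space."""
    return logm(X).real

def riemannian_vlad(descriptors, codebook):
    """VLAD for SPD-valued local descriptors: assign each descriptor to the
    geodesically nearest codeword and accumulate residuals in the log domain."""
    log_codes = [spd_logmap(C) for C in codebook]
    residuals = [np.zeros_like(log_codes[0]) for _ in codebook]
    for X in descriptors:
        L = spd_logmap(X)
        # log-Euclidean geodesic distance: ||log X - log C||_F
        k = int(np.argmin([np.linalg.norm(L - LC, 'fro') for LC in log_codes]))
        residuals[k] += L - log_codes[k]
    v = np.concatenate([R.ravel() for R in residuals])
    return v / (np.linalg.norm(v) + 1e-12)   # L2-normalized VLAD vector
```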
We then turn our attention to two special types of covariance
descriptors, namely infinite-dimensional RCovDs and rank-deficient
covariance matrices, for which the underlying Riemannian
structure, i.e., the manifold of SPD matrices, is largely out of
reach.
To overcome this difficulty, we propose to approximate the
infinite-dimensional RCovDs by making use of two feature
mappings, namely random Fourier features and the Nyström method.
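A sketch of the first of these approximations, using Rahimi-Recht random Fourier features for the Gaussian kernel; the kernel choice, bandwidth, and feature count are illustrative assumptions:

```python
import numpy as np

def random_fourier_features(X, n_features=256, gamma=1.0, seed=0):
    """Random Fourier features approximating the Gaussian RBF kernel
    k(x, y) = exp(-gamma * ||x - y||^2), so that z(x)^T z(y) ~ k(x, y)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

def approximate_rcovd(local_descriptors, n_features=256, gamma=1.0):
    """Finite-dimensional surrogate of an infinite-dimensional RCovD:
    the covariance of the lifted features stands in for the RKHS covariance operator."""
    Z = random_fourier_features(local_descriptors, n_features, gamma)
    return np.cov(Z, rowvar=False)
```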
As for the rank-deficient covariance matrices, unlike most
existing approaches that employ inference tools with predefined
regularizers, we derive positive definite kernels that can be
decomposed into kernels on the cone of SPD matrices and kernels
on Grassmann manifolds, and we show their effectiveness for the
image set classification task.
Furthermore, inspired by the attractive properties of Riemannian
optimization techniques, we extend the recently introduced Keep
It Simple and Straightforward MEtric learning (KISSME) method to
scenarios where the input data is non-linearly distributed. To
this end, we make use of infinite-dimensional covariance
matrices and propose techniques for projecting onto the
positive cone in a Reproducing Kernel Hilbert Space (RKHS).
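For reference, a sketch of the plain Euclidean KISSME estimator being lifted here, together with the projection onto the positive semidefinite cone; in the RKHS version the sample covariances become covariance operators:

```python
import numpy as np

def project_psd(M):
    """Projection onto the positive semidefinite cone: clip negative eigenvalues."""
    w, V = np.linalg.eigh((M + M.T) / 2.0)
    return (V * np.clip(w, 0.0, None)) @ V.T

def kissme(diffs_similar, diffs_dissimilar, eps=1e-6):
    """Euclidean KISSME: M = inv(Cov_S) - inv(Cov_D), estimated from pairwise
    difference vectors of similar (S) and dissimilar (D) training pairs,
    then projected back onto the PSD cone to obtain a valid Mahalanobis metric."""
    d = diffs_similar.shape[1]
    cov_s = np.cov(diffs_similar, rowvar=False) + eps * np.eye(d)
    cov_d = np.cov(diffs_dissimilar, rowvar=False) + eps * np.eye(d)
    M = np.linalg.inv(cov_s) - np.linalg.inv(cov_d)
    return project_psd(M)
```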
We also address the sensitivity of KISSME to the input
dimensionality. The KISSME algorithm depends heavily on
Principal Component Analysis (PCA) as a preprocessing step, which
can lead to difficulties, especially when the dimensionality is
not chosen carefully. To address this issue, building on the
KISSME algorithm, we develop a Riemannian framework to jointly
learn a mapping that performs dimensionality reduction and a
metric in the induced space.
Lastly, in line with the recent trend in metric learning, we
devise end-to-end learning of a generic deep network for metric
learning using our derivations.
Partial regularity for manifold constrained p(x)-harmonic maps
We prove that manifold constrained $p(x)$-harmonic maps are
$C^{1,\beta}_{\mathrm{loc}}$-regular outside a set of zero $n$-dimensional
Lebesgue measure, for some $\beta \in (0,1)$. We also provide an upper bound
on the Hausdorff dimension of the singular set.
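For context (standard background, not quoted from the abstract): $p(x)$-harmonic maps arise as constrained local minimizers of the variable-exponent energy

\[
\mathcal{E}(u) \,=\, \int_{\Omega} |Du|^{p(x)}\,dx, \qquad u \in W^{1,p(\cdot)}(\Omega; \mathbb{R}^m), \quad u(x) \in \mathcal{N} \ \text{a.e. in } \Omega,
\]

where $\mathcal{N} \subset \mathbb{R}^m$ is a smooth, compact target manifold and the exponent satisfies $1 < \gamma_1 \le p(x) \le \gamma_2 < \infty$.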
Inference and Model Parameter Learning for Image Labeling by Geometric Assignment
Image labeling is a fundamental problem in the area of low-level image analysis. In this work, we present novel approaches to maximum a posteriori (MAP) inference and model
parameter learning for image labeling, respectively. Both approaches are formulated in a smooth geometric setting, whose respective solution space is a simple Riemannian manifold. Optimization
consists of multiplicative updates that geometrically integrate the resulting Riemannian gradient flow.
Our novel approach to MAP inference is based on discrete graphical models. By utilizing local Wasserstein distances for coupling assignment measures across edges of the
underlying graph, we smoothly approximate a given discrete objective function and restrict it to the
assignment manifold. A corresponding update scheme combines geometric integration of the resulting gradient flow, and rounding to integral solutions that represent
valid labelings. This formulation constitutes an inner relaxation of the discrete labeling problem, i.e. throughout this process local marginalization constraints known from the established linear programming relaxation are satisfied.
Furthermore, we study the inverse problem of model parameter learning using the linear assignment flow and training data with ground truth. This is accomplished by a Riemannian gradient flow on the manifold of parameters that determine the regularization properties of the assignment flow. This smooth formulation enables us to tackle the model parameter learning problem from the perspective of parameter estimation of dynamical systems. By using symplectic partitioned Runge-Kutta methods for numerical integration, we show that deriving the sensitivity conditions of the parameter learning problem and its discretization commute. A favorable property of our approach is that learning is based on exact inference.
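A schematic of the multiplicative update underlying such geometric integration (a sketch only; the actual scheme couples assignments across graph edges via local Wasserstein distances and uses higher-order integrators):

```python
import numpy as np

def multiplicative_update(W, S, h=0.1):
    """One geometric-integration step of a replicator-type flow on the
    assignment manifold (each row of W lies on the probability simplex):
    W <- row-normalize(W * exp(h * S)) for a similarity/gradient field S."""
    W_new = W * np.exp(h * S)
    return W_new / W_new.sum(axis=1, keepdims=True)

def round_to_labels(W):
    """Round assignments to an integral labeling (argmax per pixel/node)."""
    return W.argmax(axis=1)
```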