Computation of Ground States of the Gross-Pitaevskii Functional via Riemannian Optimization
In this paper we combine concepts from Riemannian Optimization and the theory
of Sobolev gradients to derive a new conjugate gradient method for direct
minimization of the Gross-Pitaevskii energy functional with rotation. The
conservation of the number of particles constrains the minimizers to lie on a
manifold corresponding to the unit norm. The idea developed here is to
transform the original constrained optimization problem to an unconstrained
problem on this (spherical) Riemannian manifold, so that fast minimization
algorithms can be applied as alternatives to more standard constrained
formulations. First, we obtain Sobolev gradients using an equivalent definition
of an inner product which takes into account rotation. Then, the
Riemannian gradient (RG) steepest descent method is derived based on projected
gradients and retraction of an intermediate solution back to the constraint
manifold. Finally, we use the concept of the Riemannian vector transport to
propose a Riemannian conjugate gradient (RCG) method for this problem. It is
derived at the continuous level based on the "optimize-then-discretize"
paradigm instead of the usual "discretize-then-optimize" approach, as this
ensures robustness of the method when adaptive mesh refinement is performed in
computations. We evaluate various design choices inherent in the formulation of
the method and conclude with recommendations concerning selection of the best
options. Numerical tests demonstrate that the proposed RCG method outperforms
the simple gradient descent (RG) method in terms of rate of convergence. While
on simple problems a Newton-type method implemented in the {\tt Ipopt} library
exhibits faster convergence than the RCG approach, the two methods perform
similarly on more complex problems requiring the use of mesh adaptation. At the
same time, the RCG approach has far fewer tunable parameters.
Comment: 28 pages, 13 figures
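The projected-gradient-plus-retraction idea described in the abstract above can be sketched in a finite-dimensional setting. The sketch below is an illustrative assumption, not the paper's method: it minimizes a simple Rayleigh quotient on the unit sphere in R^n (the paper minimizes the Gross-Pitaevskii functional with Sobolev gradients on a function-space sphere), but the three ingredients — Euclidean gradient, tangent-space projection, and retraction by renormalization — are the same.

```python
import numpy as np

def riemannian_gd(A, x0, step=0.05, iters=500):
    """Riemannian steepest descent on the unit sphere {x : ||x|| = 1}.

    Minimizes the Rayleigh quotient E(x) = x^T A x (a stand-in energy;
    the paper's functional is the Gross-Pitaevskii energy).
    """
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        g = 2.0 * A @ x                  # Euclidean gradient of x^T A x
        g_tan = g - (g @ x) * x          # project onto tangent space at x
        x = x - step * g_tan             # gradient step along the tangent
        x = x / np.linalg.norm(x)        # retraction: renormalize to sphere
    return x

A = np.diag([1.0, 2.0, 3.0, 4.0, 5.0])   # toy symmetric operator
x = riemannian_gd(A, np.ones(5))
# x converges to the first standard basis vector, the constrained minimizer
```

A conjugate-gradient variant would additionally transport the previous search direction to the new tangent space (vector transport) before combining it with the new projected gradient, which is the RCG construction the paper develops at the continuous level.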
Manifold interpolation and model reduction
One approach to parametric and adaptive model reduction is via the
interpolation of orthogonal bases, subspaces or positive definite system
matrices. In all these cases, the sampled inputs stem from matrix sets that
feature a geometric structure and thus form so-called matrix manifolds. This
work will be featured as a chapter in the upcoming Handbook on Model Order
Reduction (P. Benner, S. Grivet-Talocia, A. Quarteroni, G. Rozza, W.H.A.
Schilders, L.M. Silveira, eds., to appear with DE GRUYTER) and reviews the
numerical treatment of the most important matrix manifolds that arise in the
context of model reduction. Moreover, the principal approaches to data
interpolation and Taylor-like extrapolation on matrix manifolds are outlined
and complemented by algorithms in pseudo-code.
Comment: 37 pages, 4 figures, featured chapter of the upcoming "Handbook on Model Order Reduction"
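One simple instance of the tangent-space interpolation approaches surveyed in this chapter is log-Euclidean interpolation between symmetric positive definite (SPD) system matrices: map the samples to the tangent space with the matrix logarithm, interpolate linearly there, and map back with the matrix exponential, so the result is guaranteed to stay SPD. The two-point setting and function names below are illustrative assumptions.

```python
import numpy as np

def spd_log(S):
    """Matrix logarithm of an SPD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.log(w)) @ V.T

def spd_exp(X):
    """Matrix exponential of a symmetric matrix via eigendecomposition."""
    w, V = np.linalg.eigh(X)
    return V @ np.diag(np.exp(w)) @ V.T

def interp_spd(S0, S1, t):
    """Interpolate in the log (tangent) space, then map back to the manifold."""
    return spd_exp((1.0 - t) * spd_log(S0) + t * spd_log(S1))

S0 = np.diag([1.0, 4.0])
S1 = np.diag([4.0, 1.0])
St = interp_spd(S0, S1, 0.5)
# For these commuting inputs the midpoint is diag(2, 2), their geometric mean
```

Naive entry-wise linear interpolation of SPD matrices can lose definiteness; interpolating in the log space avoids this, which is the general motivation for working on the matrix manifold rather than in the ambient matrix space.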
Riemannian consensus for manifolds with bounded curvature
Consensus algorithms are popular distributed algorithms for computing aggregate quantities, such as averages, in ad-hoc wireless networks. However, existing algorithms mostly address the case where the measurements lie in Euclidean space. In this work we propose Riemannian consensus, a natural extension of existing averaging consensus algorithms to the case of Riemannian manifolds. Unlike previous generalizations, our algorithm is intrinsic and, in principle, can be applied to any complete Riemannian manifold. We give sufficient convergence conditions on Riemannian manifolds with bounded curvature and analyze the differences with respect to the Euclidean case. We test the proposed algorithm on synthetic data sampled from the space of rotations, the sphere and the Grassmann manifold.
This work was supported by NSF grant CNS-0834470. Recommended by Associate Editor L. Schenato.