Computation of Ground States of the Gross-Pitaevskii Functional via Riemannian Optimization
In this paper we combine concepts from Riemannian Optimization and the theory
of Sobolev gradients to derive a new conjugate gradient method for direct
minimization of the Gross-Pitaevskii energy functional with rotation. The
conservation of the number of particles constrains the minimizers to lie on a
manifold corresponding to the unit norm. The idea developed here is to
transform the original constrained optimization problem to an unconstrained
problem on this (spherical) Riemannian manifold, so that fast minimization
algorithms can be applied as alternatives to more standard constrained
formulations. First, we obtain Sobolev gradients using an equivalent definition
of an inner product which takes into account rotation. Then, the
Riemannian gradient (RG) steepest descent method is derived based on projected
gradients and retraction of an intermediate solution back to the constraint
manifold. Finally, we use the concept of the Riemannian vector transport to
propose a Riemannian conjugate gradient (RCG) method for this problem. It is
derived at the continuous level based on the "optimize-then-discretize"
paradigm instead of the usual "discretize-then-optimize" approach, as this
ensures robustness of the method when adaptive mesh refinement is performed in
computations. We evaluate various design choices inherent in the formulation of
the method and conclude with recommendations concerning selection of the best
options. Numerical tests demonstrate that the proposed RCG method outperforms
the simple gradient descent (RG) method in terms of rate of convergence. While
on simple problems a Newton-type method implemented in the {\tt Ipopt} library
exhibits faster convergence than the RCG approach, the two methods perform
similarly on more complex problems requiring the use of mesh adaptation. At the
same time, the RCG approach has far fewer tunable parameters.
Comment: 28 pages, 13 figures
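The projected-gradient-plus-retraction step described above can be sketched in a simple finite-dimensional setting. The following is a minimal illustration, not the paper's Sobolev-gradient method: it minimizes a quadratic form over the unit sphere (the Rayleigh quotient) using the plain Euclidean inner product rather than the rotation-aware Sobolev metric, and the function name and parameters are illustrative.

```python
import numpy as np

def riemannian_gd_sphere(A, x0, lr=0.1, iters=500):
    """Minimize f(x) = x^T A x over the unit sphere by Riemannian
    gradient descent: project the Euclidean gradient onto the tangent
    space, take a step, then retract by renormalizing."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        g = 2.0 * A @ x                # Euclidean gradient of x^T A x
        rg = g - (x @ g) * x           # projection onto the tangent space at x
        x = x - lr * rg                # step in the tangent direction
        x = x / np.linalg.norm(x)      # retraction: back onto the sphere
    return x

# Minimizing the Rayleigh quotient recovers an eigenvector of the
# smallest eigenvalue (here 1, with eigenvector e_2).
A = np.diag([3.0, 1.0, 2.0])
x = riemannian_gd_sphere(A, np.array([1.0, 1.0, 1.0]))
```

Normalization is the standard retraction for the sphere; the projected gradient plays the role of the Riemannian gradient in this flat metric.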
A Spectral Dai-Yuan-Type Conjugate Gradient Method for Unconstrained Optimization
A new spectral conjugate gradient method (SDYCG) is presented in this paper for solving unconstrained optimization problems. Our method provides a new expression for the spectral parameter, and this formula ensures that the sufficient descent condition holds. The search direction in the SDYCG can be viewed as a combination of the spectral gradient and the Dai-Yuan conjugate gradient direction. The global convergence of the SDYCG is also established. Numerical results show that the SDYCG may be capable of solving large-scale nonlinear unconstrained optimization problems.
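As a rough sketch of the ingredients named above, the following combines the classical Dai-Yuan beta with a backtracking Armijo line search. This is not the SDYCG itself (which adds a spectral parameter and its own safeguards); all names and constants here are illustrative.

```python
import numpy as np

def dai_yuan_cg(f, grad, x, iters=100, tol=1e-8):
    """Nonlinear conjugate gradient with the Dai-Yuan beta
    beta = ||g_new||^2 / (d^T (g_new - g)) and a backtracking
    (Armijo) line search."""
    g = grad(x)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        t, fx, slope = 1.0, f(x), g @ d
        while f(x + t * d) > fx + 1e-4 * t * slope:   # Armijo backtracking
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta = (g_new @ g_new) / (d @ (g_new - g))    # Dai-Yuan formula
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Strongly convex quadratic with minimizer (1, 2).
f = lambda x: (x[0] - 1.0) ** 2 + 2.0 * (x[1] - 2.0) ** 2
grad = lambda x: np.array([2.0 * (x[0] - 1.0), 4.0 * (x[1] - 2.0)])
x_star = dai_yuan_cg(f, grad, np.array([0.0, 0.0]))
```

On a quadratic the Dai-Yuan denominator d^T(g_new - g) is positive for any positive step, which keeps the direction a descent direction here; in general the Wolfe conditions are used to guarantee this.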
Modified memoryless spectral-scaling Broyden family on Riemannian manifolds
This paper presents modified memoryless quasi-Newton methods based on the
spectral-scaling Broyden family on Riemannian manifolds. The method involves
adding one parameter to the search direction of the memoryless self-scaling
Broyden family on the manifold. Moreover, it uses a general map instead of
vector transport. This idea has already been proposed within a general
framework of Riemannian conjugate gradient methods where one can use vector
transport, scaled vector transport, or an inverse retraction. We show that the
search direction satisfies the sufficient descent condition under some
assumptions on the parameters. In addition, we show global convergence of the
proposed method under the Wolfe conditions. We numerically compare it with
existing methods, including Riemannian conjugate gradient methods and the
memoryless spectral-scaling Broyden family. The numerical results indicate that
the proposed method with the BFGS formula is suitable for solving an
off-diagonal cost function minimization problem on an oblique manifold.
Comment: 20 pages, 8 figures
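For orientation, the memoryless idea can be sketched in the Euclidean case: the inverse-Hessian approximation is the BFGS update of a scaled identity using only the most recent pair (s, y), so no history is stored. This is a generic sketch of the memoryless BFGS member of the Broyden family, not the paper's Riemannian method with general maps; the function name is illustrative.

```python
import numpy as np

def memoryless_bfgs_direction(g, s, y, gamma=1.0):
    """Memoryless BFGS search direction d = -H g, where H is the BFGS
    update of the scaled identity gamma*I using only the most recent
    step s = x_new - x_old and gradient difference y = g_new - g_old."""
    n = g.size
    rho = 1.0 / (y @ s)                       # requires y^T s > 0
    V = np.eye(n) - rho * np.outer(s, y)
    H = gamma * V @ V.T + rho * np.outer(s, s)
    return -H @ g

# Example pair with y^T s > 0.
s = np.array([1.0, 0.0])
y = np.array([2.0, 1.0])
```

By construction H satisfies the secant equation H y = s, and H is positive definite whenever y^T s > 0, which yields a descent direction.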
On Quasi-Newton Forward--Backward Splitting: Proximal Calculus and Convergence
We introduce a framework for quasi-Newton forward--backward splitting
algorithms (proximal quasi-Newton methods) with a metric induced by diagonal
$\pm$ rank-$r$ symmetric positive definite matrices. This special type of
metric allows for a highly efficient evaluation of the proximal mapping. The
key to this efficiency is a general proximal calculus in the new metric. By
using duality, formulas are derived that relate the proximal mapping in a
rank-$r$ modified metric to the original metric. We also describe efficient
implementations of the proximity calculation for a large class of functions;
the implementations exploit the piecewise linear nature of the dual problem.
Then, we apply these results to acceleration of composite convex minimization
problems, which leads to elegant quasi-Newton methods for which we prove
convergence. The algorithm is tested on several numerical examples and compared
to a comprehensive list of alternatives in the literature. Our quasi-Newton
splitting algorithm with the prescribed metric compares favorably against the
state of the art. The algorithm has extensive applications, including signal
processing, sparse recovery, machine learning, and classification, to name a few.
Comment: arXiv admin note: text overlap with arXiv:1206.115
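The forward-backward iteration with a variable metric can be sketched for the simplest case: g = lam*||x||_1 with a purely diagonal metric, where the proximal mapping is separable soft-thresholding with per-coordinate thresholds. This only illustrates the metric-dependent prox; the paper's general low-rank-modified metrics and their proximal calculus are not implemented here, and all names are illustrative.

```python
import numpy as np

def prox_l1_diag(v, lam, d):
    """Proximal map of lam*||x||_1 in the metric <x, diag(d) x>:
    separable soft-thresholding with per-coordinate threshold lam/d_i."""
    t = lam / d
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fb_splitting_diag(grad_f, x, lam, d, iters=100):
    """Forward-backward splitting for min_x f(x) + lam*||x||_1 with a
    fixed diagonal metric diag(d) (each coordinate uses step 1/d_i)."""
    for _ in range(iters):
        x = prox_l1_diag(x - grad_f(x) / d, lam, d)
    return x

# Tiny instance: f(x) = 0.5*||x - b||^2, whose minimizer is b
# soft-thresholded by lam.
b = np.array([3.0, 0.2, -1.5])
grad_f = lambda x: x - b
x_star = fb_splitting_diag(grad_f, np.zeros(3), lam=0.5,
                           d=np.array([2.0, 1.0, 1.0]))
```

Note how the metric entries d_i act as per-coordinate inverse step sizes and rescale the thresholds; the fixed point is the same LASSO-type solution regardless of the (positive) diagonal chosen.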
Exploiting spatial sparsity for multi-wavelength imaging in optical interferometry
Optical interferometers provide multiple wavelength measurements. In order to
fully exploit the spectral and spatial resolution of these instruments, new
algorithms for image reconstruction have to be developed. Early attempts to
deal with multi-chromatic interferometric data consisted of recovering a
gray image of the object or independent monochromatic images in some spectral
bandwidths. The main challenge is now to recover the full 3-D (spatio-spectral)
brightness distribution of the astronomical target given all the available
data. We describe a new approach to implement multi-wavelength image
reconstruction in the case where the observed scene is a collection of
point-like sources. We show the gain in image quality (both spatially and
spectrally) achieved by globally taking into account all the data instead of
dealing with independent spectral slices. This is achieved thanks to a
regularization which favors spatial sparsity and spectral grouping of the
sources. Since the objective function is not differentiable, we had to develop
a specialized optimization algorithm which also accounts for non-negativity of
the brightness distribution.
Comment: This version has been accepted for publication in J. Opt. Soc. Am.
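The "spatial sparsity with spectral grouping" regularization described above is commonly realized with a group (l2,1-type) penalty whose proximal operator shrinks each pixel's spectrum as a whole. A minimal sketch, assuming rows index spatial pixels and columns index wavelengths; the paper's exact regularizer and algorithm may differ in detail.

```python
import numpy as np

def prox_group_l2(X, lam):
    """Row-wise group soft-thresholding (the prox of lam * sum of row
    l2-norms): each row of X, i.e. the spectrum of one spatial pixel,
    is shrunk toward zero as a single group."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(1.0 - lam / np.maximum(norms, 1e-30), 0.0)
    return scale * X

# 4 spatial pixels x 3 wavelengths: rows with l2-norm below lam are
# zeroed entirely (spatial sparsity), while surviving rows keep their
# spectral shape (spectral grouping).
X = np.array([[3.0, 4.0, 0.0],
              [0.1, 0.1, 0.1],
              [0.0, 0.0, 6.0],
              [0.2, 0.0, 0.1]])
Y = prox_group_l2(X, lam=1.0)
```

This is why the penalty favors a few point-like sources with coherent spectra rather than scattered single-wavelength artifacts.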
Forward-backward truncated Newton methods for convex composite optimization
This paper proposes two proximal Newton-CG methods for convex nonsmooth
optimization problems in composite form. The algorithms are based on a
reformulation of the original nonsmooth problem as the unconstrained
minimization of a continuously differentiable function, namely the
forward-backward envelope (FBE). The first algorithm is based on a standard
line search strategy, whereas the second combines the global efficiency
estimates of the corresponding first-order methods with fast asymptotic
convergence rates. Furthermore, both are computationally attractive, since
each Newton iteration requires only the approximate solution of a linear
system of usually small dimension.
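The forward-backward envelope mentioned above can be written down concretely for g = lam*||x||_1: it is the value of the forward-backward quadratic model at the prox point, a real-valued surrogate that is continuously differentiable for suitable gamma and shares its minimizers with the original composite objective. A small sketch under those assumptions:

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding: the proximal map of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fbe(x, f, grad_f, lam, gamma):
    """Forward-backward envelope of F(x) = f(x) + lam*||x||_1,
    evaluated via the forward-backward point z = prox(x - gamma*grad)."""
    g = grad_f(x)
    z = soft(x - gamma * g, gamma * lam)
    return (f(x) + g @ (z - x)
            + 0.5 / gamma * float(np.linalg.norm(z - x) ** 2)
            + lam * float(np.linalg.norm(z, 1)))

# Example: f(x) = 0.5*||x||^2. The envelope never exceeds F and
# matches it at minimizers of F (here x = 0).
f = lambda x: 0.5 * float(x @ x)
grad_f = lambda x: x
val = fbe(np.array([2.0]), f, grad_f, lam=1.0, gamma=0.5)
```

Minimizing this smooth function with Newton-CG is the core idea the abstract describes, since its gradient and (generalized) Hessian are available from f and the prox.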
Global Convergence of a Nonlinear Conjugate Gradient Method
A modified PRP nonlinear conjugate gradient method for solving unconstrained optimization problems is proposed. An important property of the proposed method is that the sufficient descent property is guaranteed independently of any line search. Under the Wolfe line search, the global convergence of the proposed method is established for nonconvex minimization. Numerical results show that the proposed method is effective and promising in comparison with the VPRP, CG-DESCENT, and DL+ methods.
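Since the convergence analysis above hinges on the Wolfe line search, a small utility that checks whether a trial step satisfies the weak Wolfe conditions may clarify what is being assumed; c1 and c2 are the usual sufficient-decrease and curvature parameters, and all names here are illustrative.

```python
import numpy as np

def satisfies_wolfe(f, grad, x, d, t, c1=1e-4, c2=0.9):
    """Check the weak Wolfe conditions for step length t along a descent
    direction d: Armijo sufficient decrease plus the curvature condition."""
    g0d = grad(x) @ d
    armijo = f(x + t * d) <= f(x) + c1 * t * g0d
    curvature = grad(x + t * d) @ d >= c2 * g0d
    return bool(armijo and curvature)

# On f(x) = 0.5*||x||^2 from x = 1 along d = -1: a moderate step passes,
# a tiny step fails the curvature condition, a huge step fails Armijo.
f = lambda x: 0.5 * float(x @ x)
grad = lambda x: x
x0 = np.array([1.0])
d = np.array([-1.0])
```

The curvature condition rules out uselessly small steps, which is exactly what the global-convergence arguments for PRP-type methods rely on.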