
    Computation of Ground States of the Gross-Pitaevskii Functional via Riemannian Optimization

    Full text link
    In this paper we combine concepts from Riemannian optimization and the theory of Sobolev gradients to derive a new conjugate gradient method for direct minimization of the Gross-Pitaevskii energy functional with rotation. The conservation of the number of particles constrains the minimizers to lie on a manifold corresponding to the unit $L^2$ norm. The idea developed here is to transform the original constrained optimization problem into an unconstrained problem on this (spherical) Riemannian manifold, so that fast minimization algorithms can be applied as alternatives to more standard constrained formulations. First, we obtain Sobolev gradients using an equivalent definition of an $H^1$ inner product which takes rotation into account. Then, the Riemannian gradient (RG) steepest descent method is derived based on projected gradients and retraction of an intermediate solution back to the constraint manifold. Finally, we use the concept of Riemannian vector transport to propose a Riemannian conjugate gradient (RCG) method for this problem. It is derived at the continuous level based on the "optimize-then-discretize" paradigm instead of the usual "discretize-then-optimize" approach, as this ensures robustness of the method when adaptive mesh refinement is performed in computations. We evaluate various design choices inherent in the formulation of the method and conclude with recommendations concerning selection of the best options. Numerical tests demonstrate that the proposed RCG method outperforms the simple gradient descent (RG) method in terms of rate of convergence. While on simple problems a Newton-type method implemented in the Ipopt library exhibits faster convergence than the RCG approach, the two methods perform similarly on more complex problems requiring the use of mesh adaptation. At the same time, the RCG approach has far fewer tunable parameters.
    Comment: 28 pages, 13 figures
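    The project-retract-transport structure described in this abstract can be sketched in a few lines. Below is a minimal illustrative sketch in Python/NumPy of a Riemannian CG iteration on a finite-dimensional unit sphere standing in for the discretized $L^2$ sphere; the fixed step size, the Fletcher-Reeves beta, and the plain Euclidean gradient in place of the paper's Sobolev $H^1$ gradient are all simplifying assumptions, and `energy_grad` is a hypothetical callback, not the paper's implementation.

```python
import numpy as np

def project_tangent(x, v):
    # Orthogonal projection of v onto the tangent space of the unit sphere at x.
    return v - np.dot(x, v) * x

def retract(x):
    # Retraction: renormalize back onto the constraint manifold (unit norm).
    return x / np.linalg.norm(x)

def riemannian_cg(energy_grad, x0, step=1e-2, iters=500):
    # Riemannian CG: projected gradient, retraction of the updated point,
    # and vector transport of the old direction via projection at the new point.
    x = retract(x0)
    g = project_tangent(x, energy_grad(x))
    d = -g
    for _ in range(iters):
        x_new = retract(x + step * d)
        g_new = project_tangent(x_new, energy_grad(x_new))
        beta = np.dot(g_new, g_new) / np.dot(g, g)      # Fletcher-Reeves beta
        d = -g_new + beta * project_tangent(x_new, d)   # transported direction
        x, g = x_new, g_new
    return x
```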

    Effective Modified Hybrid Conjugate Gradient Method for Large-Scale Symmetric Nonlinear Equations

    Get PDF
    In this paper, we propose a hybrid conjugate gradient method, built as a convex combination of the FR and PRP conjugate gradient methods, for solving large-scale symmetric nonlinear equations via Andrei's approach with a nonmonotone line search. A formula for obtaining the convex parameter, based on the Newton direction and our proposed direction, is also derived. Under appropriate conditions, global convergence is established. Reported numerical results show that the proposed method is very promising.
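    As a rough illustration of the convex-combination idea, the Python/NumPy sketch below forms a hybrid CG parameter from the FR and PRP formulas. The paper derives the mixing parameter from the Newton direction; that derivation is not reproduced here, so `theta` is taken as an input.

```python
import numpy as np

def hybrid_fr_prp_beta(g_new, g_old, theta):
    # Convex combination of the Fletcher-Reeves (FR) and
    # Polak-Ribiere-Polyak (PRP) conjugate gradient parameters.
    gg_old = np.dot(g_old, g_old)
    beta_fr = np.dot(g_new, g_new) / gg_old
    beta_prp = np.dot(g_new, g_new - g_old) / gg_old
    theta = np.clip(theta, 0.0, 1.0)  # keep the combination convex
    return (1.0 - theta) * beta_fr + theta * beta_prp
```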

    A Spectral Dai-Yuan-Type Conjugate Gradient Method for Unconstrained Optimization

    Get PDF
    In this paper, a new spectral conjugate gradient method (SDYCG) is presented for solving unconstrained optimization problems. Our method provides a new expression for the spectral parameter; this formula ensures that the sufficient descent condition holds. The search direction in SDYCG can be viewed as a combination of the spectral gradient and the Dai-Yuan conjugate gradient. The global convergence of SDYCG is also established. Numerical results show that SDYCG may be capable of solving large-scale nonlinear unconstrained optimization problems.
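    One plausible reading of such a direction, sketched below in Python/NumPy: a spectral parameter scales the steepest-descent part, while the standard Dai-Yuan term adds conjugacy. The paper's specific formula for the spectral parameter is not reproduced here, so `theta` enters as an argument; this is an assumption, not the paper's exact scheme.

```python
import numpy as np

def sdy_direction(g_new, d_old, y, theta):
    # Spectral Dai-Yuan-type direction: theta scales the gradient part;
    # beta_dy is the Dai-Yuan CG parameter, with y = g_new - g_old
    # the most recent gradient difference.
    beta_dy = np.dot(g_new, g_new) / np.dot(d_old, y)
    return -theta * g_new + beta_dy * d_old
```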

    A Dai-Liao Hybrid Hestenes-Stiefel and Fletcher-Reeves Method for Unconstrained Optimization

    Get PDF
    Some problems have no analytical solution or are too difficult for scientists, engineers, and mathematicians to solve exactly, so the development of numerical methods for obtaining approximate solutions became necessary. Gradient methods are most efficient when the function to be minimized is continuously differentiable. This article therefore presents a new hybrid Conjugate Gradient (CG) method for solving unconstrained optimization problems. The method requires only first-order derivatives, overcomes the steepest descent method's shortcoming of slow convergence, and does not need to store or compute the second-order derivatives required by the Newton method. The CG update parameter is derived from the Dai-Liao conjugacy condition as a convex combination of the Hestenes-Stiefel and Fletcher-Reeves algorithms, employing an optimal modulating choice parameter to avoid matrix storage. The numerical computation adopts an inexact line search to obtain a step size that generates a descent property, showing that the algorithm is robust and efficient. The scheme converges globally under the Wolfe line search and is likewise suitable for compressive sensing problems and M-tensor systems; a sketch of the hybrid iteration follows below.
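    The Python/NumPy sketch below shows what such a hybrid CG loop could look like, using SciPy's Wolfe-condition line search as the inexact line search. The fixed `theta` is a stand-in for the paper's optimal modulating parameter, whose formula is not reproduced here.

```python
import numpy as np
from scipy.optimize import line_search

def hybrid_hs_fr_cg(f, grad, x0, theta=0.5, tol=1e-6, max_iter=1000):
    # Hybrid CG: the update parameter is a convex combination of the
    # Hestenes-Stiefel and Fletcher-Reeves formulas.
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = line_search(f, grad, x, d)[0]  # Wolfe-condition step size
        if alpha is None:                      # fall back if the search fails
            alpha = 1e-4
        x_new = x + alpha * d
        g_new = grad(x_new)
        y = g_new - g
        beta_hs = np.dot(g_new, y) / np.dot(d, y)         # Hestenes-Stiefel
        beta_fr = np.dot(g_new, g_new) / np.dot(g, g)     # Fletcher-Reeves
        beta = (1.0 - theta) * beta_hs + theta * beta_fr  # convex hybrid
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x
```

    For example, `hybrid_hs_fr_cg(lambda x: x @ x, lambda x: 2 * x, np.ones(5))` converges to the origin in a few iterations.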

    Modified memoryless spectral-scaling Broyden family on Riemannian manifolds

    Full text link
    This paper presents modified memoryless quasi-Newton methods based on the spectral-scaling Broyden family on Riemannian manifolds. The method involves adding one parameter to the search direction of the memoryless self-scaling Broyden family on the manifold. Moreover, it uses a general map instead of vector transport. This idea has already been proposed within a general framework of Riemannian conjugate gradient methods, where one can use vector transport, scaled vector transport, or an inverse retraction. We show that the search direction satisfies the sufficient descent condition under some assumptions on the parameters. In addition, we show global convergence of the proposed method under the Wolfe conditions. We numerically compare it with existing methods, including Riemannian conjugate gradient methods and the memoryless spectral-scaling Broyden family. The numerical results indicate that the proposed method with the BFGS formula is suitable for solving an off-diagonal cost function minimization problem on an oblique manifold.
    Comment: 20 pages, 8 figures
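    For intuition, here is a minimal Euclidean sketch (Python/NumPy) of the memoryless BFGS direction, i.e. the Broyden-family member with the BFGS choice, obtained by applying the inverse-Hessian update built from the single most recent pair (s, y) to the identity. On a manifold, s and y would be tangent vectors moved to the current tangent space by a vector transport or the paper's general map, and the spectral-scaling and extra parameters are omitted; this Euclidean simplification is an assumption, not the paper's method.

```python
import numpy as np

def memoryless_bfgs_direction(g, s, y):
    # d = -H g, where H is the BFGS inverse-Hessian update of the identity
    # using only the most recent step s and gradient difference y:
    # H = I - (s y^T + y s^T)/(s^T y) + (1 + y^T y / s^T y) s s^T / (s^T y).
    sy = np.dot(s, y)
    sg = np.dot(s, g)
    yg = np.dot(y, g)
    return (-g + (sg / sy) * y
            + (yg / sy - (1.0 + np.dot(y, y) / sy) * (sg / sy)) * s)
```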