
    Efficient Rank Reduction of Correlation Matrices

    Geometric optimisation algorithms are developed that efficiently find the nearest low-rank correlation matrix. We show, in numerical tests, that our methods compare favourably to the existing methods in the literature. The connection with the Lagrange multiplier method is established, along with an identification of whether a local minimum is a global minimum. An additional benefit of the geometric approach is that any weighted norm can be applied. The problem of finding the nearest low-rank correlation matrix occurs as part of the calibration of multi-factor interest rate market models to correlation. Comment: First version: 20 pages, 4 figures. Second version (changed content): 21 pages, 6 figures.
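    As a point of reference for the problem above, the following is a minimal numpy sketch of the standard eigenvalue-truncation baseline for the nearest low-rank correlation matrix (truncate the spectrum, then rescale rows to restore the unit diagonal). It is a common heuristic used in market-model calibration, not the geometric optimisation method of the paper, and the function name and test matrix are illustrative.

    import numpy as np

    def low_rank_correlation_baseline(C, k):
        """Heuristic rank-k approximation of a correlation matrix C."""
        w, V = np.linalg.eigh(C)                         # eigenvalues in ascending order
        idx = np.argsort(w)[::-1][:k]                    # keep the k largest eigenvalues
        B = V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0)) # n-by-k factor
        B /= np.linalg.norm(B, axis=1, keepdims=True)    # unit rows restore the unit diagonal
        return B @ B.T                                   # rank-k correlation matrix

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        A = rng.standard_normal((6, 6))
        C = A @ A.T
        d = np.sqrt(np.diag(C))
        C = C / np.outer(d, d)                           # a full-rank correlation matrix
        Ck = low_rank_correlation_baseline(C, 3)
        print(np.linalg.matrix_rank(Ck), np.allclose(np.diag(Ck), 1.0))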

    A dynamical view of nonlinear conjugate gradient methods with applications to FFT-based computational micromechanics

    For fast Fourier transform (FFT)-based computational micromechanics, solvers need to be fast, memory-efficient, and independent of tedious parameter calibration. In this work, we investigate the benefits of nonlinear conjugate gradient (CG) methods in the context of FFT-based computational micromechanics. Traditionally, nonlinear CG methods require dedicated line-search procedures to be efficient, rendering them not competitive in the FFT-based context. We contribute to nonlinear CG methods devoid of line searches by exploiting similarities between nonlinear CG methods and accelerated gradient methods. More precisely, by letting the step size go to zero, we exhibit the Fletcher–Reeves nonlinear CG as a dynamical system with state-dependent nonlinear damping. We show how to implement nonlinear CG methods for FFT-based computational micromechanics, and demonstrate by numerical experiments that the Fletcher–Reeves nonlinear CG represents a competitive, memory-efficient and parameter-choice-free solution method for linear and nonlinear homogenization problems, which, in addition, decreases the residual monotonically.
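    To make the line-search-free idea concrete, here is a minimal sketch of Fletcher–Reeves nonlinear CG with a fixed step size in place of a line search, applied to a small quadratic energy. It only illustrates the update structure and is not the paper's FFT-based micromechanics solver; the step size and test problem are illustrative choices.

    import numpy as np

    def fletcher_reeves_fixed_step(grad, x0, step=0.1, iters=500):
        x = x0.copy()
        g = grad(x)
        d = -g
        for _ in range(iters):
            x = x + step * d                  # fixed step instead of a line search
            g_new = grad(x)
            beta = (g_new @ g_new) / (g @ g)  # Fletcher-Reeves coefficient
            d = -g_new + beta * d
            g = g_new
        return x

    # Quadratic energy 0.5 * x^T A x - b^T x, with gradient A x - b.
    A = np.array([[3.0, 1.0], [1.0, 2.0]])
    b = np.array([1.0, 1.0])
    x = fletcher_reeves_fixed_step(lambda y: A @ y - b, np.zeros(2))
    print(x, np.linalg.solve(A, b))           # the two should be close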

    Computation of Ground States of the Gross-Pitaevskii Functional via Riemannian Optimization

    In this paper we combine concepts from Riemannian Optimization and the theory of Sobolev gradients to derive a new conjugate gradient method for direct minimization of the Gross-Pitaevskii energy functional with rotation. The conservation of the number of particles constrains the minimizers to lie on a manifold corresponding to the unit L^2 norm. The idea developed here is to transform the original constrained optimization problem to an unconstrained problem on this (spherical) Riemannian manifold, so that fast minimization algorithms can be applied as alternatives to more standard constrained formulations. First, we obtain Sobolev gradients using an equivalent definition of an H^1 inner product which takes into account rotation. Then, the Riemannian gradient (RG) steepest descent method is derived based on projected gradients and retraction of an intermediate solution back to the constraint manifold. Finally, we use the concept of the Riemannian vector transport to propose a Riemannian conjugate gradient (RCG) method for this problem. It is derived at the continuous level based on the "optimize-then-discretize" paradigm instead of the usual "discretize-then-optimize" approach, as this ensures robustness of the method when adaptive mesh refinement is performed in computations. We evaluate various design choices inherent in the formulation of the method and conclude with recommendations concerning selection of the best options. Numerical tests demonstrate that the proposed RCG method outperforms the simple gradient descent (RG) method in terms of rate of convergence. While on simple problems a Newton-type method implemented in the Ipopt library exhibits a faster convergence than the RCG approach, the two methods perform similarly on more complex problems requiring the use of mesh adaptation. At the same time the RCG approach has far fewer tunable parameters. Comment: 28 pages, 13 figures.
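    The projected-gradient-plus-retraction structure described above can be illustrated with a small finite-dimensional analogue: Riemannian gradient descent on the unit sphere for the Rayleigh quotient of a symmetric matrix, whose constrained minimiser is the lowest eigenvector. This sketch uses the plain Euclidean gradient (no Sobolev inner product, no rotation term) and a normalisation retraction, so it is only an analogue of the RG method, not the paper's algorithm.

    import numpy as np

    def riemannian_gd_sphere(A, x0, step=0.05, iters=2000):
        x = x0 / np.linalg.norm(x0)
        for _ in range(iters):
            g = 2.0 * A @ x                  # Euclidean gradient of x^T A x
            g_tan = g - (g @ x) * x          # projection onto the tangent space at x
            x = x - step * g_tan             # gradient step
            x = x / np.linalg.norm(x)        # retraction back onto the sphere
        return x

    rng = np.random.default_rng(1)
    M = rng.standard_normal((8, 8))
    A = 0.5 * (M + M.T)
    x = riemannian_gd_sphere(A, rng.standard_normal(8))
    print(x @ A @ x, np.linalg.eigvalsh(A)[0])   # should be close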

    A randomised non-descent method for global optimisation

    This paper proposes a novel algorithm for non-convex multimodal constrained optimisation problems. It is based on sequentially solving restrictions of the problem to sections of the feasible set cut out by random subspaces (in general, manifolds) of low dimensionality. The approach can be varied in the way subspaces are drawn, in their dimensionality, and in the method used to solve the restricted problems. We provide an empirical study of the algorithm on convex, unimodal and multimodal optimisation problems and compare it with efficient algorithms intended for each class of problems. Comment: 9 pages, 7 figures.
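    A minimal sketch of the random-subspace idea is given below: restrict the objective to a random low-dimensional affine subspace through the current point, solve the restricted problem with an off-the-shelf local solver, and keep the result only if it improves the objective. The Nelder–Mead solver, the acceptance rule, and the unconstrained Rastrigin test function are illustrative assumptions, not the paper's algorithm.

    import numpy as np
    from scipy.optimize import minimize

    def random_subspace_search(f, x0, subspace_dim=2, outer_iters=50, seed=0):
        rng = np.random.default_rng(seed)
        x, fx = x0.copy(), f(x0)
        for _ in range(outer_iters):
            # Orthonormal basis of a random subspace of the requested dimension.
            Q, _ = np.linalg.qr(rng.standard_normal((x.size, subspace_dim)))

            def restricted(z):               # objective restricted to x + span(Q)
                return f(x + Q @ z)

            res = minimize(restricted, np.zeros(subspace_dim), method="Nelder-Mead")
            if res.fun < fx:                 # accept only improvements
                x, fx = x + Q @ res.x, res.fun
        return x, fx

    def rastrigin(x):                        # multimodal test function
        return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

    x_best, f_best = random_subspace_search(rastrigin, np.full(10, 3.0))
    print(f_best)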

    Decentralized Riemannian Conjugate Gradient Method on the Stiefel Manifold

    The conjugate gradient method is a crucial first-order optimization method that generally converges faster than the steepest descent method, and its computational cost is much lower than that of second-order methods. However, while various types of conjugate gradient methods have been studied in Euclidean spaces and on Riemannian manifolds, there has been little study of them in distributed scenarios. This paper proposes a decentralized Riemannian conjugate gradient descent (DRCGD) method that aims at minimizing a global function over the Stiefel manifold. The optimization problem is distributed among a network of agents, where each agent is associated with a local function, and communication between agents occurs over an undirected connected graph. Since the Stiefel manifold is a non-convex set, the global function is represented as a finite sum of possibly non-convex (but smooth) local functions. The proposed method is free from expensive Riemannian geometric operations such as retractions, exponential maps, and vector transports, thereby reducing the computational complexity required by each agent. To the best of our knowledge, DRCGD is the first decentralized Riemannian conjugate gradient algorithm to achieve global convergence over the Stiefel manifold.
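    For context, the following is a minimal sketch of the standard (centralised) building blocks on the Stiefel manifold St(n, p) = {X : X^T X = I}: projection of a Euclidean gradient onto the tangent space and a QR-based retraction, used here in plain Riemannian gradient descent for trace(X^T A X). The decentralized, retraction-free DRCGD scheme of the paper is deliberately not reproduced; this only fixes the geometric setting.

    import numpy as np

    def stiefel_project(X, G):
        """Project an ambient gradient G onto the tangent space of St(n, p) at X."""
        S = X.T @ G
        return G - X @ (S + S.T) / 2.0

    def qr_retract(X, V):
        """Retract X + V back onto the Stiefel manifold via a QR factorisation."""
        Q, R = np.linalg.qr(X + V)
        return Q * np.sign(np.sign(np.diag(R)) + 0.5)    # fix column signs

    rng = np.random.default_rng(2)
    n, p = 6, 2
    M = rng.standard_normal((n, n))
    A = 0.5 * (M + M.T)
    X, _ = np.linalg.qr(rng.standard_normal((n, p)))
    for _ in range(200):                                 # Riemannian gradient descent
        G = 2.0 * A @ X                                  # gradient of trace(X^T A X)
        X = qr_retract(X, -0.05 * stiefel_project(X, G))
    print(np.trace(X.T @ A @ X), np.sum(np.linalg.eigvalsh(A)[:p]))   # should be close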

    A Test of Non-linear Conjugate Gradient Methods Via Exact Line Search

    The conjugate gradient method provides a very powerful tool for solving unconstrained optimization problems. In this paper the non-linear conjugate gradient methods are tested using some benchmark non-polynomial unconstrained optimization functions. The task was accomplished by finding, for each method, the exact value of the step length along the descent direction, also known as the minimizing argument or simply the minimizer. Findings also show that the basic requirement for convergence under exact line search was satisfied by all the methods.
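    As an illustration of exact line search inside a non-linear CG iteration, here is a minimal sketch using the Polak–Ribière (PR+) variant on the Rosenbrock function, with the step length obtained by a numerical one-dimensional minimisation along the search direction. The test function, the PR+ choice, and the scipy scalar minimiser are illustrative and not necessarily those used in the paper.

    import numpy as np
    from scipy.optimize import minimize_scalar

    def rosenbrock(x):
        return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

    def rosenbrock_grad(x):
        return np.array([-400.0 * x[0] * (x[1] - x[0] ** 2) - 2.0 * (1.0 - x[0]),
                         200.0 * (x[1] - x[0] ** 2)])

    def cg_exact_line_search(f, grad, x0, iters=100):
        x, g = x0.copy(), grad(x0)
        d = -g
        for _ in range(iters):
            alpha = minimize_scalar(lambda a: f(x + a * d)).x   # exact 1-D minimisation
            x = x + alpha * d
            g_new = grad(x)
            beta = max(0.0, g_new @ (g_new - g) / (g @ g))      # Polak-Ribiere (PR+)
            d, g = -g_new + beta * d, g_new
        return x

    print(cg_exact_line_search(rosenbrock, rosenbrock_grad, np.array([-1.2, 1.0])))
    # should approach the minimiser at (1, 1)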