
    Nonlinear CG-like iterative methods

    A nonlinear conjugate gradient method has been introduced and analyzed by J.W. Daniel. That method applies to nonlinear operators with symmetric Jacobians. Orthomin(1) is an iterative method for nonsymmetric, definite linear systems. In this article we generalize Orthomin(1) to a method that applies directly to nonlinear operator equations. Each iteration of the new method requires the solution of a scalar nonlinear equation. Under the conditions that the Hessian is uniformly bounded away from zero and the Jacobian is uniformly positive definite, the new method is proved to converge to the globally unique solution. Error bounds and local convergence results are also obtained. Numerical experiments on solving nonlinear operator equations arising from the discretization of nonlinear elliptic partial differential equations are presented.
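    The article's method is defined for general nonlinear operators, so the following is only a minimal sketch of the structure described above: a residual-based search direction combined with a scalar subproblem for the step length at every iteration. The operator F, the two-dimensional test problem, and the use of a bounded one-dimensional minimisation in place of the scalar nonlinear equation are illustrative assumptions, not the authors' algorithm; a full Orthomin(1) generalization would also recombine the previous search direction.

```python
# Minimal sketch, not the paper's method: each iteration picks a search
# direction from the residual and reduces the choice of step length to a
# one-dimensional (scalar) problem, mirroring the structure in the abstract.
import numpy as np
from scipy.optimize import minimize_scalar

def F(x):
    # Illustrative nonlinear operator whose Jacobian is uniformly positive
    # definite (nonsymmetric linear part plus a monotone cubic term).
    A = np.array([[4.0, 1.0],
                  [-1.0, 3.0]])
    return A @ x + 0.1 * x**3 - np.array([1.0, 2.0])

def residual_descent(x0, tol=1e-10, max_iter=200):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = F(x)
        if np.linalg.norm(r) < tol:
            break
        d = -r  # search direction; Orthomin(1) would also mix in the previous direction
        # Scalar subproblem: choose alpha to minimise ||F(x + alpha d)||.
        # In exact arithmetic this amounts to solving one nonlinear
        # equation in the single unknown alpha.
        phi = lambda a: np.linalg.norm(F(x + a * d))
        alpha = minimize_scalar(phi, bounds=(0.0, 2.0), method="bounded").x
        x = x + alpha * d
    return x

x_star = residual_descent(np.zeros(2))
print(x_star, np.linalg.norm(F(x_star)))
```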

    Symmetric Stair Preconditioning of Linear Systems for Parallel Trajectory Optimization

    There has been growing interest in parallel strategies for solving trajectory optimization problems. One key step in many algorithmic approaches to trajectory optimization is the solution of moderately large, sparse linear systems. Iterative methods are particularly well suited for solving such systems in parallel. However, fast and stable convergence of iterative methods relies on a high-quality preconditioner that reduces the spread and increases the clustering of the eigenvalues of the target matrix. To improve the performance of these approaches, we present a new parallel-friendly symmetric stair preconditioner. We prove that our preconditioner has advantageous theoretical properties, such as a more clustered eigenvalue spectrum, when used in conjunction with iterative methods for trajectory optimization. Numerical experiments on typical trajectory optimization problems show that, compared to the best alternative parallel preconditioner from the literature, our symmetric stair preconditioner provides up to a 34% reduction in condition number and up to a 25% reduction in the number of linear system solver iterations.
    Comment: Accepted to ICRA 2024, 8 pages, 3 figures.
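    The stair preconditioner itself is defined in the paper; the sketch below only illustrates the surrounding mechanism, comparing conjugate gradient iteration counts on a block-tridiagonal SPD system with and without a parallel-friendly block preconditioner. Block-Jacobi is used here purely as a stand-in, and the problem sizes, values and helper names are assumptions for illustration.

```python
# Sketch: effect of a parallel-friendly block preconditioner on CG.
# Block-Jacobi stands in for the paper's symmetric stair preconditioner;
# the block-tridiagonal test matrix is made up for illustration.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import cg, LinearOperator

rng = np.random.default_rng(0)
n_blocks, bs = 20, 6                 # e.g. 20 knot points, 6 states each (illustrative)
N = n_blocks * bs

# Block-tridiagonal SPD matrix, loosely mimicking the structure of the
# Schur-complement systems that arise in trajectory optimization.
A = np.zeros((N, N))
for i in range(n_blocks):
    B = rng.standard_normal((bs, bs))
    A[i*bs:(i+1)*bs, i*bs:(i+1)*bs] = B @ B.T + bs * np.eye(bs)
    if i + 1 < n_blocks:
        C = 0.3 * rng.standard_normal((bs, bs))
        A[i*bs:(i+1)*bs, (i+1)*bs:(i+2)*bs] = C
        A[(i+1)*bs:(i+2)*bs, i*bs:(i+1)*bs] = C.T
A_sparse = csr_matrix(A)
b = rng.standard_normal(N)

# Block-Jacobi preconditioner: each diagonal block is solved independently,
# so all block solves can run in parallel.
blocks_inv = [np.linalg.inv(A[i*bs:(i+1)*bs, i*bs:(i+1)*bs]) for i in range(n_blocks)]

def apply_precond(r):
    y = np.empty_like(r)
    for i, Binv in enumerate(blocks_inv):
        y[i*bs:(i+1)*bs] = Binv @ r[i*bs:(i+1)*bs]
    return y

M = LinearOperator((N, N), matvec=apply_precond)

def count_iters(precond=None):
    iters = 0
    def cb(xk):
        nonlocal iters
        iters += 1
    cg(A_sparse, b, M=precond, callback=cb)
    return iters

print("CG iterations, no preconditioner:   ", count_iters())
print("CG iterations, block preconditioner:", count_iters(precond=M))
```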

    A novel augmented graph approach for estimation in localisation and mapping

    This thesis proposes the use of the augmented system form - a generalisation of the information form that represents both observations and states. In conjunction with this, it proposes a novel graph representation of the estimation problem together with a graph-based linear direct solving algorithm. The augmented system form is a mathematical description of the estimation problem that retains both the states and the observations. It allows a more general range of factorisation orders among the observations and states, which is essential for handling constraints and is beneficial for sparsity and numerical reasons. The proposed graph structure is a novel sparse data structure that provides more symmetric access and faster traversal and modification operations than the compressed-sparse-column (CSC) sparse matrix format. It was developed as a fundamental underlying structure for formulating sparse estimation problems; this graph-theoretic representation replaces conventional sparse matrix representations of the estimation states, the observations and their interconnections. The thesis also contributes a new implementation of the indefinite LDL factorisation algorithm based entirely on the graph structure, developed to exploit the new approaches above. The factorisation operations consist of accessing adjacencies and modifying graph edges, and the resulting algorithm demonstrates how different the form and approach of a graph-embedded algorithm are from a conventional matrix implementation. The contributions of this thesis improve estimation methods by providing novel mathematical data structures for representing states, observations and the sparse links between them. These offer improved flexibility and capabilities, which are exploited in the solving algorithm, and together constitute a new framework for the development of future online and incremental solving, data association and analysis algorithms for online, large-scale localisation and mapping.
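    As a rough illustration of the idea of carrying a factorisation out directly on a graph structure rather than on a CSC matrix, the toy sketch below performs an LDL^T factorisation on a dict-of-dicts adjacency representation, eliminating one vertex at a time and updating (or creating, as fill-in) the edges among its remaining neighbours. This is not the thesis' implementation: it assumes a symmetric positive definite matrix rather than an indefinite one, performs no pivoting, and all names are illustrative.

```python
# Toy sketch (not the thesis' code): LDL^T factorisation carried out on a
# dict-of-dicts adjacency structure instead of a CSC matrix.  Eliminating
# vertex j updates the edges among its neighbours (creating fill-in edges
# where none existed).  Assumes a symmetric positive definite matrix, so
# no pivoting is performed.
from collections import defaultdict
import numpy as np

def graph_from_dense(A):
    """Store the lower triangle (i >= j) of a symmetric matrix as a graph."""
    n = A.shape[0]
    g = defaultdict(dict)
    for i in range(n):
        for j in range(i + 1):
            if A[i, j] != 0.0:
                g[i][j] = A[i, j]
    return g, n

def ldl_on_graph(g, n):
    """Return (L, D) with A = L D L^T, L unit lower triangular, D diagonal."""
    L = defaultdict(dict)
    D = np.zeros(n)
    for j in range(n):
        D[j] = g[j][j]
        nbrs = [i for i in range(j + 1, n) if j in g[i]]  # uneliminated neighbours of j
        for i in nbrs:
            L[i][j] = g[i][j] / D[j]
        # Schur-complement update expressed as edge modifications: every pair
        # of neighbours of j gets its connecting edge updated or created.
        for a, i in enumerate(nbrs):
            for k in nbrs[:a + 1]:          # k <= i keeps us in the lower triangle
                g[i][k] = g[i].get(k, 0.0) - L[i][j] * L[k][j] * D[j]
        L[j][j] = 1.0
    return L, D

# Usage: factor a small SPD matrix and check the reconstruction error.
rng = np.random.default_rng(1)
B = rng.standard_normal((5, 5))
A = B @ B.T + 5 * np.eye(5)
g, n = graph_from_dense(A)
L, D = ldl_on_graph(g, n)
Ld = np.zeros((n, n))
for i, row in L.items():
    for j, v in row.items():
        Ld[i, j] = v
print(np.max(np.abs(Ld @ np.diag(D) @ Ld.T - A)))  # reconstruction error, ~1e-13 here
```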

    Generalized Descent Methods for Asymmetric Systems of Equations and Variational Inequalities

    We consider generalizations of the steepest descent algorithm for solving asymmetric systems of equations. We first show that if the system is linear and is defined by a matrix M, then the method converges if M^2 is positive definite. We also establish easy-to-verify conditions on the matrix M that ensure that M^2 is positive definite, and develop a scaling procedure that extends the class of matrices satisfying the convergence conditions. In addition, we establish a local convergence result for nonlinear systems defined by uniformly monotone maps, and discuss a class of general descent methods. Finally, we show that a variant of the Frank-Wolfe method solves a certain class of variational inequality problems. All of the methods we consider reduce to standard nonlinear programming algorithms for equivalent optimization problems when the Jacobian of the underlying problem map is symmetric. We interpret the convergence conditions for the generalized steepest descent algorithms as restricting the degree of asymmetry of the problem map.
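    As a small numerical illustration of the linear case, the sketch below reuses the classical steepest descent step-length formula alpha = (r'r)/(r'Mr), applied formally to an asymmetric system Mx = b whose matrix has a positive definite square. The example matrix and this particular step rule are assumptions for illustration; the paper's precise line-search rule and convergence analysis may differ.

```python
# Illustration (not the paper's analysis): steepest descent applied
# formally to an asymmetric linear system M x = b, reusing the classical
# step length alpha = (r.r)/(r.Mr).  The matrix below is asymmetric and
# M @ M has a positive definite symmetric part.
import numpy as np

M = np.array([[2.0, 1.0],
              [-1.0, 2.0]])
b = np.array([1.0, 3.0])

x = np.zeros(2)
for k in range(100):
    r = b - M @ x                    # residual, treated as a formal negative gradient
    if np.linalg.norm(r) < 1e-12:
        break
    alpha = (r @ r) / (r @ (M @ r))  # classical steepest descent step length
    x = x + alpha * r

print("iterations:", k)
print("computed x:", x, " residual norm:", np.linalg.norm(b - M @ x))
print("exact x:   ", np.linalg.solve(M, b))
```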