12 research outputs found

    A High-Order Iterate Method for Computing A

    Get PDF

    Combined high-order algorithms in robust least-squares estimation with harmonic regressor and strictly diagonally dominant information matrix

    Get PDF
    This article describes new high-order algorithms for the least-squares problem with a harmonic regressor and a strictly diagonally dominant information matrix. Both the estimation accuracy and the number of steps needed to reach that accuracy are controllable in these algorithms. Simplified forms of the high-order matrix inversion algorithms and of the high-order algorithms for direct calculation of the parameter vector are derived. The algorithms are presented as recursive procedures driven by estimation errors multiplied by gain matrices, which can be seen as preconditioners. A simple algorithm, recursive with respect to order and associated with the Neumann series, is found for updating the gain matrix. It is shown that the limiting form of the algorithm (the algorithm of infinite order) provides perfect estimation. The new form of the gain matrix also serves as a basis for a unification method for high-order algorithms. New combined, fast-converging high-order algorithms for recursive matrix inversion and for direct calculation of the parameter vector are presented. The stability of the algorithms is proved and an explicit transient bound on the estimation error is calculated. The new algorithms are simple, fast, and robust with respect to round-off error accumulation.
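    As a hedged sketch of the general idea (not the authors' exact recursion or notation), the snippet below builds a gain matrix from a truncated Neumann series of the Jacobi splitting of a strictly diagonally dominant information matrix and uses it to drive an error-based recursive update of the parameter vector; the matrices A and b and the chosen order are illustrative.

```python
import numpy as np

def neumann_gain(A, order):
    """Truncated Neumann-series gain G ~= A^{-1} for a strictly diagonally
    dominant A, built from the Jacobi splitting A = D(I - T)."""
    D_inv = np.diag(1.0 / np.diag(A))
    T = np.eye(A.shape[0]) - D_inv @ A        # iteration matrix, ||T|| < 1 for SDD A
    G, term = D_inv.copy(), D_inv.copy()
    for _ in range(order - 1):
        term = T @ term                        # next Neumann-series term T^i D^{-1}
        G = G + term
    return G

def high_order_estimate(A, b, order, steps):
    """Recursive update driven by the estimation error, multiplied by the gain matrix."""
    G = neumann_gain(A, order)
    theta = np.zeros(len(b))
    for _ in range(steps):
        theta = theta + G @ (b - A @ theta)    # error times gain (preconditioner)
    return theta

# Toy example with a strictly diagonally dominant information matrix.
rng = np.random.default_rng(0)
A = np.eye(4) * 5 + rng.uniform(-1, 1, (4, 4))
b = rng.uniform(-1, 1, 4)
print(np.allclose(high_order_estimate(A, b, order=4, steps=20), np.linalg.solve(A, b)))
```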

    Symmetric Stair Preconditioning of Linear Systems for Parallel Trajectory Optimization

    Full text link
    There has been growing interest in parallel strategies for solving trajectory optimization problems. One key step in many algorithmic approaches to trajectory optimization is the solution of moderately large and sparse linear systems. Iterative methods are particularly well suited for parallel solves of such systems. However, fast and stable convergence of iterative methods relies on the application of a high-quality preconditioner that reduces the spread and increases the clustering of the eigenvalues of the target matrix. To improve the performance of these approaches, we present a new parallel-friendly symmetric stair preconditioner. We prove that our preconditioner has advantageous theoretical properties when used in conjunction with iterative methods for trajectory optimization, such as a more clustered eigenvalue spectrum. Numerical experiments with typical trajectory optimization problems reveal that, compared to the best alternative parallel preconditioner from the literature, our symmetric stair preconditioner provides up to a 34% reduction in condition number and up to a 25% reduction in the number of resulting linear system solver iterations. Comment: Accepted to ICRA 2024, 8 pages, 3 figures.
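    As a hedged illustration of the effect described above (using a simple block-Jacobi preconditioner as a stand-in, since the stair structure itself is defined in the paper), the sketch below builds a small symmetric positive definite block-tridiagonal system of the kind that couples successive knot points in trajectory optimization and compares the condition number before and after preconditioning; all sizes and values are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
N, nb = 8, 3                                  # knot points and block size (illustrative)

# Block-tridiagonal SPD system, a stand-in for a trajectory-optimization linear system.
diag_blocks = []
for _ in range(N):
    B = np.eye(nb) * 3 + rng.uniform(-0.5, 0.5, (nb, nb))
    diag_blocks.append(B @ B.T)               # SPD diagonal blocks
off_blocks = [rng.uniform(-0.3, 0.3, (nb, nb)) for _ in range(N - 1)]

A = np.zeros((N * nb, N * nb))
for i, D in enumerate(diag_blocks):
    A[i*nb:(i+1)*nb, i*nb:(i+1)*nb] = D
for i, O in enumerate(off_blocks):
    A[i*nb:(i+1)*nb, (i+1)*nb:(i+2)*nb] = O
    A[(i+1)*nb:(i+2)*nb, i*nb:(i+1)*nb] = O.T

# Block-Jacobi preconditioner M = blkdiag(diagonal blocks); the paper's symmetric
# stair preconditioner additionally folds in off-diagonal structure.
M = np.zeros_like(A)
for i, D in enumerate(diag_blocks):
    M[i*nb:(i+1)*nb, i*nb:(i+1)*nb] = D

print("cond(A)      :", np.linalg.cond(A))
print("cond(M^-1 A) :", np.linalg.cond(np.linalg.solve(M, A)))   # lower, more clustered
```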

    A general class of arbitrary order iterative methods for computing generalized inverses

    Full text link
    [EN] A family of iterative schemes for approximating the inverse and generalized inverse of a complex matrix is designed, having arbitrary order of convergence p. For each p, a class of iterative schemes appears, from which we analyze those elements able to converge from initial estimations far from the solution. This class generalizes many known iterative methods, which are obtained for particular values of the parameters. The order of convergence is stated in each case, depending on the first non-zero parameter. For different examples, the accessibility of some schemes, that is, the set of initial estimations leading to convergence, is analyzed in order to select those with wider sets. This wideness is related to the value of the first non-zero parameter defining the method. Finally, some numerical examples (academic and also from signal processing) are provided to confirm the theoretical results and to show the feasibility and effectiveness of the new methods. (C) 2021 The Authors. Published by Elsevier Inc. This research was supported in part by PGC2018-095896-B-C22 (MCIU/AEI/FEDER, UE) and in part by VIE from Instituto Tecnologico de Costa Rica (Research #1440037). Cordero Barbero, A.; Soto-Quiros, P.; Torregrosa Sánchez, JR. (2021). A general class of arbitrary order iterative methods for computing generalized inverses. Applied Mathematics and Computation. 409:1-18. https://doi.org/10.1016/j.amc.2021.126381
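    One widely known family with arbitrary order of convergence p of this general kind is the hyperpower iteration X_{k+1} = X_k (I + R_k + ... + R_k^{p-1}) with residual R_k = I - A X_k. The sketch below applies it to the Moore-Penrose inverse with the standard safe initial guess X_0 = A*/(‖A‖_1 ‖A‖_∞); it is an illustration of the family being generalized here, not of the authors' parametrized class.

```python
import numpy as np

def hyperpower_pinv(A, p=3, tol=1e-12, max_iter=100):
    """Hyperpower iteration of order p (p >= 2) for the Moore-Penrose inverse A^+."""
    # Safe initial estimate: keeps the spectral radius of I - A X_0 below one.
    X = A.conj().T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I = np.eye(A.shape[0])
    for _ in range(max_iter):
        R = I - A @ X                      # residual
        S = I + R                          # builds I + R + ... + R^{p-1}
        P = R
        for _ in range(p - 2):
            P = P @ R
            S = S + P
        X_new = X @ S
        if np.linalg.norm(X_new - X, np.inf) < tol:
            return X_new
        X = X_new
    return X

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])            # rectangular: compute A^+
print(np.allclose(hyperpower_pinv(A, p=4), np.linalg.pinv(A)))   # True
```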

    A universal matrix-free split preconditioner for the fixed-point iterative solution of non-symmetric linear systems

    Full text link
    We present an efficient preconditioner for linear problems $Ax = y$. It guarantees monotonic convergence of the memory-efficient fixed-point iteration for all accretive systems of the form $A = L + V$, where $L$ is an approximation of $A$ and the system is scaled so that the discrepancy is bounded with $\lVert V \rVert < 1$. In contrast to common splitting preconditioners, our approach is not restricted to any particular splitting. Therefore, the approximate problem can be chosen so that an analytic solution is available to efficiently evaluate the preconditioner. We prove that the only preconditioner with this property has the form $(L+I)(I-V)^{-1}$. This unique form moreover permits the elimination of the forward problem from the preconditioned system, often halving the time required per iteration. We demonstrate and evaluate our approach for wave problems, diffusion problems, and pantograph delay differential equations. With the latter we show how the method extends to general, not necessarily accretive, linear systems. Comment: Rewritten version; includes an efficiency comparison with the shift preconditioner by Bai et al., which is shown to be a special case.
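    A minimal sketch of this setting, assuming the preconditioner is applied within a plain Richardson-type fixed-point iteration x ← x + M⁻¹(y − Ax) with M = (L+I)(I−V)⁻¹; the diagonal choice of L, the scaling of V, and the problem size are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6

# Illustrative splitting: L diagonal (so (L+I)^{-1} is trivial) and a perturbation V
# scaled so that ||V|| < 1, giving an accretive A = L + V.
L = np.diag(rng.uniform(1.0, 3.0, n))
V = rng.standard_normal((n, n))
V *= 0.5 / np.linalg.norm(V, 2)
A = L + V
y = rng.standard_normal(n)

I = np.eye(n)
Minv = (I - V) @ np.linalg.inv(L + I)       # M^{-1} for M = (L+I)(I-V)^{-1}

# Preconditioned fixed-point (Richardson) iteration x <- x + M^{-1}(y - A x).
x = np.zeros(n)
for _ in range(200):
    x = x + Minv @ (y - A @ x)

print("spectral radius of I - M^{-1}A:",
      np.max(np.abs(np.linalg.eigvals(I - Minv @ A))))   # below one in this example
print("error vs direct solve:", np.linalg.norm(x - np.linalg.solve(A, y)))
```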

    A New High-Order Stable Numerical Method for Matrix Inversion

    Get PDF
    A stable numerical method is proposed for matrix inversion. The new method is accompanied by a theoretical proof establishing twelfth-order convergence. A discussion of how to achieve convergence using an appropriate initial value is presented. The application of the new scheme to finding the Moore-Penrose inverse is also pointed out analytically. The efficiency of the contributed iterative method is illustrated by solving some numerical examples.
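    To illustrate the role of the initial value (using the classical quadratically convergent Schulz iteration rather than the paper's twelfth-order scheme), the sketch below uses the standard starting matrix V_0 = A^T/(‖A‖_1 ‖A‖_∞), which keeps the spectral radius of I − A V_0 below one because ‖A‖_2² ≤ ‖A‖_1 ‖A‖_∞; higher-order variants replace only the update polynomial.

```python
import numpy as np

def schulz_inverse(A, tol=1e-12, max_iter=200):
    """Classical quadratic Schulz iteration V <- V(2I - AV) with a safe initial value."""
    V = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))  # safe start
    I = np.eye(A.shape[0])
    for _ in range(max_iter):
        V_new = V @ (2 * I - A @ V)
        if np.linalg.norm(V_new - V, np.inf) < tol:
            return V_new
        V = V_new
    return V

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 5.0]])
print(np.allclose(schulz_inverse(A), np.linalg.inv(A)))   # True
```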

    A Higher Order Iterative Method for Computing the Drazin Inverse

    Get PDF
    A method with a high convergence rate for finding approximate inverses of nonsingular matrices is suggested and established analytically. An extension of the introduced computational scheme to general square matrices is defined. The extended method can be used for finding the Drazin inverse. The application of the scheme to large sparse test matrices, along with its use in preconditioning linear systems of equations, is presented to clarify the contribution of the paper.
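    The preconditioning application mentioned at the end can be sketched generically (this is not the paper's scheme or its experiments): a few Schulz-type steps yield an approximate inverse M ≈ A⁻¹, which then preconditions a simple Richardson iteration; the test matrix, step counts, and tolerances are made up for illustration.

```python
import numpy as np

def approx_inverse(A, steps=6):
    """A few Schulz-type steps give an approximate inverse usable as a preconditioner."""
    M = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))  # safe start
    I = np.eye(A.shape[0])
    for _ in range(steps):
        M = M @ (2 * I - A @ M)
    return M

def richardson(A, b, M, tol=1e-10, max_iter=10_000):
    """Preconditioned Richardson iteration; returns the solution and iteration count."""
    x = np.zeros_like(b)
    for k in range(1, max_iter + 1):
        r = b - A @ x
        if np.linalg.norm(r) < tol:
            return x, k
        x = x + M @ r
    return x, max_iter

rng = np.random.default_rng(3)
n = 50
A = np.eye(n) * n + rng.standard_normal((n, n))   # well-conditioned test matrix
b = rng.standard_normal(n)

_, it_prec = richardson(A, b, approx_inverse(A))
_, it_plain = richardson(A, b, np.eye(n) / np.linalg.norm(A, 2))
print("iterations with Schulz-type preconditioner:", it_prec)
print("iterations with simple scaling:            ", it_plain)
```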