
    Low-rank updates and a divide-and-conquer method for linear matrix equations

    Linear matrix equations, such as the Sylvester and Lyapunov equations, play an important role in various applications, including the stability analysis and dimensionality reduction of linear dynamical control systems and the solution of partial differential equations. In this work, we present and analyze a new algorithm, based on tensorized Krylov subspaces, for quickly updating the solution of such a matrix equation when its coefficients undergo low-rank changes. We demonstrate how our algorithm can be utilized to accelerate the Newton method for solving continuous-time algebraic Riccati equations. Our algorithm also forms the basis of a new divide-and-conquer approach for linear matrix equations with coefficients that feature hierarchical low-rank structure, such as HODLR, HSS, and banded matrices. Numerical experiments demonstrate the advantages of divide-and-conquer over existing approaches, in terms of computational time and memory consumption.
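    The structure this abstract exploits can be illustrated with a small sketch (assuming SciPy's `solve_sylvester`; the matrices below are illustrative, and this is not the paper's algorithm): after a rank-one change to a coefficient, the correction to the solution itself satisfies a Sylvester equation whose right-hand side has rank one.

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(0)
n = 50
# Coefficients with spectra well away from zero so AX + XB = C is well posed
A = -5.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))
B = -5.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))
C = rng.standard_normal((n, n))
X = solve_sylvester(A, B, C)

# Rank-one change of the coefficient: A_new = A + u v^T
u = 0.1 * rng.standard_normal((n, 1))
v = 0.1 * rng.standard_normal((n, 1))
X_new = solve_sylvester(A + u @ v.T, B, C)

# The correction D = X_new - X solves (A + u v^T) D + D B = -u (v^T X):
# a Sylvester equation with a rank-one right-hand side, which is the kind
# of structure a low-rank update algorithm can exploit
D = X_new - X
rhs = -u @ (v.T @ X)
residual = np.linalg.norm((A + u @ v.T) @ D + D @ B - rhs)
print(residual)
```

This identity follows by subtracting the two Sylvester equations; the residual printed above is zero up to rounding.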

    GMRES-Accelerated ADMM for Quadratic Objectives

    We consider the sequence acceleration problem for the alternating direction method of multipliers (ADMM) applied to a class of equality-constrained problems with strongly convex quadratic objectives, which frequently arise as the Newton subproblem of interior-point methods. Within this context, the ADMM update equations are linear, the iterates are confined to a Krylov subspace, and the Generalized Minimal RESidual (GMRES) algorithm is optimal in its ability to accelerate convergence. The basic ADMM method solves a κ-conditioned problem in O(√κ) iterations. We give theoretical justification and numerical evidence that the GMRES-accelerated variant consistently solves the same problem in O(κ^{1/4}) iterations, an order-of-magnitude reduction, despite a worst-case bound of O(√κ) iterations. The method is shown to be competitive against standard preconditioned Krylov subspace methods for saddle-point problems. The method is embedded within SeDuMi, a popular open-source solver for conic optimization written in MATLAB, and used to solve many large-scale semidefinite programs with error that decreases like O(1/k²) instead of O(1/k), where k is the iteration index.
    Comment: 31 pages, 7 figures. Accepted for publication in SIAM Journal on Optimization (SIOPT).
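    The acceleration idea can be sketched on a generic contractive linear iteration, which stands in for the linear ADMM update on a quadratic problem (the map `M` and vector `b` below are illustrative assumptions, and SciPy's `gmres` replaces the paper's SeDuMi embedding): the fixed point of x ← Mx + b solves (I − M)x = b, so GMRES can be applied to the same update map instead of iterating it.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(1)
n = 100
# A contractive linear fixed-point map x <- M x + b (illustrative stand-in
# for the linear ADMM update on a strongly convex quadratic problem)
Q = rng.standard_normal((n, n))
M = 0.9 * Q / np.linalg.norm(Q, 2)   # scale so the spectral norm is 0.9
b = rng.standard_normal(n)

# Basic iteration: converges linearly at rate ||M|| = 0.9
x = np.zeros(n)
for _ in range(200):
    x = M @ x + b

# Krylov acceleration: the fixed point solves (I - M) x = b, so hand the
# same update map to GMRES instead of iterating it
op = LinearOperator((n, n), matvec=lambda v: v - M @ v)
x_gmres, info = gmres(op, b)

print(np.linalg.norm(x - x_gmres))
```

Because the iterates of the plain method live in the same Krylov subspace that GMRES searches, GMRES is optimal over that subspace, which is the sense of optimality the abstract invokes.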

    On the local convergence study for an efficient k-step iterative method

    This paper is devoted to a family of Newton-like methods with frozen derivatives, used to approximate a locally unique solution of an equation. The methods have a high order of convergence while using only first-order derivatives. Moreover, only one LU decomposition is required in each iteration. In particular, the methods are real alternatives to the classical Newton method. We present a local convergence analysis based on hypotheses on the first derivative only. Such local results were usually proved under hypotheses on derivatives of order higher than two, although only the first derivative appears in these methods (Bermúdez et al., 2012; Petković et al., 2013; Traub, 1964). We apply these methods to an equation related to the nonlinear complementarity problem. Finally, we find the most efficient method in the family for this problem and perform a theoretical and a numerical study of it. (C) 2018 Elsevier B.V. All rights reserved.
    Research was supported in part by Programa de Apoyo a la investigación de la fundación Séneca-Agencia de Ciencia y Tecnología de la Región de Murcia 19374/PI/14, by the project of Generalitat Valenciana Prometeo/2016/089, and by the projects MTM2015-64382-P (MINECO/FEDER), MTM2014-52016-C2-1-P and MTM2014-52016-C2-2-P of the Spanish Ministry of Science and Innovation.
    Amat, S.; Argyros, I.K.; Busquier Sáez, S.; Hernández-Verón, M.A.; Martínez Molada, E. (2018). On the local convergence study for an efficient k-step iterative method. Journal of Computational and Applied Mathematics, 343:753-761. https://doi.org/10.1016/j.cam.2018.02.028
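    A minimal sketch of the frozen-derivative idea, on an illustrative 2×2 nonlinear system (not the paper's family or test problem): the Jacobian is evaluated and LU-factorized once per outer iteration, and the factorization is reused for k inner steps, so high effective order is obtained with only first derivatives and one decomposition per iteration.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def F(x):
    # Illustrative nonlinear system with a root at (1, 2)
    return np.array([x[0] ** 2 + x[1] - 3.0,
                     x[0] + x[1] ** 2 - 5.0])

def J(x):
    # First derivative only: no higher-order derivatives are needed
    return np.array([[2.0 * x[0], 1.0],
                     [1.0, 2.0 * x[1]]])

def k_step_frozen_newton(x, k=3, outer=10):
    """One LU decomposition per outer iteration, reused ("frozen") for k steps."""
    for _ in range(outer):
        lu, piv = lu_factor(J(x))
        for _ in range(k):
            x = x - lu_solve((lu, piv), F(x))
    return x

x = k_step_frozen_newton(np.array([1.0, 1.0]))
print(x, np.linalg.norm(F(x)))
```

Each outer iteration costs one O(n³) factorization plus k cheap O(n²) triangular solves, which is the efficiency trade-off the family is built around.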

    Dynamic optimization for controller tuning with embedded safety and response quality measures

    Controller tuning is needed to obtain the optimum response from the controlled process. This work presents a new tuning procedure for PID controllers that embeds safety and response-quality measures in an optimization over a non-linear process model, demonstrated on two tanks in series. The model was developed to include safety constraints in the form of path constraints. It was then solved with a new optimization solver, NLPOPT1, which uses a primal-dual interior-point method with a novel non-monotone line search procedure with discretized penalty parameters. This procedure generated a grid of optimal PID tuning parameters for various switchings between steady states, to be used as a predictor of PID tunings for arbitrary transitions. Interpolation between the available tuning parameters was found to be capable of producing state profiles with no violation of the safety measures, while maintaining solution quality and achieving the targeted final set points.
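    Since NLPOPT1 is not shown here, the tuning-by-optimization idea can be sketched with a toy two-tanks-in-series level model and SciPy's Nelder-Mead in place of the primal-dual interior-point solver; the model equations, constants, and safety bound below are all illustrative assumptions, with the path constraint handled as a simple penalty rather than the paper's formulation.

```python
import numpy as np
from scipy.optimize import minimize

def simulate(Kp, Ki, Kd, sp=1.0, T=100.0, dt=0.1):
    """Euler simulation of a toy two-tanks-in-series level model:
    dh1/dt = (q - c1*sqrt(h1))/A1,  dh2/dt = (c1*sqrt(h1) - c2*sqrt(h2))/A2,
    with a PID controller manipulating the inflow q to drive h2 to sp."""
    c1, c2, A1, A2 = 1.0, 0.8, 2.0, 1.5
    h1 = h2 = 0.5
    integ, prev_err = 0.0, sp - h2
    ise, h2max = 0.0, h2
    for _ in range(int(T / dt)):
        err = sp - h2
        integ += err * dt
        q = np.clip(Kp * err + Ki * integ + Kd * (err - prev_err) / dt, 0.0, 5.0)
        prev_err = err
        h1 += dt * (q - c1 * np.sqrt(max(h1, 0.0))) / A1
        h2 += dt * (c1 * np.sqrt(max(h1, 0.0)) - c2 * np.sqrt(max(h2, 0.0))) / A2
        ise += err * err * dt            # response-quality measure
        h2max = max(h2max, h2)           # track the safety path constraint
    return ise, h2max

def objective(gains):
    ise, h2max = simulate(*gains)
    # Safety path constraint h2 <= 1.3 enforced here as a penalty term
    return ise + 1e3 * max(0.0, h2max - 1.3) ** 2

res = minimize(objective, x0=[2.0, 0.5, 0.1], method="Nelder-Mead",
               options={"maxiter": 60})
print(res.x, res.fun)
```

Repeating this optimization over a grid of steady-state transitions, and interpolating the resulting gains, is the predictor role the abstract describes.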

    Efficient Uncertainty Quantification with the Polynomial Chaos Method for Stiff Systems

    The polynomial chaos method has been widely adopted as a computationally feasible approach for uncertainty quantification. Most studies to date have focused on non-stiff systems. When stiff systems are considered, implicit numerical integration requires the solution of a nonlinear system of equations at every time step. Using the Galerkin approach, the size of the system state increases from n to S × n, where S is the number of polynomial chaos basis functions. Solving such systems with full linear algebra causes the computational cost to increase from O(n³) to O(S³n³). The S³-fold increase can make the computational cost prohibitive. This paper explores computationally efficient uncertainty quantification techniques for stiff systems using the Galerkin, collocation, and collocation least-squares formulations of polynomial chaos. In the Galerkin approach, we propose a modification of the implicit time-stepping process that uses an approximation of the Jacobian matrix to reduce the computational cost. The numerical results show a reduction in run time with a small impact on accuracy. In the stochastic collocation formulation, we propose a least-squares approach based on collocation at a low-discrepancy set of points. Numerical experiments illustrate that the collocation least-squares approach has accuracy similar to the Galerkin approach, is more efficient, and does not require any modifications of the original code.
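    The collocation least-squares formulation can be sketched on a scalar model problem (illustrative, not from the paper; equispaced points stand in for the low-discrepancy set): run the unmodified deterministic solver at more sample points than basis functions, then fit the chaos coefficients by least squares.

```python
import numpy as np

# Model problem: y' = -k*y, y(0) = 1, with uncertain rate k = 1 + 0.3*xi,
# xi ~ Uniform(-1, 1); the exact "solver output" at t = 1 is exp(-k*t)
t = 1.0
order = 4                             # chaos order; S = order + 1 basis functions
xi = np.linspace(-1.0, 1.0, 20)       # more collocation points than coefficients
y = np.exp(-(1.0 + 0.3 * xi) * t)     # one deterministic solve per point

# Least-squares fit of Legendre (uniform-measure) chaos coefficients
V = np.polynomial.legendre.legvander(xi, order)
coeffs, *_ = np.linalg.lstsq(V, y, rcond=None)

# The mean is the P_0 coefficient, since E[P_j(xi)] = 0 for j > 0
mean_pce = coeffs[0]
mean_exact = (np.exp(-0.7 * t) - np.exp(-1.3 * t)) / (0.6 * t)
print(mean_pce, mean_exact)
```

Note that the deterministic solver is treated as a black box evaluated point by point, which is why this formulation requires no modification of the original code, in contrast to the coupled S × n Galerkin system.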

    Some Unconstrained Optimization Methods

    Although it is a very old theme, unconstrained optimization remains an active area of research for many scientists. Today, the results of unconstrained optimization are applied in various branches of science, as well as in practice generally. Here, we present line search techniques. Further, in this chapter we consider some unconstrained optimization methods, together with some contemporary results in this area.
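    A standard line search technique of the kind surveyed in such chapters is Armijo backtracking; here is a minimal sketch on an illustrative convex quadratic (the test function and constants are assumptions, not from the chapter).

```python
import numpy as np

def backtracking_line_search(f, grad, x, d, alpha=1.0, rho=0.5, c=1e-4):
    """Armijo backtracking: shrink the step until sufficient decrease
    f(x + alpha*d) <= f(x) + c*alpha*grad(x)^T d holds."""
    fx, slope = f(x), grad(x) @ d
    while f(x + alpha * d) > fx + c * alpha * slope:
        alpha *= rho
    return alpha

# Steepest descent with Armijo line search on an illustrative quadratic
A = np.array([[3.0, 0.0], [0.0, 1.0]])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
x = np.array([2.0, 2.0])
for _ in range(50):
    d = -grad(x)
    x = x + backtracking_line_search(f, grad, x, d) * d
print(x)  # near the minimizer [0, 0]
```

The sufficient-decrease condition guarantees monotone descent, which is the property line search methods use to globalize the local methods discussed in the chapter.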