
    Certification aspects of the fast gradient method for solving the dual of parametric convex programs

    This paper examines the computational complexity certification of the fast gradient method for the solution of the dual of a parametric convex program. To this end, a lower iteration bound is derived such that for all parameters from a compact set a solution with a specified level of suboptimality will be obtained. Because of its practical importance, the derivation of the smallest lower iteration bound is considered. In order to determine it, we investigate both the computation of the worst-case minimal Euclidean distance between an initial iterate and a Lagrange multiplier and the issue of finding the largest step size for the fast gradient method. In addition, we argue that optimal preconditioning of the dual problem cannot be proven to decrease the smallest lower iteration bound. The findings of this paper are of importance in embedded optimization, for instance, in model predictive control.
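    For orientation, here is a minimal Python sketch of the kind of certified fast gradient scheme the abstract describes. All names are illustrative; the iteration bound used below is the standard a-priori bound for Nesterov's method on a smooth problem, not the tightened bound derived in the paper.

        import numpy as np

        def certified_fast_gradient(grad, L, mu0, eps, D):
            # Nesterov's fast gradient method applied to a smooth dual
            # function, run for an a-priori certified number of iterations.
            # grad: gradient of the dual; L: Lipschitz constant of grad
            # (so 1/L is the standard step size); D: assumed worst-case
            # distance from mu0 to some optimal Lagrange multiplier.
            # Standard certificate: suboptimality <= 2*L*D^2 / (k+1)^2.
            k_min = int(np.ceil(np.sqrt(2.0 * L * D**2 / eps) - 1))
            mu, y, t = mu0.copy(), mu0.copy(), 1.0
            for _ in range(max(k_min, 1)):
                mu_next = y + (1.0 / L) * grad(y)  # ascent step on the dual
                t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t**2))
                y = mu_next + ((t - 1.0) / t_next) * (mu_next - mu)
                mu, t = mu_next, t_next
            return mu, k_min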

    Newton-type Alternating Minimization Algorithm for Convex Optimization

    We propose NAMA (Newton-type Alternating Minimization Algorithm) for solving structured nonsmooth convex optimization problems in which the sum of two functions is to be minimized, one being strongly convex and the other composed with a linear mapping. The proposed algorithm is a line-search method over a continuous, real-valued, exact penalty function for the corresponding dual problem, which is computed by evaluating the augmented Lagrangian at the primal points obtained by alternating minimizations. As a consequence, NAMA relies on exactly the same computations as the classical alternating minimization algorithm (AMA), also known as the dual proximal gradient method. Under standard assumptions the proposed algorithm possesses strong convergence properties, while under mild additional assumptions the asymptotic convergence is superlinear, provided that the search directions are chosen according to quasi-Newton formulas. Due to its simplicity, the proposed method is well suited for embedded applications and large-scale problems. Experiments show that using limited-memory directions in NAMA greatly improves the convergence speed over AMA and its accelerated variant.
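    Since NAMA reuses exactly the computations of AMA, a small Python sketch of classical AMA may help fix ideas. The setup is an assumed, illustrative instance of the problem class in the abstract: minimize 0.5*x'Qx + q'x + g(Ax) with Q positive definite, where prox_g is the proximal operator of g. NAMA would build its dual penalty from the same x, z, y quantities and line-search over quasi-Newton directions instead of taking the plain multiplier step.

        import numpy as np

        def ama(Q, q, A, prox_g, rho, y0, iters=500):
            # Classical AMA, i.e. proximal gradient ascent on the dual of
            #   min f(x) + g(z)  s.t.  Ax = z,  with f(x) = 0.5*x'Qx + q'x.
            # A sufficient step size is rho <= sigma_min(Q) / ||A||^2.
            Q_inv = np.linalg.inv(Q)          # acceptable for a small sketch
            y = y0.copy()
            for _ in range(iters):
                x = -Q_inv @ (q + A.T @ y)    # minimize the Lagrangian over x
                z = prox_g(A @ x + y / rho, 1.0 / rho)
                y = y + rho * (A @ x - z)     # multiplier (dual gradient) step
            return x, z, y

    For instance, when g is the indicator of a box [lo, hi], the prox reduces to clipping: prox_g = lambda v, t: np.clip(v, lo, hi).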

    Credible Autocoding of Convex Optimization Algorithms

    The efficiency of modern optimization methods, coupled with increasing computational resources, has led to the possibility of real-time optimization algorithms acting in safety-critical roles. There is a considerable body of mathematical proofs about online optimization programs which can be leveraged to assist in the development and verification of their implementations. In this paper, we demonstrate how theoretical proofs of real-time optimization algorithms can be used to describe functional properties at the level of the code, thereby making them accessible to the formal methods community. The running example used in this paper is a generic semidefinite programming (SDP) solver. Semidefinite programs can encode a wide variety of optimization problems and can be solved in polynomial time to a given accuracy. We describe a top-down approach that transforms a high-level analysis of the algorithm into useful code annotations. We formulate some general remarks about how such a task can be incorporated into a convex programming autocoder. We then take a first step towards the automatic verification of the optimization program by identifying key issues to be addressed in future work.
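    As a toy illustration of carrying a proof down to code level (the paper targets annotated C code for an SDP solver; this Python fragment is only a hypothetical analogue), the descent lemma for a gradient step on a quadratic can be rendered as a checkable postcondition:

        import numpy as np

        def annotated_gradient_step(x, Q, q, alpha):
            # f(x) = 0.5*x'Qx + q'x. For 0 < alpha <= 1/L, with L the
            # largest eigenvalue of Q, the descent lemma guarantees
            #   f(x_next) <= f(x) - (alpha/2) * ||grad f(x)||^2.
            # An autocoder would emit this as a formal annotation; here it
            # appears as a runtime assertion (with a small numerical slack).
            f = lambda v: 0.5 * v @ Q @ v + q @ v
            g = Q @ x + q
            x_next = x - alpha * g
            assert f(x_next) <= f(x) - 0.5 * alpha * (g @ g) + 1e-9
            return x_next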

    Complexity Certification of the Fast Alternating Minimization Algorithm for Linear Model Predictive Control

    In this paper, the fast alternating minimization algorithm (FAMA) is proposed to solve model predictive control (MPC) problems with polytopic and second-order cone constraints. We extend previous theoretical results of FAMA to a more general case, where convex constraints are allowed to be imposed on the strongly convex objective and all convergence properties of FAMA are still preserved. Two splitting strategies for MPC problems are presented. Both of them satisfy the assumptions of FAMA and result in efficient implementations by reducing each iteration of FAMA to simple operations. We derive computational complexity certificates for both splitting strategies, by providing bounds on the number of iterations for both primal and dual variables, which are of particular relevance in the context of real-time MPC to bound the required online computation time. For MPC problems with polyhedral and ellipsoidal constraints, an off-line preconditioning method is presented to further improve the convergence speed of FAMA by reducing the complexity bound and enlarging the step size of the algorithm. Finally, we demonstrate the performance of FAMA compared to other splitting methods using a quadrotor example.
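    Below is a minimal Python sketch of the accelerated dual update that distinguishes FAMA from plain AMA (illustrative names; in the paper the chosen splitting determines Q, q, A and the prox, e.g. a projection onto a polytope or second-order cone):

        import numpy as np

        def fama(Q, q, A, prox_g, rho, y0, iters=200):
            # Fast AMA: AMA with a Nesterov-type momentum step on the dual
            # variable. Each iteration reduces to one linear solve and one
            # prox/projection, which is what makes it attractive for MPC.
            Q_inv = np.linalg.inv(Q)
            y, y_acc, t = y0.copy(), y0.copy(), 1.0
            for _ in range(iters):
                x = -Q_inv @ (q + A.T @ y_acc)
                z = prox_g(A @ x + y_acc / rho, 1.0 / rho)
                y_next = y_acc + rho * (A @ x - z)
                t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t**2))
                y_acc = y_next + ((t - 1.0) / t_next) * (y_next - y)
                y, t = y_next, t_next
            return x, y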

    Complexity Certification of the Fast Alternating Minimization Algorithm for Linear MPC

    In this technical note, the fast alternating minimization algorithm (FAMA) is proposed to solve model predictive control (MPC) problems with polytopic and second-order cone constraints. Two splitting strategies with efficient implementations for MPC problems are presented. We derive computational complexity certificates for both splitting strategies, by providing complexity upper bounds on the number of iterations required to achieve a certain accuracy of the dual function value and, most importantly, of the primal solution. This is of particular relevance in the context of real-time MPC in order to bound the required on-line computation time. We further address the computation of the complexity bounds, requiring the solution of a non-convex minimization problem. Finally, we demonstrate the performance of FAMA compared to other splitting methods using a quadrotor example.
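    For orientation only, certificates of this kind typically take the following shape for an accelerated dual gradient method (the exact constants and splitting-dependent quantities in the paper differ); here sigma_f is the strong-convexity modulus of the objective, lambda the dual variable, and A the splitting matrix:

        D(\lambda^\star) - D(\lambda_k) \le \frac{2\,\|A\|^2\,\|\lambda_0 - \lambda^\star\|^2}{\sigma_f\,(k+1)^2},
        \qquad
        \frac{\sigma_f}{2}\,\|x_k - x^\star\|^2 \le D(\lambda^\star) - D(\lambda_k).

    Any k making the first right-hand side at most epsilon therefore certifies a dual accuracy and, through the second inequality, a primal accuracy; estimating \|\lambda_0 - \lambda^\star\| over all problem instances is of the flavor of the non-convex minimization mentioned in the abstract.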
