
    A faster prediction-correction framework for solving convex optimization problems

    He and Yuan's prediction-correction framework [SIAM J. Numer. Anal. 50: 700-709, 2012] provides convergent algorithms for solving convex optimization problems at a rate of O(1/t) in both the ergodic and pointwise senses. This paper presents a faster prediction-correction framework with a rate of O(1/t) in the non-ergodic sense and O(1/t^2) in the pointwise sense, without any additional assumptions. Interestingly, it yields a faster algorithm for solving multi-block separable convex optimization problems with linear equality or inequality constraints.
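
    To make the prediction-correction idea concrete, the following is a minimal sketch of a generic prediction-correction iteration for a linearly constrained problem min 0.5*||x - c||^2 s.t. Ax = b: the predictor is an augmented-Lagrangian step and the corrector is a relaxed move toward the predictor. The objective, step sizes, and stopping test are illustrative assumptions; this is not the accelerated framework proposed in the paper.

    ```python
    import numpy as np

    def prediction_correction(A, b, c, beta=1.0, alpha=1.0, iters=500, tol=1e-8):
        """Generic prediction-correction sketch for min 0.5*||x - c||^2  s.t.  A x = b.

        Predictor: an augmented-Lagrangian step producing (x_t, lam_t).
        Corrector: a relaxed update of (x, lam) toward the predictor.
        Illustrative only; not the accelerated scheme of the paper.
        """
        m, n = A.shape
        x, lam = np.zeros(n), np.zeros(m)
        H = np.eye(n) + beta * A.T @ A          # predictor subproblem reduces to a linear solve
        for _ in range(iters):
            # prediction: solve the augmented-Lagrangian subproblem, then a dual ascent step
            x_t = np.linalg.solve(H, c - A.T @ lam + beta * A.T @ b)
            lam_t = lam + beta * (A @ x_t - b)
            # correction: move the current iterate toward the predictor
            x = x + alpha * (x_t - x)
            lam = lam + alpha * (lam_t - lam)
            # stop when primal feasibility and stationarity residuals are small
            if np.linalg.norm(A @ x - b) < tol and np.linalg.norm(x - c + A.T @ lam) < tol:
                break
        return x, lam
    ```

    For example, prediction_correction(np.array([[1.0, 1.0]]), np.array([1.0]), np.array([2.0, 0.0])) returns the projection of c onto the constraint set.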

    Decentralized Proximal Method of Multipliers for Convex Optimization with Coupled Constraints

    In this paper, a decentralized proximal method of multipliers (DPMM) is proposed to solve constrained convex optimization problems over multi-agent networks, where the local objective of each agent is a general closed convex function, and the constraints are coupled equalities and inequalities. This algorithm strategically integrates the dual decomposition method and the proximal point algorithm. One advantage of DPMM is that subproblems can be solved inexactly and in parallel by agents at each iteration, which relaxes the restriction of requiring exact solutions to subproblems in many distributed constrained optimization algorithms. We show that the first-order optimality residual of the proposed algorithm decays to 0 at a rate of o(1/k) under general convexity. Furthermore, if a structural assumption for the considered optimization problem is satisfied, the sequence generated by DPMM converges linearly to an optimal solution. In numerical simulations, we compare DPMM with several existing algorithms using two examples to demonstrate its effectiveness.
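
    As a rough illustration of the dual-decomposition-plus-proximal-point combination described above, the sketch below solves min sum_i 0.5*||x_i - c_i||^2 s.t. sum_i A_i x_i = b, with each block updated in parallel via a closed-form proximal step and a shared multiplier updated by dual ascent. It is a synchronous, coordinator-based toy with assumed step sizes, not the decentralized DPMM protocol or its communication model.

    ```python
    import numpy as np

    def prox_dual_decomposition(A_list, b, c_list, rho=1.0, sigma=0.1, iters=2000, tol=1e-8):
        """Toy sketch: dual decomposition with proximal local steps for
            min sum_i 0.5*||x_i - c_i||^2   s.t.   sum_i A_i x_i = b.

        Each block i is updated in parallel given the shared multiplier; a dual
        ascent step then acts on the coupled equality constraint. Illustrative
        only; not the decentralized DPMM algorithm.
        """
        lam = np.zeros(b.shape[0])
        xs = [np.zeros(A.shape[1]) for A in A_list]
        for _ in range(iters):
            # local (parallelizable) proximal subproblems: closed form for quadratic objectives
            xs = [(c + rho * x - A.T @ lam) / (1.0 + rho)
                  for A, c, x in zip(A_list, c_list, xs)]
            residual = sum(A @ x for A, x in zip(A_list, xs)) - b
            lam = lam + sigma * residual        # dual ascent on the coupled constraint
            if np.linalg.norm(residual) < tol:
                break
        return xs, lam
    ```

    The dual step size sigma is assumed small enough relative to rho and the norms of the A_i for the toy iteration to converge.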

    Implementation of model predictive control for tracking in embedded systems using a sparse extended ADMM algorithm

    This article presents a sparse, low-memory-footprint optimization algorithm for the implementation of the model predictive control (MPC) for tracking formulation in embedded systems. This MPC formulation has several advantages over standard MPC formulations, such as an increased domain of attraction and guaranteed recursive feasibility even in the event of a sudden reference change. However, this comes at the expense of a small number of additional decision variables in the MPC optimization problem, which complicates the structure of its matrices. We propose a sparse optimization algorithm, based on an extension of the alternating direction method of multipliers, that exploits the structure of this particular MPC formulation. We describe the controller formulation and detail how its structure is exploited by means of the aforementioned optimization algorithm. We show closed-loop simulations comparing the proposed solver against other solvers and approaches from the literature.
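
    For reference, a generic ADMM iteration for an MPC-style quadratic program min 0.5*z'Hz + q'z s.t. Gz = g (dynamics) and lo <= z <= hi (input/state bounds) looks like the sketch below, using the splitting z = v with the box constraint placed on v. The paper's solver additionally exploits the specific sparsity pattern of the MPC-for-tracking matrices, which this dense sketch does not; the variable names and tolerances are illustrative assumptions.

    ```python
    import numpy as np

    def admm_mpc_qp(H, q, G, g, lo, hi, rho=1.0, iters=500, tol=1e-6):
        """Generic ADMM sketch for an MPC-style QP with equality dynamics
        constraints and box bounds. Illustrative only; it does not exploit the
        MPC-for-tracking sparsity structure used by the paper's solver.
        """
        n, m = H.shape[0], G.shape[0]
        z, v, u = np.zeros(n), np.zeros(n), np.zeros(n)
        # KKT matrix of the equality-constrained z-update (assumed nonsingular), built once
        KKT = np.block([[H + rho * np.eye(n), G.T],
                        [G, np.zeros((m, m))]])
        for _ in range(iters):
            rhs = np.concatenate([rho * (v - u) - q, g])
            z = np.linalg.solve(KKT, rhs)[:n]   # equality-constrained quadratic subproblem
            v_old = v
            v = np.clip(z + u, lo, hi)          # projection onto the box constraints
            u = u + z - v                       # scaled dual update
            # primal and dual residuals of the z = v splitting
            if np.linalg.norm(z - v) < tol and rho * np.linalg.norm(v - v_old) < tol:
                break
        return v
    ```

    In an embedded setting the KKT factorization would be computed offline and the per-iteration work reduced to sparse triangular solves, which is the kind of structure exploitation the article targets.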