
    Projection methods in conic optimization

    There exist efficient algorithms to project a point onto the intersection of a convex cone and an affine subspace. These conic projections are, in turn, the workhorse of a range of algorithms in conic optimization, with a variety of applications in science, finance and engineering. This chapter reviews some of these algorithms, emphasizing the so-called regularization algorithms for linear conic optimization and applications in polynomial optimization. It is a presentation of the material of several recent research articles; we aim here at clarifying the ideas, presenting them in a general framework, and pointing out important techniques.
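    The following is a minimal, self-contained sketch of one classical way to compute such a conic projection: Dykstra's alternating-projection algorithm applied to the nonnegative orthant (the simplest convex cone) and an affine subspace {x : Ax = b}. The choice of cone, the function names, and the iteration budget are illustrative assumptions, not the chapter's algorithms.

```python
import numpy as np

def proj_affine(y, A, b, AAt_inv):
    # Euclidean projection onto the affine subspace {x : A x = b};
    # assumes A has full row rank so that A A^T is invertible.
    return y - A.T @ (AAt_inv @ (A @ y - b))

def proj_cone(y):
    # Euclidean projection onto the nonnegative orthant, a simple convex cone.
    return np.maximum(y, 0.0)

def conic_projection(z, A, b, iters=500):
    # Project z onto {x >= 0} ∩ {x : A x = b} via Dykstra's algorithm, which
    # (unlike plain alternating projections) converges to the actual
    # Euclidean projection of z onto the intersection.
    AAt_inv = np.linalg.inv(A @ A.T)
    x = z.astype(float)
    p = np.zeros_like(x)
    q = np.zeros_like(x)
    for _ in range(iters):
        y = proj_affine(x + p, A, b, AAt_inv)
        p = x + p - y
        x = proj_cone(y + q)
        q = y + q - x
    return x
```

    Swapping proj_cone for a projection onto, say, the positive semidefinite cone (via an eigenvalue decomposition) gives the flavor of the projections used in the semidefinite and polynomial-optimization settings the chapter discusses.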

    Combining Lagrangian Decomposition and Excessive Gap Smoothing Technique for Solving Large-Scale Separable Convex Optimization Problems

    A new algorithm for solving large-scale convex optimization problems with a separable objective function is proposed. The basic idea is to combine three techniques: Lagrangian dual decomposition, excessive gap and smoothing. The main advantage of this algorithm is that it dynamically updates the smoothness parameters, which leads to numerically robust performance. The convergence of the algorithm is proved under weak conditions imposed on the original problem. The rate of convergence is O(1/k), where k is the iteration counter. In the second part of the paper, the algorithm is coupled with a dual scheme to construct a switching variant of the dual decomposition. We discuss implementation issues and make a theoretical comparison. Numerical examples confirm the theoretical results.
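    To make the decomposition idea concrete, here is a minimal sketch of plain Lagrangian dual decomposition on a toy separable problem, min 0.5||x1 - c1||^2 + 0.5||x2 - c2||^2 subject to A1 x1 + A2 x2 = b, with a fixed dual step size. It shows only the decomposition ingredient; the paper's smoothing, excessive-gap update and dynamically adjusted smoothness parameters are not reproduced, and all names are illustrative.

```python
import numpy as np

def subproblem(c, A, lam):
    # argmin_x 0.5*||x - c||^2 + lam^T (A x); closed form for this toy objective.
    return c - A.T @ lam

def dual_decomposition(c1, c2, A1, A2, b, step=0.05, iters=2000):
    lam = np.zeros(b.shape[0])
    for _ in range(iters):
        x1 = subproblem(c1, A1, lam)   # the two blocks decouple given lam ...
        x2 = subproblem(c2, A2, lam)   # ... and can be solved in parallel
        lam += step * (A1 @ x1 + A2 @ x2 - b)  # gradient ascent on the dual
    return x1, x2, lam
```

    For a general nonsmooth separable objective this dual ascent degrades to a slow subgradient method; smoothing the subproblems is what the excessive-gap technique exploits to obtain the O(1/k) rate stated above.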

    A distributed primal-dual interior-point method for loosely coupled problems using ADMM

    In this paper we propose an efficient distributed algorithm for solving loosely coupled convex optimization problems. The algorithm is based on a primal-dual interior-point method in which we use the alternating direction method of multipliers (ADMM) to compute the primal-dual directions at each iteration of the method. This enables us to combine the exceptional convergence properties of primal-dual interior-point methods with the remarkable parallelizability of ADMM. The resulting algorithm has superior computational properties with respect to ADMM applied directly to our problem: the amount of computation each computing agent must perform is far smaller. In particular, the updates for all variables can be expressed in closed form, irrespective of the type of optimization problem. The most expensive computations of the algorithm occur in the updates of the primal variables and can be precomputed at each iteration of the interior-point method. We verify our method and compare it to ADMM in numerical experiments.
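    For readers unfamiliar with the ADMM building block used inside the interior-point iterations, here is a minimal generic ADMM sketch for nonnegative least squares, min 0.5||Ax - b||^2 subject to x >= 0, split as f(x) + g(z) with the consensus constraint x = z. It is a standard textbook splitting under our own naming, not the distributed primal-dual method of the paper.

```python
import numpy as np

def admm_nonneg_ls(A, b, rho=1.0, iters=200):
    # Scaled-form ADMM for: min 0.5*||A x - b||^2  s.t.  x >= 0.
    n = A.shape[1]
    M = np.linalg.inv(A.T @ A + rho * np.eye(n))  # factor once, reuse every iteration
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    for _ in range(iters):
        x = M @ (A.T @ b + rho * (z - u))  # closed-form quadratic x-update
        z = np.maximum(x + u, 0.0)         # projection z-update
        u = u + x - z                      # scaled dual-variable update
    return z
```

    The closed-form updates here mirror the abstract's point: when every update is cheap and explicit, the per-agent work inside each interior-point iteration stays small.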