
    Projection methods in conic optimization

    There exist efficient algorithms to project a point onto the intersection of a convex cone and an affine subspace. Those conic projections are in turn the workhorse of a range of algorithms in conic optimization, with a variety of applications in science, finance and engineering. This chapter reviews some of these algorithms, emphasizing the so-called regularization algorithms for linear conic optimization, and applications in polynomial optimization. This is a presentation of the material of several recent research articles; we aim here at clarifying the ideas, presenting them in a general framework, and pointing out important techniques.
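    As a toy illustration of the conic projections the abstract refers to, the sketch below alternates Euclidean projections onto the positive semidefinite cone and onto an affine subspace {X : A vec(X) = b}. The plain alternating-projection scheme, the function names, and the full-row-rank assumption on the constraint matrix are assumptions made for this example; the chapter itself surveys more refined regularization and dual methods.

```python
# Illustrative sketch only: alternating projections onto the intersection of
# the PSD cone and an affine subspace, acting on vec(X) through a matrix A
# with full row rank. Names and scheme are assumptions, not the chapter's.
import numpy as np

def proj_psd(X):
    """Euclidean projection onto the positive semidefinite cone."""
    w, V = np.linalg.eigh((X + X.T) / 2)          # symmetrize, then eigendecompose
    return (V * np.maximum(w, 0)) @ V.T           # clip negative eigenvalues

def proj_affine(X, A, b):
    """Projection onto {X : A vec(X) = b}; assumes A has full row rank."""
    x = X.ravel()
    corr = A.T @ np.linalg.solve(A @ A.T, A @ x - b)   # least-squares correction
    return (x - corr).reshape(X.shape)

def alternating_projections(X0, A, b, iters=500):
    """Alternate the two projections; approaches a point of the intersection
    when it is nonempty."""
    X = X0
    for _ in range(iters):
        X = proj_affine(proj_psd(X), A, b)
    return X
```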

    Adapting the interior point method for the solution of LPs on serial, coarse grain parallel and massively parallel computers

    In this paper we describe a unified scheme for implementing an interior point method (IPM) over a range of computer architectures. In the inner iteration of the IPM a search direction is computed using Newton's method. Computationally this involves solving a sparse symmetric positive definite (SSPD) system of equations. The choice of direct and indirect methods for the solution of this system, and the design of data structures to take advantage of serial, coarse grain parallel and massively parallel computer architectures, are considered in detail. We put forward arguments as to why integration of the system within a sparse simplex solver is important and outline how the system is designed to achieve this integration.
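    To make the inner iteration concrete, the sketch below forms the SSPD normal-equations matrix A D Aᵀ from a primal-dual iterate and solves it either by a sparse direct factorization or by conjugate gradients, mirroring the direct/indirect choice discussed in the paper. The variable names and the scaling D = diag(x/s) are generic IPM conventions assumed for the example, not necessarily the paper's notation.

```python
# Hedged sketch of the inner IPM step: solve the sparse symmetric positive
# definite system (A diag(x/s) A^T) dy = r for the dual search direction.
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def newton_dy(A, x, s, r, direct=True):
    """A: scipy.sparse constraint matrix; x, s: positive primal and dual slack iterates."""
    D = sp.diags(x / s)                 # positive diagonal scaling
    M = (A @ D @ A.T).tocsc()           # SSPD normal-equations matrix
    if direct:
        return spla.splu(M).solve(r)    # direct method: sparse factorization
    dy, _ = spla.cg(M, r)               # indirect method: conjugate gradients
    return dy
```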

    Experimental investigations in combining primal dual interior point method and simplex based LP solvers

    The use of a primal dual interior point method (PD) based optimizer as a robust linear programming (LP) solver is now well established. Instead of replacing the sparse simplex algorithm (SSX), the PD is increasingly seen as complementing it. The progress of PD iterations is not hindered by the degeneracy or the stalling problem of the SSX; indeed it reaches the 'near optimum' solution very quickly. The SSX algorithm, in contrast, is not affected by the boundary conditions which slow down the convergence of the PD. If the solution to the LP problem is non-unique, the PD algorithm converges to an interior point of the solution set while the SSX algorithm finds an extreme point solution. To take advantage of the attractive properties of both the PD and the SSX, we have designed a hybrid framework whereby crossover from PD to SSX can take place at any stage of the PD optimization run. The crossover to SSX involves the partition of the PD solution set into active and dormant variables. In this paper we examine the practical difficulties in partitioning the solution set, discuss the reliability of predicting the solution set partition before optimality is reached, and report the results of combining exact and inexact prediction with SSX basis recovery.
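    A minimal sketch of the kind of partition prediction discussed above: compare each primal value x_j with its dual slack s_j at a PD iterate and tentatively mark the variable as active (a candidate for the SSX basis) or dormant. The ratio test below is a common indicator-style heuristic assumed for illustration; the paper's exact and inexact prediction rules may differ.

```python
# Hedged sketch: predict the active/dormant partition from a primal-dual
# iterate (x, s) before optimality is reached. Heuristic for illustration only.
import numpy as np

def predict_partition(x, s, ratio=1.0):
    """Return boolean masks (active, dormant) over the variables."""
    active = x > ratio * s      # primal value dominates its dual slack
    return active, ~active
```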

    Parameter Selection and Pre-Conditioning for a Graph Form Solver

    In a recent paper, Parikh and Boyd describe a method for solving a convex optimization problem, where each iteration involves evaluating a proximal operator and projection onto a subspace. In this paper we address the critical practical issues of how to select the proximal parameter in each iteration, and how to scale the original problem variables, so as to achieve reliable practical performance. The resulting method has been implemented as an open-source software package called POGS (Proximal Graph Solver), which targets multi-core and GPU-based systems, and has been tested on a wide variety of practical problems. Numerical results show that POGS can solve very large problems (with, say, more than a billion coefficients in the data) to modest accuracy in a few tens of seconds. As just one example, a radiation treatment planning problem with around 100 million coefficients in the data can be solved in a few seconds, as compared to around one hour with an interior-point method.
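    Two standard ingredients behind the parameter-selection and scaling questions the abstract raises are diagonal equilibration of the data matrix and residual-balancing adaptation of the proximal (penalty) parameter. The sketch below shows generic versions of both; they are common heuristics from the ADMM literature assumed for illustration, not the exact rules implemented in POGS.

```python
# Hedged sketch: Ruiz-style diagonal equilibration of A and a residual-balancing
# update of the proximal parameter rho. Generic heuristics, not POGS's own rules.
import numpy as np

def ruiz_equilibrate(A, iters=10, eps=1e-8):
    """Return (d, e, A_scaled) with A_scaled = diag(d) @ A @ diag(e),
    rows and columns driven toward unit infinity-norm."""
    m, n = A.shape
    d, e = np.ones(m), np.ones(n)
    As = A.astype(float).copy()
    for _ in range(iters):
        dr = 1.0 / np.sqrt(np.maximum(np.abs(As).max(axis=1), eps))
        dc = 1.0 / np.sqrt(np.maximum(np.abs(As).max(axis=0), eps))
        As = dr[:, None] * As * dc[None, :]
        d *= dr
        e *= dc
    return d, e, As

def update_rho(rho, r_primal, r_dual, mu=10.0, tau=2.0):
    """Increase rho when the primal residual dominates, decrease when the dual does."""
    if r_primal > mu * r_dual:
        return rho * tau
    if r_dual > mu * r_primal:
        return rho / tau
    return rho
```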