21,628 research outputs found

    Parametric approaches to fractional programs: Analytical and empirical study

    Get PDF
    Fractional programming is used to model problems where the objective function is a ratio of functions. A parametric modeling approach provides an effective technique for obtaining optimal solutions of these fractional programming problems. Although many heuristic algorithms have been proposed and assessed relative to each other, there are few theoretical studies of the number of steps required to reach a solution. In this dissertation, I focus on the linear fractional combinatorial optimization problem, a special case of fractional programming in which all functions in the objective and constraints are linear and all variables are binary, modeling certain combinatorial structures. Two parametric algorithms are considered, and their efficiency is investigated both theoretically and computationally. I develop complexity bounds for these algorithms and show that they solve the linear fractional combinatorial optimization problem in polynomial time. In the computational study, the algorithms are applied to the fractional knapsack, fractional facility location, and fractional transportation problems and compared against other algorithms (e.g., Newton's method). The relative practical performance, measured by the number of function calls, demonstrates that the proposed algorithms are fast and robust for solving linear fractional programs with discrete variables.
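    The parametric approach referred to here is, in its classic form, Dinkelbach's iteration: repeatedly maximize f(x) - lambda*g(x) over the feasible set and update lambda to the ratio at the maximizer. The sketch below illustrates that iteration on a toy cardinality-constrained selection problem; the subproblem, data, and function names are illustrative assumptions, not the dissertation's actual test instances.

```python
# Minimal sketch of Dinkelbach's parametric (Newton-type) iteration for
# max f(x)/g(x) over binary x with linear f and g.  The subproblem used
# here, choosing exactly k items, is an illustrative stand-in for the
# knapsack/location/transportation structures mentioned in the abstract.
import numpy as np

def dinkelbach(f_coef, g_coef, k, tol=1e-9, max_iter=100):
    """Maximize (f_coef @ x) / (g_coef @ x) over binary x with sum(x) == k.

    Assumes g_coef @ x > 0 for every feasible x.
    """
    n = len(f_coef)
    x = np.zeros(n); x[:k] = 1            # any feasible starting point
    lam = (f_coef @ x) / (g_coef @ x)     # current ratio estimate
    for _ in range(max_iter):
        # Parametric subproblem: maximize (f - lam*g) @ x over the same
        # feasible set.  For a cardinality constraint this amounts to
        # picking the k largest reduced costs.
        reduced = f_coef - lam * g_coef
        x = np.zeros(n)
        x[np.argsort(reduced)[-k:]] = 1
        val = reduced @ x                 # F(lam) = max_x (f - lam*g) @ x
        if val < tol:                     # F(lam) ~ 0  =>  lam is optimal
            return x, lam
        lam = (f_coef @ x) / (g_coef @ x) # Newton-style parameter update
    return x, lam

rng = np.random.default_rng(0)
f = rng.uniform(1, 10, size=20)
g = rng.uniform(1, 10, size=20)
x_opt, ratio = dinkelbach(f, g, k=5)
print("best ratio:", ratio)
```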

    An interior point-proximal method of multipliers for linear positive semi-definite programming

    Get PDF
    In this paper we generalize the Interior Point-Proximal Method of Multipliers (IP-PMM) presented in Pougkakiotis and Gondzio (Comput Optim Appl 78:307–351, 2021. https://doi.org/10.1007/s10589-020-00240-9) for the solution of linear positive Semi-Definite Programming (SDP) problems, allowing inexactness in the solution of the associated Newton systems. In particular, we combine an infeasible Interior Point Method (IPM) with the Proximal Method of Multipliers (PMM) and interpret the algorithm (IP-PMM) as a primal-dual regularized IPM, suitable for solving SDP problems. We apply some iterations of an IPM to each sub-problem of the PMM until a satisfactory solution is found. We then update the PMM parameters, form a new IPM neighbourhood, and repeat this process. Given this framework, we prove polynomial complexity of the algorithm, under mild assumptions, and without requiring exact computations for the Newton directions. We furthermore provide a necessary condition for lack of strong duality, which can be used as a basis for constructing detection mechanisms for identifying pathological cases within IP-PMM.
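    As a rough illustration of the outer PMM structure described above, the sketch below runs the proximal method of multipliers on a toy equality-constrained QP, solving each regularized subproblem only inexactly with a capped number of conjugate-gradient steps. The QP stand-in, the parameter values, and the loose inner solve are assumptions for illustration; the paper's algorithm works on SDPs with an interior point method as the inner solver.

```python
# Minimal sketch of the Proximal Method of Multipliers (PMM) outer loop on a
# toy equality-constrained QP: min 0.5*x'Qx + c'x  s.t.  Ax = b.  The inner
# regularized subproblem is solved only inexactly (capped CG iterations),
# echoing the inexact-Newton theme; the SDP setting, the IPM inner solver,
# and the IP-PMM parameter schedule are not reproduced here.
import numpy as np
from scipy.sparse.linalg import cg

def pmm(Q, c, A, b, beta=10.0, rho=1.0, outer=50, inner_iters=10):
    m, n = A.shape
    x, y = np.zeros(n), np.zeros(m)
    for _ in range(outer):
        # PMM subproblem: min_x 0.5 x'Qx + c'x + y'(Ax - b)
        #                 + (beta/2)||Ax - b||^2 + (rho/2)||x - x_k||^2.
        # Its optimality condition is the SPD system H x = rhs below,
        # which we solve only approximately: the "inexact" inner step.
        H = Q + beta * A.T @ A + rho * np.eye(n)
        rhs = -c - A.T @ y + beta * A.T @ b + rho * x
        x, _ = cg(H, rhs, x0=x, maxiter=inner_iters)
        y = y + beta * (A @ x - b)            # multiplier update
    return x, y

rng = np.random.default_rng(1)
n, m = 30, 10
M = rng.standard_normal((n, n)); Q = M.T @ M + np.eye(n)   # SPD Hessian
A = rng.standard_normal((m, n)); b = rng.standard_normal(m)
c = rng.standard_normal(n)
x, y = pmm(Q, c, A, b)
print("primal infeasibility:", np.linalg.norm(A @ x - b))
```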

    Projection methods in conic optimization

    Get PDF
    There exist efficient algorithms to project a point onto the intersection of a convex cone and an affine subspace. Those conic projections are in turn the workhorse of a range of algorithms in conic optimization, with a variety of applications in science, finance and engineering. This chapter reviews some of these algorithms, emphasizing the so-called regularization algorithms for linear conic optimization, and applications in polynomial optimization. This is a presentation of the material of several recent research articles; we aim here at clarifying the ideas, presenting them in a general framework, and pointing out important techniques.
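    As a concrete instance of the kind of conic projection the chapter builds on, the sketch below projects a symmetric matrix onto the intersection of the positive semidefinite cone and an affine subspace using Dykstra's alternating projections: eigenvalue clipping for the cone step and a closed-form projector for the affine step. This is one standard scheme of the family reviewed, not a reproduction of any specific algorithm in the chapter; the constraint and data are illustrative.

```python
# Minimal sketch: project a symmetric matrix onto the intersection of the
# PSD cone and an affine subspace {X : <A_i, X> = b_i} with Dykstra's
# alternating projections.
import numpy as np

def proj_psd(X):
    """Project onto the PSD cone by clipping negative eigenvalues to zero."""
    w, V = np.linalg.eigh((X + X.T) / 2)
    return V @ np.diag(np.maximum(w, 0)) @ V.T

def make_proj_affine(As, b):
    """Projector onto {X : <A_i, X> = b_i}; A_i are stacked as rows of G."""
    G = np.stack([A.ravel() for A in As])            # shape m x n^2
    GGt_inv = np.linalg.inv(G @ G.T)                 # assumes full row rank
    def proj(X):
        x = X.ravel()
        return (x - G.T @ (GGt_inv @ (G @ x - b))).reshape(X.shape)
    return proj

def dykstra(C, As, b, iters=500):
    proj_aff = make_proj_affine(As, b)
    X, P = C.copy(), np.zeros_like(C)                # P: Dykstra correction
    for _ in range(iters):
        Y = proj_psd(X + P)                          # cone step
        P = X + P - Y                                # update correction
        X = proj_aff(Y)                              # affine step (affine
                                                     # sets need no correction)
    return X

rng = np.random.default_rng(2)
C = rng.standard_normal((4, 4)); C = (C + C.T) / 2
A1 = np.eye(4)                                       # constraint: trace(X) = 1
X = dykstra(C, [A1], np.array([1.0]))
print("trace:", np.trace(X), "min eig:", np.linalg.eigvalsh(X).min())
```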

    Simplification Methods for Sum-of-Squares Programs

    Full text link
    A sum-of-squares is a polynomial that can be expressed as a sum of squares of other polynomials. Determining if a sum-of-squares decomposition exists for a given polynomial is equivalent to a linear matrix inequality feasibility problem. The computation required to solve the feasibility problem depends on the number of monomials used in the decomposition. The Newton polytope is a method to prune unnecessary monomials from the decomposition. This method requires the construction of a convex hull, which can be time-consuming for polynomials with many terms. This paper presents a new algorithm for removing monomials based on a simple property of positive semidefinite matrices. It returns a set of monomials that is never larger than the set returned by the Newton polytope method and, for some polynomials, is a strictly smaller set. Moreover, the algorithm takes significantly less computation than the convex hull construction. This algorithm is then extended to a more general simplification method for sum-of-squares programming.
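    The "simple property" in question is presumably that a zero diagonal entry of a positive semidefinite matrix forces its entire row and column to vanish. The sketch below implements a pruning pass of that flavor: a candidate monomial is dropped when its squared exponent neither appears in the polynomial's support nor can arise from cross-terms of other retained monomials, and the test is iterated to a fixed point. This is an illustrative reconstruction, not necessarily the paper's exact algorithm.

```python
# In any Gram representation p = m(x)' Q m(x) with Q PSD, the coefficient
# of x^(2a) is Q_aa plus cross-terms Q_bg with b + g = 2a, b != g.  If 2a
# is absent from p's support and no such pair of *other* retained monomials
# exists, then Q_aa = 0 is forced, so monomial a can be pruned.
from itertools import combinations

def prune_monomials(support, monomials):
    """support: set of exponent tuples of p; monomials: candidate exponents."""
    mons = set(monomials)
    changed = True
    while changed:
        changed = False
        for a in list(mons):
            da = tuple(2 * ai for ai in a)
            if da in support:
                continue                 # diagonal entry may be nonzero
            others = mons - {a}
            if any(tuple(bi + gi for bi, gi in zip(b, g)) == da
                   for b, g in combinations(others, 2)):
                continue                 # cross-terms could offset Q_aa
            mons.remove(a)               # Q_aa forced to zero: prune a
            changed = True
    return mons

# Example: p(x, y) = x^4 + x^2 y^2 + 1, candidates up to degree 2.
support = {(4, 0), (2, 2), (0, 0)}
cands = [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2)]
print(sorted(prune_monomials(support, cands)))
# Two pruning rounds remove (0, 2) and then (0, 1).
```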

    Subtropical Real Root Finding

    Get PDF
    We describe a new incomplete but terminating method for real root finding for large multivariate polynomials. We take an abstract view of the polynomial as the set of exponent vectors associated with sign information on the coefficients. Then we employ linear programming to heuristically find roots. There is a specialized variant for roots with exclusively positive coordinates, which is of considerable interest for applications in chemistry and systems biology. An implementation of our method combining the computer algebra system Reduce with the linear programming solver Gurobi has been successfully applied to input data originating from established mathematical models used in these areas. We have solved several hundred problems with as many as 800,000 monomials in up to 10 variables with degrees up to 12. Our method failed, due to its incompleteness, in fewer than 8 percent of the cases.
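    The core subtropical trick can be pictured on a toy example: treat the polynomial as signed exponent vectors, use an LP to find a direction n in which one positive-coefficient monomial dominates every other term, and then bisect p along the curve t -> exp(t*n), where dominance guarantees a sign change. The sketch below does exactly this, with scipy.optimize.linprog standing in for the Reduce-plus-Gurobi pipeline; the example polynomial and the LP bounds are illustrative assumptions.

```python
# Toy reconstruction of the subtropical idea: LP finds a direction in which
# a positive-coefficient monomial strictly dominates, then bisection along
# the resulting curve locates a root with positive coordinates.
import numpy as np
from scipy.optimize import linprog

def eval_poly(terms, x):
    return sum(c * np.prod(x ** np.array(a)) for c, a in terms)

def dominating_direction(terms, idx):
    """Find n with <n, a_idx> >= <n, a_j> + 1 for all j != idx, via LP."""
    a_i = np.array(terms[idx][1], dtype=float)
    rows = [np.array(a, dtype=float) - a_i
            for j, (c, a) in enumerate(terms) if j != idx]
    A_ub, b_ub = np.stack(rows), -np.ones(len(rows))   # (a_j - a_i).n <= -1
    res = linprog(np.zeros(a_i.size), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(-10, 10)] * a_i.size)       # feasibility LP
    return res.x if res.success else None

# p(x, y) = x^3 y - 2 x y^2 - 1 as (coefficient, exponent-vector) pairs.
# Note p(1, 1) = 1 - 2 - 1 = -2 < 0 at the curve's start t = 0.
terms = [(1.0, (3, 1)), (-2.0, (1, 2)), (-1.0, (0, 0))]
n = dominating_direction(terms, 0)       # make the positive term dominate
lo, hi = 0.0, 1.0
while eval_poly(terms, np.exp(hi * n)) <= 0:
    hi *= 2                              # march out until p > 0 (dominance)
for _ in range(60):                      # bisect the sign change on the curve
    mid = (lo + hi) / 2
    if eval_poly(terms, np.exp(mid * n)) > 0:
        hi = mid
    else:
        lo = mid
root = np.exp(hi * n)
print("approximate positive root:", root, "p(root) =", eval_poly(terms, root))
```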

    Polynomial Linear Programming with Gaussian Belief Propagation

    Full text link
    Interior-point methods are state-of-the-art algorithms for solving linear programming (LP) problems with polynomial complexity. Specifically, the Karmarkar algorithm typically solves LP problems in time O(n^{3.5}), where n is the number of unknown variables. Karmarkar's celebrated algorithm is known to be an instance of the log-barrier method using the Newton iteration. The main computational overhead of this method is in inverting the Hessian matrix of the Newton iteration. In this contribution, we propose the application of the Gaussian belief propagation (GaBP) algorithm as part of an efficient and distributed LP solver that exploits the sparse and symmetric structure of the Hessian matrix and avoids the need for direct matrix inversion. This approach shifts the computation from the realm of linear algebra to that of probabilistic inference on graphical models, thus applying GaBP as an efficient inference engine. Our construction is general and can be used for any interior-point algorithm which uses the Newton method, including non-linear program solvers.
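    To make the role of GaBP concrete, the sketch below uses the standard Gaussian belief propagation message updates to solve a small symmetric linear system Ax = b, which is the job the Newton-step Hessian solve delegates to it. GaBP converges only for sufficiently nice matrices (e.g., diagonally dominant or walk-summable ones); this toy synchronous loop is an assumption-laden illustration, not the authors' distributed LP solver.

```python
# Gaussian belief propagation (GaBP) as a linear solver for A x = b,
# A symmetric.  Messages carry a precision P[i, j] and a mean mu[i, j].
import numpy as np

def gabp_solve(A, b, iters=50):
    n = len(b)
    nbrs = [[j for j in range(n) if j != i and A[i, j] != 0]
            for i in range(n)]
    P = np.zeros((n, n))                 # P[i, j]: precision message i -> j
    mu = np.zeros((n, n))                # mu[i, j]: mean message i -> j
    for _ in range(iters):
        for i in range(n):
            for j in nbrs[i]:
                # Aggregate prior plus all incoming messages except j's.
                P_ij = A[i, i] + sum(P[k, i] for k in nbrs[i] if k != j)
                mu_ij = (b[i] + sum(P[k, i] * mu[k, i]
                                    for k in nbrs[i] if k != j)) / P_ij
                P[i, j] = -A[i, j] ** 2 / P_ij       # outgoing precision
                mu[i, j] = P_ij * mu_ij / A[i, j]    # outgoing mean
    # Beliefs: combine the prior with all incoming messages.
    x = np.empty(n)
    for i in range(n):
        Pi = A[i, i] + sum(P[k, i] for k in nbrs[i])
        x[i] = (b[i] + sum(P[k, i] * mu[k, i] for k in nbrs[i])) / Pi
    return x

A = np.array([[3.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 3.0]])
b = np.array([1.0, 2.0, 3.0])
print("GaBP:", gabp_solve(A, b), "exact:", np.linalg.solve(A, b))
```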

    GMRES-Accelerated ADMM for Quadratic Objectives

    Full text link
    We consider the sequence acceleration problem for the alternating direction method of multipliers (ADMM) applied to a class of equality-constrained problems with strongly convex quadratic objectives, which frequently arise as the Newton subproblem of interior-point methods. Within this context, the ADMM update equations are linear, the iterates are confined within a Krylov subspace, and the Generalized Minimal RESidual (GMRES) algorithm is optimal in its ability to accelerate convergence. The basic ADMM method solves a κ-conditioned problem in O(√κ) iterations. We give theoretical justification and numerical evidence that the GMRES-accelerated variant consistently solves the same problem in O(κ^{1/4}) iterations for an order-of-magnitude reduction in iterations, despite a worst-case bound of O(√κ) iterations. The method is shown to be competitive against standard preconditioned Krylov subspace methods for saddle-point problems. The method is embedded within SeDuMi, a popular open-source solver for conic optimization written in MATLAB, and used to solve many large-scale semidefinite programs with error that decreases like O(1/k^2), instead of O(1/k), where k is the iteration index.
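    The key observation is that for a quadratic objective the ADMM update is an affine map v -> T(v) = Mv + c, so its fixed point solves the linear system (I - M)v = c, which GMRES can attack using only calls to T. The sketch below demonstrates this on a toy consensus-form QP; the splitting, the penalty parameter, and the problem data are illustrative assumptions, not the paper's SeDuMi embedding.

```python
# GMRES acceleration of a linear ADMM iteration.  Toy problem:
# min 0.5 x'Px + q'x  s.t.  x = z, A z = b  (consensus splitting).
import numpy as np
from scipy.sparse.linalg import gmres, LinearOperator

rng = np.random.default_rng(3)
n, m, rho = 6, 2, 1.0
B = rng.standard_normal((n, n)); P = B.T @ B + 0.1 * np.eye(n)
q = rng.standard_normal(n)
A = rng.standard_normal((m, n)); b = rng.standard_normal(m)
Pinv = np.linalg.inv(P + rho * np.eye(n))
AAt_inv = np.linalg.inv(A @ A.T)

def admm_step(v):
    """One scaled-form ADMM sweep on the stacked variable v = [z, u]."""
    z, u = v[:n], v[n:]
    x = Pinv @ (rho * (z - u) - q)               # x-update: prox of f
    w = x + u
    z = w - A.T @ (AAt_inv @ (A @ w - b))        # z-update: project onto Az=b
    u = u + x - z                                # scaled dual update
    return np.concatenate([z, u])

c = admm_step(np.zeros(2 * n))                   # T(0) = c
I_minus_M = LinearOperator((2 * n, 2 * n),       # matvec: (I - M)v
                           matvec=lambda v: v - (admm_step(v) - c))
v, info = gmres(I_minus_M, c)                    # solve for the fixed point
z, u = v[:n], v[n:]
x = Pinv @ (rho * (z - u) - q)                   # recover the primal iterate
print("||Ax - b|| at the GMRES fixed point:", np.linalg.norm(A @ x - b))
```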