
    Convergence analysis of an Inexact Infeasible Interior Point method for Semidefinite Programming

    In this paper we present an extension to SDP of the well-known infeasible interior point method for linear programming of Kojima, Megiddo and Mizuno (A primal-dual infeasible-interior-point algorithm for Linear Programming, Math. Progr., 1993). The extension developed here allows the use of inexact search directions; i.e., the linear systems defining the search directions can be solved with an accuracy that increases as the solution is approached. A convergence analysis is carried out and the global convergence of the method is proved.
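
    A minimal sketch of the inexact-direction idea described above (not the paper's algorithm): the linear system defining the search direction is solved iteratively only up to a residual tolerance tied to the duality-gap parameter mu, so the tolerance tightens automatically as the solution is approached. The tolerance rule tol = eta * mu and the conjugate gradient solver are illustrative assumptions.

```python
import numpy as np

def cg_to_tol(M, b, tol, max_iter=500):
    """Plain conjugate gradients on an SPD system M x = b, stopped once the
    residual norm drops below tol (an 'inexact' solve)."""
    x = np.zeros_like(b)
    r = b - M @ x
    p = r.copy()
    for _ in range(max_iter):
        if np.linalg.norm(r) <= tol:
            break
        Mp = M @ p
        alpha = (r @ r) / (p @ Mp)
        x = x + alpha * p
        r_new = r - alpha * Mp
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return x

def inexact_direction(M, rhs, mu, eta=0.1):
    """Direction solved only to accuracy eta*mu: rough when mu is large,
    increasingly accurate as mu -> 0 (solution approached)."""
    return cg_to_tol(M, rhs, tol=eta * mu)

# toy usage with a small SPD matrix standing in for the direction-defining system
rng = np.random.default_rng(0)
B = rng.standard_normal((20, 20))
M = B @ B.T + 20 * np.eye(20)
rhs = rng.standard_normal(20)
for mu in (1.0, 1e-2, 1e-4):
    d = inexact_direction(M, rhs, mu)
    print(f"mu={mu:.0e}  residual={np.linalg.norm(M @ d - rhs):.2e}")
```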

    An infeasible interior-point method for the P∗-matrix linear complementarity problem based on a trigonometric kernel function with full-Newton step

    An infeasible interior-point algorithm for solving the P∗-matrix linear complementarity problem based on a kernel function with a trigonometric barrier term is analyzed. Each (main) iteration of the algorithm consists of a feasibility step and several centrality steps, where the feasibility step is induced by a trigonometric kernel function. The complexity result coincides with the best result for infeasible interior-point methods for the P∗-matrix linear complementarity problem.
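
    For concreteness, the sketch below evaluates one trigonometric-barrier kernel of the general form used in this line of work, together with the associated proximity measure; the specific kernel and its constants are an assumption and may differ from the one analyzed in the paper.

```python
import numpy as np

# An assumed trigonometric-barrier kernel (not necessarily the paper's):
# psi(1) = 0, psi'(1) = 0, and psi(t) -> infinity as t -> 0+, so it penalises
# scaled iterates that drift away from the central path.
def psi(t):
    t = np.asarray(t, dtype=float)
    return (t**2 - 1) / 2 + (6 / np.pi) * np.tan(np.pi * (1 - t) / (4 * t + 2))

def proximity(v):
    """Proximity measure Psi(v) = sum_i psi(v_i) for the scaled iterate v > 0;
    centrality steps are taken until this measure is small enough."""
    return float(np.sum(psi(v)))

print(proximity(np.array([1.0, 1.0, 1.0])))   # 0.0: exactly on the central path
print(proximity(np.array([0.5, 1.2, 0.9])))   # > 0: off-centre iterate
```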

    Convergence of infeasible-interior-point methods for self-scaled conic programming

    An infeasible-start algorithm for linear programming whose complexity depends on the distance from the starting point to the optimal solution

    Includes bibliographical references. Supported in part by the MIT-NTU Collaboration Agreement. Robert M. Freund.

    Finding a point in the relative interior of a polyhedron

    A new initialization or `Phase I' strategy for feasible interior point methods for linear programming is proposed that computes a point on the primal-dual central path associated with the linear program. Provided there exist primal-dual strictly feasible points - an all-pervasive assumption in interior point method theory that implies the existence of the central path - our initial method (Algorithm 1) is globally Q-linearly and asymptotically Q-quadratically convergent, with a provable worst-case iteration complexity bound. When this assumption is not met, the numerical behaviour of Algorithm 1 is highly disappointing, even when the problem is primal-dual feasible. This is due to the presence of implicit equalities, inequality constraints that hold as equalities at all the feasible points. Controlled perturbations of the inequality constraints of the primal-dual problems are introduced - geometrically equivalent to enlarging the primal-dual feasible region and then systematically contracting it back to its initial shape - in order for the perturbed problems to satisfy the assumption. Thus Algorithm 1 can successfully be employed to solve each of the perturbed problems. We show that, when there exist primal-dual strictly feasible points of the original problems, the resulting method, Algorithm 2, finds such a point in a finite number of changes to the perturbation parameters. When implicit equalities are present, but the original problem and its dual are feasible, Algorithm 2 asymptotically detects all the primal-dual implicit equalities and generates a point in the relative interior of the primal-dual feasible set. Algorithm 2 can also asymptotically detect primal-dual infeasibility. Successful numerical experience with Algorithm 2 on linear programs from NETLIB and CUTEr, both with and without any significant preprocessing of the problems, indicates that Algorithm 2 may be used as an algorithmic preprocessor for removing implicit equalities, with theoretical guarantees of convergence.
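
    A minimal sketch of the perturb-and-contract idea only (not the paper's Algorithms 1 or 2): relax each inequality by a parameter lam so that strictly feasible points exist trivially, compute a well-centred point of the relaxed region, then shrink lam. As lam approaches 0, constraints whose slack also approaches 0 are exactly the implicit equalities, and the limit point lies in the relative interior of the original feasible set. The largest-slack subproblem and the contraction schedule below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def relaxed_interior_point(A, b, lam):
    """Largest-slack point of {x : A x <= b + lam}: maximize s subject to
    A x + s*1 <= b + lam, with x free."""
    m, n = A.shape
    c = np.r_[np.zeros(n), -1.0]                       # minimize -s
    A_ub = np.c_[A, np.ones(m)]
    res = linprog(c, A_ub=A_ub, b_ub=b + lam,
                  bounds=[(None, None)] * (n + 1))
    return res.x[:n], res.x[n]

# x >= 0, y >= 0, x + y <= 0: the feasible set is the single point (0, 0),
# so all three inequalities are implicit equalities and no strict interior exists.
A = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 1.0]])
b = np.zeros(3)

for lam in (1.0, 1e-1, 1e-2, 1e-3):
    x, s = relaxed_interior_point(A, b, lam)
    slack = b + lam - A @ x
    print(f"lam={lam:.0e}  x={np.round(x, 4)}  min slack={slack.min():.1e}")
```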

    Row generation techniques for approximate solution of linear programming problems

    Ankara : The Department of Industrial Engineering and the Institute of Engineering and Science of Bilkent University, 2010. Thesis (Master's) -- Bilkent University, 2010. Includes bibliographical references, leaves 69-77. In this study, row generation techniques are applied to general linear programming problems with a very large number of constraints relative to the problem dimension. A lower bound is obtained for the change in the objective value caused by the generation of a specific row. To achieve row selection that results in a large shift in the feasible region and the objective value at each row generation iteration, the lower bound is used to compare row generation candidates. For a warm start to the solution procedure, an effective selection of the subset of constraints that constitutes the initial LP is considered. Several strategies are discussed for forming such a small subset of constraints so as to obtain an initial solution close to the feasible region of the original LP. Approximation schemes are designed and compared to make it possible to terminate row generation at a solution in the proximity of an optimal solution of the input LP. The row generation algorithm presented in this study, enhanced with a warm-start strategy and an approximation scheme, is implemented and tested for computation time and the number of rows generated. Two efficient primal simplex method variants are used for benchmarking computation times, and the row generation algorithm appears to perform better than at least one of them, especially when the number of constraints is large. Paç, A. Burak. M.S.
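
    A bare-bones sketch of the row generation loop described above, using a plain most-violated-row selection rule; the thesis's bound-based candidate comparison, warm-start strategies and approximation schemes are not reproduced here, and the covering-LP test instance is an illustrative assumption.

```python
import numpy as np
from scipy.optimize import linprog

def row_generation(c, A, b, tol=1e-8):
    """min c.x  s.t.  A x >= b, x >= 0, generating violated rows on demand.
    Solves a restricted LP over a growing subset of rows; once the restricted
    solution violates no original row, it is optimal for the full LP."""
    active = [int(np.argmax(b))]                  # naive one-row starting subset
    x = None
    for _ in range(A.shape[0]):
        res = linprog(c, A_ub=-A[active], b_ub=-b[active])   # x >= 0 is the default
        x = res.x
        violation = b - A @ x                     # positive entries = violated rows
        worst = int(np.argmax(violation))
        if violation[worst] <= tol:
            break                                 # restricted solution is feasible
        active.append(worst)                      # generate the most violated row
    return x, active

# toy covering LP: many rows, few variables, most rows never generated
rng = np.random.default_rng(1)
A = rng.uniform(0.0, 1.0, size=(500, 10))
b = rng.uniform(1.0, 2.0, size=500)
c = np.ones(10)
x, rows = row_generation(c, A, b)
print(f"{len(rows)} of {A.shape[0]} rows generated, objective {c @ x:.4f}")
```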

    Interior point methods iteration reduction with continued iteration

    Advisors: Aurelio Ribeiro Leite de Oliveira, Carla Taviane Lucke da Silva Ghidini. Thesis (doctorate) -- Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica. Abstract: Interior point methods have been extensively used to solve large-scale linear programming problems. Among all variations of interior point methods, the predictor-corrector method with multiple centrality corrections stands out for its efficiency and fast convergence. At each iteration this method must solve linear systems to determine the search direction, which is the step that requires the most processing time. In this work, the continued iteration is presented and introduced into the predictor-corrector method with multiple centrality corrections, with the aim of reducing the number of iterations and the computational time needed to solve linear programming problems. The continued iteration consists of determining a new direction that is combined with the search direction of the interior point method. Two new continued directions and two different ways of using them are presented, increasing the step sizes taken in the search direction and accelerating the convergence of the method. In addition, the optimal adjustment algorithm for p coordinates is used to determine better starting points for the interior point method in conjunction with the continued iteration. Computational experiments were performed, and the results obtained by incorporating the continued iteration into the predictor-corrector interior point method with multiple centrality corrections outperform the traditional approach. Using the optimal adjustment algorithm for p coordinates in the new approach leads to similar results. Doctorate in Applied Mathematics. FAPESP grant 2011/20623-7.
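
    The fragment below only illustrates the generic step-length machinery implied above: an interior point iterate combines its search direction with an additional direction and then takes the longest step that keeps the iterate strictly positive. The extra direction and the fraction-to-boundary factor are placeholders; the thesis's actual continued directions are not reproduced here.

```python
import numpy as np

def max_interior_step(x, d, frac=0.9995):
    """Largest step alpha in [0, 1] keeping x + alpha*d strictly positive
    (the usual interior point ratio test with a fraction-to-boundary factor)."""
    negative = d < 0
    if not np.any(negative):
        return 1.0
    return float(min(1.0, frac * np.min(-x[negative] / d[negative])))

# toy iterate, a stand-in search direction d, and a placeholder extra direction
x = np.array([1.0, 0.5, 2.0])
d = np.array([-0.4, 0.3, -1.0])
d_extra = np.array([-0.1, 0.05, 0.2])        # hypothetical 'continued' direction
alpha = max_interior_step(x, d + d_extra)
x_new = x + alpha * (d + d_extra)
print(alpha, x_new)                           # x_new stays componentwise positive
```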

    Algorithms for linear and convex feasibility problems: A brief study of iterative projection, localization and subgradient methods

    Ankara : Department of Industrial Engineering and Institute of Engineering and Sciences, Bilkent Univ., 1998. Thesis (Ph.D.) -- Bilkent University, 1998. Includes bibliographical references, leaves 86-93. Several algorithms for the feasibility problem are investigated. For linear systems, a number of different block projection approaches have been implemented and compared. The parallel algorithm of Yang and Murty is observed to be much slower than its sequential counterpart. Modification of the step size has allowed us to obtain a much better algorithm, exhibiting considerable speedup when compared to the sequential algorithm. For the convex feasibility problem, an approach combining rectangular cutting planes and subgradients is developed. Theoretical convergence results are established for both cases. Two broad classes of image recovery problems are formulated as linear feasibility problems and successfully solved with the algorithms developed. Özaktaş, Hakan. Ph.D.
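
    As a point of reference for the projection methods discussed above, here is the simplest sequential relative: project the current point onto the most violated half-space, scaled by a relaxation (step-size) parameter lam. The block/parallel variants compared in the thesis, and its specific step-size modification, are not reproduced; lam, the instance and the iteration limit are illustrative assumptions.

```python
import numpy as np

def relaxation_method(A, b, lam=1.5, max_iter=20000, tol=1e-9):
    """Find x with A x <= b by repeatedly projecting onto the most violated
    half-space {x : a_i.x <= b_i}, scaled by the relaxation parameter lam
    (lam = 1 is an exact projection, lam in (1, 2) over-relaxes)."""
    row_norm2 = np.einsum('ij,ij->i', A, A)
    x = np.zeros(A.shape[1])
    for _ in range(max_iter):
        residual = A @ x - b
        i = int(np.argmax(residual))
        if residual[i] <= tol:
            return x                              # feasible point found
        x = x - lam * (residual[i] / row_norm2[i]) * A[i]
    return x

# random system that is feasible by construction, with a little slack
rng = np.random.default_rng(2)
A = rng.standard_normal((200, 10))
b = A @ rng.standard_normal(10) + 0.1
x = relaxation_method(A, b)
print("max violation:", float(np.max(A @ x - b)))
```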

    Two dimensional search algorithms for linear programming

    Linear programming is one of the most important classes of optimization problems. These mathematical models have been used by academics and practitioners to solve numerous real world applications. Quickly solving linear programs impacts decision makers from both the public and private sectors. Substantial research has been performed to solve this class of problems faster, and the vast majority of the solution techniques can be categorized as one-dimensional search algorithms. That is, these methods successively move from one solution to another solution by solving a one-dimensional subspace linear program at each iteration. This dissertation proposes novel algorithms that move between solutions by repeatedly solving a two-dimensional subspace linear program. Computational experiments demonstrate the potential of these newly developed algorithms and show an average improvement of nearly 25% in solution time when compared to the corresponding one-dimensional search version. This dissertation's research creates the core concept of these two-dimensional search algorithms, which is a fast technique to determine an optimal basis and an optimal solution to linear programs with only two variables. This method, called the slope algorithm, compares the slope formed by the objective function with the slope formed by each constraint to determine a pair of constraints that intersect at an optimal basis and an optimal solution. The slope algorithm is implemented within a simplex framework to perform two-dimensional searches. This results in the double pivot simplex method. Unlike the well-known simplex method, the double pivot simplex method simultaneously pivots up to two basic variables with two nonbasic variables at each iteration. The theoretical computational complexity of the double pivot simplex method is identical to that of the simplex method. Computational results show that this new algorithm reduces the number of pivots needed to solve benchmark instances by approximately 40% when compared to the classical implementation of the simplex method, and 20% when compared to the primal simplex implementation of CPLEX, a high performance mathematical programming solver. Solution times of some random linear programs are also improved by nearly 25% on average. This dissertation also presents a novel technique, called the ratio algorithm, to find an optimal basis and an optimal solution to linear programs with only two constraints. When the ratio algorithm is implemented within a simplex framework to perform two-dimensional searches, it results in the double pivot dual simplex method. In this case, the double pivot dual simplex method behaves similarly to the dual simplex method, but two variables are exchanged at every step. Two-dimensional searches are also implemented within an interior point framework. This dissertation creates a set of four two-dimensional search interior point algorithms derived from primal and dual affine scaling and logarithmic barrier search directions. Each iteration of these techniques quickly solves a two-dimensional subspace linear program formed by the intersection of two search directions and the feasible region of the linear program. Search directions are derived by orthogonally partitioning the objective function vector, which allows these novel methods to improve the objective function value at each step by at least as much as the corresponding one-dimensional search version.
Computational experiments performed on benchmark linear programs demonstrate that these two-dimensional search interior point algorithms improve the average solution time by approximately 12% and the average number of iterations by 15%. In conclusion, this dissertation provides a change of paradigm in linear programming optimization algorithms. Implementing two-dimensional searches within both a simplex and interior point framework typically reduces the computational time and number of iterations needed to solve linear programs. Furthermore, this dissertation sets the stage for future research topics in multidimensional search algorithms to solve not only linear programs but also other critical classes of optimization methods. Consequently, this dissertation's research can become one of the first steps to change how commercial and open source mathematical programming software will solve optimization problems.
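
    To make the core two-variable subproblem concrete, the sketch below solves a two-variable LP by brute force over constraint pairs. It is deliberately not the slope algorithm: the dissertation's method identifies the optimal pair of constraints by comparing their slopes with the objective slope instead of enumerating all pairs. The toy instance is an illustrative assumption, and the problem is assumed feasible and bounded.

```python
import numpy as np
from itertools import combinations

def two_variable_lp(c, A, b, tol=1e-9):
    """max c.x  s.t.  A x <= b  with x in R^2, by checking every pair of
    constraints whose intersection point is feasible (O(m^3) brute force)."""
    best_x, best_val = None, -np.inf
    for i, j in combinations(range(len(b)), 2):
        B = A[[i, j]]
        if abs(np.linalg.det(B)) < tol:
            continue                               # parallel rows define no vertex
        x = np.linalg.solve(B, b[[i, j]])          # intersection of the two lines
        if np.all(A @ x <= b + tol) and c @ x > best_val:
            best_x, best_val = x, float(c @ x)
    return best_x, best_val

# toy instance: maximize x + 2y over {x + y <= 4, x - y <= 2, x >= 0, y >= 0}
A = np.array([[1.0, 1.0], [1.0, -1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([4.0, 2.0, 0.0, 0.0])
c = np.array([1.0, 2.0])
print(two_variable_lp(c, A, b))                    # optimal vertex (0, 4), value 8
```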