10 research outputs found

    On the convergence of the affine-scaling algorithm

    By Paul Tseng and Zhi-Quan Luo. Cover title; includes bibliographical references (p. 20-22). Research partially supported by the National Science Foundation (NSF-ECS-8519058), the U.S. Army Research Office (DAAL03-86-K-0171), and the Science and Engineering Research Board of McMaster University.

    Analysis of some interior point continuous trajectories for convex programming

    In this paper, we analyse three interior point continuous trajectories for convex programming with general linear constraints. The three continuous trajectories are derived from the primal–dual path-following method, the primal–dual affine scaling method and the central path, respectively. Theoretical properties of the three trajectories are studied in full. Optimality and convergence of all three trajectories are obtained for any interior feasible starting point under mild conditions. In particular, with a proper choice of parameters, convergence of all three trajectories requires neither strict complementarity nor analyticity of the objective function. These results are new in the literature.

    A simple proof of a primal affine scaling method

    In this paper, we present a simpler proof of the result of Tsuchiya and Muramatsu on the convergence of the primal affine scaling method. We show that the primal sequence generated by the method converges to the interior of the optimum face and the dual sequence to the analytic center of the optimal dual face, when the step size implemented in the procedure is bounded by 2/3. We also prove the optimality of the limit of the primal sequence for a slightly larger step size of 2q/(3q − 1), where q is the number of zero variables in the limit. We show this by proving the dual feasibility of a cluster point of the dual sequence. Peer reviewed.
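The iteration discussed in this abstract can be sketched concretely. Below is a minimal, illustrative implementation of the standard primal affine scaling step for min cᵀx subject to Ax = b, x ≥ 0; the function name and the small demo problem are our own, and the 2/3 step-size bound is the one quoted above.

```python
import numpy as np

def primal_affine_scaling(A, b, c, x0, alpha=2/3, iters=200, tol=1e-10):
    """Sketch of the primal affine scaling iteration for
    min c^T x  s.t.  Ax = b, x >= 0, started from an interior feasible
    point x0 (x0 > 0, A x0 = b).  alpha is the relative step size; the
    convergence result quoted above uses the bound alpha <= 2/3."""
    x = np.asarray(x0, dtype=float).copy()
    y = np.zeros(A.shape[0])
    for _ in range(iters):
        X2 = np.diag(x * x)                              # scaling matrix X_k^2
        y = np.linalg.solve(A @ X2 @ A.T, A @ X2 @ c)    # dual estimate y_k
        r = c - A.T @ y                                  # reduced costs
        Xr = x * r                                       # X_k r_k
        nrm = np.linalg.norm(Xr)
        if nrm < tol:                                    # scaled optimality measure
            break
        x = x - alpha * (x * Xr) / nrm                   # x - alpha X^2 r / ||X r||
    return x, y

# Tiny demo: min x1 + 2 x2 + 3 x3 on the simplex x1 + x2 + x3 = 1, x >= 0.
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 2.0, 3.0])
x_opt, y_opt = primal_affine_scaling(A, b, c, np.array([1/3, 1/3, 1/3]))
# x_opt approaches the optimal vertex (1, 0, 0)
```

Because the direction −X²r lies in the null space of A, the iterates stay feasible, and with alpha ≤ 2/3 each component can shrink by at most the factor 1 − alpha, so positivity is preserved.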

    The primal power affine scaling method

    In this paper, we present a variant of the primal affine scaling method, which we call the primal power affine scaling method. This method is defined by choosing a real r > 0.5, and is similar to the power barrier variant of the primal-dual homotopy methods considered by den Hertog, Roos and Terlaky, and by Sheu and Fang. Here, we analyze the method for r > 1; the analysis for 0.5 < r ≤ 1 is similar. We show that with a step size bounded by 2/(2r − 1), and with a variable asymptotic step size αk uniformly bounded away from 2/(2r + 1), the primal sequence converges to the relative interior of the optimal primal face, and the dual sequence converges to the power center of the optimal dual face. We also present an accelerated version of the method. We show that the two-step superlinear convergence rate of the method is 1 + r/(r + 1), while the three-step convergence rate is 1 + 3r/(r + 2). Using the measure of Ostrowski, we note that the three-step method for r = 4 is more efficient than the two-step quadratically convergent method, which is the limit of the two-step method as r approaches infinity. Peer reviewed.
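As we read the abstract, the defining change in the power variant is to replace the scaling matrix X² of ordinary affine scaling by X^(2r). A hedged sketch of the resulting search direction follows; the function name is ours, and the formula is the natural generalization, not code from the paper.

```python
import numpy as np

def power_affine_scaling_direction(A, c, x, r=2.0):
    """Search direction at an interior point x > 0 with Ax = b.

    For r = 1 this reduces to the classical primal affine scaling
    direction -X^2 (c - A^T y); the power variant, as sketched here
    (an assumption based on the abstract), replaces X^2 by X^(2r)."""
    D = np.diag(x ** (2 * r))                        # X^{2r}
    y = np.linalg.solve(A @ D @ A.T, A @ D @ c)      # dual estimate
    reduced = c - A.T @ y                            # reduced costs
    return -D @ reduced, y
```

For any r > 0 the direction stays in the null space of A, and cᵀd = −reducedᵀ D reduced ≤ 0, so it remains a feasible descent direction for the objective.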

    Two dimensional search algorithms for linear programming

    Linear programming is one of the most important classes of optimization problems. These mathematical models have been used by academics and practitioners to solve numerous real world applications. Quickly solving linear programs impacts decision makers from both the public and private sectors. Substantial research has been performed to solve this class of problems faster, and the vast majority of the solution techniques can be categorized as one dimensional search algorithms. That is, these methods successively move from one solution to another solution by solving a one dimensional subspace linear program at each iteration. This dissertation proposes novel algorithms that move between solutions by repeatedly solving a two dimensional subspace linear program. Computational experiments demonstrate the potential of these newly developed algorithms and show an average improvement of nearly 25% in solution time when compared to the corresponding one dimensional search version. This dissertation's research creates the core concept of these two dimensional search algorithms, which is a fast technique to determine an optimal basis and an optimal solution to linear programs with only two variables. This method, called the slope algorithm, compares the slope formed by the objective function with the slope formed by each constraint to determine a pair of constraints that intersect at an optimal basis and an optimal solution. The slope algorithm is implemented within a simplex framework to perform two dimensional searches. This results in the double pivot simplex method. Differently than the well-known simplex method, the double pivot simplex method simultaneously pivots up to two basic variables with two nonbasic variables at each iteration. The theoretical computational complexity of the double pivot simplex method is identical to the simplex method.
Computational results show that this new algorithm reduces the number of pivots to solve benchmark instances by approximately 40% when compared to the classical implementation of the simplex method, and 20% when compared to the primal simplex implementation of CPLEX, a high performance mathematical programming solver. Solution times of some random linear programs are also improved by nearly 25% on average. This dissertation also presents a novel technique, called the ratio algorithm, to find an optimal basis and an optimal solution to linear programs with only two constraints. When the ratio algorithm is implemented within a simplex framework to perform two dimensional searches, it results in the double pivot dual simplex method. In this case, the double pivot dual simplex method behaves similarly to the dual simplex method, but two variables are exchanged at every step. Two dimensional searches are also implemented within an interior point framework. This dissertation creates a set of four two dimensional search interior point algorithms derived from primal and dual affine scaling and logarithmic barrier search directions. Each iteration of these techniques quickly solves a two dimensional subspace linear program formed by the intersection of two search directions and the feasible region of the linear program. Search directions are derived by orthogonally partitioning the objective function vector, which allows these novel methods to improve the objective function value at each step by at least as much as the corresponding one dimensional search version. Computational experiments performed on benchmark linear programs demonstrate that these two dimensional search interior point algorithms improve the average solution time by approximately 12% and the average number of iterations by 15%. In conclusion, this dissertation provides a change of paradigm in linear programming optimization algorithms.
Implementing two dimensional searches within both a simplex and interior point framework typically reduces the computational time and number of iterations to solve linear programs. Furthermore, this dissertation sets the stage for future research topics in multidimensional search algorithms to solve not only linear programs but also other critical classes of optimization problems. Consequently, this dissertation's research can become one of the first steps to change how commercial and open source mathematical programming software will solve optimization problems.
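The slope algorithm itself is not spelled out in the abstract. As a point of reference for what the two-dimensional subproblem involves, here is a brute-force baseline that solves a two-variable LP by enumerating constraint intersections; this is the optimal-basis search that, per the abstract, the slope algorithm performs directly by comparing slopes. Names and the sample problem are illustrative, not taken from the dissertation.

```python
import itertools
import numpy as np

def two_var_lp(c, A, b, tol=1e-9):
    """Brute-force solver for  max c^T x  s.t.  A x <= b,  x in R^2:
    try every pair of constraints as a candidate optimal basis.
    Illustrative baseline only; the slope algorithm described above
    finds the optimal pair without enumeration."""
    best_x, best_val = None, -np.inf
    for i, j in itertools.combinations(range(len(b)), 2):
        M = A[[i, j]]
        if abs(np.linalg.det(M)) < tol:
            continue                              # parallel constraints: no vertex
        v = np.linalg.solve(M, b[[i, j]])         # intersection of constraints i, j
        if np.all(A @ v <= b + tol) and c @ v > best_val:
            best_x, best_val = v, float(c @ v)
    return best_x, best_val

# Demo: max x + y  s.t.  x <= 2, y <= 3, x + y <= 4, x >= 0, y >= 0.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([2.0, 3.0, 4.0, 0.0, 0.0])
c = np.array([1.0, 1.0])
x_best, val_best = two_var_lp(c, A, b)   # optimal value 4 at a vertex of x + y = 4
```

With m constraints this enumeration costs O(m²) small solves, which is why a single-pass slope comparison is attractive as the inner step of a two-dimensional search.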

    A survey of interior point methods

    In 1984, Karmarkar published an interior point algorithm for linear programming. He claimed that, in addition to having polynomial complexity, it is more efficient than the simplex method, especially on large problems. This triggered a wave of research on interior point methods, which produced a wide variety of algorithms of this type that can be grouped into four categories: projective methods, affine methods, potential methods, and central trajectory methods. In this work, we present a survey of these methods, including the latest developments in the field.