13 research outputs found

    Polynomial-time algorithms for linear programming based only on primal scaling and projected gradients of a potential function

    Includes bibliographical references (p. 28-29). Robert M. Freund

    A potential-function reduction algorithm for solving a linear program directly from an infeasible "warm start"

    Includes bibliographical references (p. 34-35). Robert M. Freund

    On the worst case complexity of potential reduction algorithms for linear programming

    Includes bibliographical references (p. 16-17). Supported by a Presidential Young Investigator Award (DDM-9158118) and by Draper Laboratory. Dimitris Bertsimas and Xiaodong Luo

    An Active-Set Strategy in Interior Point Method for Linear Programming

    We present a potential reduction method for linear programming in which only the constraints with relatively small dual slacks (active constraints) are taken into account to form the ellipsoid constraint at each iteration of the process. The algorithm converges to the optimal feasible solution in O(√n L) iterations, the same polynomial bound as in the full-constraints case, where n is the number of variables and L is the data length. If only a small portion of the constraints is active near the optimal solution, the computational cost of finding the next direction of movement in one iteration is considerably reduced by the proposed strategy. As a special case of this strategy, we show that the interior point method can be managed by the basis factorization techniques of the simplex method coupled with a sequence of rank-one changes to matrices. This research was partially done in June 1990 while the author was visiting the Department of Mathematics, University of Pisa.
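    The rank-one matrix changes mentioned in the abstract can be illustrated with the Sherman-Morrison identity, which refreshes an inverse in O(n^2) work instead of recomputing it in O(n^3). This is a generic sketch with hypothetical names and toy data, not the paper's actual factorization scheme:

    ```python
    import numpy as np

    def sherman_morrison_update(M_inv, u, v):
        """Return inv(M + u v^T) given inv(M), in O(n^2) work.

        Generic illustration of a rank-one inverse update; the paper's
        basis-factorization scheme is not reproduced here.
        """
        Mu = M_inv @ u            # inv(M) u
        vM = v @ M_inv            # v^T inv(M)
        denom = 1.0 + v @ Mu      # 1 + v^T inv(M) u
        return M_inv - np.outer(Mu, vM) / denom

    # Sanity check on random data: the update matches a recomputed inverse.
    rng = np.random.default_rng(0)
    M = np.eye(4) + 0.1 * rng.standard_normal((4, 4))
    u = rng.standard_normal(4)
    v = rng.standard_normal(4)
    updated = sherman_morrison_update(np.linalg.inv(M), u, v)
    direct = np.linalg.inv(M + np.outer(u, v))
    assert np.allclose(updated, direct)
    ```

    In an interior point iteration where one active constraint enters or leaves the working set, the corresponding change to the normal-equations matrix is exactly such a rank-one modification.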

    Following a "balanced" trajectory from an infeasible point to an optimal linear programming solution with a polynomial-time algorithm

    Includes bibliographical references. Supported by NSF, AFOSR and ONR through NSF grant DMS-8920550, and by the MIT-NTU Collaboration Research Fund. Robert M. Freund

    A Potential Reduction Algorithm With User-Specified Phase I - Phase II Balance, for Solving a Linear Program from an Infeasible Warm Start

    This paper develops a potential reduction algorithm for solving a linear-programming problem directly from a "warm start" initial point that is neither feasible nor optimal. The algorithm is of an "interior point" variety that seeks to reduce a single potential function which simultaneously coerces feasibility improvement (Phase I) and objective value improvement (Phase II). The key feature of the algorithm is the ability to specify beforehand the desired balance between infeasibility and nonoptimality in the following sense. Given a prespecified balancing parameter β > 0, the algorithm maintains the following Phase I - Phase II "β-balancing constraint" throughout: (cᵀx − z*) < β·ξᵀx, where cᵀx is the objective function, z* is the (unknown) optimal objective value of the linear program, and ξᵀx measures the infeasibility of the current iterate x. This balancing constraint can be used either to emphasize rapid attainment of feasibility (set β large) at the possible expense of good objective function values, or to emphasize rapid attainment of good objective values (set β small) at the possible expense of a larger infeasibility gap. The algorithm exhibits the following advantageous features: (i) the iterate solutions monotonically decrease the infeasibility measure, (ii) the iterate solutions satisfy the β-balancing constraint, (iii) the iterate solutions achieve constant improvement in both Phase I and Phase II in O(n) iterations, (iv) there is always a possibility of finite termination of the Phase I problem, and (v) the algorithm is amenable to acceleration via linesearch of the potential function.
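    As a rough sketch of how such a balancing constraint can be checked at an iterate, assuming a generic nonnegative infeasibility measure in place of the paper's specific one (all function names and data below are hypothetical, not taken from the paper):

    ```python
    import numpy as np

    def beta_balanced(x, c, z_star, infeas, beta):
        """Check a Phase I - Phase II balancing constraint of the form
        (c^T x - z*) <= beta * infeas(x) at the iterate x."""
        return float(c @ x) - z_star <= beta * infeas(x)

    # Toy data (hypothetical): infeasibility measured as the total
    # violation of the constraints A x >= b.
    A = np.array([[1.0, 1.0],
                  [1.0, 0.0]])
    b = np.array([1.0, 0.25])
    infeas = lambda x: float(np.maximum(b - A @ x, 0.0).sum())

    c = np.array([1.0, 2.0])
    x = np.array([0.3, 0.3])   # infeasible: A x = [0.6, 0.3] violates the first row
    print(beta_balanced(x, c, z_star=0.5, infeas=infeas, beta=2.0))   # True
    print(beta_balanced(x, c, z_star=0.5, infeas=infeas, beta=0.5))   # False
    ```

    A large β accepts iterates with a big optimality gap relative to their infeasibility (Phase I emphasis); a small β forces the gap down faster (Phase II emphasis), as the abstract describes.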

    Pure adaptive search in global optimization

    Pure adaptive search iteratively constructs a sequence of interior points uniformly distributed within the corresponding sequence of nested improving regions of the feasible space. That is, at any iteration, the next point in the sequence is uniformly distributed over the region of feasible space containing all points that are strictly superior in value to the previous points in the sequence. The complexity of this algorithm is measured by the expected number of iterations required to achieve a given accuracy of solution. We show that for global mathematical programs satisfying the Lipschitz condition, its complexity increases at most linearly in the dimension of the problem.
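    The iteration described above can be sketched on a toy problem, with the uniform draw from the improving region simulated by rejection sampling; this simulation is viable only for illustration, since sampling the improving region efficiently is the hard part in general, and the names below are hypothetical:

    ```python
    import random

    def pure_adaptive_search(f, dim, n_iters, seed=0):
        """Pure adaptive search on the unit box [0, 1]^dim (toy sketch).

        Each iterate is uniform over the strictly improving region
        {x : f(x) < current best}; the draw is simulated by rejection
        sampling, practical only for small toy problems.
        """
        rng = random.Random(seed)
        sample = lambda: [rng.random() for _ in range(dim)]
        best_x = sample()
        best = f(best_x)
        for _ in range(n_iters):
            x = sample()
            while f(x) >= best:        # reject until strictly improving
                x = sample()
            best_x, best = x, f(x)     # record values decrease monotonically
        return best_x, best

    # Minimize a convex quadratic centred at (0.5, 0.5).
    f = lambda x: sum((xi - 0.5) ** 2 for xi in x)
    x_best, val = pure_adaptive_search(f, dim=2, n_iters=8)
    assert val < 0.1   # each iteration strictly improves the record value
    ```

    The record values shrink geometrically in expectation, which is the intuition behind the linear-in-dimension iteration bound stated in the abstract.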