
    Topics in exact precision mathematical programming

    The focus of this dissertation is the advancement of theory and computation related to exact precision mathematical programming. Optimization software based on floating-point arithmetic can return suboptimal or incorrect results because of round-off errors or the use of numerical tolerances. Exact or correct results are necessary for some applications. Implementing software entirely in rational arithmetic can be prohibitively slow. A viable alternative is the use of hybrid methods that use fast numerical computation to obtain approximate results that are then verified or corrected with safe or exact computation. We study fast methods for sparse exact rational linear algebra, which arises as a bottleneck when solving linear programming problems exactly. Output-sensitive methods for exact linear algebra are also studied. Finally, a new method for computing valid linear programming bounds is introduced and proven effective as a subroutine for solving mixed-integer linear programming problems exactly. Extensive computational results are presented for each topic.
    Ph.D. Committee Chair: Dr. William J. Cook; Committee Member: Dr. George Nemhauser; Committee Member: Dr. Robin Thomas; Committee Member: Dr. Santanu Dey; Committee Member: Dr. Shabbir Ahmed; Committee Member: Dr. Zonghao G
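
    The hybrid numeric-exact strategy described in this abstract can be illustrated with a small Python sketch (a hypothetical illustration, not the dissertation's implementation, which targets sparse rational linear algebra inside an exact LP solver): solve a linear system approximately in floating point, reconstruct a candidate rational solution, verify it exactly in rational arithmetic, and fall back to an all-rational solve only if verification fails. The helper names and the limit_denominator reconstruction bound are assumptions made for this example.

```python
from fractions import Fraction

def solve_float(A, b):
    """Gauss-Jordan elimination in floating point: fast but only approximate."""
    n = len(A)
    M = [[float(x) for x in row] + [float(bi)] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))  # partial pivoting
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0.0:
                f = M[r][col] / M[col][col]
                for c in range(col, n + 1):
                    M[r][c] -= f * M[col][c]
    return [M[i][n] / M[i][i] for i in range(n)]

def solve_exact(A, b):
    """The same elimination entirely in rational arithmetic: exact but slow."""
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(bi)] for row, bi in zip(A, b)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col] / M[col][col]
                for c in range(col, n + 1):
                    M[r][c] -= f * M[col][c]
    return [M[i][n] / M[i][i] for i in range(n)]

def hybrid_solve(A, b, max_den=10**6):
    """Approximate solve, rational reconstruction, exact verification, safe fallback."""
    x_approx = solve_float(A, b)
    x = [Fraction(xi).limit_denominator(max_den) for xi in x_approx]
    exact_ok = all(
        sum(Fraction(aij) * xj for aij, xj in zip(row, x)) == Fraction(bi)
        for row, bi in zip(A, b)
    )
    return x if exact_ok else solve_exact(A, b)

# Usage on a tiny system: 2x + y = 3, x + 3y = 5.
print(hybrid_solve([[2, 1], [1, 3]], [3, 5]))   # [Fraction(4, 5), Fraction(7, 5)]
```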

    A branch, price, and cut approach to solving the maximum weighted independent set problem

    The maximum weight independent set problem (MWISP) is one of the most well-known and well-studied NP-hard problems in the field of combinatorial optimization. In the first part of the dissertation, I explore efficient branch-and-price (B&P) approaches to solve MWISP exactly. B&P is a useful integer-programming tool for solving NP-hard optimization problems. Specifically, I look at vertex- and edge-disjoint decompositions of the underlying graph. MWISPs on the resulting subgraphs are less challenging, on average, to solve. I use the B&P framework to solve MWISP on the original graph G using these specially constructed subproblems to generate columns. I demonstrate that the vertex-disjoint partitioning scheme gives an effective approach for relatively sparse graphs. I also show that the edge-disjoint approach is less effective than the vertex-disjoint scheme because its associated Dantzig-Wolfe decomposition (DWD) reformulation entails a slow rate of convergence. In the second part of the dissertation, I address convergence properties associated with DWD. I discuss prevalent methods for improving the rate of convergence of DWD. I also implement specific methods in application to the edge-disjoint B&P scheme and show that these methods improve the rate of convergence. In the third part of the dissertation, I focus on identifying new cut-generation methods within the B&P framework. Such methods have not been explored in the literature. I present two new methodologies for generating generic cutting planes within the B&P framework. These techniques are not limited to MWISP and can be used in general applications of B&P. The first methodology generates cuts by identifying faces (facets) of subproblem polytopes and lifting the associated inequalities; the second computes Lift-and-Project (L&P) cuts within B&P. I successfully demonstrate the feasibility of both approaches and present preliminary computational tests of each.
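
    To make the column-generation mechanics concrete, the following Python sketch shows a pricing step for one block of a vertex-disjoint decomposition: the subproblem is itself an MWISP whose vertex weights are reduced by dual prices coming from the restricted master problem, and a column (an independent set of the block) is attractive only if its reduced cost is positive. The brute-force subproblem solver, the per-vertex aggregation of master duals, and the single convexity dual are simplifying assumptions made for this illustration; they are not the dissertation's implementation.

```python
from itertools import combinations

def max_weight_independent_set(vertices, edges, weights):
    """Brute-force MWIS for a small subgraph (one block of the decomposition)."""
    edge_set = {frozenset(e) for e in edges if e[0] in vertices and e[1] in vertices}
    best_set, best_val = set(), 0.0
    for r in range(1, len(vertices) + 1):
        for S in combinations(vertices, r):
            if all(frozenset(p) not in edge_set for p in combinations(S, 2)):
                val = sum(weights[v] for v in S)
                if val > best_val:
                    best_set, best_val = set(S), val
    return best_set, best_val

def price_block(block, edges, weights, vertex_duals, convexity_dual):
    """Pricing subproblem for one block: maximize the reduced cost of a column,
    i.e. solve an MWIS with weights reduced by (aggregated) master dual prices."""
    reduced = {v: weights[v] - vertex_duals.get(v, 0.0) for v in block}
    column, value = max_weight_independent_set(block, edges, reduced)
    return column, value - convexity_dual   # the column enters the master if this is > 0

# Toy usage: a 4-vertex block forming a path, with hypothetical dual values.
block = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3)]
weights = {0: 3.0, 1: 5.0, 2: 4.0, 3: 2.0}
duals = {0: 0.5, 1: 1.0, 2: 0.0, 3: 0.5}
print(price_block(block, edges, weights, duals, convexity_dual=1.0))  # ({0, 2}, 5.5)
```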

    Polyhedral techniques in combinatorial optimization II: computations

    Combinatorial optimization problems appear in many disciplines ranging from management and logistics to mathematics, physics, and chemistry. These problems are usually relatively easy to formulate mathematically, but most of them are computationally hard due to the restriction that a subset of the variables have to take integral values. During the last two decades there has been remarkable progress in techniques based on the polyhedral description of combinatorial problems, leading to a large increase in the size of several problem types that can be solved. The basic idea behind polyhedral techniques is to derive a good linear formulation of the set of solutions by identifying linear inequalities that can be proved to be necessary in the description of the convex hull of feasible solutions. Ideally we can then solve the problem as a linear programming problem, which can be done efficiently. The purpose of this manuscript is to give an overview of the developments in polyhedral theory, starting with the pioneering work by Dantzig, Fulkerson and Johnson on the traveling salesman problem, and by Gomory on integer programming. We also present some modern applications and computational experience.
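
    As a toy illustration of this idea (constructed for this summary, not taken from the manuscript), consider the maximum stable set problem on a triangle. The sketch below, which assumes SciPy's linprog is available, shows that the edge relaxation has a fractional optimum of value 1.5, while adding the single clique inequality x1 + x2 + x3 <= 1, a facet of the convex hull of stable-set incidence vectors, closes the gap to the integer optimum.

```python
from scipy.optimize import linprog

# Maximum stable set on a triangle with unit weights.
# Edge relaxation: x_i + x_j <= 1 for every edge, 0 <= x_i <= 1.
c = [-1, -1, -1]                         # maximize sum x_i  ->  minimize -sum x_i
A_edges = [[1, 1, 0], [1, 0, 1], [0, 1, 1]]
b_edges = [1, 1, 1]
bounds = [(0, 1)] * 3

lp = linprog(c, A_ub=A_edges, b_ub=b_edges, bounds=bounds, method="highs")
print(lp.x, -lp.fun)                     # unique optimum (0.5, 0.5, 0.5), value 1.5

# Add the clique (facet-defining) inequality x1 + x2 + x3 <= 1: together with the
# bounds it describes the convex hull of stable sets of the triangle.
lp2 = linprog(c, A_ub=A_edges + [[1, 1, 1]], b_ub=b_edges + [1],
              bounds=bounds, method="highs")
print(lp2.x, -lp2.fun)                   # LP value now equals the integer optimum 1
```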

    Polyhedral techniques in combinatorial optimization II: applications and computations

    The polyhedral approach is one of the most powerful techniques available for solving hard combinatorial optimization problems. The main idea behind the technique is to consider the linear relaxation of the integer combinatorial optimization problem, and to try to iteratively strengthen the linear formulation by adding violated strong valid inequalities, i.e., inequalities that are violated by the current fractional solution but satisfied by all feasible solutions, and that define high-dimensional faces, preferably facets, of the convex hull of feasible solutions. If we have the complete description of the convex hull of feasible solutions at hand, all extreme points of this formulation are integral, which means that we can solve the problem as a linear programming problem. Linear programming problems are known to be computationally easy. In Part 1 of this article we discuss theoretical aspects of polyhedral techniques. Here we mainly concentrate on the computational aspects. In particular, we discuss how polyhedral results are used in cutting plane algorithms. We also consider a few theoretical issues not treated in Part 1, such as techniques for proving that a certain inequality is facet defining, and that a certain linear formulation gives a complete description of the convex hull of feasible solutions. We conclude the article by briefly mentioning some alternative techniques for solving combinatorial optimization problems.
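
    The cutting plane loop described here can be summarised by a short skeleton: solve the LP relaxation, run a separation routine on the fractional solution, add the most violated valid inequality, and re-solve until no candidate inequality is violated. The Python sketch below is a hypothetical illustration, not an algorithm from this article: it uses SciPy's linprog and separates, by explicit enumeration, the single odd-cycle inequality for the maximum stable set problem on a 5-cycle.

```python
from scipy.optimize import linprog

def cutting_plane(c, A, b, bounds, candidate_cuts, tol=1e-6):
    """Generic cutting-plane loop over an explicitly listed family of valid
    inequalities, each given as (coefficients, right-hand side)."""
    A, b = list(A), list(b)
    while True:
        lp = linprog(c, A_ub=A, b_ub=b, bounds=bounds, method="highs")
        x = lp.x
        # Separation: pick the candidate inequality a.x <= rhs most violated by x.
        coeffs, rhs = max(candidate_cuts,
                          key=lambda cut: sum(a * xi for a, xi in zip(cut[0], x)) - cut[1])
        violation = sum(a * xi for a, xi in zip(coeffs, x)) - rhs
        if violation <= tol:
            return x, -lp.fun            # no violated cut left: current bound is final
        A.append(coeffs)
        b.append(rhs)

# Toy instance: maximum stable set on the 5-cycle, starting from the edge relaxation.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
A0 = [[1 if v in e else 0 for v in range(5)] for e in edges]
b0 = [1] * len(edges)
cuts = [([1, 1, 1, 1, 1], 2)]            # the odd-cycle inequality, valid for stable sets
x, bound = cutting_plane([-1] * 5, A0, b0, [(0, 1)] * 5, cuts)
print(x, bound)   # the edge relaxation gives 2.5; after the cut the bound drops to 2
```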

    Processing second-order stochastic dominance models using cutting-plane representations

    This is the post-print version of the article. Copyright @ 2011 Springer-Verlag.
    Second-order stochastic dominance (SSD) is widely recognised as an important decision criterion in portfolio selection. Unfortunately, stochastic dominance models are known to be very demanding from a computational point of view. In this paper we consider two classes of models which use SSD as a choice criterion. The first, proposed by Dentcheva and Ruszczyński (J Bank Finance 30:433-451, 2006), uses an SSD constraint, which can be expressed as integrated chance constraints (ICCs). The second, proposed by Roman et al. (Math Program, Ser B 108:541-569, 2006), uses SSD through a multi-objective formulation with CVaR objectives. Cutting plane representations and algorithms were proposed by Klein Haneveld and Van der Vlerk (Comput Manage Sci 3:245-269, 2006) for ICCs, and by Künzi-Bay and Mayer (Comput Manage Sci 3:3-27, 2006) for CVaR minimization. These concepts are taken into consideration to propose representations and solution methods for the above class of SSD-based models. We describe a cutting plane based solution algorithm and outline implementation details. A computational study is presented, which demonstrates the effectiveness and the scale-up properties of the solution algorithm, as applied to the SSD model of Roman et al. (Math Program, Ser B 108:541-569, 2006).
    This study was funded by OTKA, Hungarian National Fund for Scientific Research, project 47340; by the Mobile Innovation Centre, Budapest University of Technology, project 2.2; by Optirisk Systems, Uxbridge, UK; and by BRIEF (Brunel University Research Innovation and Enterprise Fund).
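
    For readers unfamiliar with the criterion itself, the Python sketch below (a hypothetical illustration, not the cutting plane algorithm of the paper) checks second-order stochastic dominance between two discrete return distributions by comparing the expected shortfalls E[(t - X)+] at every outcome value; this shortfall form underlies the integrated chance constraint expression mentioned above. The example data are invented.

```python
def expected_shortfall_below(t, outcomes, probs):
    """E[(t - X)_+] for a discrete random variable X with given outcomes/probabilities."""
    return sum(p * max(t - x, 0.0) for x, p in zip(outcomes, probs))

def ssd_dominates(x_out, x_prob, y_out, y_prob, tol=1e-9):
    """X dominates Y in second order iff E[(t - X)_+] <= E[(t - Y)_+] for all t.
    Both shortfall functions are piecewise linear in t, so for discrete
    distributions it suffices to check t at every outcome value."""
    thresholds = set(x_out) | set(y_out)
    return all(
        expected_shortfall_below(t, x_out, x_prob)
        <= expected_shortfall_below(t, y_out, y_prob) + tol
        for t in thresholds
    )

# Four equally likely scenarios: portfolio returns vs. a benchmark.
portfolio = [0.02, 0.04, 0.01, 0.03]
benchmark = [0.00, 0.05, -0.02, 0.03]
p = [0.25] * 4
print(ssd_dominates(portfolio, p, benchmark, p))   # True: the portfolio SSD-dominates
```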