An Implementation of the Generalized Basis Reduction Algorithm for Integer Programming
In recent years many advances have been made in solution techniques for specially structured 0–1 integer programming problems. In contrast, very little progress has been made on solving general (mixed integer) problems. This, of course, is not true when viewed from the theoretical side: Lenstra (1981) made a major breakthrough, obtaining a polynomial-time algorithm when the number of integer variables is fixed. We discuss a practical implementation of a Lenstra-like algorithm, based on the generalized basis reduction method of Lovász and Scarf (1988). This method allows us to avoid the ellipsoidal approximations required in Lenstra's algorithm. We report on the solution of a number of small (but difficult) examples with up to 100 integer variables. Our computer code uses the linear programming optimizer CPLEX as a subroutine to solve the linear programming problems that arise.
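The reduction step such algorithms build on can be illustrated in the simplest possible setting. The sketch below is Lagrange/Gauss reduction of a two-dimensional lattice basis under the ordinary Euclidean norm; the Lovász–Scarf generalized basis reduction used in the paper replaces this norm with one derived from the feasible polytope, so this is only a toy analogue, with made-up input vectors.

```python
# Toy 2-D lattice basis reduction (Lagrange/Gauss reduction).  The
# Lovasz-Scarf generalized reduction replaces the Euclidean norm used
# here with a polytope-based norm; this sketch shows only the classical
# Euclidean special case on a made-up basis.

def lagrange_reduce(b1, b2):
    """Reduce a 2-D integer lattice basis until b1 is a shortest vector."""
    def norm2(v):
        return v[0] * v[0] + v[1] * v[1]

    if norm2(b2) < norm2(b1):
        b1, b2 = b2, b1
    while True:
        # Subtract the nearest-integer multiple of b1 from b2.
        m = round((b1[0] * b2[0] + b1[1] * b2[1]) / norm2(b1))
        b2 = (b2[0] - m * b1[0], b2[1] - m * b1[1])
        if norm2(b2) >= norm2(b1):
            return b1, b2
        b1, b2 = b2, b1

b1, b2 = lagrange_reduce((1, 40), (0, 51))
print(b1, b2)  # a much shorter basis spanning the same lattice
```

The returned basis spans the same lattice (the determinant is unchanged) but consists of short, nearly orthogonal vectors, which is what makes branching directions derived from a reduced basis effective.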
A generalization of the integer linear infeasibility problem
Does a given system of linear equations with nonnegative constraints have an integer solution? This is a fundamental question in many areas. In statistics this problem arises in data security problems for contingency table data and is also closely related to non-squarefree elements of Markov bases for sampling contingency tables with given marginals. To study a family of systems with no integer solution, we focus on a commutative semigroup generated by a finite subset of the integer lattice, together with its saturation. An element in the difference of the semigroup and its saturation is called a "hole". We give necessary and sufficient conditions for the finiteness of the set of holes. We also define fundamental holes and saturation points of a commutative semigroup, and then show the simultaneous finiteness of the set of holes, the set of non-saturation points, and the set of generators for saturation points. We apply our results to some three- and four-way contingency tables, and discuss the time complexities of our algorithms.

Comment: This paper has been published in Discrete Optimization, Volume 5, Issue 1 (2008), p36-5
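The notion of a hole is easiest to see in one dimension, where the abstract's setting reduces to a classical numerical semigroup. A minimal brute-force sketch, assuming generators {3, 5} (chosen for illustration): since the generators have gcd 1, the saturation is all nonnegative integers, so the holes are exactly the gaps of the semigroup. This is not the paper's algorithm, which handles higher-dimensional semigroups.

```python
# Brute-force sketch of "holes" in the one-dimensional case.  For the
# semigroup generated by {3, 5} inside the nonnegative integers the
# saturation is all nonnegative integers (gcd of the generators is 1),
# so the holes are the classical gaps of the numerical semigroup.  The
# generators and bound are illustrative; the paper's algorithms work in
# higher dimensions.

def holes(generators, bound):
    """Nonnegative integers up to `bound` that are not representable as
    nonnegative integer combinations of the generators."""
    reachable = {0}
    frontier = [0]
    while frontier:
        n = frontier.pop()
        for g in generators:
            m = n + g
            if m <= bound and m not in reachable:
                reachable.add(m)
                frontier.append(m)
    return sorted(set(range(bound + 1)) - reachable)

print(holes([3, 5], 15))  # [1, 2, 4, 7]
```

Here the set of holes is finite, matching the finiteness criteria the abstract describes; past the largest hole (7, the Frobenius number of this semigroup), every integer is representable.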
Barvinok's Rational Functions: Algorithms and Applications to Optimization, Statistics, and Algebra
The main theme of this dissertation is the study of the lattice points in a
rational convex polyhedron and their encoding in terms of Barvinok's short
rational functions. The first part of this thesis looks into theoretical
applications of these rational functions to Optimization, Statistics, and
Computational Algebra. The main theorem in Chapter 2 concerns the computation
of the \emph{toric ideal} of an integral matrix. We encode the binomials
belonging to the toric ideal associated with the matrix using Barvinok's
rational functions. If we fix the dimensions of the matrix, this representation
allows us to compute a universal Gr\"obner basis and the reduced Gr\"obner
basis of the ideal, with respect to any term order, in polynomial time.
We derive a polynomial-time algorithm for normal form computation which, in
this new encoding, replaces the usual reductions of the division algorithm.
Chapter 3 presents three ways to use Barvinok's rational functions to solve
Integer Programs.
The second part of the thesis is experimental and consists mainly of the
software package {\tt LattE}, the first implementation of Barvinok's algorithm.
We report on experiments with families of well-known rational polytopes:
multiway contingency tables, knapsack type problems, and rational polygons. We
also developed a new algorithm, {\em the homogenized Barvinok's algorithm}, to
compute the generating function of a rational polytope, and showed that it runs
in polynomial time in fixed dimension. With the homogenized Barvinok's
algorithm we obtained new combinatorial formulas: the generating function for
the number of magic squares and the generating function for the
number of magic cubes as rational functions.

Comment: Thesis
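What Barvinok-style counting actually computes can be shown by brute force on a tiny knapsack-type polytope. The instance below (coefficients 3, 5, 7 and right-hand side 20) is made up for illustration; LattE obtains such counts from a short rational-function encoding rather than by enumeration, which is what makes it polynomial time in fixed dimension.

```python
# Brute-force count of the lattice points of the knapsack-type polytope
# {(x, y, z) >= 0 integer : 3x + 5y + 7z = 20}.  This enumeration only
# illustrates what is being counted; Barvinok's algorithm extracts the
# same number from a short rational-function encoding.  The instance is
# an illustrative example, not one from the thesis.

def count_knapsack_points(coeffs, rhs):
    """Count nonnegative integer solutions of coeffs . x = rhs."""
    if not coeffs:
        return 1 if rhs == 0 else 0
    a, rest = coeffs[0], coeffs[1:]
    # Branch on every feasible value of the first variable.
    return sum(count_knapsack_points(rest, rhs - a * k)
               for k in range(rhs // a + 1))

print(count_knapsack_points([3, 5, 7], 20))  # 4 lattice points
```

Enumeration blows up exponentially as the right-hand side grows, whereas the rational-function encoding stays short; that gap is the practical point of the LattE experiments on knapsack-type problems.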
Interior Point Cutting Plane Methods in Integer Programming
This thesis presents novel approaches that use interior point concepts in solving mixed integer programs. In particular, we use the analytic center cutting plane method to improve three of the main components of the branch-and-bound algorithm: cutting planes, heuristics, and branching.
First, we present an interior point branch-and-cut algorithm for structured integer programs based on Benders decomposition. We explore using Benders decomposition in a branch-and-cut framework where the Benders cuts are generated using the analytic center cutting plane method. The algorithm is tested on two classes of problems: the capacitated facility
location problem and the multicommodity capacitated fixed charge network design
problem. For
the capacitated facility location problem, the proposed approach was on average
2.5 times faster than Benders-branch-and-cut and 11 times faster than classical
Benders decomposition. For the multicommodity capacitated fixed charge network
design problem, the proposed approach was 4 times faster than Benders-branch-and-cut while classical Benders decomposition failed to solve the
majority of the tested instances.
Second, we present a heuristic algorithm for mixed integer programs based on interior points. As integer solutions
are typically in the interior, we use the analytic center cutting plane method to search for integer feasible points within the interior
of the feasible set. The
algorithm searches along two line segments that connect
the weighted analytic center and two extreme points of the linear
programming relaxation. Candidate points are rounded and
tested for feasibility. Cuts aimed at improving the objective function
and restoring feasibility are then added to displace the weighted
analytic center until a feasible integer solution is found. The algorithm is composed of three phases. In the first, points along
the two line segments are rounded gradually to find integer feasible
solutions. Then in an attempt to improve the quality of the solutions, the cut related to the bound constraint is updated
and a new weighted analytic center is found. Upon failing to find a
feasible integer solution, a second phase is started where cuts
related to the violated feasibility constraints are added. As a last resort, the
algorithm solves a minimum distance problem in a third phase. For all the tested instances, the algorithm finds good quality feasible solutions in the first two phases and the third phase is never called.
Finally, we present a new approach to generating good general branching constraints based on the shape of the polyhedron. Our approach approximates the polyhedron by an inscribed ellipsoid: we use Dikin's ellipsoid, which we compute from the analytic center. We propose to use the disjunction that has minimum width on the ellipsoid. Because the width of the ellipsoid in a given direction has a closed-form expression, we can formulate a quadratic problem whose optimal solution is a thin direction of the ellipsoid. Since solving a quadratic problem at each node of the branch-and-bound tree is impractical, we use a local search heuristic for its solution. Computational testing conducted on hard integer problems from MIPLIB and CORAL showed that the proposed approach outperforms classical branching.
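The closed-form width mentioned above is simple to state: for the ellipsoid {x : (x − c)ᵀA(x − c) ≤ 1}, the width along a unit direction u is 2·sqrt(uᵀA⁻¹u). The sketch below evaluates it in two dimensions with a made-up diagonal matrix standing in for a Dikin ellipsoid; a thin direction is one minimizing this width, which is what the quadratic problem in the text searches for.

```python
# Closed-form ellipsoid width: for {x : (x - c)^T A (x - c) <= 1}, the
# width along a unit direction u is 2 * sqrt(u^T A^{-1} u).  The 2x2
# matrix below (semi-axes 2 and 1) is a made-up stand-in for a Dikin
# ellipsoid; branching on a thin direction means minimizing this width.
import math

def ellipsoid_width(a_inv, u):
    """Width of {x : x^T A x <= 1} along unit direction u, given A^{-1}."""
    au = (a_inv[0][0]*u[0] + a_inv[0][1]*u[1],
          a_inv[1][0]*u[0] + a_inv[1][1]*u[1])
    return 2.0 * math.sqrt(u[0]*au[0] + u[1]*au[1])

# A = diag(1/4, 1): semi-axis 2 along e1, semi-axis 1 along e2.
a_inv = [[4.0, 0.0], [0.0, 1.0]]
print(ellipsoid_width(a_inv, (1.0, 0.0)))  # 4.0 (fat direction)
print(ellipsoid_width(a_inv, (0.0, 1.0)))  # 2.0 (thin direction)
```

Because the width is a quadratic form in u, minimizing it over integer disjunction directions is itself a quadratic problem, which motivates the local search heuristic used at each node.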