55,197 research outputs found

    An Algorithmic Theory of Integer Programming

    Full text link
    We study the general integer programming problem where the number of variables $n$ is a variable part of the input. We consider two natural parameters of the constraint matrix $A$: its numeric measure $a$ and its sparsity measure $d$. We show that integer programming can be solved in time $g(a,d)\,\mathrm{poly}(n,L)$, where $g$ is some computable function of the parameters $a$ and $d$, and $L$ is the binary encoding length of the input. In particular, integer programming is fixed-parameter tractable parameterized by $a$ and $d$, and is solvable in polynomial time for every fixed $a$ and $d$. Our results also extend to nonlinear separable convex objective functions. Moreover, for linear objectives, we derive a strongly-polynomial algorithm, that is, with running time $g(a,d)\,\mathrm{poly}(n)$, independent of the rest of the input data. We obtain these results by developing an algorithmic framework based on the idea of iterative augmentation: starting from an initial feasible solution, we show how to quickly find augmenting steps which rapidly converge to an optimum. A central notion in this framework is the Graver basis of the matrix $A$, which constitutes a set of fundamental augmenting steps. The iterative augmentation idea is then enhanced via the use of other techniques such as new and improved bounds on the Graver basis, rapid solution of integer programs with bounded variables, proximity theorems and a new proximity-scaling algorithm, the notion of a reduced objective function, and others. As a consequence of our work, we advance the state of the art of solving block-structured integer programs. In particular, we develop near-linear time algorithms for $n$-fold, tree-fold, and 2-stage stochastic integer programs. We also discuss some of the many applications of these classes.
    Comment: Revision 2: strengthened dual treedepth lower bound; simplified proximity-scaling algorithm
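
    As a rough illustration of the iterative-augmentation framework the abstract describes, here is a minimal Python sketch: starting from a feasible point, repeatedly take the best feasible step along a Graver-basis direction until none improves the objective. The `graver` input and the greedy step rule are illustrative simplifications; the paper's running-time bounds come from considerably more machinery (proximity theorems, scaling, reduced objectives).

```python
# Minimal sketch of Graver-basis iterative augmentation (illustrative only).
# `graver` is assumed to be a precomputed Graver basis of A; since every
# Graver element g lies in ker(A), steps x -> x + lam*g preserve A x = b.

def max_step(x, d, lower, upper):
    """Largest integer lam >= 0 with lower <= x + lam*d <= upper."""
    lam = float("inf")
    for xi, di, lo, hi in zip(x, d, lower, upper):
        if di > 0:
            lam = min(lam, (hi - xi) // di)
        elif di < 0:
            lam = min(lam, (xi - lo) // (-di))
    return int(lam) if lam != float("inf") else 0

def augment(c, x, graver, lower, upper):
    """Greedily improve feasible x (minimizing c^T x) along Graver
    directions until no augmenting step exists; for linear objectives a
    Graver-augmentation fixed point is optimal."""
    while True:
        best = None
        for g in graver:
            for d in (g, [-v for v in g]):       # try g and -g
                lam = max_step(x, d, lower, upper)
                if lam >= 1:
                    gain = lam * sum(ci * di for ci, di in zip(c, d))
                    if best is None or gain < best[0]:
                        best = (gain, d, lam)
        if best is None or best[0] >= 0:         # no improving step left
            return x
        _, d, lam = best
        x = [xi + lam * di for xi, di in zip(x, d)]

# Demo: minimize x1 + 2*x2 s.t. x1 - x2 = 0, 0 <= x <= 5. The Graver basis
# of A = [1 -1] is {(1,1), (-1,-1)}; starting from (5,5) we reach (0,0).
print(augment(c=[1, 2], x=[5, 5], graver=[[1, 1]], lower=[0, 0], upper=[5, 5]))
```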

    Improvement of the branch and bound algorithm for solving the knapsack linear integer problem

    Get PDF
    The paper presents a new reformulation approach to reduce the complexity of a branch and bound algorithm for solving the knapsack linear integer problem. The branch and bound algorithm in general relies on the usual strategy of first relaxing the integer problem into a linear programming (LP) model. If the linear programming optimal solution is integer, then the optimal solution to the integer problem is available. If the linear programming optimal solution is not integer, then a variable with a fractional value is selected to create two sub-problems such that part of the feasible region is discarded without eliminating any of the feasible integer solutions. The process is repeated on all variables with fractional values until an integer solution is found. In this approach, variable sum and additional constraints are generated and added to the original problem before solving. To do this, the objective bound of the knapsack problem is first determined quickly. The bound is then used to generate a set of variable sum limits and four additional constraints. From the variable sum limits, initial sub-problems are constructed and solved. The optimal solution is then obtained as the best solution from all the sub-problems in terms of the objective value. The proposed procedure results in sub-problems that have reduced complexity and are easier to solve than the original problem in terms of the number of branch and bound iterations or sub-problems. The knapsack problem is a special form of the general linear integer problem. There are many types of knapsack problems, including the zero-one, multiple, multiple-choice, bounded, unbounded, quadratic, multi-objective, multi-dimensional, collapsing zero-one and set union knapsack problems. The zero-one knapsack problem is one in which the variables assume the values 0 and 1 only, since an item is either chosen or not chosen; fractional amounts of an item are not possible. This is the easiest class of knapsack problems and the only one that can be solved in polynomial time by interior point algorithms and in pseudo-polynomial time by dynamic programming approaches. The multiple-choice knapsack problem is a generalization of the ordinary knapsack problem, where the set of items is partitioned into classes. The zero-one choice of taking an item is replaced by the selection of exactly one item out of each class of items.
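
    The generic branch and bound scheme the abstract builds on can be made concrete in a few lines. The sketch below (assuming scipy is available; the knapsack data at the end is invented for the demo) relaxes the 0-1 knapsack to an LP, prunes by infeasibility and bound, and branches on a fractional variable: the baseline strategy the paper's reformulation aims to improve, not the paper's own procedure.

```python
# Compact LP-based branch and bound for the 0-1 knapsack (illustrative).
import math
from scipy.optimize import linprog

def branch_and_bound(profits, weights, capacity):
    n = len(profits)
    best_val, best_x = -math.inf, None
    stack = [[(0, 1)] * n]           # each node fixes per-variable bounds
    while stack:
        bounds = stack.pop()
        # LP relaxation: maximize p.x <=> minimize -p.x, subject to w.x <= C.
        res = linprog([-p for p in profits], A_ub=[weights],
                      b_ub=[capacity], bounds=bounds)
        if not res.success:          # infeasible node: prune
            continue
        if -res.fun <= best_val:     # LP bound no better than incumbent: prune
            continue
        frac = next((i for i, v in enumerate(res.x)
                     if abs(v - round(v)) > 1e-6), None)
        if frac is None:             # integral LP optimum: update incumbent
            best_val, best_x = -res.fun, [round(v) for v in res.x]
        else:                        # branch: fix x_frac = 0 and x_frac = 1
            for fixed in (0, 1):
                child = list(bounds)
                child[frac] = (fixed, fixed)
                stack.append(child)
    return best_val, best_x

print(branch_and_bound([10, 13, 7], [4, 6, 3], 9))  # -> (20.0, [0, 1, 1])
```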

    From approximate to exact integer programming

    Full text link
    Approximate integer programming is the following: for a convex body $K \subseteq \mathbb{R}^n$, either determine whether $K \cap \mathbb{Z}^n$ is empty, or find an integer point in the convex body scaled by $2$ from its center of gravity $c$. Approximate integer programming can be solved in time $2^{O(n)}$, while the fastest known methods for exact integer programming run in time $2^{O(n)} \cdot n^n$. So far, there are no efficient methods for integer programming known that are based on approximate integer programming. Our main contributions are two such methods, each yielding novel complexity results. First, we show that an integer point $x^* \in K \cap \mathbb{Z}^n$ can be found in time $2^{O(n)}$, provided that the remainders $x_i^* \bmod \ell$ of each component of $x^*$ are given, for some arbitrarily fixed $\ell \geq 5(n+1)$. The algorithm is based on a cutting-plane technique, iteratively halving the volume of the feasible set. The cutting planes are determined via approximate integer programming. Enumeration of the possible remainders gives a $2^{O(n)} n^n$ algorithm for general integer programming. This matches the current best bound of an algorithm by Dadush (2012) that is considerably more involved. Our algorithm also relies on a new asymmetric approximate Carathéodory theorem that might be of interest on its own. Our second method concerns integer programming problems in equation-standard form $Ax = b,\ 0 \leq x \leq u,\ x \in \mathbb{Z}^n$. Such a problem can be reduced to the solution of $\prod_i O(\log u_i + 1)$ approximate integer programming problems. This implies, for example, that knapsack or subset-sum problems with polynomial variable range $0 \leq x_i \leq p(n)$ can be solved in time $(\log n)^{O(n)}$. For these problems, the best running time so far was $n^n \cdot 2^{O(n)}$.
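
    To make the remainder-guessing reduction concrete, here is a toy Python sketch: guessing $x^* \bmod \ell$ and substituting $x = r + \ell y$ leaves a smaller integer problem in $y$. The `brute_force_point` helper is a hypothetical stand-in for the paper's approximate-IP-based subroutine, and the demo data and the small `box` are invented so the reduction is runnable.

```python
# Toy sketch of the remainder-enumeration reduction (illustrative only).
import itertools

def solve_by_remainders(A, b, ell, box):
    """Find x in Z^n with A x <= b by guessing x mod ell and substituting
    x = r + ell * y; each guess leaves a smaller problem in y."""
    n = len(A[0])
    for r in itertools.product(range(ell), repeat=n):
        # Residual system: A (r + ell*y) <= b  <=>  (ell*A) y <= b - A r.
        b_res = [bi - sum(ai * ri for ai, ri in zip(row, r))
                 for row, bi in zip(A, b)]
        y = brute_force_point([[ell * a for a in row] for row in A],
                              b_res, box)
        if y is not None:
            return [ri + ell * yi for ri, yi in zip(r, y)]
    return None

def brute_force_point(A, b, box):
    """Stand-in for the 2^{O(n)} approximate-IP subroutine: try all y in a
    small box and return one satisfying A y <= b."""
    n = len(A[0])
    for y in itertools.product(range(-box, box + 1), repeat=n):
        if all(sum(a * yi for a, yi in zip(row, y)) <= bi
               for row, bi in zip(A, b)):
            return list(y)
    return None

# Tiny demo: x1 + x2 <= 7, x1 >= 3, x2 >= 2 (encoded as -x_i <= -bound).
print(solve_by_remainders([[1, 1], [-1, 0], [0, -1]], [7, -3, -2],
                          ell=3, box=3))  # -> e.g. [3, 3]
```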

    Nonlinear Integer Programming

    Full text link
    Research efforts of the past fifty years have led to a development of linear integer programming as a mature discipline of mathematical optimization. Such a level of maturity has not been reached when one considers nonlinear systems subject to integrality requirements for the variables. This chapter is dedicated to this topic. The primary goal is a study of a simple version of general nonlinear integer problems, where all constraints are still linear. Our focus is on the computational complexity of the problem, which varies significantly with the type of nonlinear objective function in combination with the underlying combinatorial structure. Numerous boundary cases of complexity emerge, which sometimes surprisingly lead even to polynomial time algorithms. We also cover recent successful approaches for more general classes of problems. Though no positive theoretical efficiency results are available, nor are they likely to ever be available, these seem to be the currently most successful and interesting approaches for solving practical problems. It is our belief that the study of algorithms motivated by theoretical considerations and those motivated by our desire to solve practical instances should and do inform one another. So it is with this viewpoint that we present the subject, and it is in this direction that we hope to spark further research.
    Comment: 57 pages. To appear in: M. Jünger, T. Liebling, D. Naddef, G. Nemhauser, W. Pulleyblank, G. Reinelt, G. Rinaldi, and L. Wolsey (eds.), 50 Years of Integer Programming 1958–2008: The Early Years and State-of-the-Art Surveys, Springer-Verlag, 2009, ISBN 354068274
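
    For concreteness, a toy instance of the problem class studied here, a nonlinear (separable convex) objective under linear constraints and integrality, can be written down and solved by plain enumeration; the sketch below uses invented data and brute force, not any of the chapter's algorithms.

```python
# Toy nonlinear integer program: min f(x) s.t. A x <= b, x in Z^n (bounded),
# solved by enumeration on a tiny box. Illustrative data only.
import itertools

def solve_tiny(f, A, b, box):
    """Brute-force min of f over integer points of {A x <= b, 0 <= x <= box}."""
    n = len(A[0])
    best = None
    for x in itertools.product(range(box + 1), repeat=n):
        if all(sum(a * xi for a, xi in zip(row, x)) <= bi
               for row, bi in zip(A, b)):
            val = f(x)
            if best is None or val < best[0]:
                best = (val, x)
    return best

# min (x1 - 3)^2 + (x2 - 2)^2  s.t.  x1 + x2 <= 4,  x integer
print(solve_tiny(lambda x: (x[0] - 3) ** 2 + (x[1] - 2) ** 2,
                 [[1, 1]], [4], box=4))  # -> (1, (2, 2))
```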

    N-fold integer programming in cubic time

    Full text link
    N-fold integer programming is a fundamental problem with a variety of natural applications in operations research and statistics. Moreover, it is universal and provides a new, variable-dimension parametrization of all of integer programming. The fastest algorithm for $n$-fold integer programming predating the present article runs in time $O(n^{g(A)} L)$, with $L$ the binary length of the numerical part of the input and $g(A)$ the so-called Graver complexity of the bimatrix $A$ defining the system. In this article we provide a drastic improvement and establish an algorithm which runs in time $O(n^3 L)$, having cubic dependency on $n$ regardless of the bimatrix $A$. Our algorithm can be extended to separable convex piecewise affine objectives as well, and also to systems defined by bimatrices with variable entries. Moreover, it can be used to define a hierarchy of approximations for any integer programming problem.
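
    For readers unfamiliar with the setting, the following sketch (using numpy; the bimatrix data is invented) builds the standard $n$-fold product matrix from a bimatrix $(A_1, A_2)$: $A_1$ repeated across the top couples all bricks, while $A_2$ sits on the block diagonal. This shows only the structure of the constraint system; the cubic-time augmentation algorithm itself is not reproduced here.

```python
# Construct the n-fold product matrix of a bimatrix (A1, A2).
import numpy as np

def n_fold_matrix(A1, A2, n):
    """Return [[A1 A1 ... A1],
               [A2 0  ... 0 ],
               [0  A2 ... 0 ],
               ...           ] as one dense integer matrix."""
    top = np.hstack([A1] * n)                   # global (coupling) rows
    diag = np.kron(np.eye(n, dtype=int), A2)    # local rows, one block per brick
    return np.vstack([top, diag])

A1 = np.array([[1, 1]])    # e.g. a global budget over all bricks
A2 = np.array([[1, -1]])   # e.g. a local constraint within each brick
print(n_fold_matrix(A1, A2, 3))
```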