4 research outputs found

    Graver basis and proximity techniques for block-structured separable convex integer minimization problems

    No full text
    We consider N-fold 4-block decomposable integer programs, which simultaneously generalize N-fold integer programs and two-stage stochastic integer programs with N scenarios. In previous work [R. Hemmecke, M. Koeppe, R. Weismantel, A polynomial-time algorithm for optimizing over N-fold 4-block decomposable integer programs, Proc. IPCO 2010, Lecture Notes in Computer Science, vol. 6080, Springer, 2010, pp. 219--229], it was proved that for fixed blocks but variable N, these integer programs are polynomial-time solvable for any linear objective. We extend this result to the minimization of separable convex objective functions. Our algorithm combines Graver basis techniques with a proximity result [D.S. Hochbaum and J.G. Shanthikumar, Convex separable optimization is not much harder than linear optimization, J. ACM 37 (1990), 843--862], which allows us to use convex continuous optimization as a subroutine.
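
    The proximity result is the computational shortcut here: after solving the continuous relaxation with a convex-optimization subroutine, an integer optimum is guaranteed to lie within a bounded distance of the continuous optimum, so only a small box has to be searched. Below is a minimal sketch of that idea on a made-up two-variable instance; the objective, bounds, and the radius DELTA are assumptions for illustration, not values from the paper.

```python
# Toy sketch of the proximity idea (assumed instance, not the paper's
# algorithm): for a separable convex objective over {x : Ax = b, l <= x <= u},
# first take the optimum of the continuous relaxation, then search for an
# integer optimum only inside a box of radius DELTA around it.  Real proximity
# bounds depend on the matrix (via its Graver basis); DELTA = 2 is just an
# assumed value large enough for this tiny instance.
import itertools

A = [[1, 1]]                    # single constraint x0 + x1 = 7
b = [7]
lower, upper = [0, 0], [7, 7]

def f(x):
    # hypothetical separable convex objective
    return (x[0] - 2.3) ** 2 + (x[1] - 4.7) ** 2

# Step 1: continuous optimum (computed by hand here; in general by any
# convex-programming subroutine): minimizing f over x0 + x1 = 7 gives (2.3, 4.7).
x_cont = [2.3, 4.7]

# Step 2: enumerate feasible integer points within DELTA of x_cont.
DELTA = 2
box = (range(max(l, int(c) - DELTA), min(u, int(c) + DELTA) + 1)
       for c, l, u in zip(x_cont, lower, upper))
candidates = [x for x in itertools.product(*box)
              if all(sum(a * xi for a, xi in zip(row, x)) == bi
                     for row, bi in zip(A, b))]
print(min(candidates, key=f))   # -> (2, 5), the integer optimum
```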

    An Algorithmic Theory of Integer Programming

    Full text link
    We study the general integer programming problem where the number of variables $n$ is a variable part of the input. We consider two natural parameters of the constraint matrix $A$: its numeric measure $a$ and its sparsity measure $d$. We show that integer programming can be solved in time $g(a,d)\,\mathrm{poly}(n,L)$, where $g$ is some computable function of the parameters $a$ and $d$, and $L$ is the binary encoding length of the input. In particular, integer programming is fixed-parameter tractable parameterized by $a$ and $d$, and is solvable in polynomial time for every fixed $a$ and $d$. Our results also extend to nonlinear separable convex objective functions. Moreover, for linear objectives, we derive a strongly-polynomial algorithm, that is, with running time $g(a,d)\,\mathrm{poly}(n)$, independent of the rest of the input data. We obtain these results by developing an algorithmic framework based on the idea of iterative augmentation: starting from an initial feasible solution, we show how to quickly find augmenting steps which rapidly converge to an optimum. A central notion in this framework is the Graver basis of the matrix $A$, which constitutes a set of fundamental augmenting steps. The iterative augmentation idea is then enhanced via the use of other techniques such as new and improved bounds on the Graver basis, rapid solution of integer programs with bounded variables, proximity theorems and a new proximity-scaling algorithm, the notion of a reduced objective function, and others. As a consequence of our work, we advance the state of the art of solving block-structured integer programs. In particular, we develop near-linear time algorithms for $n$-fold, tree-fold, and $2$-stage stochastic integer programs. We also discuss some of the many applications of these classes.
    Comment: Revision 2: strengthened dual treedepth lower bound; simplified proximity-scaling algorithm.
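
    The core of the framework is easiest to see on a toy instance: start from any feasible point and repeatedly take the best feasible step along a Graver basis element until no element improves the objective. The sketch below hardcodes the Graver basis of the 1x3 matrix [1 1 1] and a hypothetical separable convex objective; it illustrates the augmentation loop only, not the Graver-basis bounds, proximity-scaling, or near-linear-time machinery of the paper.

```python
# Minimal sketch of iterative augmentation with a Graver basis (a toy
# illustration, not the paper's algorithms).  The Graver basis of the 1x3
# matrix A = [1 1 1] is hardcoded as {+-(e_i - e_j)}; the bounds, right-hand
# side, and separable convex objective are made up.
import itertools

A = [[1, 1, 1]]
b = [5]
lower, upper = [0, 0, 0], [5, 5, 5]
target = [4, 0, 1]                                  # hypothetical data

def f(x):                                           # separable convex objective
    return sum((xi - ti) ** 2 for xi, ti in zip(x, target))

def feasible(x):
    return (all(l <= xi <= u for xi, l, u in zip(x, lower, upper))
            and all(sum(a * xi for a, xi in zip(row, x)) == bi
                    for row, bi in zip(A, b)))

# Graver basis of [1 1 1]: the six vectors +-(e_i - e_j), i != j.
graver = list(itertools.permutations([1, -1, 0]))

def best_augmenting_step(x):
    """Return the best feasible improving point x + s*g over g in graver, s >= 1."""
    best, best_val = None, f(x)
    for g in graver:
        s = 1
        while True:
            y = [xi + s * gi for xi, gi in zip(x, g)]
            if not feasible(y):
                break
            if f(y) < best_val:
                best, best_val = y, f(y)
            s += 1
    return best

x = [5, 0, 0]                                       # initial feasible solution
while (y := best_augmenting_step(x)) is not None:   # augment until optimal
    x = y
print(x, f(x))                                      # -> [4, 0, 1] 0
```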