Integer Programming in Parameterized Complexity: Three Miniatures
Powerful results from the theory of integer programming have recently led to substantial advances in parameterized complexity. However, our perception is that, except for Lenstra's algorithm for solving integer linear programming in fixed dimension, there is still little understanding in the parameterized complexity community of the strengths and limitations of the available tools. This is understandable: it is often difficult to infer exact runtimes or even the distinction between FPT and XP algorithms, and some knowledge is simply unwritten folklore in a different community. We wish to take a step toward remedying this situation.
To that end, we first provide an easy-to-navigate quick reference guide of integer programming algorithms from the perspective of parameterized complexity. Then, we show their applications in three case studies, obtaining FPT algorithms with runtime f(k) poly(n). We focus on:
- Modeling: since the algorithmic results follow by applying existing algorithms to new models, we shift the focus from the complexity result to the modeling result, highlighting common patterns and tricks which are used.
- Optimality program: after giving an FPT algorithm, we are interested in reducing the dependence on the parameter; we show which algorithms and tricks are often useful for speed-ups.
- Minding the poly(n): reducing f(k) often has the unintended consequence of increasing poly(n); so we highlight the common trade-offs and show how to get the best of both worlds.
Specifically, we consider graphs of bounded neighborhood diversity, which are in a sense the simplest of dense graphs, and we show several FPT algorithms for Capacitated Dominating Set, Sum Coloring, and Max-q-Cut by modeling them as convex programs in fixed dimension, n-fold integer programs, bounded dual treewidth programs, and indefinite quadratic programs in fixed dimension.
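To make the graph parameter in this abstract concrete, here is a small illustrative sketch (my own toy, not from the paper; the function name `neighborhood_diversity` is hypothetical). It computes the neighborhood diversity of a graph given as an adjacency dictionary: two vertices u, v have the same type iff N(u) \ {v} = N(v) \ {u} (they are true or false twins, which is an equivalence relation), and the neighborhood diversity is the number of types.

```python
from itertools import combinations

def neighborhood_diversity(adj):
    """Number of vertex types of a graph given as {vertex: set of
    neighbors}: u and v share a type iff adj[u]-{v} == adj[v]-{u},
    i.e. they are true or false twins (an equivalence relation)."""
    verts = sorted(adj)
    parent = {v: v for v in verts}        # union-find over twin classes
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v
    for u, v in combinations(verts, 2):
        if adj[u] - {v} == adj[v] - {u}:  # u and v are twins
            parent[find(u)] = find(v)
    return len({find(v) for v in verts})

# Example: a star K_{1,3} has two types (center, leaves),
# while a path P_4 has four, since no two of its vertices are twins.
```

Graphs of bounded neighborhood diversity are exactly those whose vertices fall into constantly many such classes, which is what makes constant-dimension IP models possible.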
A polynomial-time algorithm for optimizing over N-fold 4-block decomposable integer programs
In this paper we generalize N-fold integer programs and two-stage integer
programs with N scenarios to N-fold 4-block decomposable integer programs. We
show that for fixed blocks but variable N, these integer programs are
polynomial-time solvable for any linear objective. Moreover, we present a
polynomial-time computable optimality certificate for the case of fixed blocks,
variable N and any convex separable objective function. We conclude with two
sample applications, stochastic integer programs with second-order dominance
constraints and stochastic integer multi-commodity flows, which (for fixed
blocks) can be solved in polynomial time in the number of scenarios and
commodities and in the binary encoding length of the input data. In the proof
of our main theorem we combine several non-trivial constructions from the
theory of Graver bases. We are confident that our approach paves the way for
further extensions.
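To visualize the block structure the abstract refers to, here is a small sketch (my own illustration; the helper name `n_fold_4_block` is hypothetical) that assembles the constraint matrix of an N-fold 4-block decomposable integer program from the four fixed blocks C, D, B, A:

```python
def n_fold_4_block(C, D, B, A, N):
    """Assemble the constraint matrix of an N-fold 4-block
    decomposable integer program (blocks given as lists of rows):
        [[C, D, D, ..., D],
         [B, A, 0, ..., 0],
         [B, 0, A, ..., 0],
         ...,
         [B, 0, ..., 0, A]]"""
    t = len(A[0])                                 # column count of A and D
    mat = [rc + rd * N for rc, rd in zip(C, D)]   # top row of blocks
    for i in range(N):                            # one [B ... A ...] brick each
        for rb, ra in zip(B, A):
            mat.append(rb + [0] * (t * i) + ra + [0] * (t * (N - 1 - i)))
    return mat
```

The plain N-fold and two-stage stochastic shapes mentioned in the abstract arise as the special cases in which some of the blocks vanish, which is why this matrix generalizes both.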
N-fold integer programming in cubic time
N-fold integer programming is a fundamental problem with a variety of natural
applications in operations research and statistics. Moreover, it is universal
and provides a new, variable-dimension, parametrization of all of integer
programming. The fastest algorithm for N-fold integer programming predating
the present article runs in time O(n^{g(A)} L), with L the binary length of
the numerical part of the input and g(A) the so-called Graver complexity of
the bimatrix A defining the system. In this article we provide a drastic
improvement and establish an algorithm which runs in time O(n^3 L), having
cubic dependency on n regardless of the bimatrix A. Our algorithm can be
extended to separable convex piecewise affine objectives as well, and also to
systems defined by bimatrices with variable entries. Moreover, it can be used
to define a hierarchy of approximations for any integer programming problem.
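Since the runtime bounds above are governed by the Graver complexity of the bimatrix, it may help to recall what a Graver basis is. The brute-force sketch below (an illustration only; the fixed search box is a simplifying assumption, and real computations use far better algorithms) enumerates the Graver basis of a small integer matrix A: the conformal-minimal nonzero integer vectors in ker(A).

```python
from itertools import product

def graver_basis(A, bound=2):
    """Brute-force the Graver basis of an integer matrix A inside the
    box [-bound, bound]^n: nonzero integer kernel elements that are
    minimal in the conformal (sign-compatible) partial order.
    Complete only when the true basis fits inside the box."""
    n = len(A[0])
    def in_kernel(z):
        return all(sum(a * x for a, x in zip(row, z)) == 0 for row in A)
    def conforms(u, v):  # u <= v conformally: same signs, smaller magnitudes
        return all(x * y >= 0 and abs(x) <= abs(y) for x, y in zip(u, v))
    ker = [z for z in product(range(-bound, bound + 1), repeat=n)
           if any(z) and in_kernel(z)]
    return {z for z in ker
            if not any(u != z and conforms(u, z) for u in ker)}
```

For example, the 1x2 matrix [1 1] has Graver basis {(1, -1), (-1, 1)}: every other kernel element, such as (2, -2), dominates one of these conformally.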
An Algorithmic Theory of Integer Programming
We study the general integer programming problem where the number n of
variables is a variable part of the input. We consider two natural
parameters of the constraint matrix A: its numeric measure a and its
sparsity measure d. We show that integer programming can be solved in time
g(a,d) poly(n,L), where g is some computable function of the
parameters a and d, and L is the binary encoding length of the input. In
particular, integer programming is fixed-parameter tractable parameterized by
a and d, and is solvable in polynomial time for every fixed a and d.
Our results also extend to nonlinear separable convex objective functions.
Moreover, for linear objectives, we derive a strongly-polynomial algorithm,
that is, with running time g(a,d) poly(n), independent of the rest of
the input data.
We obtain these results by developing an algorithmic framework based on the
idea of iterative augmentation: starting from an initial feasible solution, we
show how to quickly find augmenting steps which rapidly converge to an optimum.
A central notion in this framework is the Graver basis of the matrix A, which
constitutes a set of fundamental augmenting steps. The iterative augmentation
idea is then enhanced via the use of other techniques such as new and improved
bounds on the Graver basis, rapid solution of integer programs with bounded
variables, proximity theorems and a new proximity-scaling algorithm, the notion
of a reduced objective function, and others.
As a consequence of our work, we advance the state of the art of solving
block-structured integer programs. In particular, we develop near-linear time
algorithms for n-fold, tree-fold, and 2-stage stochastic integer programs.
We also discuss some of the many applications of these classes.
Comment: Revision 2: strengthened dual treedepth lower bound; simplified
proximity-scaling algorithm.
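The iterative augmentation idea at the heart of this framework can be sketched in a few lines. The toy below is my own simplification (unit step lengths only, whereas the paper's speed-ups rely on Graver-best steps and step-length optimization; the function name `augment` is hypothetical). It walks from a feasible point along improving Graver elements; since every Graver element lies in ker(A), the constraints A x = b are preserved automatically and only the variable bounds need checking.

```python
def augment(c, x, graver, lower, upper):
    """Iterative augmentation (toy sketch): minimize c.x over
    {x : A x = b, lower <= x <= upper}, starting from a feasible x.
    Each g in `graver` lies in ker(A), so x + g still satisfies
    A x = b.  Unit steps only; terminates when no step improves."""
    def obj(y):
        return sum(ci * yi for ci, yi in zip(c, y))
    improved = True
    while improved:
        improved = False
        for g in graver:
            y = [xi + gi for xi, gi in zip(x, g)]
            if all(l <= yi <= u for yi, l, u in zip(y, lower, upper)) \
                    and obj(y) < obj(x):
                x, improved = y, True
                break
    return x
```

A classical fact makes this correct: the Graver basis is a complete set of augmenting steps, so a point admitting no improving Graver step is optimal.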
Convex Integer Optimization by Constantly Many Linear Counterparts
In this article we study convex integer maximization problems with composite
objective functions of the form f(Wx), where f is a convex function on R^d
and W is a d x n matrix with small or binary entries, over
finite sets S of integer points in Z^n presented by an oracle or by
linear inequalities.
Continuing the line of research advanced by Uri Rothblum and his colleagues
on edge-directions, we introduce here the notion of edge complexity of
S, and use it to establish polynomial and constant upper bounds on the number
of vertices of the projection conv(WS) and on the number of linear
optimization counterparts needed to solve the above convex problem.
Two typical consequences are the following. First, for any d, there is a
constant m(d) such that the maximum number of vertices of the projection of
any matroid S in {0,1}^n by any binary d x n matrix W is m(d)
regardless of n and S; and the convex matroid problem reduces to m(d)
greedily solvable linear counterparts. In particular, m(2) = 8. Second, for any
d, l, m, there is a constant t(d; l, m) such that the maximum number of
vertices of the projection of any three-index l x m x n
transportation polytope for any n by any binary d x lmn
matrix W is t(d; l, m); and the convex three-index transportation problem
reduces to t(d; l, m) linear counterparts solvable in polynomial time.
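The reduction driving both consequences can be sketched as follows (my own toy, with the hypothetical helper `convex_via_linear_counterparts`; choosing directions that cover all vertices of conv(WS) is assumed, which is exactly what the edge-complexity bounds guarantee). The maximum of the convex f over WS is attained at a vertex of conv(WS), and each vertex optimizes some linear objective, so one linear counterpart per direction suffices:

```python
def convex_via_linear_counterparts(S, W, f, directions):
    """Sketch of the reduction: max f(Wx) over a finite S is attained
    at a point whose image Wx is a vertex of conv(WS), and each such
    vertex maximizes some linear objective.  Given directions covering
    all vertices of conv(WS), solve one linear counterpart per
    direction and keep the best f-value."""
    d, n = len(W), len(W[0])
    best = None
    for w in directions:
        # linear counterpart: maximize (w^T W) x over S
        c = [sum(w[i] * W[i][j] for i in range(d)) for j in range(n)]
        x = max(S, key=lambda z: sum(ci * zi for ci, zi in zip(c, z)))
        Wx = [sum(W[i][j] * x[j] for j in range(n)) for i in range(d)]
        val = f(Wx)
        best = val if best is None else max(best, val)
    return best
```

In the paper's setting S is presented by an oracle or by inequalities rather than explicitly, and each linear counterpart is solved by that oracle (e.g. greedily, in the matroid case) instead of by enumeration.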