On Integer Programming and Convolution
Integer programs with a constant number of constraints are solvable in pseudo-polynomial time. We give a new algorithm with a better pseudo-polynomial running time than previous results. Moreover, we establish a strong connection to the problem (min, +)-convolution. (min, +)-convolution has a trivial quadratic time algorithm and it has been conjectured that this cannot be improved significantly. We show that further improvements to our pseudo-polynomial algorithm for any fixed number of constraints are equivalent to improvements for (min, +)-convolution. This is strong evidence that our algorithm's running time is the best possible. We also present a faster specialized algorithm for testing feasibility of an integer program with few constraints, and for this we also give a tight lower bound, which is based on the SETH.
Proximity results and faster algorithms for Integer Programming using the Steinitz Lemma
We consider integer programming problems in standard form $\max\{c^T x : Ax = b,\ x \geq 0,\ x \in \mathbb{Z}^n\}$ where $A \in \mathbb{Z}^{m \times n}$, $b \in \mathbb{Z}^m$ and $c \in \mathbb{Z}^n$. We show that such an integer program can be solved in time $(m \cdot \Delta)^{O(m)} \cdot \|b\|_\infty^2$, where $\Delta$ is an upper bound on each
absolute value of an entry in $A$. This improves upon the longstanding best
bound of Papadimitriou (1981) of $(m \cdot \Delta)^{O(m^2)}$, where in addition,
the absolute values of the entries of $b$ also need to be bounded by $\Delta$.
Our result relies on a lemma of Steinitz that states that a set of vectors in
$\mathbb{R}^m$ that is contained in the unit ball of a norm and that sum up to zero can
be ordered such that all partial sums are of norm bounded by $m$. We also use
the Steinitz lemma to show that the $\ell_1$-distance of an optimal integer and
fractional solution, also under the presence of upper bounds on the variables,
is bounded by $m \cdot (2m\Delta + 1)^m$. Here $\Delta$ is again an
upper bound on the absolute values of the entries of $A$. The novel strength of
our bound is that it is independent of $n$. We provide evidence for the
significance of our bound by applying it to general knapsack problems where we
obtain structural and algorithmic results that improve upon the recent
literature.
Comment: We achieve much milder dependence of the running time on the largest entry in $b$.
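The pseudo-polynomial regime discussed above can be illustrated with the classic reachability dynamic program over right-hand sides: starting from the zero vector, repeatedly add columns of $A$ and record which right-hand sides become reachable. This is a minimal sketch of the textbook approach, not the paper's improved algorithm; the function name and the explicit cap `K` on the number of column additions are assumptions made to keep the sketch finite.

```python
def ip_feasible(A, b, K):
    """Decide whether A x = b has a non-negative integer solution x
    with at most K column uses (i.e. with ||x||_1 <= K).

    Classic reachability DP over right-hand sides; the number of distinct
    reachable states is what makes such algorithms pseudo-polynomial.
    """
    cols = list(zip(*A))           # columns of A as tuples
    target = tuple(b)
    reach = {tuple(0 for _ in b)}  # right-hand sides reachable so far
    for _ in range(K):
        if target in reach:
            return True
        reach |= {tuple(s + c for s, c in zip(state, col))
                  for state in reach for col in cols}
    return target in reach
```

For example, with `A = [[2, 3]]`, `b = [7]`, `K = 3` the program finds the combination 2 + 2 + 3 = 7 and returns `True`.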
The distributions of functions related to parametric integer optimization
We consider the asymptotic distribution of the IP sparsity function, which
measures the minimal support of optimal IP solutions, and the IP to LP distance
function, which measures the distance between optimal IP and LP solutions. We
create a framework for studying the asymptotic distribution of general
functions related to integer optimization. There has been a significant amount
of research focused on the extreme values that these functions can attain;
however, less is known about their typical values. Each of these functions is
defined for a fixed constraint matrix and objective vector while the right hand
sides are treated as input. We show that the typical values of these functions
are smaller than the known worst case bounds by providing a spectrum of
probability-like results that govern their overall asymptotic distributions.
Comment: Accepted for journal publication.
On Integer Programming, Discrepancy, and Convolution
Integer programs with a constant number of constraints are solvable in
pseudo-polynomial time. We give a new algorithm with a better pseudo-polynomial
running time than previous results. Moreover, we establish a strong connection
to the problem (min, +)-convolution. (min, +)-convolution has a trivial
quadratic time algorithm and it has been conjectured that this cannot be
improved significantly. We show that further improvements to our
pseudo-polynomial algorithm for any fixed number of constraints are equivalent
to improvements for (min, +)-convolution. This is strong evidence that our
algorithm's running time is the best possible. We also present a faster
specialized algorithm for testing feasibility of an integer program with few
constraints and for this we also give a tight lower bound, which is based on
the SETH.
Comment: A preliminary version appeared in the proceedings of ITCS 2019.
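For reference, the conjecturally hard quadratic-time baseline for (min, +)-convolution mentioned above is just a double loop; a minimal illustrative sketch (the function name is my own):

```python
def min_plus_convolution(a, b):
    """Naive O(n*m) (min,+)-convolution: c[k] = min over i+j=k of a[i] + b[j].

    Improving substantially on this quadratic baseline is the conjecturally
    hard problem the abstract relates integer programming to.
    """
    n, m = len(a), len(b)
    c = [float("inf")] * (n + m - 1)
    for i in range(n):
        for j in range(m):
            c[i + j] = min(c[i + j], a[i] + b[j])
    return c
```

For instance, `min_plus_convolution([0, 2, 5], [0, 1, 3])` yields `[0, 1, 3, 5, 8]`.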
On the Optimality of Pseudo-polynomial Algorithms for Integer Programming
In the classic Integer Programming (IP) problem, the objective is to decide
whether, for a given $m \times n$ matrix $A$ and an $m$-vector $b$, there is a non-negative integer $n$-vector $x$ such that $Ax = b$. Solving
(IP) is an important step in numerous algorithms and it is important to obtain
an understanding of the precise complexity of this problem as a function of
natural parameters of the input.
The classic pseudo-polynomial time algorithm of Papadimitriou [J. ACM 1981]
for instances of (IP) with a constant number of constraints was only recently
improved upon by Eisenbrand and Weismantel [SODA 2018] and Jansen and Rohwedder
[ArXiv 2018]. We continue this line of work and show that under the Exponential
Time Hypothesis (ETH), the algorithm of Jansen and Rohwedder is nearly optimal.
We also show that when the matrix $A$ is assumed to be non-negative, a
component of Papadimitriou's original algorithm is already nearly optimal under
ETH.
This motivates us to pick up the line of research initiated by Cunningham and
Geelen [IPCO 2007] who studied the complexity of solving (IP) with non-negative
matrices in which the number of constraints may be unbounded, but the
branch-width of the column-matroid corresponding to the constraint matrix is a
constant. We prove a lower bound on the complexity of solving (IP) for such
instances and obtain optimal results with respect to a closely related
parameter, path-width. Specifically, we prove matching upper and lower bounds
for (IP) when the path-width of the corresponding column-matroid is a constant.
Comment: 29 pages. To appear in ESA 2018.
From approximate to exact integer programming
Approximate integer programming is the following: For a convex body $K \subseteq \mathbb{R}^n$, either determine whether $K \cap \mathbb{Z}^n$ is
empty, or find an integer point in the convex body scaled by $2$ from its
center of gravity $c$. Approximate integer programming can be solved in time
$2^{O(n)}$, while the fastest known methods for exact integer programming run in
time $2^{O(n)} \cdot n^n$. So far, there are no efficient methods for integer
programming known that are based on approximate integer programming. Our main
contributions are two such methods, each yielding novel complexity results.
First, we show that an integer point $x^* \in K \cap \mathbb{Z}^n$ can be
found in time $2^{O(n)}$, provided that the remainders of each component of $x^*$ modulo some arbitrarily fixed integer $\ell$ are given.
The algorithm is based on a cutting-plane technique, iteratively halving the
volume of the feasible set. The cutting planes are determined via approximate
integer programming. Enumeration of the possible remainders gives a
$2^{O(n)} \cdot n^n$ algorithm for general integer programming. This matches the
current best bound of an algorithm by Dadush (2012) that is considerably more
involved. Our algorithm also relies on a new asymmetric approximate
Carathéodory theorem that might be of interest on its own.
Our second method concerns integer programming problems in equation-standard
form $Ax = b,\ 0 \leq x \leq u,\ x \in \mathbb{Z}^n$. Such a problem can be
reduced to the solution of $\prod_i O(\log u_i + 1)$ approximate integer
programming problems. This implies, for example, that knapsack or subset-sum
problems with polynomial variable range can be solved in
time $(\log n)^{O(n)}$. For these problems, the best running time so far was $n^n \cdot 2^{O(n)}$.
Capacitated Dynamic Programming: Faster Knapsack and Graph Algorithms
One of the most fundamental problems in Computer Science is the Knapsack
problem. Given a set of n items with different weights and values, it asks to
pick the most valuable subset whose total weight is below a capacity threshold
T. Despite its wide applicability in various areas in Computer Science,
Operations Research, and Finance, the best known running time for the problem
is O(Tn). The main result of our work is an improved algorithm running in time
O(TD), where D is the number of distinct weights. Previously, faster runtimes
for Knapsack were only possible when both weights and values are bounded by M
and V respectively, running in time O(nMV) [Pisinger'99]. In comparison, our
algorithm implies a bound of O(nM^2) without any dependence on V, or O(nV^2)
without any dependence on M. Additionally, for the unbounded Knapsack problem,
we provide an algorithm running in time O(M^2) or O(V^2). Both our algorithms
match recent conditional lower bounds shown for the Knapsack problem [Cygan et
al'17, K\"unnemann et al'17].
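For context, the O(Tn) baseline that the abstract improves upon is the textbook 0/1 Knapsack dynamic program; a minimal sketch (this is the classical algorithm, not the paper's O(TD) method):

```python
def knapsack(weights, values, T):
    """Classic O(n*T) dynamic program for 0/1 Knapsack.

    dp[c] holds the best attainable value using capacity at most c.
    Iterating capacities in reverse ensures each item is used at most once.
    """
    dp = [0] * (T + 1)
    for w, v in zip(weights, values):
        for c in range(T, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[T]
```

For example, `knapsack([3, 4, 2], [4, 5, 3], 6)` returns `8` (items of weight 4 and 2).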
We also initiate a systematic study of general capacitated dynamic
programming, of which Knapsack is a core problem. This problem asks to compute
the maximum weight path of length k in an edge- or node-weighted directed
acyclic graph. In a graph with m edges, these problems are solvable by dynamic
programming in time O(km), and we explore under which conditions the dependence
on k can be eliminated. We identify large classes of graphs where this is
possible and apply our results to obtain linear time algorithms for the problem
of k-sparse Delta-separated sequences. The main technical innovation behind our
results is identifying and exploiting concavity that appears in relaxations and
subproblems of the tasks we consider.
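The core capacitated-DP problem above, maximum-weight path with exactly k edges in a DAG, admits the O(km) dynamic program the text mentions; a minimal sketch (eliminating the dependence on k, the paper's contribution, is not shown, and the function name is my own):

```python
def max_weight_k_path(n, edges, k):
    """Maximum weight of a walk with exactly k edges, in O(k*m) time.

    edges is a list of (u, v, w) triples on vertices 0..n-1.
    In a DAG every walk is a path, so this is the max-weight k-edge path.
    Returns -inf if no such walk exists.
    """
    NEG = float("-inf")
    # dp[j][v] = max weight of a walk with exactly j edges ending at v
    dp = [[0] * n] + [[NEG] * n for _ in range(k)]
    for j in range(1, k + 1):
        for u, v, w in edges:
            if dp[j - 1][u] != NEG:
                dp[j][v] = max(dp[j][v], dp[j - 1][u] + w)
    return max(dp[k])
```

For example, on the DAG with edges `(0,1,2)`, `(1,2,3)`, `(0,2,1)`, the best 2-edge path is 0 → 1 → 2 with weight 5.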
Fast and Simple Modular Subset Sum
We revisit the Subset Sum problem over the finite cyclic group $\mathbb{Z}_m$ for some given integer $m$. A series of recent works has provided asymptotically optimal algorithms for this problem under the Strong Exponential Time Hypothesis. Koiliaris and Xu (SODA'17, TALG'19) gave a deterministic algorithm running in time $\tilde{O}(m^{5/4})$, which was later improved to randomized time $\tilde{O}(m)$ by Axiotis et al. (SODA'19). In this work, we present two simple algorithms for the Modular Subset Sum problem running in near-linear time in $m$, both efficiently implementing Bellman's iteration over $\mathbb{Z}_m$. The first one is a randomized algorithm based solely on rolling hash and an elementary data structure for prefix sums; to illustrate its simplicity we provide a short and efficient implementation of the algorithm in Python. Our second solution is a deterministic algorithm that uses dynamic data structures for string manipulation. We further show that the techniques developed in this work can also lead to simple algorithms for the All Pairs Non-Decreasing Paths problem (APNP) on undirected graphs, matching the asymptotically optimal running time provided in the recent work of Duan et al. (ICALP'19).
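Bellman's iteration over the cyclic group, which both of the abstract's algorithms implement efficiently, fits in a few lines; the plain version below runs in O(n·m) time rather than the paper's near-linear total time, and is only an illustrative sketch:

```python
def modular_subset_sum(items, m):
    """All subset sums attainable modulo m, via Bellman's iteration.

    For each item, extend the set of attainable residues by adding the
    item to every residue found so far. This plain version costs O(n*m);
    the paper's algorithms implement the same iteration in near-linear
    total time.
    """
    sums = {0}  # the empty subset attains residue 0
    for a in items:
        sums |= {(s + a) % m for s in sums}
    return sums
```

For example, `modular_subset_sum([3, 5], 7)` returns `{0, 1, 3, 5}`, since 3 + 5 = 8 ≡ 1 (mod 7).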