Extended Formulation Lower Bounds via Hypergraph Coloring?
Exploring the power of linear programming for combinatorial optimization
problems has been recently receiving renewed attention after a series of
breakthrough impossibility results. From an algorithmic perspective, the
related questions concern whether there are compact formulations even for
problems that are known to admit polynomial-time algorithms.
We propose a framework for proving lower bounds on the size of extended
formulations. We do so by introducing a specific type of extended relaxations
that we call product relaxations, motivated by the study of the
Sherali-Adams (SA) hierarchy. Then we show that for every approximate
relaxation of a polytope P, there is a product relaxation that has the same
size and is at least as strong. We provide a methodology for proving lower
bounds on the size of approximate product relaxations by lower bounding the
chromatic number of an underlying hypergraph, whose vertices correspond to
gap-inducing vectors.
We extend the definition of product relaxations and our methodology to mixed
integer sets. However, in this case we are only able to show that mixed product
relaxations are at least as powerful as a special family of extended
formulations. As an application of our method we show an exponential lower
bound on the size of approximate mixed product formulations for the metric
capacitated facility location problem, a problem which seems to be intractable
for linear programming as far as constant-gap compact formulations are
concerned.
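The notion of a gap-inducing vector can be made concrete on a toy instance (an illustrative sketch, not the paper's construction): for stable set on a triangle, the all-halves vector satisfies every constraint of the natural LP relaxation, yet its objective value exceeds the integral optimum by a factor of 3/2.

```python
from fractions import Fraction
from itertools import combinations, product

# Triangle K3: the natural LP relaxation of stable set has edge constraints
# x_u + x_v <= 1 together with 0 <= x_v <= 1.
nodes = [0, 1, 2]
edges = list(combinations(nodes, 2))

# The all-halves vector is feasible for the relaxation...
x = {v: Fraction(1, 2) for v in nodes}
assert all(x[u] + x[v] <= 1 for u, v in edges)
lp_value = sum(x.values())  # 3/2

# ...but every integral stable set in a triangle has size at most 1,
# so this fractional vector certifies an integrality gap of 3/2.
best = 0
for bits in product([0, 1], repeat=3):
    if all(not (bits[u] and bits[v]) for u, v in edges):
        best = max(best, sum(bits))
print(lp_value, best)  # 3/2 1
```

Lower-bound frameworks like the one above study which such gap-inducing vectors survive every small relaxation, rather than a single fixed one.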
Lifting Linear Extension Complexity Bounds to the Mixed-Integer Setting
Mixed-integer mathematical programs are among the most commonly used models
for a wide set of problems in Operations Research and related fields. However,
there is still very little known about what can be expressed by small
mixed-integer programs. In particular, prior to this work, it was open whether
some classical problems, like the minimum odd-cut problem, can be expressed by
a compact mixed-integer program with few (even constantly many) integer
variables. This is in stark contrast to linear formulations, where recent
breakthroughs in the field of extended formulations have shown that many
polytopes associated to classical combinatorial optimization problems do not
even admit approximate extended formulations of sub-exponential size.
We provide a general framework for lifting inapproximability results of
extended formulations to the setting of mixed-integer extended formulations,
and obtain almost tight lower bounds on the number of integer variables needed
to describe a variety of classical combinatorial optimization problems. Among
the implications we obtain, we show that any mixed-integer extended formulation
of sub-exponential size for the matching polytope, cut polytope, traveling
salesman polytope, or dominant of the odd-cut polytope needs \Omega(n/\log n) many integer variables, where n is the number of vertices of the
underlying graph. Conversely, the above-mentioned polyhedra admit
polynomial-size mixed-integer formulations with only O(n) or O(n \log n) (for the traveling salesman polytope) many integer variables.
Our results build upon a new decomposition technique that, for any convex set
C, allows for approximating any mixed-integer description of C by the
intersection of C with the union of a small number of affine subspaces.
Comment: A conference version of this paper will be presented at SODA 2018
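The role of a small number of integer variables can be seen on a minimal example (a hypothetical toy model, not the paper's construction): the odd-parity 0/1 vectors are exactly the 0/1 points of a mixed-integer set with a single integer variable z, via the constraint x_1 + ... + x_n = 2z + 1, and each fixed value of z carves out one affine slice, matching the intuition that a description with few integer variables is a union of few polyhedral slices.

```python
from itertools import product

n = 4

def in_mip_set(x):
    # x in [0,1]^n with sum(x) = 2z + 1 for some integer z:
    # a mixed-integer description with a single integer variable.
    return (sum(x) - 1) % 2 == 0  # for 0/1 points: the sum must be odd

odd_parity = {x for x in product([0, 1], repeat=n) if sum(x) % 2 == 1}
mip_points = {x for x in product([0, 1], repeat=n) if in_mip_set(x)}
assert odd_parity == mip_points

# Each choice of z corresponds to one affine subspace {x : sum(x) = 2z + 1};
# for n = 4 the odd-parity points split across the slices z = 0 and z = 1.
slices = {(sum(x) - 1) // 2 for x in odd_parity}
print(sorted(slices))  # [0, 1]
```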
Peak response of non-linear oscillators under stationary white noise
The use of the Advanced Censored Closure (ACC) technique, recently proposed by the authors for
predicting the peak response of linear structures vibrating under random processes, is extended to
the case of non-linear oscillators driven by stationary white noise. The proposed approach requires
the knowledge of mean upcrossing rate and spectral bandwidth of the response process, which in
this paper are estimated through the Stochastic Averaging method. Numerical applications to
oscillators with non-linear stiffness and damping are included, and the results are compared with
those given by Monte Carlo Simulation and by other approximate formulations available in the literature.
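The Monte Carlo baseline used for comparison can be sketched in a few lines (all parameter values below are illustrative, not taken from the paper): integrate a Duffing-type oscillator driven by white noise with the Euler-Maruyama scheme and average the peak displacement over independent runs.

```python
import math
import random

def peak_response(runs=200, t_end=20.0, dt=1e-3, seed=0):
    """Mean peak |x| of x'' + 2*zeta*w*x' + w^2*x + eps*x^3 = white noise,
    estimated by Euler-Maruyama; all parameters are illustrative."""
    rng = random.Random(seed)
    zeta, w, eps, intensity = 0.05, 1.0, 0.5, 1.0
    sigma = math.sqrt(intensity)  # diffusion coefficient of the forcing
    peaks = []
    for _ in range(runs):
        x, v, peak = 0.0, 0.0, 0.0
        for _ in range(int(t_end / dt)):
            a = -2 * zeta * w * v - w * w * x - eps * x ** 3
            x += v * dt
            v += a * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
            peak = max(peak, abs(x))
        peaks.append(peak)
    return sum(peaks) / len(peaks)

print(round(peak_response(runs=50), 3))
```

Closed-form approximations such as ACC aim to predict this quantity without the sampling cost.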
The matching polytope does not admit fully-polynomial size relaxation schemes
The groundbreaking work of Rothvoß [arXiv:1311.2369] established that
every linear program expressing the matching polytope has an exponential number
of inequalities (formally, the matching polytope has exponential extension
complexity). We generalize this result by deriving strong bounds on the
polyhedral inapproximability of the matching polytope: for fixed 0 < eps < 1, every polyhedral (1+eps/n)-approximation
requires an exponential number of inequalities, where n is the number of
vertices. This is sharp given the well-known rho-approximation of size
O(n^{rho/(rho-1)}) provided by the odd-sets of size up to
rho/(rho-1). Thus matching is the first problem in P whose natural
linear encoding does not admit a fully polynomial-size relaxation scheme (the
polyhedral equivalent of an FPTAS), which provides a sharp separation from the
polynomial-size relaxation scheme obtained e.g., via constant-sized odd-sets
mentioned above.
Our approach reuses ideas from Rothvoß [arXiv:1311.2369]; however, the
main lower bounding technique is different. While the original proof is based
on the hyperplane separation bound (also called the rectangle corruption
bound), we employ the information-theoretic notion of common information as
introduced in Braun and Pokutta [http://eccc.hpi-web.de/report/2013/056/],
which allows us to analyze perturbations of slack matrices. It turns out that the
high extension complexity of the matching polytope stems from the same source
of hardness as for the correlation polytope: a direct sum structure.
Comment: 21 pages, 3 figures
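The odd-set inequalities mentioned above can be checked on the smallest interesting case (a standard textbook example, independent of the paper's results): on a triangle, the half-integral point satisfies all degree constraints of the fractional matching LP yet violates the blossom inequality for the odd set of all three vertices.

```python
from fractions import Fraction
from itertools import combinations

nodes = [0, 1, 2]
edges = list(combinations(nodes, 2))
x = {e: Fraction(1, 2) for e in edges}  # fractional "matching" on K3

# Degree constraints: the x-values on edges at each vertex sum to 1 <= 1.
for v in nodes:
    assert sum(x[e] for e in edges if v in e) <= 1

# Odd-set (blossom) inequality for S = {0, 1, 2}:
#   sum of x over edges inside S <= (|S| - 1) / 2 = 1,
# violated here since the left-hand side is 3/2.
inside = sum(x[e] for e in edges)
print(inside)  # 3/2
```

Adding all odd-set inequalities for sets of bounded size yields the constant-factor approximations of polynomial size referenced in the abstract.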
Approximation Limits of Linear Programs (Beyond Hierarchies)
We develop a framework for approximation limits of polynomial-size linear
programs from lower bounds on the nonnegative ranks of suitably defined
matrices. This framework yields unconditional impossibility results that are
applicable to any linear program as opposed to only programs generated by
hierarchies. Using our framework, we prove that O(n^{1/2-eps})-approximations
for CLIQUE require linear programs of size 2^{n^\Omega(eps)}. (This lower bound
applies to linear programs using a certain encoding of CLIQUE as a linear
optimization problem.) Moreover, we establish a similar result for
approximations of semidefinite programs by linear programs. Our main ingredient
is a quantitative improvement of Razborov's rectangle corruption lemma for the
high error regime, which gives strong lower bounds on the nonnegative rank of
certain perturbations of the unique disjointness matrix.
Comment: 23 pages, 2 figures
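The bridge between LP size and nonnegative rank is Yannakakis' factorization theorem: the smallest extended formulation of a polytope P has exactly rank_+(S) inequalities, where S is the slack matrix of P. A minimal sketch (the unit square is an illustrative choice) computes a slack matrix and uses the ordinary rank as an easy lower bound on rank_+(S).

```python
from fractions import Fraction

# Slack matrix of the unit square [0,1]^2:
# rows = facets (x>=0, x<=1, y>=0, y<=1), columns = vertices.
vertices = [(0, 0), (1, 0), (0, 1), (1, 1)]
facets = [((1, 0), 0), ((-1, 0), -1), ((0, 1), 0), ((0, -1), -1)]  # a.x >= b
S = [[Fraction(a[0] * vx + a[1] * vy - b) for (vx, vy) in vertices]
     for (a, b) in facets]
assert all(s >= 0 for row in S for s in row)  # slacks are nonnegative

def rank(m):
    """Rank over the rationals via Gaussian elimination."""
    m = [row[:] for row in m]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# rank(S) <= rank_+(S) = extension complexity of the square.
print(rank(S))  # 3
```

Lower-bound frameworks like the one above replace this linear-algebraic bound with combinatorial ones (e.g. rectangle corruption) that survive the perturbations needed for approximation results.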
Lower Bounds on the Complexity of Mixed-Integer Programs for Stable Set and Knapsack
Standard mixed-integer programming formulations for the stable set problem on
n-node graphs require n integer variables. We prove that this is almost
optimal: We give a family of n-node graphs for which every polynomial-size
MIP formulation requires \Omega(n/\log^2 n) integer variables. By a
polyhedral reduction we obtain an analogous result for n-item knapsack
problems. In both cases, this improves the previously known bounds of
\Omega(\sqrt{n}/\log n) by Cevallos, Weltge & Zenklusen (SODA 2018).
To this end, we show that there exists a family of n-node graphs whose
stable set polytopes satisfy the following: any (1+eps/n)-approximate
extended formulation for these polytopes, for some constant eps > 0,
has size 2^{\Omega(n/\log n)}. Our proof extends and simplifies the
information-theoretic methods due to Göös, Jain & Watson (FOCS 2016, SIAM
J. Comput. 2018), who showed the same result for the case of exact extended
formulations (i.e. eps = 0).
Comment: 35 pages
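The standard formulation referenced above uses one binary variable per node: maximize the sum of x_v subject to x_u + x_v <= 1 for every edge. A brute-force sketch on the 5-cycle (an illustrative instance) confirms that the formulation's 0/1-feasible points are exactly the stable sets.

```python
from itertools import product

# 5-cycle: the standard MIP for stable set uses n binary variables,
#   max sum(x)  s.t.  x_u + x_v <= 1 for every edge uv,  x in {0,1}^n.
n = 5
edges = [(i, (i + 1) % n) for i in range(n)]

def mip_feasible(x):
    return all(x[u] + x[v] <= 1 for u, v in edges)

feasible = [x for x in product([0, 1], repeat=n) if mip_feasible(x)]

# Feasible 0/1 points are exactly the stable sets; the optimum for C5 is 2.
assert all(not (x[u] and x[v]) for x in feasible for u, v in edges)
print(max(sum(x) for x in feasible))  # 2
```

The lower bound above says that no polynomial-size reformulation can reduce the n integer variables of this model below roughly n/log^2 n.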
Limitations of semidefinite programs for separable states and entangled games
Semidefinite programs (SDPs) are a framework for exact or approximate
optimization that have widespread application in quantum information theory. We
introduce a new method for using reductions to construct integrality gaps for
SDPs. These are based on new limitations on the sum-of-squares (SoS) hierarchy
in approximating two particularly important sets in quantum information theory,
where previously no \omega(1)-round integrality gaps were known: the set of
separable (i.e. unentangled) states, or equivalently, the 2->4
norm of a matrix, and the set of quantum correlations; i.e. conditional
probability distributions achievable with local measurements on a shared
entangled state. In both cases no-go theorems were previously known based on
computational assumptions such as the Exponential Time Hypothesis (ETH) which
asserts that 3-SAT requires exponential time to solve. Our unconditional
results achieve the same parameters as all of these previous results (for
separable states) or as some of the previous results (for quantum
correlations). In some cases we can make use of the framework of
Lee-Raghavendra-Steurer (LRS) to establish integrality gaps for any SDP, not
only the SoS hierarchy. Our hardness result on separable states also yields a
dimension lower bound for approximate disentanglers, answering a question of
Watrous and Aaronson et al. These results can be viewed as limitations on the
monogamy principle, the PPT test, the ability of Tsirelson-type bounds to
restrict quantum correlations, as well as the SDP hierarchies of
Doherty-Parrilo-Spedalieri, Navascues-Pironio-Acin and Berta-Fawzi-Scholz.
Comment: 47 pages. v2: small changes, fixes and clarifications; published
version
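One of the no-go targets above, the PPT test, is easy to demonstrate concretely (a standard example, independent of the paper's results): the partial transpose of the maximally entangled two-qubit state fails to be positive semidefinite, which certifies entanglement.

```python
from fractions import Fraction

# Density matrix of the Bell state (|00> + |11>)/sqrt(2) in the basis
# |00>, |01>, |10>, |11>:
h = Fraction(1, 2)
rho = [[h, 0, 0, h],
       [0, 0, 0, 0],
       [0, 0, 0, 0],
       [h, 0, 0, h]]

def partial_transpose(m):
    # Transpose the second qubit: entry ((a,b),(c,d)) moves to ((a,d),(c,b)).
    out = [[Fraction(0)] * 4 for _ in range(4)]
    for a in range(2):
        for b in range(2):
            for c in range(2):
                for d in range(2):
                    out[2 * a + d][2 * c + b] = m[2 * a + b][2 * c + d]
    return out

pt = partial_transpose(rho)

# The witness vector v = |01> - |10> gives a negative quadratic form,
# so pt has a negative eigenvalue: the PPT test flags rho as entangled.
v = [0, 1, -1, 0]
quad = sum(v[i] * pt[i][j] * v[j] for i in range(4) for j in range(4))
print(quad)  # -1
```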
Average case polyhedral complexity of the maximum stable set problem
We study the minimum number of constraints needed to formulate random
instances of the maximum stable set problem via linear programs (LPs), in two
distinct models. In the uniform model, the constraints of the LP are not
allowed to depend on the input graph, which should be encoded solely in the
objective function. There we prove a 2^{\Omega(n/\log n)} lower bound, holding
with overwhelming probability, for every LP that is exact for a randomly
selected set of instances; each graph on at most n vertices being selected
independently at random. In the
non-uniform model, the constraints of the LP may depend on the input graph, but
we allow weights on the vertices. The input graph is sampled according to the
G(n, p) model. There we obtain upper and lower bounds holding with high
probability for various ranges of p. We obtain a super-polynomial lower bound
for a wide range of p. Our upper bound is close to this, as there is only an essentially quadratic
gap in the exponent, which currently also exists in the worst-case model.
Finally, we state a conjecture that would close this gap, both in the
average-case and worst-case models.
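The non-uniform model above samples the input from G(n, p). A minimal sketch of the experimental side (illustrative parameters only, not the paper's setup) samples a few such graphs and computes the stability number by brute force.

```python
import random
from itertools import combinations, product

def sample_gnp(n, p, rng):
    """Erdos-Renyi G(n, p): each edge present independently with prob. p."""
    return [e for e in combinations(range(n), 2) if rng.random() < p]

def max_stable_set(n, edges):
    best = 0
    for bits in product([0, 1], repeat=n):
        if all(not (bits[u] and bits[v]) for u, v in edges):
            best = max(best, sum(bits))
    return best

rng = random.Random(0)
n, p = 10, 0.5
sizes = [max_stable_set(n, sample_gnp(n, p, rng)) for _ in range(20)]
# For G(n, 1/2) the stability number concentrates around 2*log2(n).
print(min(sizes), max(sizes))
```

The question studied above is how many LP constraints are needed so that such randomly drawn instances are solved exactly with high probability.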