Integrality gaps of semidefinite programs for Vertex Cover and relations to $\ell_1$ embeddability of negative type metrics
We study various SDP formulations for {\sc Vertex Cover} obtained by adding different
constraints to the standard formulation. We show that {\sc Vertex Cover} cannot
be approximated better than $2-o(1)$ even when we add the so-called pentagonal
inequality constraints to the standard SDP formulation, en route answering an
open question of Karakostas~\cite{Karakostas}, and thus almost matching the best
known upper bound, due to Karakostas, of $2-\Omega(1/\sqrt{\log n})$. We further show the surprising
fact that by strengthening the SDP with the (intractable) requirement that the
metric interpretation of the solution embeds into $\ell_1$ with no distortion, we get an exact
relaxation (integrality gap is 1); on the other hand, if the solution is only
arbitrarily close to being $\ell_1$ embeddable, the integrality gap may be as
big as $2-o(1)$. Finally, inspired by the above findings, we use ideas from the
integrality gap construction of Charikar~\cite{Char02} to provide a family of
simple examples of negative type metrics that cannot be embedded into $\ell_1$
with distortion better than $8/7-\eps$. To this end we prove a new
isoperimetric inequality for the hypercube.
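For reference, the standard SDP relaxation of Vertex Cover referred to above can be written as follows. This is a sketch of the well-known Kleinberg–Goemans-style formulation from the literature, not necessarily the exact notation of the paper:

```latex
\[
\begin{aligned}
\min \quad & \sum_{i \in V} \frac{1 + v_0 \cdot v_i}{2} \\
\text{s.t.} \quad & (v_0 - v_i) \cdot (v_0 - v_j) = 0 && \forall\, (i,j) \in E,\\
& \|v_i\| = 1 && \forall\, i \in \{0\} \cup V,\\
& (v_i - v_k) \cdot (v_j - v_k) \ge 0 && \forall\, i,j,k \quad \text{(triangle inequalities)}.
\end{aligned}
\]
```

Writing $d(i,j) := \|v_i - v_j\|^2$ for the negative type metric induced by a feasible solution, the pentagonal inequalities are the additional valid constraints requiring, for every five vectors split into sets $S$ and $T$ with $|S| = 3$ and $|T| = 2$,

```latex
\[
\sum_{i \in S,\; j \in T} d(i,j) \;\ge\; \sum_{\{i,j\} \subseteq S} d(i,j) \;+\; \sum_{\{i,j\} \subseteq T} d(i,j).
\]
```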
Majority is Stablest: Discrete and SoS
The Majority is Stablest Theorem has numerous applications in hardness of
approximation and social choice theory. We give a new proof of the Majority is
Stablest Theorem by induction on the dimension of the discrete cube. Unlike the
previous proof, it uses neither the "invariance principle" nor Borell's result
in Gaussian space. The new proof is general enough to include all previous
variants of Majority is Stablest, such as "it ain't over until it's over" and
"Majority is most predictable". Moreover, the new proof allows us to derive a
proof of Majority is Stablest in a constant level of the Sum of Squares
hierarchy. This implies in particular that the Khot-Vishnoi instance of Max-Cut does
not provide a gap instance for the Lasserre hierarchy.
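For context, a standard statement of the theorem (in the usual notation of Mossel, O'Donnell, and Oleszkiewicz; the notation is supplied here, not taken from the abstract): define the noise stability of $f$ at correlation $\rho$ by

```latex
\[
\mathrm{Stab}_\rho(f) \;=\; \mathbb{E}_{(x,y)\ \rho\text{-correlated}}\bigl[f(x)\,f(y)\bigr].
\]
```

Then for every $\rho \in (0,1)$ and $\varepsilon > 0$ there is $\tau > 0$ such that every $f : \{-1,1\}^n \to [-1,1]$ with $\mathbb{E}[f] = 0$ and all influences $\mathrm{Inf}_i(f) \le \tau$ satisfies

```latex
\[
\mathrm{Stab}_\rho(f) \;\le\; \tfrac{2}{\pi}\arcsin\rho \;+\; \varepsilon,
\]
```

where $\tfrac{2}{\pi}\arcsin\rho$ is the noise stability attained asymptotically by the majority function (Sheppard's formula).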
LIPIcs, Volume 258, SoCG 2023, Complete Volume
LIPIcs, Volume 274, ESA 2023, Complete Volume
Heuristics for Sparsest Cut Approximations in Network Flow Applications
The Maximum Concurrent Flow Problem (MCFP) is a polynomially bounded problem that has been used over the years in a variety of applications. Sometimes it is used to attempt to find the Sparsest Cut, an NP-hard problem, and other times to find communities in Social Network Analysis (SNA) via its hierarchical formulation, the HMCFP. Though it is polynomially bounded, the MCFP quickly grows in space utilization, rendering it useful only on small problems. When the problem was first defined, only graphs of a few hundred nodes could be solved, and a few decades later, graphs of one to two thousand nodes can still be too much for modern commodity hardware to handle.
This dissertation covers three heuristic approaches to the MCFP that run significantly faster in practice than the LP formulation while using far less memory. The first two approaches are based on the Maximum Adjacency Search (MAS) and apply to both the MCFP and the HMCFP used for community detection. We compare the three approaches to the LP's performance in terms of accuracy, runtime, and memory utilization on several classes of synthetic graphs representing potential real-world applications. We find that the heuristics are often correct, and run using orders of magnitude less memory and time.
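The abstract does not describe the heuristics themselves, but the Maximum Adjacency Search primitive they build on is standard (it is the vertex ordering used in the Stoer–Wagner minimum cut algorithm). A minimal sketch, with a hypothetical function name and edge-list input format chosen for illustration:

```python
from collections import defaultdict

def maximum_adjacency_search(edges, start):
    """Order vertices by Maximum Adjacency Search (MAS): starting from
    `start`, repeatedly add the vertex most strongly connected (by total
    edge weight) to the set of already-selected vertices.

    `edges` is a list of (u, v, weight) triples for an undirected graph.
    """
    # Build a weighted adjacency map, summing weights of parallel edges.
    adj = defaultdict(dict)
    vertices = set()
    for u, v, w in edges:
        adj[u][v] = adj[u].get(v, 0) + w
        adj[v][u] = adj[v].get(u, 0) + w
        vertices.update((u, v))

    order = [start]
    selected = {start}
    # conn[v] = total weight of edges from v into the selected set.
    conn = {v: adj[start].get(v, 0) for v in vertices - selected}
    while conn:
        nxt = max(conn, key=conn.get)   # most tightly connected vertex
        order.append(nxt)
        selected.add(nxt)
        del conn[nxt]
        # Selecting `nxt` increases the connectivity of its neighbors.
        for v, w in adj[nxt].items():
            if v in conn:
                conn[v] += w
    return order
```

On a small example, `maximum_adjacency_search([("a","b",3), ("a","c",1), ("b","c",1), ("c","d",2)], "a")` returns `["a", "b", "c", "d"]`: after `a`, vertex `b` is most tightly connected (weight 3), then `c` (weight 1 + 1), then `d`. The dissertation's actual heuristics presumably add MCFP-specific machinery on top of this ordering.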