Tractable Optimization Problems through Hypergraph-Based Structural Restrictions
Several variants of the Constraint Satisfaction Problem have been proposed
and investigated in the literature for modelling those scenarios where
solutions are associated with some given costs. Within these frameworks,
computing an optimal solution is an NP-hard problem in general; yet, when
restricted to classes of instances whose constraint interactions can be
modelled via (nearly-)acyclic graphs, this problem is known to be solvable in
polynomial time. In this paper, larger classes of tractable instances are
singled out, by discussing solution approaches based on exploiting hypergraph
acyclicity and, more generally, structural decomposition methods, such as
(hyper)tree decompositions.
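Hypergraph (alpha-)acyclicity, the key structural restriction above, can be tested with the classical GYO reduction: repeatedly delete vertices that occur in exactly one hyperedge and hyperedges contained in another hyperedge; the hypergraph is alpha-acyclic iff this empties it. A minimal sketch (function name and encoding are my own):

```python
from collections import Counter

def gyo_acyclic(hyperedges):
    """GYO reduction: True iff the hypergraph (a list of vertex sets)
    is alpha-acyclic."""
    edges = [set(e) for e in hyperedges]
    changed = True
    while changed:
        changed = False
        # Rule 1: delete vertices occurring in exactly one hyperedge.
        counts = Counter(v for e in edges for v in e)
        for e in edges:
            lone = {v for v in e if counts[v] == 1}
            if lone:
                e -= lone
                changed = True
        # Rule 2: drop empty/duplicate hyperedges and hyperedges
        # properly contained in another hyperedge.
        unique = [set(f) for f in {frozenset(e) for e in edges} if f]
        kept = [e for e in unique if not any(e < f for f in unique)]
        if len(kept) != len(edges):
            changed = True
        edges = kept
    return len(edges) == 0
```

For example, the two overlapping hyperedges {1,2,3} and {2,3,4} reduce to nothing (acyclic), while the three pairwise overlaps of a triangle do not.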
On Satisfiability Problems with a Linear Structure
It was recently shown \cite{STV} that satisfiability is polynomially solvable
when the incidence graph is an interval bipartite graph (an interval graph
turned into a bipartite graph by omitting all edges within each partite set).
Here we relax this condition in several directions: First, we show that it
holds for k-interval bigraphs, bipartite graphs which can be converted to
interval bipartite graphs by adding at most k edges to each node of one side;
the same result holds for the counting and the weighted maximization version of
satisfiability. Second, given two linear orders, one for the variables and one
for the clauses, we show how to find, in polynomial time, the smallest k such
that there is a k-interval bigraph compatible with these two orders. On the
negative side we prove that, barring complexity collapses, no such extensions
are possible for CSPs more general than satisfiability. We also show
NP-hardness of recognizing 1-interval bigraphs
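The incidence graph this abstract works over can be built directly from a CNF formula; a minimal sketch using a DIMACS-style clause encoding (signed integers; the function name is my own):

```python
def incidence_graph(clauses):
    """Bipartite incidence graph of a CNF formula: one side holds the
    variables, the other the clause indices, with an edge whenever a
    variable occurs (positively or negatively) in a clause.
    Returns the edge set {(variable, clause_index)}."""
    return {(abs(lit), i) for i, clause in enumerate(clauses) for lit in clause}

# (x1 or not x2) and (x2 or x3)
edges = incidence_graph([[1, -2], [2, 3]])
```

The structural results above then ask whether this bipartite graph is (close to) an interval bigraph.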
On Percolation and NP-Hardness
We consider the robustness of computational hardness of problems whose input
is obtained by applying independent random deletions to worst-case instances.
For some classical NP-hard problems on graphs, such as Coloring,
Vertex-Cover, and Hamiltonicity, we examine the complexity of these problems
when edges (or vertices) of an arbitrary graph are deleted independently with
probability p. We prove that for n-vertex graphs, these problems remain as
hard as in the worst case for a wide range of deletion probabilities, unless
NP ⊆ BPP.
We also prove hardness results for Constraint Satisfaction Problems, where
random deletions are applied to clauses or variables, as well as the Subset-Sum
problem, where items of a given instance are deleted at random
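The random-deletion model itself is simple to state; a few lines illustrating the input distribution (not the paper's hardness reduction; names are my own):

```python
import random

def percolate(edges, p, rng=random.Random(0)):
    """Delete each edge independently with probability p,
    i.e. keep it with probability 1 - p."""
    return [e for e in edges if rng.random() >= p]
```

The hardness question is then whether a worst-case instance, pushed through this random filter, remains hard on average.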
Approximate MAP Estimation for Pairwise Potentials via Baker's Technique
The theoretical models providing mathematical abstractions for several
significant optimization problems in machine learning, combinatorial
optimization, computer vision and statistical physics have intrinsic
similarities. We propose a unified framework to model these computation tasks
where the structures of these optimization problems are encoded by functions
attached to the vertices and edges of a graph. We show that computing MAX 2-CSP
admits a polynomial-time approximation scheme (PTAS) on planar graphs, graphs
with bounded local treewidth, H-minor-free graphs, geometric graphs with
bounded density, and graphs embeddable with a bounded number of crossings per
edge. This implies that computing MAX-CUT, MAX-DICUT and MAX k-CUT admits a PTAS on
all these classes of graphs. Our method also gives the first PTAS for computing
the ground state of the ferromagnetic Edwards-Anderson model without an external
magnetic field on d-dimensional lattice graphs. These results are widely
applicable in vision, graphics, and machine learning.
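A minimal sketch of the unified objective described above (all names are illustrative, not from the paper): potentials attached to vertices and edges score a labeling, and MAP estimation maximizes that score.

```python
def pairwise_score(x, unary, pairwise):
    """x: vertex -> label; unary: vertex -> {label: value};
    pairwise: (u, v) -> f(label_u, label_v). Returns the total score."""
    score = sum(unary[v][x[v]] for v in x)
    score += sum(f(x[u], x[v]) for (u, v), f in pairwise.items())
    return score

# MAX-CUT as the special case with no unary terms and an edge potential
# of 1 exactly when the endpoints land on different sides.
cut_value = pairwise_score(
    {1: 0, 2: 1, 3: 0},
    {1: {0: 0, 1: 0}, 2: {0: 0, 1: 0}, 3: {0: 0, 1: 0}},
    {(1, 2): lambda a, b: int(a != b), (2, 3): lambda a, b: int(a != b)},
)
```

Encoding MAX-CUT, MAX-DICUT, and spin-glass ground states this way is exactly what lets one PTAS cover all of them.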
Rounding Lasserre SDPs using column selection and spectrum-based approximation schemes for graph partitioning and Quadratic IPs
We present an approximation scheme for minimizing certain Quadratic Integer
Programming problems with positive semidefinite objective functions and global
linear constraints. This framework includes well known graph problems such as
Minimum graph bisection, Edge expansion, Sparsest Cut, and Small Set expansion,
as well as the Unique Games problem. These problems are notorious for the
existence of huge gaps between the known algorithmic results and NP-hardness
results. Our algorithm is based on rounding semidefinite programs from the
Lasserre hierarchy, and the analysis uses bounds for low-rank approximations of
a matrix in Frobenius norm using columns of the matrix.
For all the above graph problems, we give an algorithm running in time
n^{O(r/\epsilon)} with approximation ratio (1+\epsilon)/\min\{1,\lambda_r\},
where \lambda_r is the r'th smallest eigenvalue of the normalized graph
Laplacian \mathcal{L}. In the
case of graph bisection and small set expansion, the number of vertices in the
cut is within lower-order terms of the stipulated bound. Our results imply a
(1+O(\epsilon)) factor approximation in time n^{O(r^*/\epsilon)}, where r^* is
the number of eigenvalues of \mathcal{L} smaller than 1-\epsilon (for variants
of sparsest cut, a weaker spectral condition also suffices, and as r^* is
usually small on interesting instances of these problems, this requirement is
typically weaker). For Unique Games, we give a factor (1+O(\epsilon))
approximation for minimizing the number of unsatisfied constraints in
n^{O(r/\epsilon)} time, improving upon an earlier bound for solving Unique
Games on expanders. We also give an algorithm for independent sets in graphs
that performs well when the Laplacian does not have too many large eigenvalues.
Comment: This manuscript is a merged and definitive version of (Guruswami,
Sinop: FOCS 2011) and (Guruswami, Sinop: SODA 2013), with a significantly
revised presentation. arXiv admin note: substantial text overlap with
arXiv:1104.474
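The spectral object behind these guarantees is the normalized graph Laplacian L = I - D^{-1/2} A D^{-1/2}, whose small eigenvalues control the approximation ratio. A pure-Python sketch (assumes no isolated vertices; names are my own):

```python
import math

def normalized_laplacian(n, edges):
    """Normalized Laplacian L = I - D^{-1/2} A D^{-1/2} of a simple
    undirected graph on vertices 0..n-1 with no isolated vertices."""
    A = [[0.0] * n for _ in range(n)]
    for u, v in edges:
        A[u][v] = A[v][u] = 1.0
    deg = [sum(row) for row in A]  # degree of each vertex
    return [[(1.0 if i == j else 0.0)
             - (A[i][j] / math.sqrt(deg[i] * deg[j]) if A[i][j] else 0.0)
             for j in range(n)] for i in range(n)]

# A single edge: L = [[1, -1], [-1, 1]], with eigenvalues 0 and 2.
L = normalized_laplacian(2, [(0, 1)])
```

The r'th smallest eigenvalue of this matrix is the quantity the rounding analysis depends on.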
Solving constrained quadratic binary problems via quantum adiabatic evolution
Quantum adiabatic evolution is perceived as useful for binary quadratic
programming problems that are a priori unconstrained. For constrained problems,
it is a common practice to relax linear equality constraints as penalty terms
in the objective function. However, no method has yet been proposed for
efficiently dealing with inequality constraints using the quantum adiabatic
approach. In this paper, we give a method for solving the Lagrangian dual of a
binary quadratic programming (BQP) problem in the presence of inequality
constraints and employ this procedure within a branch-and-bound framework for
constrained BQP (CBQP) problems.
Comment: 20 pages, 2 figures
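The penalty relaxation this abstract mentions can be sketched concretely (function names and the toy instance are my own, not the paper's branch-and-bound procedure): a linear equality constraint a·x = b on binary variables is folded into the objective as M·(a·x - b)^2 for a large penalty weight M.

```python
def penalized_objective(Q, a, b, M):
    """Return f(x) = x^T Q x + M*(a·x - b)^2 as a callable on 0/1 vectors,
    so the equality constraint a·x = b becomes a quadratic penalty."""
    def f(x):
        n = len(x)
        quad = sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
        residual = sum(ai * xi for ai, xi in zip(a, x)) - b
        return quad + M * residual * residual
    return f

# Toy BQP: minimize x0 + x1 subject to x0 + x1 = 1, penalty weight 10.
f = penalized_objective([[1, 0], [0, 1]], [1, 1], 1, 10)
```

Feasible assignments keep the original objective value, while infeasible ones pay the M-weighted penalty; no such simple trick exists for inequality constraints, which is the gap the paper addresses.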
Minimizing Movement: Fixed-Parameter Tractability
We study an extensive class of movement minimization problems which arise
from many practical scenarios but so far have little theoretical study. In
general, these problems involve planning the coordinated motion of a collection
of agents (representing robots, people, map labels, network messages, etc.) to
achieve a global property in the network while minimizing the maximum or
average movement (expended energy). The only previous theoretical results about
this class of problems are about approximation, and mainly negative: many
movement problems of interest have polynomial inapproximability. Given that the
number of mobile agents is typically much smaller than the complexity of the
environment, we turn to fixed-parameter tractability. We characterize the
boundary between tractable and intractable movement problems in a very general
set up: it turns out the complexity of the problem fundamentally depends on the
treewidth of the minimal configurations. Thus the complexity of a particular
problem can be determined by answering a purely combinatorial question. Using
our general tools, we determine the complexity of several concrete problems and
fortunately show that many movement problems of interest can be solved
efficiently.
Comment: A preliminary version of the paper appeared in ESA 200
Beyond the Cabello-Severini-Winter framework: Making sense of contextuality without sharpness of measurements
We develop a hypergraph-theoretic framework for Spekkens contextuality
applied to Kochen-Specker (KS) type scenarios that goes beyond the
Cabello-Severini-Winter (CSW) framework. To do this, we add new
hypergraph-theoretic ingredients to the CSW framework. We then obtain
noise-robust noncontextuality inequalities in this generalized framework by
applying the assumption of (Spekkens) noncontextuality to both preparations and
measurements. The resulting framework goes beyond the CSW framework, both
conceptually and technically. On the conceptual level: 1) we relax the
assumption of outcome determinism inherent to the Kochen-Specker theorem but
retain measurement noncontextuality, besides introducing preparation
noncontextuality, 2) we do not require the exclusivity principle as a
fundamental constraint on measurement events, and 3) as a result, we do not
need to presume that measurement events of interest are "sharp", where the
notion of sharpness implies the exclusivity principle. On the technical level:
1) we introduce a source events hypergraph and define a new operational
quantity, Corr, appearing in our inequalities, 2) we define a new
hypergraph invariant -- the weighted max-predictability -- that is necessary
for our analysis and appears in our inequalities, and 3) our noise-robust
noncontextuality inequalities quantify tradeoff relations between three
operational quantities -- Corr, R, and p_0 -- only one of which
(namely, R) corresponds to the Bell-Kochen-Specker functionals appearing in
the CSW framework; when Corr = 1, the inequalities formally reduce to CSW-type
bounds on R. Along the way, we also consider in detail the scope of our
framework vis-à-vis the CSW framework, particularly the role of Specker's
principle in the CSW framework and its absence in ours.
Comment: 44 pages, 9 figures, substantial revision in response to reviewers,
new expository material on coarse-graining added in Section 2, an old claim
of saturation removed from Section 6.2 (now an open question), and two new
Appendices (A and B) added, definitive version of the paper accepted in
Quantum
On Quadratic Programming with a Ratio Objective
Quadratic Programming (QP) is the well-studied problem of maximizing over
{-1,1} values the quadratic form \sum_{i \ne j} a_{ij} x_i x_j. QP captures
many known combinatorial optimization problems, and assuming the unique games
conjecture, semidefinite programming techniques give optimal approximation
algorithms. We extend this body of work by initiating the study of Quadratic
Programming problems where the variables take values in the domain {-1,0,1}.
The specific problems we study are
QP-Ratio : \max_{\{-1,0,1\}^n} \frac{\sum_{i \not = j} a_{ij} x_i x_j}{\sum
x_i^2}, and Normalized QP-Ratio : \max_{\{-1,0,1\}^n} \frac{\sum_{i \not = j}
a_{ij} x_i x_j}{\sum d_i x_i^2}, where d_i = \sum_j |a_{ij}|.
We consider an SDP relaxation obtained by adding constraints to the natural
eigenvalue (or SDP) relaxation for this problem. Using this, we obtain an
\tilde{O}(n^{1/3})-approximation algorithm for QP-Ratio. We also obtain an
\tilde{O}(n^{1/4}) approximation for bipartite graphs, and better algorithms
for special cases. As with other problems with ratio objectives (e.g. uniform
sparsest cut), it seems difficult to obtain inapproximability results based on
P!=NP. We give two results that indicate that QP-Ratio is hard to approximate
to within any constant factor. We also give a natural distribution on instances
of QP-Ratio for which an n^\epsilon approximation (for \epsilon roughly 1/10)
seems out of reach of current techniques.
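The QP-Ratio objective defined above can be evaluated directly for a fixed assignment x in {-1,0,1}^n; a brute-force sketch (an illustration of the objective, not the SDP algorithm):

```python
def qp_ratio(a, x):
    """QP-Ratio objective: sum_{i != j} a_ij x_i x_j divided by sum_i x_i^2,
    for x with entries in {-1, 0, 1} (0.0 when x is all zeros)."""
    n = len(x)
    num = sum(a[i][j] * x[i] * x[j]
              for i in range(n) for j in range(n) if i != j)
    den = sum(xi * xi for xi in x)
    return num / den if den else 0.0

# One off-diagonal pair a_01 = a_10 = 1; the assignment (1, 1, 0) zeroes out
# the third variable and scores 2/2 = 1.
val = qp_ratio([[0, 1, 0], [1, 0, 0], [0, 0, 0]], [1, 1, 0])
```

The extra 0 value is what distinguishes these problems from classical {-1,1} Quadratic Programming: the denominator counts only the non-zeroed variables.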
A sufficiently fast algorithm for finding close to optimal clique trees
We offer an algorithm that finds a clique tree such that the size of the largest clique is at most (2α+1)k, where k is the size of the largest clique in a clique tree in which this size is minimized and α is the approximation ratio of an α-approximation algorithm for the 3-way vertex cut problem. When α = 4/3, our algorithm's complexity is O(2^{4.67k} · n · poly(n)) and it errs by a factor of 3.67, where poly(n) is the running time of linear programming. This algorithm is extended to find clique trees in which the state space of the largest clique is bounded. When k = O(log n), our algorithm yields a polynomial inference algorithm for Bayesian networks.
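The constants quoted above can be checked directly: with α = 4/3 the guarantee (2α+1)k becomes (11/3)k, i.e. the stated factor of about 3.67.

```python
from fractions import Fraction

# Exact arithmetic for the approximation factor 2*alpha + 1 at alpha = 4/3.
alpha = Fraction(4, 3)
factor = 2 * alpha + 1  # 11/3 ~ 3.67
```

This is exactly the "errs by a factor of 3.67" claim in the abstract.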