492 research outputs found

    Tractable Optimization Problems through Hypergraph-Based Structural Restrictions

    Full text link
    Several variants of the Constraint Satisfaction Problem have been proposed and investigated in the literature for modelling those scenarios where solutions are associated with some given costs. Within these frameworks, computing an optimal solution is an NP-hard problem in general; yet, when restricted to classes of instances whose constraint interactions can be modelled via (nearly-)acyclic graphs, this problem is known to be solvable in polynomial time. In this paper, larger classes of tractable instances are singled out by discussing solution approaches based on exploiting hypergraph acyclicity and, more generally, structural decomposition methods, such as (hyper)tree decompositions.
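Hypergraph (alpha-)acyclicity, the simplest of the structural restrictions mentioned above, can be tested with the classical GYO reduction. The following sketch is a standard illustration, not code from the paper; the function name and set-of-frozensets representation are our own choices.

```python
from collections import Counter

def gyo_acyclic(hyperedges):
    """GYO reduction: a hypergraph is alpha-acyclic iff alternately removing
    vertices that occur in exactly one edge, and edges contained in another
    edge, eventually deletes every edge."""
    edges = [frozenset(e) for e in hyperedges]
    changed = True
    while changed and edges:
        changed = False
        # drop vertices that appear in exactly one hyperedge
        counts = Counter(v for e in edges for v in e)
        stripped = [frozenset(v for v in e if counts[v] > 1) for e in edges]
        if stripped != edges:
            edges, changed = stripped, True
        # drop empty edges and edges contained in some other edge ("ears")
        pruned = [e for i, e in enumerate(edges)
                  if e and not any(i != j and e <= f for j, f in enumerate(edges))]
        if len(pruned) != len(edges):
            edges, changed = pruned, True
    return not edges
```

On the acyclic hypergraph with edges {1,2,3}, {3,4}, {4,5} the reduction empties everything, while a triangle of binary edges survives every round.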

    On Satisfiability Problems with a Linear Structure

    Full text link
    It was recently shown [STV] that satisfiability is polynomially solvable when the incidence graph is an interval bipartite graph (an interval graph turned into a bipartite graph by omitting all edges within each partite set). Here we relax this condition in several directions: First, we show that it holds for $k$-interval bigraphs, bipartite graphs which can be converted to interval bipartite graphs by adding to each node of one side at most $k$ edges; the same result holds for the counting and the weighted maximization version of satisfiability. Second, given two linear orders, one for the variables and one for the clauses, we show how to find, in polynomial time, the smallest $k$ such that there is a $k$-interval bigraph compatible with these two orders. On the negative side we prove that, barring complexity collapses, no such extensions are possible for CSPs more general than satisfiability. We also show NP-hardness of recognizing 1-interval bigraphs.
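The incidence graph the result is stated for is straightforward to build; a minimal sketch, assuming DIMACS-style signed-integer literals (our convention, not the paper's notation):

```python
def incidence_graph(clauses):
    """Bipartite incidence graph of a CNF formula: variables on one side,
    clause indices on the other, with an edge whenever the variable occurs
    (positively or negatively) in the clause."""
    return {(abs(lit), ci) for ci, clause in enumerate(clauses) for lit in clause}
```

For the formula (x1 OR NOT x2) AND (x2 OR x3) this yields edges from variables 1, 2 to clause 0 and from variables 2, 3 to clause 1.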

    On Percolation and NP-Hardness

    Full text link
    We consider the robustness of computational hardness of problems whose input is obtained by applying independent random deletions to worst-case instances. For some classical NP-hard problems on graphs, such as Coloring, Vertex-Cover, and Hamiltonicity, we examine the complexity of these problems when edges (or vertices) of an arbitrary graph are deleted independently with probability $1-p > 0$. We prove that for $n$-vertex graphs, these problems remain as hard as in the worst case, as long as $p > \frac{1}{n^{1-\epsilon}}$ for arbitrary $\epsilon \in (0,1)$, unless $NP \subseteq BPP$. We also prove hardness results for Constraint Satisfaction Problems, where random deletions are applied to clauses or variables, as well as the Subset-Sum problem, where items of a given instance are deleted at random
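The random-deletion model is easy to state in code; a sketch, where the function name and edge-list representation are ours:

```python
import random

def percolate(edges, p, seed=None):
    """Keep each edge independently with probability p, i.e. delete it with
    probability 1-p, as in the percolation model described above."""
    rng = random.Random(seed)
    return [e for e in edges if rng.random() < p]
```

With p = 1 the instance is untouched and with p = 0 every edge is deleted; the hardness result concerns the regime in between.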

    Approximate MAP Estimation for Pairwise Potentials via Baker's Technique

    Full text link
    The theoretical models providing mathematical abstractions for several significant optimization problems in machine learning, combinatorial optimization, computer vision and statistical physics have intrinsic similarities. We propose a unified framework to model these computation tasks where the structures of these optimization problems are encoded by functions attached to the vertices and edges of a graph. We show that computing MAX 2-CSP admits a polynomial-time approximation scheme (PTAS) on planar graphs, graphs with bounded local treewidth, $H$-minor-free graphs, geometric graphs with bounded density, and graphs embeddable with a bounded number of crossings per edge. This implies that computing MAX-CUT, MAX-DICUT and MAX $k$-CUT admits PTASs on all these classes of graphs. Our method also gives the first PTAS for computing the ground state of the ferromagnetic Edwards-Anderson model without an external magnetic field on $d$-dimensional lattice graphs. These results are widely applicable in vision, graphics and machine learning
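Baker's technique, which underlies PTASs of this kind, starts by splitting the graph into BFS layers; deleting every $k$-th residue class of layers leaves slabs of bounded treewidth that can be solved exactly. A sketch of just the layering step, with names and representation of our own choosing:

```python
from collections import deque

def baker_layers(adj, root, k):
    """BFS from root, then group vertices by depth mod k. Removing one
    residue class leaves slabs of k-1 consecutive layers, the
    decomposition step behind Baker-style PTASs (sketch only)."""
    depth = {root: 0}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in depth:
                depth[v] = depth[u] + 1
                queue.append(v)
    classes = [[] for _ in range(k)]
    for v, d in depth.items():
        classes[d % k].append(v)
    return classes
```

On a path 0-1-2-3 with k = 2, the even-depth vertices {0, 2} and odd-depth vertices {1, 3} form the two residue classes.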

    Rounding Lasserre SDPs using column selection and spectrum-based approximation schemes for graph partitioning and Quadratic IPs

    Full text link
    We present an approximation scheme for minimizing certain Quadratic Integer Programming problems with positive semidefinite objective functions and global linear constraints. This framework includes well-known graph problems such as Minimum graph bisection, Edge expansion, Sparsest Cut, and Small Set expansion, as well as the Unique Games problem. These problems are notorious for the existence of huge gaps between the known algorithmic results and NP-hardness results. Our algorithm is based on rounding semidefinite programs from the Lasserre hierarchy, and the analysis uses bounds for low-rank approximations of a matrix in Frobenius norm using columns of the matrix. For all the above graph problems, we give an algorithm running in time $n^{O(r/\epsilon^2)}$ with approximation ratio $\frac{1+\epsilon}{\min\{1,\lambda_r\}}$, where $\lambda_r$ is the $r$-th smallest eigenvalue of the normalized graph Laplacian $\mathcal{L}$. In the case of graph bisection and small set expansion, the number of vertices in the cut is within lower-order terms of the stipulated bound. Our results imply a $(1+O(\epsilon))$-factor approximation in time $n^{O(r^\ast/\epsilon^2)}$, where $r^\ast$ is the number of eigenvalues of $\mathcal{L}$ smaller than $1-\epsilon$ (for variants of sparsest cut, $\lambda_{r^\ast} \ge \mathrm{OPT}/\epsilon$ also suffices, and as $\mathrm{OPT}$ is usually $o(1)$ on interesting instances of these problems, this requirement on $r^\ast$ is typically weaker). For Unique Games, we give a factor $(1+\frac{2+\epsilon}{\lambda_r})$ approximation for minimizing the number of unsatisfied constraints in $n^{O(r/\epsilon)}$ time, improving upon an earlier bound for solving Unique Games on expanders.
    We also give an algorithm for independent sets in graphs that performs well when the Laplacian does not have too many eigenvalues bigger than $1+o(1)$. Comment: This manuscript is a merged and definitive version of (Guruswami, Sinop: FOCS 2011) and (Guruswami, Sinop: SODA 2013), with a significantly revised presentation. arXiv admin note: substantial text overlap with arXiv:1104.474

    Solving constrained quadratic binary problems via quantum adiabatic evolution

    Full text link
    Quantum adiabatic evolution is perceived as useful for binary quadratic programming problems that are a priori unconstrained. For constrained problems, it is common practice to relax linear equality constraints into penalty terms in the objective function. However, no method has yet been proposed for efficiently dealing with inequality constraints using the quantum adiabatic approach. In this paper, we give a method for solving the Lagrangian dual of a binary quadratic programming (BQP) problem in the presence of inequality constraints and employ this procedure within a branch-and-bound framework for constrained BQP (CBQP) problems. Comment: 20 pages, 2 figures
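The common practice the abstract refers to, folding a linear equality constraint Ax = b into the objective as the quadratic penalty mu * ||Ax - b||^2, can be sketched as follows. The brute-force minimizer stands in for the adiabatic hardware, and all names are ours:

```python
import itertools

def qubo_with_penalty(Q, A, b, mu):
    """Fold the equality constraints A x = b into a binary quadratic
    objective as the penalty mu * ||A x - b||^2, then minimize the
    penalized objective by brute force over x in {0,1}^n (toy solver
    standing in for an adiabatic annealer)."""
    n = len(Q)

    def obj(x):
        quad = sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
        pen = sum((sum(A[k][i] * x[i] for i in range(n)) - b[k]) ** 2
                  for k in range(len(A)))
        return quad + mu * pen

    return min(itertools.product((0, 1), repeat=n), key=obj)
```

For example, minimizing x1 + x2 subject to x1 + x2 = 1 with a sufficiently large mu returns an assignment with exactly one variable set.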

    Minimizing Movement: Fixed-Parameter Tractability

    Full text link
    We study an extensive class of movement minimization problems which arise from many practical scenarios but have so far received little theoretical study. In general, these problems involve planning the coordinated motion of a collection of agents (representing robots, people, map labels, network messages, etc.) to achieve a global property in the network while minimizing the maximum or average movement (expended energy). The only previous theoretical results about this class of problems concern approximation, and are mainly negative: many movement problems of interest are inapproximable to within polynomial factors. Given that the number of mobile agents is typically much smaller than the complexity of the environment, we turn to fixed-parameter tractability. We characterize the boundary between tractable and intractable movement problems in a very general setup: it turns out the complexity of the problem fundamentally depends on the treewidth of the minimal configurations. Thus the complexity of a particular problem can be determined by answering a purely combinatorial question. Using our general tools, we determine the complexity of several concrete problems and fortunately show that many movement problems of interest can be solved efficiently. Comment: A preliminary version of the paper appeared in ESA 200

    Beyond the Cabello-Severini-Winter framework: Making sense of contextuality without sharpness of measurements

    Full text link
    We develop a hypergraph-theoretic framework for Spekkens contextuality applied to Kochen-Specker (KS) type scenarios that goes beyond the Cabello-Severini-Winter (CSW) framework. To do this, we add new hypergraph-theoretic ingredients to the CSW framework. We then obtain noise-robust noncontextuality inequalities in this generalized framework by applying the assumption of (Spekkens) noncontextuality to both preparations and measurements. The resulting framework goes beyond the CSW framework, both conceptually and technically. On the conceptual level: 1) we relax the assumption of outcome determinism inherent to the Kochen-Specker theorem but retain measurement noncontextuality, besides introducing preparation noncontextuality, 2) we do not require the exclusivity principle as a fundamental constraint on measurement events, and 3) as a result, we do not need to presume that measurement events of interest are "sharp", where the notion of sharpness implies the exclusivity principle. On the technical level: 1) we introduce a source events hypergraph and define a new operational quantity $\mathrm{Corr}$ appearing in our inequalities, 2) we define a new hypergraph invariant -- the weighted max-predictability -- that is necessary for our analysis and appears in our inequalities, and 3) our noise-robust noncontextuality inequalities quantify tradeoff relations between three operational quantities -- $\mathrm{Corr}$, $R$, and $p_0$ -- only one of which (namely, $R$) corresponds to the Bell-Kochen-Specker functionals appearing in the CSW framework; when $\mathrm{Corr}=1$, the inequalities formally reduce to CSW type bounds on $R$.
    Along the way, we also consider in detail the scope of our framework vis-à-vis the CSW framework, particularly the role of Specker's principle in the CSW framework and its absence in ours. Comment: 44 pages, 9 figures, substantial revision in response to reviewers, new expository material on coarse-graining added in Section 2, an old claim of saturation removed from Section 6.2 (now an open question), and two new Appendices (A and B) added, definitive version of the paper accepted in Quantum

    On Quadratic Programming with a Ratio Objective

    Full text link
    Quadratic Programming (QP) is the well-studied problem of maximizing the quadratic form $\sum_{i \ne j} a_{ij} x_i x_j$ over $\{-1,1\}$ values. QP captures many known combinatorial optimization problems, and, assuming the Unique Games Conjecture, semidefinite programming techniques give optimal approximation algorithms. We extend this body of work by initiating the study of Quadratic Programming problems where the variables take values in the domain $\{-1,0,1\}$. The specific problems we study are QP-Ratio: $\max_{\{-1,0,1\}^n} \frac{\sum_{i \ne j} a_{ij} x_i x_j}{\sum_i x_i^2}$, and Normalized QP-Ratio: $\max_{\{-1,0,1\}^n} \frac{\sum_{i \ne j} a_{ij} x_i x_j}{\sum_i d_i x_i^2}$, where $d_i = \sum_j |a_{ij}|$. We consider an SDP relaxation obtained by adding constraints to the natural eigenvalue (or SDP) relaxation for this problem. Using this, we obtain an $\tilde{O}(n^{1/3})$ approximation algorithm for QP-Ratio. We also obtain an $\tilde{O}(n^{1/4})$ approximation for bipartite graphs, and better algorithms for special cases. As with other problems with ratio objectives (e.g. uniform sparsest cut), it seems difficult to obtain inapproximability results based on $P \neq NP$. We give two results that indicate that QP-Ratio is hard to approximate to within any constant factor. We also give a natural distribution on instances of QP-Ratio for which an $n^\epsilon$ approximation (for $\epsilon$ roughly 1/10) seems out of reach of current techniques
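For intuition, the QP-Ratio objective can be evaluated exhaustively on a tiny instance; this exponential-time sketch is for illustration only and bears no relation to the paper's SDP-based algorithms:

```python
import itertools

def qp_ratio(a):
    """Brute-force max over x in {-1,0,1}^n of
    (sum_{i != j} a[i][j] x_i x_j) / (sum_i x_i^2),
    skipping the all-zero vector where the ratio is undefined."""
    n = len(a)
    best = float("-inf")
    for x in itertools.product((-1, 0, 1), repeat=n):
        denom = sum(v * v for v in x)
        if denom == 0:
            continue
        num = sum(a[i][j] * x[i] * x[j]
                  for i in range(n) for j in range(n) if i != j)
        best = max(best, num / denom)
    return best
```

On the 2x2 instance with a_{12} = a_{21} = 1 the optimum is 1, attained by x = (1, 1).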

    A sufficiently fast algorithm for finding close to optimal clique trees

    Get PDF
    Abstract: We offer an algorithm that finds a clique tree such that the size of the largest clique is at most $(2\alpha+1)k$, where $k$ is the size of the largest clique in a clique tree in which this size is minimized and $\alpha$ is the approximation ratio of an $\alpha$-approximation algorithm for the 3-way vertex cut problem. When $\alpha = 4/3$, our algorithm's complexity is $O(2^{4.67k} n \cdot \mathrm{poly}(n))$ and it errs by a factor of 3.67, where $\mathrm{poly}(n)$ is the running time of linear programming. This algorithm is extended to find clique trees in which the state space of the largest clique is bounded. When $k = O(\log n)$, our algorithm yields a polynomial inference algorithm for Bayesian networks
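The quantity being approximated, the size of the largest clique (bag) in a clique tree, is easy to read off a candidate decomposition. A sketch with names of our own choosing; it checks only that every graph edge is covered by some bag, not the running-intersection property:

```python
def largest_bag(bags, graph_edges):
    """Return the size of the largest bag of a candidate clique tree,
    after checking that every graph edge lies inside some bag."""
    assert all(any({u, v} <= set(b) for b in bags) for u, v in graph_edges)
    return max(len(b) for b in bags)
```

For bags {1,2,3} and {3,4} covering a graph on edges (1,2), (2,3), (1,3), (3,4), the largest bag has size 3.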