
    Mapping constrained optimization problems to quantum annealing with application to fault diagnosis

    Current quantum annealing (QA) hardware suffers from practical limitations such as finite temperature, sparse connectivity, small qubit numbers, and control error. We propose new algorithms for mapping Boolean constraint satisfaction problems (CSPs) onto QA hardware that mitigate these limitations. In particular, we develop a new embedding algorithm for mapping a CSP onto a hardware Ising model with a fixed sparse set of interactions, and propose two new decomposition algorithms for solving problems too large to map directly into hardware. The mapping technique is locally structured: hardware-compatible Ising models are generated for each problem constraint, and variables appearing in different constraints are chained together using ferromagnetic couplings. In contrast, global embedding techniques generate a hardware-independent Ising model for all the constraints, and then use a minor-embedding algorithm to generate a hardware-compatible Ising model. We give an example of a class of CSPs for which the scaling performance of D-Wave's QA hardware using the local mapping technique is significantly better than global embedding. We validate the approach by applying D-Wave's hardware to circuit-based fault diagnosis. For circuits that embed directly, we find that the hardware is typically able to find all solutions from a min-fault diagnosis set of size N using 1000N samples, at an annealing rate 25 times faster than a leading SAT-based sampling method. Further, we apply decomposition algorithms to find min-cardinality faults for circuits up to 5 times larger than can be solved directly on current hardware.
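    The locally-structured mapping lends itself to a small illustration. The sketch below is a minimal toy version of the chaining idea, not the paper's actual construction: each constraint becomes a small quadratic penalty over its own copies of the variables (here a standard QUBO gadget for AND, equivalent to an Ising model up to a linear change of variables), and copies of a shared variable appearing in different constraints are tied together by a ferromagnetic chain penalty. All names and weights are illustrative.

```python
# Toy sketch of locally-structured mapping: per-constraint penalty models over
# variable copies, with ferromagnetic chains tying copies of shared variables.
from itertools import product

def and_penalty(x, y, z):
    """Quadratic penalty that is 0 iff z = x AND y (binary variables), and >= 1
    otherwise; a standard QUBO gadget for the AND constraint."""
    linear = {z: 3.0}
    quadratic = {(x, y): 1.0, (x, z): -2.0, (y, z): -2.0}
    return linear, quadratic

def chain_penalty(a, b, strength=2.0):
    """Ferromagnetic chain between two copies of one CSP variable:
    strength * (a - b)^2 = strength * (a + b - 2ab) for binary a, b."""
    return {a: strength, b: strength}, {(a, b): -2.0 * strength}

def energy(linear, quadratic, assignment):
    e = sum(c * assignment[v] for v, c in linear.items())
    e += sum(c * assignment[u] * assignment[v]
             for (u, v), c in quadratic.items())
    return e

# Sanity check: the gadget's ground states are exactly the consistent triples.
lin, quad = and_penalty("x@c1", "y@c1", "z@c1")
for bits in product((0, 1), repeat=3):
    a = dict(zip(("x@c1", "y@c1", "z@c1"), bits))
    assert (energy(lin, quad, a) == 0) == (a["z@c1"] == a["x@c1"] * a["y@c1"])
```

    In the low-energy states of the combined model, chained copies agree and every constraint gadget sits at zero penalty, which is exactly what the annealer is asked to find.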

    Lower Bounds on Query Complexity for Testing Bounded-Degree CSPs

    In this paper, we consider lower bounds on the query complexity of testing CSPs in the bounded-degree model. First, for any "symmetric" predicate $P: \{0,1\}^k \to \{0,1\}$ except EQU, where $k \geq 3$, we show that every (randomized) algorithm that distinguishes satisfiable instances of CSP(P) from instances $(|P^{-1}(0)|/2^k - \epsilon)$-far from satisfiability requires $\Omega(n^{1/2+\delta})$ queries, where $n$ is the number of variables and $\delta > 0$ is a constant that depends on $P$ and $\epsilon$. This breaks the natural $\Omega(n^{1/2})$ lower bound obtained from the birthday paradox. We also show that every one-sided error tester requires $\Omega(n)$ queries for such $P$. These results are hereditary in the sense that the same results hold for any predicate $Q$ such that $P^{-1}(1) \subseteq Q^{-1}(1)$. For EQU, we give a one-sided error tester whose query complexity is $\tilde{O}(n^{1/2})$. Also, for 2-XOR (or, equivalently, E2LIN2), we show an $\Omega(n^{1/2+\delta})$ lower bound for distinguishing instances that are $\epsilon$-close to satisfiability from instances that are $(1/2-\epsilon)$-far from it. Next, for general k-CSP over the binary domain, we show that every algorithm that distinguishes satisfiable instances from instances $(1-2k/2^k-\epsilon)$-far from satisfiability requires $\Omega(n)$ queries. The matching NP-hardness is not known, even assuming the Unique Games Conjecture or the $d$-to-$1$ Conjecture. As a corollary, for Maximum Independent Set on graphs with $n$ vertices and degree bound $d$, we show that every approximation algorithm within a factor of $d/\mathrm{poly}\log d$ and an additive error of $\epsilon n$ requires $\Omega(n)$ queries. Previously, only super-constant lower bounds were known.
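    For context, the $\Omega(n^{1/2})$ birthday-paradox baseline that the above result breaks can be sketched in one display. This is a reconstruction of the standard argument, not the paper's proof, and the $O(1/n)$ pair-collision estimate assumes a bounded-degree instance with $\Theta(n)$ constraints:

```latex
% A tester making q queries detects a constraint only if it queries at least
% two of that constraint's variables; for a fixed pair of queried variables,
% the chance they share a constraint is O(1/n), so by a union bound
\[
\Pr[\text{some constraint has two endpoints among the } q \text{ queries}]
  \;\le\; \binom{q}{2} \cdot O\!\left(\frac{1}{n}\right)
  \;=\; O\!\left(\frac{q^2}{n}\right),
\]
% which is o(1) when q = o(n^{1/2}); a tester that sees no constraint cannot
% distinguish satisfiable instances from far-from-satisfiable ones.
```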

    Subsampled Power Iteration: a Unified Algorithm for Block Models and Planted CSP's

    We present an algorithm for recovering planted solutions in two well-known models, the stochastic block model and planted constraint satisfaction problems, via a common generalization in terms of random bipartite graphs. Our algorithm matches up to a constant factor the best-known bounds for the number of edges (or constraints) needed for perfect recovery, and its running time is linear in the number of edges used. The time complexity is significantly better than both spectral and SDP-based approaches. The main contribution of the algorithm is in the case of unequal sizes in the bipartition (corresponding to odd uniformity in the CSP). Here our algorithm succeeds at a significantly lower density than the spectral approaches, surpassing a barrier based on the spectral norm of a random matrix. Other significant features of the algorithm and analysis are: (i) the critical use of power iteration with subsampling, which might be of independent interest, and whose analysis requires keeping track of multiple norms of an evolving solution; (ii) the algorithm can be implemented statistically, i.e., with very limited access to the input distribution; and (iii) the algorithm is extremely simple to implement and runs in linear time, and is thus practical even for very large instances.
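    The core step is easy to sketch. The code below is a hypothetical minimal version for a signed graph (e.g., a two-block model), under the reading that the edges are split into disjoint batches and each iteration multiplies by a fresh batch; the batch count and normalization are illustrative, and the paper's actual procedure and analysis are more refined.

```python
import numpy as np

def subsampled_power_iteration(edges, signs, n, rounds=20, seed=None):
    """edges: list of (u, v) index pairs; signs: +/-1 label per edge;
    n: number of vertices. Returns a +/-1 vector estimating the partition."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(edges))          # disjoint batches of edges
    batches = np.array_split(order, rounds)
    x = rng.choice([-1.0, 1.0], size=n)          # random starting vector
    for batch in batches:
        A = np.zeros((n, n))
        for idx in batch:                        # adjacency of this batch only
            u, v = edges[idx]
            A[u, v] += signs[idx]
            A[v, u] += signs[idx]
        x = A @ x                                # one step with a fresh batch
        x /= np.linalg.norm(x) + 1e-12           # keep the iterate bounded
    return np.sign(x)
```

    Using a fresh batch per step is what makes each iteration independent of the current iterate, which is the property the analysis exploits.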

    Subsampling Mathematical Relaxations and Average-case Complexity

    We initiate a study of when the value of mathematical relaxations such as linear and semidefinite programs for constraint satisfaction problems (CSPs) is approximately preserved when restricting the instance to a sub-instance induced by a small random subsample of the variables. Let $C$ be a family of CSPs such as 3SAT, Max-Cut, etc., and let $\Pi$ be a relaxation for $C$, in the sense that for every instance $P \in C$, $\Pi(P)$ is an upper bound on the maximum fraction of satisfiable constraints of $P$. Loosely speaking, we say that subsampling holds for $C$ and $\Pi$ if for every sufficiently dense instance $P \in C$ and every $\epsilon > 0$, if we let $P'$ be the instance obtained by restricting $P$ to a sufficiently large constant number of variables, then $\Pi(P') \in (1 \pm \epsilon)\Pi(P)$. We say that weak subsampling holds if the above guarantee is replaced with $\Pi(P') = 1 - \Theta(\gamma)$ whenever $\Pi(P) = 1 - \gamma$. We show: 1. Subsampling holds for the BasicLP and BasicSDP programs. BasicSDP is a variant of the relaxation considered by Raghavendra (2008), who showed that it gives an optimal approximation factor for every CSP under the unique games conjecture. BasicLP is the linear programming analog of BasicSDP. 2. For tighter versions of BasicSDP obtained by adding additional constraints from the Lasserre hierarchy, weak subsampling holds for CSPs of unique games type. 3. There are non-unique CSPs for which even weak subsampling fails for the above tighter semidefinite programs. Also, there are unique CSPs for which subsampling fails for the Sherali-Adams linear programming hierarchy. As a corollary of our weak subsampling for strong semidefinite programs, we obtain a polynomial-time algorithm to certify that random geometric graphs (of the type considered by Feige and Schechtman, 2002) of max-cut value $1-\gamma$ have a cut value at most $1-\gamma/10$.
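    The subsampling phenomenon itself is easy to experiment with. The toy sketch below uses the exact Max-Cut value of small induced subgraphs (computed by brute force) in place of BasicLP/BasicSDP, purely to illustrate that constant-size subsamples of a dense instance give concentrated value estimates; the sizes and densities are arbitrary.

```python
import itertools, random

def max_cut_fraction(vertices, edge_set):
    """Exact Max-Cut value of the induced subgraph, as a fraction of its edges."""
    edges = [(u, v) for (u, v) in edge_set if u in vertices and v in vertices]
    if not edges:
        return 1.0
    vs = sorted(vertices)
    best = 0
    for bits in itertools.product((0, 1), repeat=len(vs)):
        side = dict(zip(vs, bits))
        best = max(best, sum(side[u] != side[v] for u, v in edges))
    return best / len(edges)

random.seed(0)
n, p = 200, 0.5                                  # a dense random graph
edge_set = {(u, v) for u in range(n) for v in range(u + 1, n)
            if random.random() < p}
for _ in range(3):                               # independent constant-size subsamples
    sample = set(random.sample(range(n), 14))
    print(round(max_cut_fraction(sample, edge_set), 3))
```

    The three printed values land close together, which is the concentration that subsampling theorems formalize for relaxation values rather than exact optima.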

    Sub-exponential Approximation Schemes for CSPs: From Dense to Almost Sparse

    It has long been known, since the classical work of Arora, Karger, and Karpinski (JCSS'99), that MAX-CUT admits a PTAS on dense graphs, and more generally, MAX-k-CSP admits a PTAS on "dense" instances with Omega(n^k) constraints. In this paper we extend and generalize their exhaustive sampling approach, presenting a framework for (1-epsilon)-approximating any MAX-k-CSP problem in sub-exponential time while significantly relaxing the denseness requirement on the input instance. Specifically, we prove that for any constants delta in (0, 1] and epsilon > 0, we can approximate MAX-k-CSP problems with Omega(n^{k-1+delta}) constraints within a factor of (1-epsilon) in time 2^{O(n^{1-delta} * ln(n) / epsilon^3)}. The framework is quite general and includes classical optimization problems, such as MAX-CUT, MAX-DICUT, MAX-k-SAT, and (with a slight extension) k-DENSEST SUBGRAPH, as special cases. For MAX-CUT in particular (where k=2), it gives an approximation scheme that runs in time sub-exponential in n even for "almost-sparse" instances (graphs with n^{1+delta} edges). We prove that our results are essentially best possible, assuming the ETH. First, the density requirement cannot be relaxed further: there exists a constant r < 1 such that for all delta > 0, MAX-k-SAT instances with O(n^{k-1}) clauses cannot be approximated within a ratio better than r in time 2^{O(n^{1-delta})}. Second, the running time of our algorithm is almost tight for all densities: even for MAX-CUT, there exists r < 1 such that for all delta' > delta > 0, MAX-CUT instances with n^{1+delta} edges cannot be approximated within a ratio better than r in time 2^{n^{1-delta'}}.
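    The exhaustive sampling approach being generalized can be sketched for MAX-CUT (k=2). The code below is the classical dense-graph scheme in spirit, not the paper's sub-exponential extension: enumerate all cuts of a small random vertex sample and place every remaining vertex greedily against the sample. The sample size here is an illustrative constant, not the calibrated value from the analysis.

```python
import itertools, random

def maxcut_exhaustive_sampling(n, adj, sample_size=10, rng=random):
    """adj: n x n 0/1 symmetric matrix (list of lists). Tries every cut of a
    small random sample and extends each greedily to the remaining vertices."""
    sample = rng.sample(range(n), sample_size)
    rest = [v for v in range(n) if v not in sample]
    best_side, best_value = None, -1
    for bits in itertools.product((0, 1), repeat=sample_size):
        side = dict(zip(sample, bits))
        for v in rest:
            # Count edges to the sample that v would cut from either side.
            cut_if_1 = sum(1 for u in sample if adj[v][u] and side[u] == 0)
            cut_if_0 = sum(1 for u in sample if adj[v][u] and side[u] == 1)
            side[v] = 1 if cut_if_1 >= cut_if_0 else 0
        value = sum(1 for u in range(n) for w in range(u + 1, n)
                    if adj[u][w] and side[u] != side[w])
        if value > best_value:
            best_side, best_value = dict(side), value
    return best_side, best_value
```

    On a dense graph, a typical vertex has many sampled neighbors, so the greedy placement against the sample tracks its placement in a near-optimal cut; this is the intuition the denseness requirement feeds.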

    Hardness of Graph Pricing through Generalized Max-Dicut

    The Graph Pricing problem is among the fundamental problems whose approximability is not well understood. While there is a simple combinatorial 1/4-approximation algorithm, the best hardness result remains at 1/2 assuming the Unique Games Conjecture (UGC). We show that it is NP-hard to approximate within a factor better than 1/4 under the UGC, so the simple combinatorial algorithm might be the best possible. We also prove that for any $\epsilon > 0$, there exists $\delta > 0$ such that the integrality gap of $n^{\delta}$ rounds of the Sherali-Adams hierarchy of linear programming for Graph Pricing is at most $1/2 + \epsilon$. This work is based on the effort to view the Graph Pricing problem as a Constraint Satisfaction Problem (CSP) simpler than the standard, complicated formulation. We propose a problem called Generalized Max-Dicut($T$), which has domain size $T + 1$ for every $T \geq 1$. Generalized Max-Dicut(1) is the well-known Max-Dicut. There is an approximation-preserving reduction from Generalized Max-Dicut on directed acyclic graphs (DAGs) to Graph Pricing, and both our results are achieved through this reduction. Besides its connection to Graph Pricing, the hardness of Generalized Max-Dicut is interesting in its own right, since in most arity-two CSPs studied in the literature, SDP-based algorithms perform better than LP-based or combinatorial algorithms; for this arity-two CSP, a simple combinatorial algorithm does best.
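    For reference, Generalized Max-Dicut(1) is ordinary Max-Dicut, whose random-assignment baseline is the 1/4 these bounds echo. The sketch below covers only that base case (the payoff structure for $T > 1$ is defined in the paper and not reproduced here): an arc $(u, v)$ is satisfied when $u \in S$ and $v \notin S$, which a uniformly random $S$ achieves with probability 1/4.

```python
import random

def dicut_value(arcs, in_S):
    """Number of arcs leaving the set S (i.e., u in S, v not in S)."""
    return sum(1 for (u, v) in arcs if u in in_S and v not in in_S)

def random_assignment(vertices, rng=random):
    return {v for v in vertices if rng.random() < 0.5}

arcs = [(0, 1), (1, 2), (2, 0), (0, 2)]
vertices = {u for a in arcs for u in a}
trials = 10_000
avg = sum(dicut_value(arcs, random_assignment(vertices))
          for _ in range(trials)) / trials
print(avg, len(arcs) / 4)   # empirical average is close to |arcs|/4
```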

    On the Usefulness of Predicates

    Motivated by the pervasiveness of strong inapproximability results for Max-CSPs, we introduce a relaxed notion of an approximate solution of a Max-CSP. In this relaxed version, loosely speaking, the algorithm is allowed to replace the constraints of an instance by some other (possibly real-valued) constraints, and then only needs to satisfy as many of the new constraints as possible. To be more precise, we introduce the following notion of a predicate $P$ being \emph{useful} for a (real-valued) objective $Q$: given an almost satisfiable Max-$P$ instance, there is an algorithm that beats a random assignment on the corresponding Max-$Q$ instance applied to the same sets of literals. The standard notion of a nontrivial approximation algorithm for a Max-CSP with predicate $P$ is exactly the same as saying that $P$ is useful for $P$ itself. We say that $P$ is useless if it is not useful for any $Q$. This turns out to be equivalent to the following pseudo-randomness property: given an almost satisfiable instance of Max-$P$, it is hard to find an assignment such that the induced distribution on $k$-bit strings defined by the instance is not essentially uniform. Under the Unique Games Conjecture, we give a complete and simple characterization of useful Max-CSPs defined by a predicate: such a Max-CSP is useless if and only if there is a pairwise independent distribution supported on the satisfying assignments of the predicate. It is natural to also consider the case when no negations are allowed in the CSP instance, and we derive a similar complete characterization (under the UGC) there as well. Finally, we also include some results and examples shedding additional light on the approximability of certain Max-CSPs.
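    The characterization is mechanical to check for small predicates. The sketch below tests whether the uniform distribution over a predicate's satisfying assignments is balanced pairwise independent (every pair of coordinates looks like two fair coins), using 3-XOR as the example; a full check in the sense of the theorem would search over all distributions supported on the satisfying assignments, not just the uniform one.

```python
from itertools import product

def satisfying(pred, k):
    return [x for x in product((0, 1), repeat=k) if pred(x)]

def uniform_support_is_pairwise_independent(pred, k):
    """True iff the uniform distribution on pred's satisfying assignments
    gives every coordinate pair the uniform distribution on {0,1}^2."""
    sat = satisfying(pred, k)
    for i in range(k):
        for j in range(i + 1, k):
            for a, b in product((0, 1), repeat=2):
                count = sum(1 for x in sat if x[i] == a and x[j] == b)
                if 4 * count != len(sat):
                    return False
    return True

xor3 = lambda x: (x[0] ^ x[1] ^ x[2]) == 0
print(uniform_support_is_pairwise_independent(xor3, 3))   # True
```

    The check returns True for 3-XOR: its satisfying set {000, 011, 101, 110} hits each pair of values exactly once, so by the characterization above this predicate is useless under the UGC.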