
    On the random satisfiable process

    In this work we suggest a new model for generating random satisfiable k-CNF formulas. To generate such a formula, randomly permute all 2^k \binom{n}{k} possible clauses over the variables x_1, ..., x_n and, starting from the empty formula, go over the clauses one by one, including each new clause if the formula remains satisfiable after its addition. We study the evolution of this process, namely the distribution over formulas obtained after scanning through the first m clauses (in the order of the random permutation). Random processes conditioned on respecting a certain property are widely studied in the context of graph properties. This line of study was pioneered by Ruciński and Wormald in 1992 for graphs with a fixed degree sequence, and by Erdős, Suen, and Winkler in 1995 for triangle-free and bipartite graphs. Since then many other graph properties have been studied, such as planarity and H-freeness. Our model is thus a natural extension of this approach to the satisfiability setting. Our main contribution is as follows. For m \geq cn, where c = c(k) is a sufficiently large constant, we characterize the structure of the solution space of a typical formula in this distribution. Specifically, we show that typically all satisfying assignments are essentially clustered in one cluster, and all but e^{-\Omega(m/n)} n of the variables take the same value in all satisfying assignments. We also describe a polynomial-time algorithm that, with high probability, finds a satisfying assignment for such formulas.
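    The generation process above is easy to prototype. Below is a minimal Python sketch of it; the helper names are ours, and the brute-force satisfiability check stands in for a real SAT solver, so it is only practical for very small n.

```python
import itertools
import random

def all_k_clauses(n, k):
    """All 2^k * C(n, k) clauses: choose k variables, then a sign for each.
    A literal is (var, sign), with sign True meaning the positive literal."""
    return [tuple(zip(vars_, signs))
            for vars_ in itertools.combinations(range(n), k)
            for signs in itertools.product([True, False], repeat=k)]

def satisfiable(clauses, n):
    """Brute-force SAT check; a real implementation would call a SAT solver."""
    return any(all(any(bits[v] == s for v, s in c) for c in clauses)
               for bits in itertools.product([True, False], repeat=n))

def random_satisfiable_process(n, k, m, seed=0):
    """Scan the first m clauses of a random permutation of all clauses,
    keeping each clause whose addition leaves the formula satisfiable."""
    rng = random.Random(seed)
    clauses = all_k_clauses(n, k)
    rng.shuffle(clauses)
    formula = []
    for c in clauses[:m]:
        if satisfiable(formula + [c], n):
            formula.append(c)
    return formula

print(len(random_satisfiable_process(n=8, k=3, m=100)))
```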

    Combinatorial optimization problems in self-assembly

    Self-assembly is the ubiquitous process by which simple objects autonomously assemble into intricate complexes. It has been suggested that intricate self-assembly processes will ultimately be used in circuit fabrication, nano-robotics, DNA computation, and amorphous computing. In this paper, we study two combinatorial optimization problems related to efficient self-assembly of shapes in the Tile Assembly Model of self-assembly proposed by Rothemund and Winfree [18]. The first is the Minimum Tile Set Problem, where the goal is to find the smallest tile system that uniquely produces a given shape. The second is the Tile Concentrations Problem, where the goal is to decide on the relative concentrations of different types of tiles so that a tile system assembles as quickly as possible. The first problem is akin to finding optimum program size, and the second to finding optimum running time for a "program" to assemble the shape. We prove that the first problem is NP-complete in general, and polynomial-time solvable on trees and squares. In order to prove that the problem is in NP, we present a polynomial-time algorithm to verify whether a given tile system uniquely produces a given shape. This algorithm is analogous to a program verifier for traditional computational systems, and may well be of independent interest. For the second problem, we present a polynomial-time O(\log n)-approximation algorithm that works for a large class of tile systems that we call partial order systems.
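    To make the Tile Assembly Model concrete, here is a toy Python simulation of temperature-2 growth; the tile types, glue labels, and greedy attachment order are our own illustrative choices and are not taken from the paper.

```python
# Toy simulation of growth in the Tile Assembly Model (temperature-2 aTAM).
from collections import namedtuple

Tile = namedtuple("Tile", "name north east south west")  # glue labels; "" = no glue
STRENGTH = {"s": 2, "": 0}  # glue label -> binding strength
TAU = 2                     # temperature: required total binding strength

def can_attach(tile, pos, assembly):
    """A tile may attach at pos if matching glues with already-placed
    neighbors have total strength at least TAU (cooperative binding)."""
    x, y = pos
    sides = [((x, y + 1), tile.north, "south"), ((x + 1, y), tile.east, "west"),
             ((x, y - 1), tile.south, "north"), ((x - 1, y), tile.west, "east")]
    total = 0
    for neighbor_pos, glue, opposite_side in sides:
        neighbor = assembly.get(neighbor_pos)
        if neighbor is not None and glue and getattr(neighbor, opposite_side) == glue:
            total += STRENGTH[glue]
    return total >= TAU

def grow(tile_types, seed, steps=20):
    """Greedily attach tiles at empty sites adjacent to the assembly."""
    assembly = {(0, 0): seed}
    for _ in range(steps):
        frontier = {(x + dx, y + dy) for (x, y) in assembly
                    for dx, dy in [(0, 1), (1, 0), (0, -1), (-1, 0)]} - assembly.keys()
        placed = next(((pos, t) for pos in sorted(frontier) for t in tile_types
                       if can_attach(t, pos, assembly)), None)
        if placed is None:
            break
        assembly[placed[0]] = placed[1]
    return assembly

# A seed whose east glue has strength 2 grows a horizontal row of 'row' tiles.
seed = Tile("seed", "", "s", "", "")
row = Tile("row", "", "s", "", "s")
print(sorted(grow([row], seed)))  # (0,0), (1,0), (2,0), ...
```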

    Optimal Testing for Planted Satisfiability Problems

    We study the problem of detecting planted solutions in a random satisfiability formula. Adopting the formalism of hypothesis testing in statistical analysis, we describe the minimax optimal rates of detection. Our analysis relies on the study of the number of satisfying assignments, for which we prove new results. We also address algorithmic issues, and give a computationally efficient test with optimal statistical performance. This result is compared to an average-case hypothesis on the hardness of refuting satisfiability of random formulas.
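    To illustrate what a computationally efficient detection test can look like, here is a toy literal-bias test of our own devising (it is not the paper's optimal test, and the threshold is an uncalibrated heuristic): in a planted formula each variable's occurrences are slightly biased toward the sign it takes in the hidden assignment, so the aggregate sign bias separates the two hypotheses once there are enough clauses.

```python
import math
import random

def literal_bias(formula, n):
    """Sum over variables of |#positive - #negative occurrences|."""
    pos, neg = [0] * n, [0] * n
    for clause in formula:
        for v, sign in clause:
            (pos if sign else neg)[v] += 1
    return sum(abs(p - q) for p, q in zip(pos, neg))

def detect_planted(formula, n, k, m, factor=1.5):
    """Flag 'planted' if the bias exceeds the uniform-model scale: each
    variable occurs ~ mk/n times, so its bias under uniform sampling
    behaves like a random walk of that length, i.e. about sqrt(mk/n)."""
    return literal_bias(formula, n) > factor * n * math.sqrt(m * k / n)

def uniform_formula(n, k, m, rng):
    return [tuple((v, rng.random() < 0.5) for v in rng.sample(range(n), k))
            for _ in range(m)]

def planted_formula(n, k, m, rng, sigma):
    """Uniform clauses conditioned on being satisfied by the hidden sigma."""
    out = []
    while len(out) < m:
        c = tuple((v, rng.random() < 0.5) for v in rng.sample(range(n), k))
        if any(sigma[v] == s for v, s in c):
            out.append(c)
    return out

rng = random.Random(1)
n, k, m = 200, 3, 20000
sigma = [rng.random() < 0.5 for _ in range(n)]
print(detect_planted(uniform_formula(n, k, m, rng), n, k, m))          # False
print(detect_planted(planted_formula(n, k, m, rng, sigma), n, k, m))   # True
```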

    Sparser Random 3SAT Refutation Algorithms and the Interpolation Problem: Extended Abstract

    We formalize a combinatorial principle, called the 3XOR principle, due to Feige, Kim and Ofek [12], as a family of unsatisfiable propositional formulas for which refutations of small size in any propositional proof system that possesses the feasible interpolation property imply an efficient deterministic refutation algorithm for random 3SAT with n variables and Ω(n^{1.4}) clauses. Such small-size refutations would improve the state-of-the-art (with respect to the clause density) efficient refutation algorithm, which works only for Ω(n^{1.5}) many clauses [13]. We demonstrate polynomial-size refutations of the 3XOR principle in resolution operating with disjunctions of quadratic equations with small integer coefficients, denoted R(quad); this is a weak extension of cutting planes with small coefficients. We show that R(quad) is weakly automatizable iff R(lin) is weakly automatizable, where R(lin) is similar to R(quad) but with linear instead of quadratic equations (introduced in [25]). This reduces the problem of refuting random 3CNF with n variables and Ω(n^{1.4}) clauses to the interpolation problem of R(quad) and to the weak automatizability of R(lin).
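    For intuition about why 3XOR structure is a natural handle here: systems of XOR equations, unlike 3CNF formulas, can be refuted exactly in polynomial time by Gaussian elimination over GF(2). The sketch below (our own minimal encoding, not the paper's construction) decides solvability of a 3XOR system this way.

```python
def gf2_solvable(equations, n):
    """equations: list of (set of variable indices, parity bit), each encoding
    'XOR of the variables equals the bit'. Returns True iff some assignment
    over GF(2) satisfies all of them (Gaussian elimination)."""
    rows = [(set(vars_), bit) for vars_, bit in equations]
    for pivot in range(n):
        src = next((r for r in rows if pivot in r[0]), None)
        if src is None:
            continue  # pivot variable already eliminated or never used
        rows.remove(src)  # the pivot row is always satisfiable via the pivot var
        rows = [((vars_ ^ src[0], bit ^ src[1]) if pivot in vars_ else (vars_, bit))
                for vars_, bit in rows]
    # After elimination every remaining row is variable-free; the system is
    # inconsistent iff some row reduced to the contradiction 0 = 1.
    return not any(bit for vars_, bit in rows if not vars_)

# x0 ^ x1 ^ x2 = 0 together with x0 ^ x1 ^ x2 = 1 is plainly unsatisfiable:
print(gf2_solvable([({0, 1, 2}, 0), ({0, 1, 2}, 1)], n=3))  # False
# A consistent system:
print(gf2_solvable([({0, 1}, 1), ({1, 2}, 0)], n=3))        # True
```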

    Average-Case Complexity

    We survey the average-case complexity of problems in NP. We discuss various notions of good-on-average algorithms, and present completeness results due to Impagliazzo and Levin. Such completeness results establish the fact that if a certain specific (but somewhat artificial) NP problem is easy on average with respect to the uniform distribution, then all problems in NP are easy on average with respect to all samplable distributions. Applying the theory to natural distributional problems remains an outstanding open question. We review some natural distributional problems whose average-case complexity is of particular interest and that do not yet fit into this theory. A major open question is whether the existence of hard-on-average problems in NP can be based on the P ≠ NP assumption or on related worst-case assumptions. We review negative results showing that certain proof techniques cannot prove such a result. While the relation between worst-case and average-case complexity for general NP problems remains open, there has been progress in understanding the relation between different "degrees" of average-case complexity. We discuss some of these "hardness amplification" results.

    Distributed Reasoning in a Peer-to-Peer Setting: Application to the Semantic Web

    In a peer-to-peer inference system, each peer can reason locally but can also solicit some of its acquaintances, which are peers sharing part of its vocabulary. In this paper, we consider peer-to-peer inference systems in which the local theory of each peer is a set of propositional clauses defined upon a local vocabulary. An important characteristic of peer-to-peer inference systems is that the global theory (the union of all peer theories) is not known (as opposed to partition-based reasoning systems). The main contribution of this paper is to provide the first consequence finding algorithm in a peer-to-peer setting: DeCA. It is anytime and computes consequences gradually, from the solicited peer out to increasingly distant peers. We exhibit a sufficient condition on the acquaintance graph of the peer-to-peer inference system that guarantees the completeness of this algorithm. Another important contribution is to apply this general distributed reasoning setting to the Semantic Web through the Somewhere semantic peer-to-peer data management system. The last contribution of this paper is an experimental analysis of the scalability of the proposed peer-to-peer infrastructure on large networks of 1000 peers.
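    As rough intuition for how consequences spread hop by hop over the acquaintance graph, here is a toy sketch of our own; it is not DeCA itself, and it collapses DeCA's anytime behavior and history bookkeeping into a simple visited set. Each peer resolves an incoming clause against its local clauses and forwards resolvents to acquaintances whose vocabulary overlaps them.

```python
from itertools import chain

def resolve(c, d):
    """All resolvents of clauses c and d. Clauses are frozensets of nonzero
    int literals, with -v denoting the negation of variable v."""
    return [frozenset(chain(c - {l}, d - {-l})) for l in c if -l in d]

def propagate(peers, acquaintances, peer, clause, seen=None):
    """Consequences of `clause`: resolve at `peer`, then forward each new
    resolvent to acquaintances sharing a variable with it."""
    seen = set() if seen is None else seen
    results = set()
    for local in peers[peer]:
        for r in resolve(clause, local):
            if r in seen:
                continue
            seen.add(r)
            results.add(r)
            r_vars = {abs(l) for l in r}
            for q in acquaintances.get(peer, []):
                q_vars = {abs(l) for c in peers[q] for l in c}
                if r_vars & q_vars:
                    results |= propagate(peers, acquaintances, q, r, seen)
    return results

# Peers A and B share variable 2: A knows (not-1 or 2), B knows (not-2 or 3).
peers = {"A": [frozenset({-1, 2})], "B": [frozenset({-2, 3})]}
acquaintances = {"A": ["B"], "B": ["A"]}
# Consequences of the unit clause (1), asked at A: (2) locally, then (3) via B.
print(propagate(peers, acquaintances, "A", frozenset({1})))
```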

    Smooth and Strong PCPs

    Probabilistically checkable proofs (PCPs) can be verified based only on a constant number of random queries, such that any correct claim has a proof that is always accepted, and incorrect claims are rejected with high probability (regardless of the given alleged proof). We consider two possible features of PCPs:
    - A PCP is strong if it rejects an alleged proof of a correct claim with probability proportional to its distance from some correct proof of that claim.
    - A PCP is smooth if each location in a proof is queried with equal probability.
    We prove that all sets in NP have PCPs that are both smooth and strong, are of polynomial length, and can be verified based on a constant number of queries. This is achieved by following the proof of the PCP theorem of Arora, Lund, Motwani, Sudan and Szegedy (JACM, 1998), providing a stronger analysis of the Hadamard and Reed-Muller based PCPs and a refined PCP composition theorem. In fact, we show that any set in NP has a smooth strong canonical PCP of Proximity (PCPP), meaning that there is an efficiently computable bijection of NP witnesses to correct proofs. This improves on the recent construction of Dinur, Gur and Goldreich (ITCS, 2019) of PCPPs that are strong canonical but inherently non-smooth. Our result implies the hardness of approximating the satisfiability of "stable" 3CNF formulae with bounded variable occurrence, where stable means that the number of clauses violated by an assignment is proportional to its distance from a satisfying assignment (in the relative Hamming metric). This proves a hypothesis used in the work of Friggstad, Khodamoradi and Salavatipour (SODA, 2019), suggesting a connection between the hardness of these instances and other stable optimization problems.
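    A classical special case that gives a feel for both properties is the BLR linearity test used in Hadamard-based PCPs: each of its three query points is uniformly distributed over the table (smoothness), and its rejection probability grows with the queried function's distance from the nearest linear function (a strong-style guarantee). The sketch below is our own illustration of that test, not the paper's construction.

```python
import random

def blr_test(f, n_bits, trials=2000, rng=None):
    """Blum-Luby-Rubinfeld linearity test for f: {0,1}^n -> {0,1}, with
    n-bit inputs encoded as ints. Checks f(x) ^ f(y) == f(x ^ y) on random
    x, y; each of the three query points is marginally uniform."""
    rng = rng or random.Random(0)
    rejections = 0
    for _ in range(trials):
        x = rng.getrandbits(n_bits)
        y = rng.getrandbits(n_bits)
        if f(x) ^ f(y) != f(x ^ y):
            rejections += 1
    return rejections / trials

def parity(mask):
    """A linear (GF(2)) function: XOR of the input bits selected by mask."""
    return lambda x: bin(x & mask).count("1") % 2

linear_f = parity(0b1011)
noisy_f = lambda x: parity(0b1011)(x) ^ (x % 16 == 0)  # flipped on 1/16 of inputs

print(blr_test(linear_f, 8))  # 0.0: linear functions always pass
print(blr_test(noisy_f, 8))   # clearly positive: rejection grows with distance
```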