Optimal Testing for Planted Satisfiability Problems
We study the problem of detecting planted solutions in a random
satisfiability formula. Adopting the formalism of hypothesis testing in
statistical analysis, we describe the minimax optimal rates of detection. Our
analysis relies on the study of the number of satisfying assignments, for which
we prove new results. We also address algorithmic issues, and give a
computationally efficient test with optimal statistical performance. This
result is compared to an average-case hypothesis on the hardness of refuting
satisfiability of random formulas.
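A toy sketch of the testing setup (hypothetical instance sizes; brute-force counting of satisfying assignments stands in for the paper's computationally efficient test): draw a formula either under the null (uniformly random clauses) or under the alternative (clauses resampled until satisfied by a hidden planted assignment), and use the number of satisfying assignments as the test statistic.

```python
import itertools
import random

def random_clause(n, k, rng):
    """A random k-clause: k distinct variables with random signs."""
    vars_ = rng.sample(range(n), k)
    return [(v, rng.random() < 0.5) for v in vars_]  # (variable, required sign)

def satisfies(assign, clause):
    return any(assign[v] == sign for v, sign in clause)

def count_sat(formula, n):
    # Brute-force count of satisfying assignments (feasible only for tiny n).
    return sum(all(satisfies(a, c) for c in formula)
               for a in itertools.product([False, True], repeat=n))

def sample_formula(n, m, k, rng, planted=None):
    """Null model: uniform clauses. Planted model: keep only clauses
    satisfied by the hidden assignment."""
    out = []
    while len(out) < m:
        c = random_clause(n, k, rng)
        if planted is None or satisfies(planted, c):
            out.append(c)
    return out

rng = random.Random(0)
n, m, k = 12, 80, 3
hidden = tuple(rng.random() < 0.5 for _ in range(n))
null_f = sample_formula(n, m, k, rng)                  # uniformly random formula
alt_f = sample_formula(n, m, k, rng, planted=hidden)   # planted formula

# Test statistic: number of satisfying assignments; a planted formula
# always has at least one (the hidden assignment).
print(count_sat(null_f, n), count_sat(alt_f, n))
```

At this clause density the null formula is typically unsatisfiable, while the planted one is satisfiable by construction, so thresholding the statistic separates the two hypotheses on such toy instances.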
Analysing Survey Propagation Guided Decimation on Random Formulas
Let Phi be a uniformly distributed random k-SAT formula with n variables and m
clauses. For clauses/variables ratio m/n <= r_{k-SAT} ~ 2^k ln 2 the formula is
satisfiable with high probability. However, no efficient algorithm is known to
provably find a satisfying assignment beyond m/n ~ 2^k ln(k)/k with a
non-vanishing probability. Non-rigorous statistical mechanics work on k-CNF
formulas led to the development of a new efficient "message passing algorithm"
called \emph{Survey Propagation Guided Decimation} [M\'ezard et al., Science
2002]. Experiments conducted for small k suggest that the algorithm finds
satisfying assignments close to r_{k-SAT}. However, in the present paper we
prove that the basic version of Survey Propagation Guided Decimation fails to
solve random k-SAT formulas efficiently already for m/n = (1 + eps_k) 2^k
ln(k)/k with eps_k tending to 0, almost a factor k below r_{k-SAT}.
(arXiv admin note: substantial text overlap with arXiv:1007.1328 by another
author.)
The decimation process in random k-SAT
Let F be a uniformly distributed random k-SAT formula with n variables and m
clauses. Non-rigorous statistical mechanics ideas have inspired a message
passing algorithm called Belief Propagation Guided Decimation for finding
satisfying assignments of F. This algorithm can be viewed as an attempt at
implementing a certain thought experiment that we call the Decimation Process.
In this paper we identify a variety of phase transitions in the decimation
process and link these phase transitions to the performance of the algorithm.
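The decimation loop common to these algorithms can be sketched as follows (toy Python; the `bias` heuristic below is a crude clause-counting stand-in for the Belief Propagation marginal estimates, not the algorithm analyzed in the paper): repeatedly estimate each free variable's marginal, fix the most biased variable accordingly, and simplify the formula.

```python
import random

def simplify(formula, var, value):
    """Assign var := value; drop satisfied clauses, shrink the rest."""
    out = []
    for clause in formula:
        if (var, value) in clause:
            continue  # clause satisfied, remove it
        out.append([lit for lit in clause if lit[0] != var])
    return out

def bias(formula, var):
    # Crude stand-in for a BP marginal: signed count of clauses
    # containing the positive vs. negative literal of var.
    pos = sum(any(lit == (var, True) for lit in c) for c in formula)
    neg = sum(any(lit == (var, False) for lit in c) for c in formula)
    return pos - neg

def satisfied(formula, assign):
    return all(any(assign[v] == s for v, s in c) for c in formula)

def decimate(formula, n):
    """Decimation loop: repeatedly fix the most biased free variable."""
    assign, free = {}, set(range(n))
    while free:
        if any(len(c) == 0 for c in formula):
            return None  # empty clause: a contradiction was produced
        v = max(free, key=lambda u: abs(bias(formula, u)))
        val = bias(formula, v) >= 0
        assign[v] = val
        free.remove(v)
        formula = simplify(formula, v, val)
    return assign

rng = random.Random(1)
n, m = 20, 40  # low clause density, chosen so the toy heuristic has a chance
formula = [[(v, rng.random() < 0.5) for v in rng.sample(range(n), 3)]
           for _ in range(m)]
a = decimate(formula, n)
print("solved" if a is not None and satisfied(formula, a) else "failed")
```

The Decimation Process of the paper is this same loop with the heuristic marginal replaced by the exact marginal of a uniformly random satisfying assignment, which is what the phase transitions identified in the paper concern.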
Approaching the Rate-Distortion Limit with Spatial Coupling, Belief propagation and Decimation
We investigate an encoding scheme for lossy compression of a binary symmetric
source based on simple spatially coupled Low-Density Generator-Matrix codes.
The check-node degree is regular, while the code-bit degree is Poisson
distributed with an average that depends on the compression rate. The performance
of a low complexity Belief Propagation Guided Decimation algorithm is
excellent. The algorithmic rate-distortion curve approaches the optimal curve
of the ensemble as the width of the coupling window grows. Moreover, as the
check degree grows both curves approach the ultimate Shannon rate-distortion
limit. The Belief Propagation Guided Decimation encoder is based on the
posterior measure of a binary symmetric test-channel. This measure can be
interpreted as a random Gibbs measure at a "temperature" directly related to
the "noise level of the test-channel". We investigate the links between the
algorithmic performance of the Belief Propagation Guided Decimation encoder and
the phase diagram of this Gibbs measure. The phase diagram is investigated
thanks to the cavity method of spin glass theory which predicts a number of
phase transition thresholds. In particular the dynamical and condensation
"phase transition temperatures" (equivalently test-channel noise thresholds)
are computed. We observe that: (i) the dynamical temperature of the spatially
coupled construction saturates towards the condensation temperature; (ii) for
large degrees the condensation temperature approaches the temperature (i.e.
noise level) related to the information theoretic Shannon test-channel noise
parameter of rate-distortion theory. This provides heuristic insight into the
excellent performance of the Belief Propagation Guided Decimation algorithm.
The paper contains an introduction to the cavity method.
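For reference, the Shannon rate-distortion limit that the scheme approaches is the standard formula for a binary symmetric source under Hamming distortion, R(D) = 1 - h2(D), where h2 is the binary entropy function and the test-channel noise level plays the role of the target distortion D. A quick computation (textbook formula, not code from the paper):

```python
from math import log2

def h2(p):
    """Binary entropy function in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def rate_distortion_bss(d):
    """Shannon rate-distortion function of a Bernoulli(1/2) source,
    Hamming distortion, valid for 0 <= d <= 1/2."""
    return 1.0 - h2(d)

print(rate_distortion_bss(0.11))  # distortion 0.11 needs rate ~ 0.5
```

Lossless compression corresponds to R(0) = 1, and at distortion 1/2 no information need be sent, R(1/2) = 0; the algorithmic curves in the paper approach this limit as the check degree and coupling window grow.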
The condensation phase transition in random graph coloring
Based on a non-rigorous formalism called the "cavity method", physicists have
put forward intriguing predictions on phase transitions in discrete structures.
One of the most remarkable ones is that in problems such as random k-SAT or
random graph k-coloring, very shortly before the threshold for the existence
of solutions there occurs another phase transition called "condensation"
[Krzakala et al., PNAS 2007]. The existence of this phase transition appears to
be intimately related to the difficulty of proving precise results on, e.g.,
the k-colorability threshold as well as to the performance of message passing
algorithms. In random graph k-coloring, there is a precise conjecture as to
the location of the condensation phase transition in terms of a distributional
fixed point problem. In this paper we prove this conjecture for k exceeding a
certain constant.
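To make the underlying objects concrete (a brute-force check on toy instances, unrelated to the paper's cavity-method analysis): a graph is k-colorable when some assignment of k colors to its vertices leaves no edge monochromatic, and the thresholds in question ask for which edge densities a random graph remains k-colorable.

```python
import itertools
import random

def is_k_colorable(n, edges, k):
    """Brute force over all k^n colorings (feasible only for tiny n)."""
    return any(all(col[u] != col[v] for u, v in edges)
               for col in itertools.product(range(k), repeat=n))

# A small random graph G(n, m): m edges sampled without replacement.
rng = random.Random(3)
n, m, k = 8, 16, 3
all_pairs = [(u, v) for u in range(n) for v in range(u + 1, n)]
edges = rng.sample(all_pairs, m)
print(is_k_colorable(n, edges, k))
```

The conjectured condensation threshold sits at a slightly lower density than the k-colorability threshold itself, which is why pinning it down rigorously, as the paper does for large k, is delicate.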
Performance of Sequential Local Algorithms for the Random NAE-K-SAT Problem
We formalize the class of "sequential local algorithms" and show that these algorithms fail to find satisfying assignments on random instances of the "Not-All-Equal-K-SAT" (NAE-K-SAT) problem if the number of message passing iterations is bounded by a function moderately growing in the number of variables and if the clause-to-variable ratio is above (1 + o_K(1)) (2^{K-1}/K) ln^2 K for sufficiently large K. Sequential local algorithms are those that iteratively set variables based on some local information and/or local randomness and then recurse on the reduced instance. Our model captures some weak abstractions of natural algorithms such as Survey Propagation (SP)-guided as well as Belief Propagation (BP)-guided decimation algorithms, two widely studied message-passing-based algorithms, when the number of message-passing rounds in these algorithms is restricted to grow only moderately with the number of variables. The approach underlying our paper is based on the intricate geometry of the solution space of a random NAE-K-SAT problem. We show that above the stated threshold, the overlap structure of m-tuples of nearly (in an appropriate sense) satisfying assignments exhibits a certain behavior, expressed in the form of constraints on pairwise distances between the assignments, for an appropriately chosen positive integer m. We further show that if a sequential local algorithm succeeds in finding a satisfying assignment with probability bounded away from zero, then one can construct an m-tuple of solutions violating these constraints, thus leading to a contradiction. Along with [D. Gamarnik and M. Sudan, Ann. Probab., to appear], where a similar approach was used in a (somewhat simpler) setting of nonsequential local algorithms, this result is the first work that directly links the overlap property of random constraint satisfaction problems to the computational hardness of finding satisfying assignments.
National Science Foundation (U.S.) (CMMI-1335155)