Heuristic average-case analysis of the backtrack resolution of random 3-Satisfiability instances
An analysis of the average-case complexity of solving random 3-Satisfiability
(SAT) instances with backtrack algorithms is presented. We first interpret
previous rigorous works in a unifying framework based on the statistical
physics notions of dynamical trajectories, phase diagram and growth process. It
is argued that, under the action of the Davis--Putnam--Logemann--Loveland
(DPLL) algorithm, 3-SAT instances are turned into 2+p-SAT instances whose
characteristic parameters (ratio alpha of clauses per variable, fraction p of
3-clauses) can be followed during the operation, and define resolution
trajectories. Depending on the location of trajectories in the phase diagram of
the 2+p-SAT model, easy (polynomial) or hard (exponential) resolutions are
generated. Three regimes are identified, depending on the ratio alpha of the
3-SAT instance to be solved. Lower sat phase: for small ratios, DPLL almost
surely finds a solution in a time growing linearly with the number N of
variables. Upper sat phase: for intermediate ratios, instances are almost
surely satisfiable but finding a solution requires exponential time (2^(N omega)
with omega > 0) with high probability. Unsat phase: for large ratios,
there is almost always no solution and proofs of refutation are exponential. An
analysis of the growth of the search tree in both upper sat and unsat regimes
is presented, and allows us to estimate omega as a function of alpha. This
analysis is based on an exact relationship between the average size of the
search tree and the powers of the evolution operator encoding the elementary
steps of the search heuristic.
Comment: to appear in Theoretical Computer Science
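For readers unfamiliar with the procedure, a minimal DPLL backtrack search can be sketched as follows. This toy implementation (unit propagation plus naive branching on the first unassigned variable) is only illustrative; it does not reproduce the specific branching heuristic analyzed in the paper. Clauses are DIMACS-style lists of signed integers.

```python
def simplify(clauses, assignment):
    """Remove satisfied clauses and falsified literals.
    Returns the reduced clause list, or None if some clause became empty
    (a contradiction under the current partial assignment)."""
    out = []
    for clause in clauses:
        kept, satisfied = [], False
        for lit in clause:
            var = abs(lit)
            if var in assignment:
                if assignment[var] == (lit > 0):
                    satisfied = True
                    break
            else:
                kept.append(lit)
        if satisfied:
            continue
        if not kept:
            return None  # empty clause: backtrack
        out.append(kept)
    return out

def dpll(clauses, assignment=None):
    """Minimal DPLL on a CNF formula given as a list of clauses
    (lists of signed ints). Returns a satisfying partial assignment
    as a dict {var: bool}, or None if the formula is unsatisfiable."""
    if assignment is None:
        assignment = {}
    clauses = simplify(clauses, assignment)
    if clauses is None:
        return None
    if not clauses:
        return assignment
    # Unit propagation: a unit clause forces its literal's value.
    unit = next((c[0] for c in clauses if len(c) == 1), None)
    if unit is not None:
        return dpll(clauses, {**assignment, abs(unit): unit > 0})
    # Branch on the first unassigned variable, trying both truth values.
    var = abs(clauses[0][0])
    for value in (True, False):
        result = dpll(clauses, {**assignment, var: value})
        if result is not None:
            return result
    return None
```

Each branching step removes a variable and shortens some 3-clauses into 2-clauses, which is precisely how the residual instance drifts through the 2+p-SAT phase diagram.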
Hiding Satisfying Assignments: Two are Better than One
The evaluation of incomplete satisfiability solvers depends critically on the
availability of hard satisfiable instances. A plausible source of such
instances consists of random k-SAT formulas whose clauses are chosen uniformly
from among all clauses satisfying some randomly chosen truth assignment A.
Unfortunately, instances generated in this manner tend to be relatively easy
and can be solved efficiently by practical heuristics. Roughly speaking, as the
formula's density increases, for a number of different algorithms, A acts as a
stronger and stronger attractor. Motivated by recent results on the geometry of
the space of satisfying truth assignments of random k-SAT and NAE-k-SAT
formulas, we introduce a simple twist on this basic model, which appears to
dramatically increase its hardness. Namely, in addition to forbidding the
clauses violated by the hidden assignment A, we also forbid the clauses
violated by its complement, so that both A and complement of A are satisfying.
It appears that under this "symmetrization" the effects of the two attractors
largely cancel out, making it much harder for algorithms to find any truth
assignment. We give theoretical and experimental evidence supporting this
assertion.
Comment: Preliminary version appeared in AAAI 200
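The symmetrized generator described above can be sketched as a rejection sampler: draw random k-clauses uniformly and discard any clause violated by the hidden assignment A or by its complement. The function name and interface below are illustrative, not taken from the paper.

```python
import random

def symmetrized_instance(n, m, k=3, seed=None):
    """Draw m random k-clauses over n variables, keeping only clauses
    satisfied by both a hidden assignment A and its complement.
    A clause is violated by A when every literal disagrees with A,
    and violated by the complement when every literal agrees with A."""
    rng = random.Random(seed)
    hidden = [rng.choice([True, False]) for _ in range(n)]  # A; var i+1 -> hidden[i]
    clauses = []
    while len(clauses) < m:
        chosen = rng.sample(range(1, n + 1), k)
        clause = [v if rng.random() < 0.5 else -v for v in chosen]
        agree = [(lit > 0) == hidden[abs(lit) - 1] for lit in clause]
        if all(agree) or not any(agree):
            continue  # violated by the complement of A, resp. by A: reject
        clauses.append(clause)
    return clauses, hidden
```

By construction every accepted clause has at least one literal agreeing and one disagreeing with A, so both A and its complement satisfy the formula.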
Exponentially hard problems are sometimes polynomial, a large deviation analysis of search algorithms for the random Satisfiability problem, and its application to stop-and-restart resolutions
A large deviation analysis of the solving complexity of random
3-Satisfiability instances slightly below threshold is presented. While finding
a solution for such instances demands an exponential effort with high
probability, we show that an exponentially small fraction of resolutions
require a computation scaling linearly in the size of the instance only. This
exponentially small probability of easy resolutions is analytically calculated,
and the corresponding exponent shown to be smaller (in absolute value) than the
growth exponent of the typical resolution time. Our study therefore gives some
theoretical basis to heuristic stop-and-restart solving procedures, and
suggests a natural cut-off (the size of the instance) for the restart.
Comment: RevTeX file, 4 figures
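A stop-and-restart wrapper with the cut-off suggested above (linear in the instance size N) might look like the sketch below. The `solve_once(clauses, budget, rng)` callback is an assumed interface standing in for any randomized backtrack solver that gives up once its step budget is exhausted.

```python
import random

def restart_solve(clauses, n_vars, solve_once, max_restarts=1000,
                  cutoff=None, seed=None):
    """Repeatedly run a randomized solver, aborting each attempt after
    `cutoff` steps. The default cutoff equals N, the number of variables,
    which is the natural restart scale suggested by the large-deviation
    analysis. `solve_once` is a hypothetical solver interface returning
    an assignment or None when its budget runs out."""
    rng = random.Random(seed)
    if cutoff is None:
        cutoff = n_vars
    for _ in range(max_restarts):
        assignment = solve_once(clauses, cutoff, rng)
        if assignment is not None:
            return assignment
    return None
```

The wrapper exploits the exponentially small but non-negligible fraction of runs that happen to finish in linear time: aborting long runs early and resampling the solver's randomness trades wasted restarts for a chance at one of the easy resolutions.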
Analysis of the computational complexity of solving random satisfiability problems using branch and bound search algorithms
The computational complexity of solving random 3-Satisfiability (3-SAT)
problems is investigated. 3-SAT is a representative example of hard
computational tasks; it consists in deciding whether a set of alpha N randomly
drawn logical constraints involving N Boolean variables can be satisfied
simultaneously or not. Widely used solving procedures, such as the
Davis-Putnam-Logemann-Loveland (DPLL) algorithm, perform a systematic search for
a solution, through a sequence of trials and errors represented by a search
tree. In the present study, we identify, using theory and numerical
experiments, easy (size of the search tree scaling polynomially with N) and
hard (exponential scaling) regimes as a function of the ratio alpha of
constraints per variable. The typical complexity is explicitly calculated in
the different regimes, in very good agreement with numerical simulations. Our
theoretical approach is based on the analysis of the growth of the branches in
the search tree under the operation of DPLL. On each branch, the initial 3-SAT
problem is dynamically turned into a more generic 2+p-SAT problem, where p and
1-p are the fractions of constraints involving three and two variables
respectively. The growth of each branch is monitored by the dynamical evolution
of alpha and p and is represented by a trajectory in the static phase diagram
of the random 2+p-SAT problem. Depending on whether or not the trajectories
cross the boundary between phases, single branches or full trees are generated
by DPLL, resulting in easy or hard resolutions.
Comment: 37 RevTeX pages, 15 figures; submitted to Phys.Rev.
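The (alpha, p) coordinates that locate a branch in the 2+p-SAT phase diagram can be read off any intermediate clause set. A minimal measurement helper (illustrative only) for a mixed 2-/3-clause instance over the not-yet-assigned variables:

```python
def trajectory_point(clauses, n_free):
    """Return the (alpha, p) coordinates of a mixed 2-/3-SAT instance:
    alpha is the number of clauses per unassigned variable and p is the
    fraction of 3-clauses -- the two parameters that position the
    residual instance in the 2+p-SAT phase diagram."""
    m = len(clauses)
    if m == 0 or n_free == 0:
        return 0.0, 0.0
    n3 = sum(1 for c in clauses if len(c) == 3)
    return m / n_free, n3 / m
```

Calling this after each DPLL step traces out the resolution trajectory described in the abstract: the pristine 3-SAT instance starts at p = 1, and unit propagation and branching drive p down and alpha around as variables are eliminated.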