15,653 research outputs found
Phase Transitions of the Typical Algorithmic Complexity of the Random Satisfiability Problem Studied with Linear Programming
Here we study the NP-complete K-SAT problem. Although the worst-case
complexity of NP-complete problems is conjectured to be exponential, there
exist parametrized random ensembles of problems where solutions can typically
be found in polynomial time for suitable ranges of the parameter. In fact,
random K-SAT, with the clause-to-variable ratio as control parameter, can be
solved quickly for small enough values of this ratio. It shows a phase
transition between a
satisfiable phase and an unsatisfiable phase. For branch and bound algorithms,
which operate in the space of feasible Boolean configurations, the empirically
hardest problems are located only close to this phase transition. Here we study
K-SAT and the related optimization problem MAX-SAT by a linear
programming approach, which is widely used for practical problems and allows
for polynomial run time. In contrast to branch and bound it operates outside
the space of feasible configurations. On the other hand, finding a solution
within polynomial time is not guaranteed. We investigated several variants,
such as including artificial objective functions, so-called cutting-plane approaches,
and a mapping to the NP-complete vertex-cover problem. We observed several
easy-hard transitions, from regions where the problems are typically solvable
in polynomial time by the respective algorithms to regions where they are
not. For the related vertex-cover problem on random
graphs these easy-hard transitions can be identified with structural properties
of the graphs, like percolation transitions. For the present random K-SAT
problem we have investigated numerous structural properties also exhibiting
clear transitions, but they appear not to be correlated with the easy-hard
transitions observed here. This renders the behaviour of random K-SAT more
complex than, e.g., the vertex-cover problem.
Comment: 11 pages, 5 figures
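As background for the linear-programming approach the abstract describes, the standard LP relaxation of SAT can be sketched in a few lines. The encoding below (each clause becomes one linear inequality over variables relaxed to [0, 1]) is a generic textbook illustration with an assumed helper name, not the paper's actual implementation:

```python
# Minimal sketch of an LP relaxation of SAT, assuming SciPy is available.
# Clauses are tuples of signed ints: 2 means x2, -2 means NOT x2
# (variables numbered from 1). Clause (l1, ..., lk) becomes
#   sum(pos x_i) + sum(1 - x_j for neg literals) >= 1,  0 <= x <= 1,
# rewritten for linprog's "A_ub @ x <= b_ub" convention.
import numpy as np
from scipy.optimize import linprog

def lp_relaxation(clauses, n_vars):
    """Feasibility LP for a CNF formula; returns (status, fractional x)."""
    A, b = [], []
    for clause in clauses:
        row = np.zeros(n_vars)
        rhs = -1.0                        # from the ">= 1" right-hand side
        for lit in clause:
            if lit > 0:
                row[lit - 1] -= 1.0       # -x_i on the "<=" side
            else:
                row[-lit - 1] += 1.0      # +x_j on the "<=" side
                rhs += 1.0                # the constant 1 from (1 - x_j)
        A.append(row)
        b.append(rhs)
    res = linprog(c=np.zeros(n_vars), A_ub=np.array(A), b_ub=np.array(b),
                  bounds=[(0.0, 1.0)] * n_vars, method="highs")
    return res.status, (res.x if res.success else None)

# (x1 OR x2) AND (NOT x1 OR x2) AND (x1 OR NOT x2)
status, x = lp_relaxation([(1, 2), (-1, 2), (1, -2)], n_vars=2)
```

An infeasible relaxation proves the formula unsatisfiable, while a feasible fractional solution is inconclusive unless it happens to be integral; this asymmetry is one reason the easy-hard transitions seen by LP differ from those of branch and bound.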
Computational Complexity for Physicists
These lecture notes are an informal introduction to the theory of
computational complexity and its links to quantum computing and statistical
mechanics.
Comment: references updated, reprint available from
http://itp.nat.uni-magdeburg.de/~mertens/papers/complexity.shtm
Fast optimization algorithms and the cosmological constant
Denef and Douglas have observed that in certain landscape models the problem
of finding small values of the cosmological constant is a large instance of an
NP-hard problem. The number of elementary operations (quantum gates) needed to
solve this problem by brute force search exceeds the estimated computational
capacity of the observable universe. Here we describe a way out of this
puzzling circumstance: despite being NP-hard, the problem of finding a small
cosmological constant can be attacked by more sophisticated algorithms whose
performance vastly exceeds brute force search. In fact, in some parameter
regimes the average-case complexity is polynomial. We demonstrate this by
explicitly finding a small cosmological constant in a randomly
generated high-dimensional ADK landscape.
Comment: 19 pages, 5 figures
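The "sophisticated algorithms beat brute force" point can be illustrated on a toy subset-sum-style landscape; the model and function below are illustrative assumptions, not the paper's construction. A meet-in-the-middle search enumerates the two halves of the contribution list separately, cutting the cost from 2^n to roughly 2^(n/2):

```python
# Toy landscape sketch: vacuum energy Lambda(s) = sum_i s_i*a_i - Lambda0
# with s_i in {0, 1}; we want the choice of s minimizing |Lambda|.
import bisect, itertools, random

def closest_vacuum(a, lambda0):
    """Min |sum(subset of a) - lambda0| via meet-in-the-middle."""
    half = len(a) // 2
    left = [sum(c) for r in range(half + 1)
            for c in itertools.combinations(a[:half], r)]
    right = sorted(sum(c) for r in range(len(a) - half + 1)
                   for c in itertools.combinations(a[half:], r))
    best = float("inf")
    for s in left:
        i = bisect.bisect_left(right, lambda0 - s)
        for j in (i - 1, i):              # the two nearest right-half sums
            if 0 <= j < len(right):
                best = min(best, abs(s + right[j] - lambda0))
    return best

random.seed(0)
a = [random.uniform(0, 1) for _ in range(24)]   # 2^24 "vacua", 2*2^12 work
gap = closest_vacuum(a, lambda0=5.0)            # smallest |Lambda| found
```

The result is exact (every subset is covered by one left/right split), so the speed-up comes purely from reorganizing the search, which is the spirit of the abstract's claim.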
Statistical mechanics of the vertex-cover problem
We review recent progress in the study of the vertex-cover problem (VC). VC
belongs to the class of NP-complete graph theoretical problems, which plays a
central role in theoretical computer science. On ensembles of random graphs, VC
exhibits a coverable-uncoverable phase transition. Very close to this
transition, depending on the solution algorithm, easy-hard transitions in the
typical running time of the algorithms occur.
We explain a statistical mechanics approach, which works by mapping VC to a
hard-core lattice gas, and then applying techniques like the replica trick or
the cavity approach. Using these methods, the phase diagram of VC could be
obtained exactly for average connectivities c <= e ~ 2.718, where VC is replica symmetric.
Recently, this result could be confirmed using traditional mathematical
techniques. For larger connectivities c > e, the solution of VC exhibits full replica symmetry
breaking.
The statistical mechanics approach can also be used to study analytically the
typical running time of simple complete and incomplete algorithms for VC.
Finally, we describe recent results for VC when studied on other ensembles of
finite- and infinite-dimensional graphs.
Comment: review article, 26 pages, 9 figures, to appear in J. Phys. A: Math. Gen.
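The coverable-uncoverable question the review studies is easy to state operationally: given a random graph, is there a vertex cover using at most x*N vertices? A brute-force sketch at toy scale (generic code, assumed setup; not taken from the review) makes the exponential cost that motivates the statistical-mechanics treatment explicit:

```python
# Exact minimum vertex cover of a small Erdos-Renyi random graph by
# exhaustive search over vertex subsets -- feasible only at toy sizes.
import itertools, random

def min_vertex_cover_size(n, edges):
    """Smallest k such that some k-subset of vertices touches every edge."""
    for k in range(n + 1):
        for cover in itertools.combinations(range(n), k):
            s = set(cover)
            if all(u in s or v in s for u, v in edges):
                return k
    return n

random.seed(1)
n, c = 12, 2.0                            # N vertices, mean connectivity c
p = c / (n - 1)
edges = [(u, v) for u in range(n) for v in range(u + 1, n)
         if random.random() < p]
k_min = min_vertex_cover_size(n, edges)
coverable = k_min <= 0.5 * n              # is a fraction x = 0.5 enough?
```

At fixed cover fraction x, sweeping the connectivity c moves k_min/n across x, which is the finite-size shadow of the phase transition described above.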
Spin Glasses, Boolean Satisfiability, and Survey Propagation
In recent years statistical physics and computational complexity have found mutually interesting subjects of research. The theory of spin glasses from statistical physics has been successfully applied to the boolean satisfiability problem, which is the canonical topic of computational complexity.
The study of spin glasses originated from experimental studies of the magnetic properties of impure metallic alloys, but soon the study of the theoretical models outshone the interest in the experimental systems. The model studied in this thesis is that of Ising spins with random interactions. In this thesis we discuss two analytical derivations on spin glasses: the famous replica trick on the Sherrington-Kirkpatrick model and the cavity method on a Bethe lattice spin glass.
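The replica trick mentioned above rests on one standard identity; in its textbook form (not specific to this thesis) the disorder average of the free energy is obtained from integer moments of the partition function Z, analytically continued to n -> 0:

```latex
\overline{\ln Z} \;=\; \lim_{n \to 0} \frac{\overline{Z^{n}} - 1}{n}
```

One computes \(\overline{Z^{n}}\) for integer n, which couples n "replicas" of the system, and the structure of the saddle point in replica space (symmetric or symmetry-broken) encodes the spin-glass phase.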
Computational complexity theory is a branch of theoretical computer science that studies how the running time of algorithms scales with the size of the input. Two important classes of problems are P and NP, colloquially the easy and the hard problems. The first problem proven to be NP-complete was boolean satisfiability, i.e., the question of whether there is an assignment of variables for a given boolean formula so that the formula is satisfied. The boolean satisfiability problem can be tackled with spin glass theory; in particular, the cavity method can be applied to it.
Boolean satisfiability exhibits a phase transition. As one increases the ratio of constraints to variables, the probability of a random formula being satisfiable drops from unity to zero. This transition from satisfiable to unsatisfiable is smooth for small formulas; it grows sharper with increasing problem size and becomes a step in the limit of an infinite number of variables. The cavity method gives a value for the location of the phase transition that agrees with the numerical value.
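The transition is easy to see numerically, even at toy scale. The sketch below (assumed setup, not the thesis code) estimates the satisfiability probability of random 3-SAT by exhaustive search for two clause-to-variable ratios, one well below and one well above the known threshold near 4.27; at this small N the drop is still smooth:

```python
# Brute-force estimate of P(satisfiable) for random 3-SAT at two ratios.
import itertools, random

def random_3sat(n, m, rng):
    """m random 3-clauses over n variables; literals are +/-(index + 1)."""
    return [tuple(v + 1 if rng.random() < 0.5 else -(v + 1)
                  for v in rng.sample(range(n), 3)) for _ in range(m)]

def satisfiable(n, clauses):
    """Exhaustive check over all 2^n assignments -- toy sizes only."""
    for bits in itertools.product((False, True), repeat=n):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return True
    return False

rng = random.Random(42)
n, trials = 10, 25
p_sat = {}
for alpha in (2.0, 7.0):                  # deep in the SAT / UNSAT phases
    m = int(alpha * n)
    p_sat[alpha] = sum(satisfiable(n, random_3sat(n, m, rng))
                       for _ in range(trials)) / trials
```

Repeating this for a grid of ratios and increasing n shows the curve steepening with problem size, the finite-size behaviour the paragraph describes.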
The cavity method is an analytical tool for studying average values over a distribution, but it introduces so-called surveys that can also be calculated numerically for a single instance. These surveys inspire the survey propagation algorithm, which is implemented as a numerical program to efficiently solve large instances of random boolean satisfiability problems.
In this thesis I present a parallel version of survey propagation that achieves a speedup by a factor of 3 with 4 processors. With the improved version we are able to gain further knowledge of the detailed workings of survey propagation. It is found, firstly, that the number of iterations needed for one convergence of survey propagation depends on the number of variables, seemingly as ln(N). Secondly, it is found that the constraint-to-variable ratio for which survey propagation succeeds depends on the number of variables.
Simplest random K-satisfiability problem
We study a simple and exactly solvable model for the generation of random
satisfiability problems. These consist of random boolean constraints
which are to be satisfied simultaneously by logical variables. In
statistical-mechanics language, the considered model can be seen as a diluted
p-spin model at zero temperature. While such problems become extraordinarily
hard to solve by local search methods in a large region of the parameter space,
at least one solution may be superimposed by construction. The
statistical properties of the model can be studied exactly by the replica
method and each single instance can be analyzed in polynomial time by a simple
global solution method. The geometrical/topological structures responsible for
dynamic and static phase transitions as well as for the onset of computational
complexity in local search methods are thoroughly analyzed. Numerical analysis
on very large samples allows for a precise characterization of the critical
scaling behaviour.
Comment: 14 pages, 5 figures, to appear in Phys. Rev. E (Feb 2001). v2: minor
errors and references corrected
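The "superimposed solution by construction" idea is the standard planted-ensemble trick; a generic sketch (not necessarily this paper's exact model) draws random K-clauses but keeps only those satisfied by a hidden planted assignment, so the instance is satisfiable no matter how many clauses are added:

```python
# Generate a planted random K-SAT instance: satisfiable by construction.
import random

def planted_ksat(n, m, k, rng):
    """Return (planted assignment, m k-clauses all satisfied by it)."""
    planted = [rng.random() < 0.5 for _ in range(n)]
    clauses = []
    while len(clauses) < m:
        vars_ = rng.sample(range(n), k)
        clause = tuple(v + 1 if rng.random() < 0.5 else -(v + 1)
                       for v in vars_)
        # keep the clause only if the planted assignment satisfies it
        if any(planted[abs(l) - 1] == (l > 0) for l in clause):
            clauses.append(clause)
    return planted, clauses

rng = random.Random(7)
planted, clauses = planted_ksat(n=50, m=300, k=3, rng=rng)
```

Rejection changes the clause statistics relative to the purely random ensemble, which is exactly why planted instances can be hard for local search yet analyzable (and solvable) by global methods, as the abstract notes.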