Phase Transitions of the Typical Algorithmic Complexity of the Random Satisfiability Problem Studied with Linear Programming
Here we study the NP-complete K-SAT problem. Although the worst-case
complexity of NP-complete problems is conjectured to be exponential, there
exist parametrized random ensembles of problems where solutions can typically
be found in polynomial time for suitable ranges of the parameter. In fact,
random K-SAT, with the clause-to-variable ratio alpha as control parameter,
can be solved quickly for small enough values of alpha. It shows a phase
transition between a
satisfiable phase and an unsatisfiable phase. For branch and bound algorithms,
which operate in the space of feasible Boolean configurations, the empirically
hardest problems are located only close to this phase transition. Here we study
K-SAT (K=2,3,4) and the related optimization problem MAX-SAT by a linear
programming approach, which is widely used for practical problems and allows
for polynomial run time. In contrast to branch and bound it operates outside
the space of feasible configurations. On the other hand, finding a solution
within polynomial time is not guaranteed. We investigated several variants,
like including artificial objective functions, so-called cutting-plane
approaches,
and a mapping to the NP-complete vertex-cover problem. We observed several
easy-hard transitions, from where the problems are typically solvable (in
polynomial time) using the given algorithms, respectively, to where they are
not solvable in polynomial time. For the related vertex-cover problem on random
graphs these easy-hard transitions can be identified with structural properties
of the graphs, like percolation transitions. For the present random K-SAT
problem we have investigated numerous structural properties also exhibiting
clear transitions, but they appear not to be correlated to the easy-hard
transitions observed here. This renders the behaviour of random K-SAT more
complex than, e.g., the vertex-cover problem.

Comment: 11 pages, 5 figures
Combinatorial simplex algorithms can solve mean payoff games
A combinatorial simplex algorithm is an instance of the simplex method in
which the pivoting depends on combinatorial data only. We show that any
algorithm of this kind admits a tropical analogue which can be used to solve
mean payoff games. Moreover, any combinatorial simplex algorithm with a
strongly polynomial complexity (the existence of such an algorithm is open)
would provide in this way a strongly polynomial algorithm solving mean payoff
games. Mean payoff games are known to be in NP and co-NP; whether they can be
solved in polynomial time is an open problem. Our algorithm relies on a
tropical implementation of the simplex method over a real closed field of Hahn
series. One of the key ingredients is a new scheme for symbolic perturbation
which allows us to lift an arbitrary mean payoff game instance into a
non-degenerate linear program over Hahn series.

Comment: v1: 15 pages, 3 figures; v2: improved presentation, introduction
expanded, 18 pages, 3 figures
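The tropical simplex construction itself is beyond a short sketch, but for context on the underlying problem, here is a minimal sketch of the classical pseudopolynomial value iteration of Zwick and Paterson for mean payoff games; it is not the authors' algorithm. The graph encoding and iteration count are illustrative.

```python
# Minimal sketch (context, not the paper's method): Zwick-Paterson value
# iteration for a mean payoff game. v_k(u)/k converges to the game value;
# exact values need on the order of n^3 * W iterations.
def mean_payoff_values(nodes_max, nodes_min, edges, iters=1000):
    """edges: dict node -> list of (successor, weight).
    Every node must have at least one outgoing edge."""
    v = {u: 0.0 for u in nodes_max | nodes_min}
    for _ in range(iters):
        nv = {}
        for u, succ in edges.items():
            # Max-player nodes maximize, Min-player nodes minimize.
            nv[u] = (max if u in nodes_max else min)(w + v[t] for t, w in succ)
        v = nv
    return {u: v[u] / iters for u in v}   # approximate per-step mean payoff

# Tiny example: Max owns node 'a', Min owns node 'b'; the optimal cycles
# a->a (mean 1) and a->b->a (mean (3-1)/2 = 1) both give value 1.
vals = mean_payoff_values({'a'}, {'b'},
                          {'a': [('b', 3), ('a', 1)], 'b': [('a', -1)]},
                          iters=2000)
print(vals)   # approximately {'a': 1.0, 'b': 1.0}
```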
Smoothed Analysis of the Minimum-Mean Cycle Canceling Algorithm and the Network Simplex Algorithm
The minimum-cost flow (MCF) problem is a fundamental optimization problem
with many applications and seems to be well understood. Over the last half
century many algorithms have been developed to solve the MCF problem and these
algorithms have varying worst-case bounds on their running time. However, these
worst-case bounds are not always a good indication of the algorithms'
performance in practice. The Network Simplex (NS) algorithm needs an
exponential number of iterations for some instances, but it is considered the
best algorithm in practice and performs best in experimental studies. On the
other hand, the Minimum-Mean Cycle Canceling (MMCC) algorithm is strongly
polynomial, but performs badly in experimental studies.
To explain these differences in performance in practice we apply the
framework of smoothed analysis. We show an upper bound of
O(mn^2 log(n) log(phi)) for the number of iterations of the MMCC algorithm.
Here n is the number of nodes, m is the number of edges, and phi is a
parameter limiting the degree to which the edge costs are perturbed. We also
show a lower bound of Omega(m log(phi)) for the number of iterations of the
MMCC algorithm, which can be strengthened to Omega(mn) when phi = Theta(n^2).
For the number of iterations of the NS algorithm we show a smoothed lower
bound of Omega(m * min(n, phi) * phi).

Comment: Extended abstract to appear in the proceedings of COCOON 2015
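Each MMCC iteration cancels a minimum-mean cycle in the residual network, and the classical subroutine for finding that cycle is Karp's dynamic program. Below is a minimal sketch of that subroutine only, assuming an edge-list graph encoding; a full minimum-cost flow implementation would additionally maintain residual capacities and push flow around the returned cycle.

```python
# Minimal sketch of Karp's minimum-mean cycle subroutine, the core step of
# each MMCC iteration. Graph encoding is illustrative.
import math

def min_mean_cycle(n, edges):
    """edges: list of (u, v, cost). Returns the minimum cycle mean, or None."""
    INF = math.inf
    # d[k][v] = minimum cost of a walk with exactly k edges ending at v.
    # Setting d[0][v] = 0 for all v models a virtual source with
    # zero-cost edges to every node, so reachability is not an issue.
    d = [[INF] * n for _ in range(n + 1)]
    d[0] = [0.0] * n
    for k in range(1, n + 1):
        for u, v, c in edges:
            if d[k - 1][u] + c < d[k][v]:
                d[k][v] = d[k - 1][u] + c
    # Karp's formula: min over v of max over k of (d_n(v) - d_k(v)) / (n - k).
    best = INF
    for v in range(n):
        if d[n][v] < INF:
            best = min(best, max((d[n][v] - d[k][v]) / (n - k)
                                 for k in range(n) if d[k][v] < INF))
    return None if best == INF else best

# Triangle with mean (1+2+3)/3 = 2 and a 2-cycle with mean (1+2)/2 = 1.5;
# the 2-cycle has the smaller mean.
print(min_mean_cycle(3, [(0, 1, 1), (1, 2, 2), (2, 0, 3), (1, 0, 2)]))  # 1.5
```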
On the complexity of range searching among curves
Modern tracking technology has made the collection of large numbers of
densely sampled trajectories of moving objects widely available. We consider a
fundamental problem encountered when analysing such data: Given n polygonal
curves S in R^d, preprocess S into a data structure that answers
queries with a query curve q and radius rho for the curves of S that
have Fréchet distance at most rho to q.
We initiate a comprehensive analysis of the space/query-time trade-off for
this data structuring problem. Our lower bounds imply that any data structure
in the pointer model that achieves Q(n) + O(k) query time, where k is
the output size, has to use roughly Omega((n/Q(n))^2) space in
the worst case, even if queries are mere points (for the discrete Fréchet
distance) or line segments (for the continuous Fréchet distance). More
importantly, we show that more complex queries and input curves lead to
additional logarithmic factors in the lower bound. Roughly speaking, the number
of logarithmic factors added is linear in the number of edges added to the
query and input curve complexity. This means that the space/query time
trade-off worsens by an exponential factor of input and query complexity. This
behaviour addresses an open question in the range searching literature: whether
it is possible to avoid the additional logarithmic factors in the space and
query time of a multilevel partition tree. We answer this question negatively.
On the positive side, we show we can build data structures for the Fréchet
distance by using semialgebraic range searching. Our solution for the discrete
Fréchet distance is in line with the lower bound, as the number of levels in
the data structure is O(t), where t denotes the maximal number of vertices
of a curve. For the continuous Fréchet distance, the number of levels
increases to O(t^2).
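For concreteness, here is a minimal sketch of the discrete Fréchet distance (the standard quadratic dynamic program) together with the naive linear-scan range query that the data structures above are designed to beat. Function names and the point format are illustrative.

```python
# Minimal sketch: discrete Frechet distance via the standard DP, plus a
# naive O(n * t^2) linear-scan range query as the baseline.
from functools import lru_cache
import math

def discrete_frechet(P, Q):
    """P, Q: lists of (x, y) points. Returns the discrete Frechet distance."""
    @lru_cache(maxsize=None)
    def dp(i, j):
        # Cheapest max-distance coupling of prefixes P[:i+1], Q[:j+1].
        d = math.dist(P[i], Q[j])
        if i == 0 and j == 0:
            return d
        if i == 0:
            return max(dp(0, j - 1), d)
        if j == 0:
            return max(dp(i - 1, 0), d)
        return max(min(dp(i - 1, j), dp(i, j - 1), dp(i - 1, j - 1)), d)
    result = dp(len(P) - 1, len(Q) - 1)
    dp.cache_clear()
    return result

def range_query(curves, q, rho):
    """Report all curves within discrete Frechet distance rho of q."""
    return [C for C in curves if discrete_frechet(C, q) <= rho]

curves = [[(0, 0), (1, 0), (2, 0)], [(0, 2), (1, 2), (2, 2)]]
print(range_query(curves, [(0, 0.5), (1, 0.5), (2, 0.5)], rho=1.0))
# Reports only the first curve; the second is at distance 1.5.
```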
Computing Least Fixed Points of Probabilistic Systems of Polynomials
We study systems of equations of the form X1 = f1(X1, ..., Xn), ..., Xn =
fn(X1, ..., Xn), where each fi is a polynomial with nonnegative coefficients
that add up to 1. The least nonnegative solution, say mu, of such equation
systems is central to problems from various areas, like physics, biology,
computational linguistics and probabilistic program verification. We give a
simple and strongly polynomial algorithm to decide whether mu=(1, ..., 1)
holds. Furthermore, we present an algorithm that computes reliable sequences of
lower and upper bounds on mu, converging linearly to mu. Our algorithm has
these features despite using inexact arithmetic for efficiency. We report on
experiments that show the performance of our algorithms.

Comment: Published in the Proceedings of the 27th International Symposium on
Theoretical Aspects of Computer Science (STACS). Technical Report is also
available via arxiv.org
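As context (not the paper's Newton-based method), here is a minimal sketch of plain Kleene iteration, x_{k+1} = f(x_k) starting from 0, which converges to the least solution mu from below. The two-equation example system is illustrative; its least fixed point lies strictly below (1, 1).

```python
# Minimal sketch: Kleene iteration converging to the least nonnegative
# fixed point mu of X = f(X) from below. Example system is illustrative.
def kleene(f, n, iters=200):
    x = [0.0] * n
    for _ in range(iters):
        x = f(x)          # monotone, so iterates increase toward mu
    return x

def f(x):
    # X1 = 0.6*X1*X2 + 0.4,  X2 = 0.7*X1**2 + 0.3
    # (nonnegative coefficients summing to 1 in each equation)
    return [0.6 * x[0] * x[1] + 0.4, 0.7 * x[0] ** 2 + 0.3]

print(kleene(f, 2))   # converges toward mu ~ (0.597, 0.549), so mu != (1, 1)
```

Kleene iteration converges only linearly (and can be very slow near critical systems), which is why the paper's reliable lower and upper bound sequences are the harder part.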