
    Phase Transitions of the Typical Algorithmic Complexity of the Random Satisfiability Problem Studied with Linear Programming

    Here we study the NP-complete $K$-SAT problem. Although the worst-case complexity of NP-complete problems is conjectured to be exponential, there exist parametrized random ensembles of problems where solutions can typically be found in polynomial time for suitable ranges of the parameter. In fact, random $K$-SAT, with $\alpha = M/N$ as control parameter, can be solved quickly for small enough values of $\alpha$. It exhibits a phase transition between a satisfiable and an unsatisfiable phase. For branch-and-bound algorithms, which operate in the space of feasible Boolean configurations, the empirically hardest problems are located close to this phase transition. Here we study $K$-SAT ($K = 3, 4$) and the related optimization problem MAX-SAT by a linear programming approach, which is widely used for practical problems and allows for polynomial run time. In contrast to branch and bound, it operates outside the space of feasible configurations; on the other hand, finding a solution within polynomial time is not guaranteed. We investigated several variants, such as including artificial objective functions, so-called cutting-plane approaches, and a mapping to the NP-complete vertex-cover problem. We observed several easy-hard transitions, from regions where the problems are typically solvable in polynomial time by the given algorithms to regions where they are not. For the related vertex-cover problem on random graphs, these easy-hard transitions can be identified with structural properties of the graphs, such as percolation transitions. For the present random $K$-SAT problem we have investigated numerous structural properties that also exhibit clear transitions, but they appear not to be correlated with the easy-hard transitions observed here. This renders the behaviour of random $K$-SAT more complex than that of, e.g., the vertex-cover problem. Comment: 11 pages, 5 figures
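    To make the linear programming approach concrete, here is a minimal sketch of the standard LP relaxation of $K$-SAT, assuming Python with SciPy; the clause encoding and the helper name `sat_lp_relaxation` are illustrative choices, not the authors' implementation. Each Boolean variable is relaxed to $x_i \in [0,1]$, and a clause such as $(x_1 \vee \bar{x}_2 \vee x_3)$ becomes the linear constraint $x_1 + (1 - x_2) + x_3 \ge 1$.

```python
import numpy as np
from scipy.optimize import linprog

def sat_lp_relaxation(clauses, n_vars):
    """LP relaxation of a CNF formula (hypothetical helper).

    Each clause is a list of signed integers, e.g. [1, -2, 3] encodes
    (x1 OR NOT x2 OR x3). The clause constraint
        sum_pos x_i + sum_neg (1 - x_i) >= 1
    is rewritten as -sum_pos x_i + sum_neg x_i <= |neg| - 1 for linprog.
    """
    A, b = [], []
    for clause in clauses:
        row = np.zeros(n_vars)
        rhs = -1.0
        for lit in clause:
            i = abs(lit) - 1
            if lit > 0:
                row[i] -= 1.0   # positive literal contributes -x_i
            else:
                row[i] += 1.0   # negated literal contributes +x_i ...
                rhs += 1.0      # ... and shifts the right-hand side
        A.append(row)
        b.append(rhs)
    # Pure feasibility problem: zero objective, box constraints [0, 1].
    return linprog(c=np.zeros(n_vars), A_ub=np.array(A), b_ub=np.array(b),
                   bounds=[(0.0, 1.0)] * n_vars, method="highs")

# Example: (x1 v x2 v ~x3) AND (~x1 v x2 v x3)
res = sat_lp_relaxation([[1, 2, -3], [-1, 2, 3]], n_vars=3)
integral = np.all(np.minimum(res.x, 1 - res.x) < 1e-9)
print("LP feasible:", res.status == 0, "| solution integral:", integral)
```

    Note that for $K \ge 2$ this plain relaxation is always feasible (setting every variable to $1/2$ satisfies all clause constraints), which is why the strengthened variants investigated in the paper, such as cutting planes and artificial objective functions, are needed before the LP carries information about satisfiability.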

    Combinatorial simplex algorithms can solve mean payoff games

    A combinatorial simplex algorithm is an instance of the simplex method in which the pivoting depends on combinatorial data only. We show that any algorithm of this kind admits a tropical analogue which can be used to solve mean payoff games. Moreover, any combinatorial simplex algorithm with a strongly polynomial complexity (the existence of such an algorithm is open) would in this way provide a strongly polynomial algorithm solving mean payoff games. Mean payoff games are known to be in NP and co-NP; whether they can be solved in polynomial time is an open problem. Our algorithm relies on a tropical implementation of the simplex method over a real closed field of Hahn series. One of the key ingredients is a new scheme for symbolic perturbation which allows us to lift an arbitrary mean payoff game instance into a non-degenerate linear program over Hahn series. Comment: v1: 15 pages, 3 figures; v2: improved presentation, introduction expanded, 18 pages, 3 figures
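    For context, the sketch below shows the textbook pseudo-polynomial baseline for mean payoff games, value iteration in the style of Zwick and Paterson; it is not the paper's tropical simplex method, and the graph encoding is an assumption made for the example. Each node belongs to the maximizer or the minimizer, and $v_k(u)/k$ converges to the value of the game started at $u$.

```python
def mpg_values(nodes_max, edges, iterations):
    """Value iteration for a mean payoff game (textbook baseline).

    edges[u] is a non-empty list of (target, weight) pairs; nodes in
    nodes_max belong to the maximizer, all others to the minimizer.
    Returns v_k(u)/k, which approximates the game value at each node.
    """
    v = {u: 0.0 for u in edges}
    for _ in range(iterations):
        v = {u: (max if u in nodes_max else min)(w + v[t] for t, w in outs)
             for u, outs in edges.items()}
    return {u: val / iterations for u, val in v.items()}

# Two-node example: the only play is the cycle 0 -> 1 -> 0 with
# weights 1 and -3, so both values tend to the cycle mean (1 - 3)/2 = -1.
print(mpg_values(nodes_max={0}, edges={0: [(1, 1)], 1: [(0, -3)]},
                 iterations=1000))
```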

    Network Flows

    Abstract not available.

    Smoothed Analysis of the Minimum-Mean Cycle Canceling Algorithm and the Network Simplex Algorithm

    The minimum-cost flow (MCF) problem is a fundamental optimization problem with many applications and seems to be well understood. Over the last half century many algorithms have been developed to solve the MCF problem, with varying worst-case bounds on their running time. However, these worst-case bounds are not always a good indication of the algorithms' performance in practice. The Network Simplex (NS) algorithm needs an exponential number of iterations for some instances, but it is considered the best algorithm in practice and performs best in experimental studies. On the other hand, the Minimum-Mean Cycle Canceling (MMCC) algorithm is strongly polynomial, but performs badly in experimental studies. To explain these differences in practical performance we apply the framework of smoothed analysis. We show an upper bound of $O(mn^2 \log(n) \log(\phi))$ for the number of iterations of the MMCC algorithm. Here $n$ is the number of nodes, $m$ is the number of edges, and $\phi$ is a parameter limiting the degree to which the edge costs are perturbed. We also show a lower bound of $\Omega(m \log(\phi))$ for the number of iterations of the MMCC algorithm, which can be strengthened to $\Omega(mn)$ when $\phi = \Theta(n^2)$. For the number of iterations of the NS algorithm we show a smoothed lower bound of $\Omega(m \cdot \min\{n, \phi\} \cdot \phi)$. Comment: Extended abstract to appear in the proceedings of COCOON 2015
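    The inner step of the MMCC algorithm is finding a cycle of minimum mean cost in the residual network. A sketch of the classic Karp $O(nm)$ algorithm for that subproblem follows (a textbook method shown for illustration, not the authors' code; the edge-list encoding is an assumption of the example).

```python
import math

def min_mean_cycle(n, edges):
    """Karp's algorithm: minimum mean cost of a cycle, or inf if acyclic.

    edges is a list of (u, v, cost) triples on nodes 0..n-1. d[k][v] is
    the minimum cost of a walk with exactly k edges ending at v, started
    from a virtual source connected to every node at cost 0.
    """
    INF = math.inf
    d = [[INF] * n for _ in range(n + 1)]
    d[0] = [0.0] * n
    for k in range(1, n + 1):
        for u, v, c in edges:
            if d[k - 1][u] + c < d[k][v]:
                d[k][v] = d[k - 1][u] + c
    # Karp's formula: min over v of max over k of (d_n(v) - d_k(v)) / (n - k).
    best = INF
    for v in range(n):
        if d[n][v] < INF:
            best = min(best, max((d[n][v] - d[k][v]) / (n - k)
                                 for k in range(n) if d[k][v] < INF))
    return best

# Triangle 0 -> 1 -> 2 -> 0 with costs 2, 2, -1: mean (2 + 2 - 1)/3 = 1.
print(min_mean_cycle(3, [(0, 1, 2), (1, 2, 2), (2, 0, -1)]))  # -> 1.0
```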

    On the complexity of range searching among curves

    Modern tracking technology has made the collection of large numbers of densely sampled trajectories of moving objects widely available. We consider a fundamental problem encountered when analysing such data: Given $n$ polygonal curves $S$ in $\mathbb{R}^d$, preprocess $S$ into a data structure that answers queries with a query curve $q$ and radius $\rho$ for the curves of $S$ that have Fréchet distance at most $\rho$ to $q$. We initiate a comprehensive analysis of the space/query-time trade-off for this data structuring problem. Our lower bounds imply that any data structure in the pointer model that achieves $Q(n) + O(k)$ query time, where $k$ is the output size, has to use roughly $\Omega((n/Q(n))^2)$ space in the worst case, even if queries are mere points (for the discrete Fréchet distance) or line segments (for the continuous Fréchet distance). More importantly, we show that more complex queries and input curves lead to additional logarithmic factors in the lower bound. Roughly speaking, the number of logarithmic factors added is linear in the number of edges added to the query and input curve complexity. This means that the space/query-time trade-off worsens by an exponential factor of input and query complexity. This behaviour addresses an open question in the range searching literature: whether it is possible to avoid the additional logarithmic factors in the space and query time of a multilevel partition tree. We answer this question negatively. On the positive side, we show that we can build data structures for the Fréchet distance by using semialgebraic range searching. Our solution for the discrete Fréchet distance is in line with the lower bound, as the number of levels in the data structure is $O(t)$, where $t$ denotes the maximal number of vertices of a curve. For the continuous Fréchet distance, the number of levels increases to $O(t^2)$.
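    To make the query predicate concrete, the discrete Fréchet distance between two vertex sequences can be computed with the standard $O(|P| \cdot |Q|)$ dynamic program sketched below (the paper's data structures answer many such radius queries faster after preprocessing; the brute-force loop at the end only illustrates the query semantics, and all names are illustrative).

```python
from math import dist  # Euclidean distance, Python 3.8+

def discrete_frechet(P, Q):
    """Discrete Fréchet distance between curves given as vertex lists."""
    p, q = len(P), len(Q)
    # c[i][j] = discrete Frechet distance between P[:i+1] and Q[:j+1].
    c = [[0.0] * q for _ in range(p)]
    for i in range(p):
        for j in range(q):
            d = dist(P[i], Q[j])
            if i == 0 and j == 0:
                c[i][j] = d
            elif i == 0:
                c[i][j] = max(c[i][j - 1], d)
            elif j == 0:
                c[i][j] = max(c[i - 1][j], d)
            else:
                # One curve (or both) advances by one vertex.
                c[i][j] = max(min(c[i - 1][j], c[i][j - 1],
                                  c[i - 1][j - 1]), d)
    return c[p - 1][q - 1]

# Brute-force radius query: report curves of S within distance rho of q.
S = [[(0, 0), (1, 0), (2, 0)], [(0, 2), (1, 3), (2, 2)]]
q_curve, rho = [(0, 0.5), (1, 0.5), (2, 0.5)], 1.0
print([i for i, curve in enumerate(S)
       if discrete_frechet(q_curve, curve) <= rho])  # -> [0]
```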

    Computing Least Fixed Points of Probabilistic Systems of Polynomials

    We study systems of equations of the form $X_1 = f_1(X_1, \dots, X_n)$, ..., $X_n = f_n(X_1, \dots, X_n)$, where each $f_i$ is a polynomial with nonnegative coefficients that add up to 1. The least nonnegative solution, say $\mu$, of such equation systems is central to problems from various areas, like physics, biology, computational linguistics and probabilistic program verification. We give a simple and strongly polynomial algorithm to decide whether $\mu = (1, \dots, 1)$ holds. Furthermore, we present an algorithm that computes reliable sequences of lower and upper bounds on $\mu$, converging linearly to $\mu$. Our algorithm has these features despite using inexact arithmetic for efficiency. We report on experiments that show the performance of our algorithms. Comment: Published in the Proceedings of the 27th International Symposium on Theoretical Aspects of Computer Science (STACS). Technical report also available via arxiv.org
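    The lower-bound side of such computations can be illustrated by classic Kleene iteration, $x_{k+1} = f(x_k)$ starting from $x_0 = 0$, which converges monotonically from below to the least fixed point $\mu$; the paper's algorithm additionally produces certified upper bounds and converges linearly, so the sketch below (with illustrative names and an example system chosen here) is only the baseline.

```python
def kleene_lfp(f, n, iterations=10_000):
    """Kleene iteration x_{k+1} = f(x_k) from x_0 = 0.

    For X = f(X) with f a vector of polynomials with nonnegative
    coefficients, the iterates increase monotonically to the least
    nonnegative fixed point mu (a plain lower-bound iteration).
    """
    x = [0.0] * n
    for _ in range(iterations):
        x = f(x)
    return x

# X1 = 0.6*X1^2 + 0.4: extinction probability of a branching process
# with two children w.p. 0.6 and none w.p. 0.4. The fixed points are
# 2/3 and 1; the least one, mu = 2/3 < 1, is what the iteration finds.
f = lambda x: [0.6 * x[0] ** 2 + 0.4]
print(kleene_lfp(f, n=1))  # -> approximately [0.6667]
```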