
    On the limits of the communication complexity technique for proving lower bounds on the size of minimal NFA’s

    Abstract: In contrast to the minimization of deterministic finite automata (DFAs), the task of constructing a minimal nondeterministic finite automaton (NFA) for a given NFA is PSPACE-complete. Moreover, there is no polynomial-time approximation algorithm with a constant approximation ratio for estimating the number of states of a minimal NFA. Since the size of a minimal NFA cannot be estimated efficiently, one should at least ask for mathematical proof methods that yield good lower bounds on the size of a minimal NFA for a given regular language. Here we consider the robust and most successful lower bound technique, which is based on communication complexity. In this paper it is proved that even a strong generalization of this method fails for some concrete regular languages. "To fail" is meant here in a very strong sense: there is an exponential gap between the size of a minimal NFA and the achievable lower bound for a specific sequence of regular languages. The generalization of the concept of communication protocols is also strong. It is shown that cutting the input word into 2^{O(n^{1/4})} pieces, for n the size of a minimal nondeterministic finite automaton, and investigating the necessary communication between these pieces as parties of a multiparty protocol does not suffice to obtain good lower bounds on the size of minimal nondeterministic automata. It seems that for some regular languages one cannot really abstract away from the automaton model, which cuts the input word into individual alphabet symbols and reads them one by one with its input head.
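    For reference, the classical two-party instance of this technique (standard background, not one of the paper's new results) is the fooling-set bound: a fooling set for a regular language L is a set of word pairs F = {(x_1, y_1), ..., (x_k, y_k)} with x_i y_i in L for every i, such that for all i ≠ j at least one of x_i y_j, x_j y_i lies outside L. In LaTeX notation the bound reads

        \text{if } F \text{ is a fooling set for } L, \text{ then } \mathrm{nsize}(L) \;\ge\; |F|,

    where nsize(L) denotes the number of states of a minimal NFA for L. The paper shows that even multiparty generalizations of such communication-based arguments can remain exponentially below nsize(L) for suitable languages.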

    Quantum Branching Programs and Space-Bounded Nonuniform Quantum Complexity

    In this paper, the space complexity of nonuniform quantum computations is investigated. The model chosen for this purpose is the quantum branching program, which provides a graphic description of sequential quantum algorithms. In the first part of the paper, simulations between quantum branching programs and nonuniform quantum Turing machines are presented which allow lower and upper bound results to be transferred between the two models. In the second part of the paper, different variants of quantum OBDDs are compared with their deterministic and randomized counterparts. In the third part, quantum branching programs are considered where the performed unitary operation may depend on the result of a previous measurement. For this model a simulation of randomized OBDDs and exponential lower bounds are presented.
    Comment: 45 pages, 3 PostScript figures. Proofs rearranged, typos corrected.
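    To make the underlying classical model concrete, here is a minimal sketch (illustrative only, not code from the paper; the function and node names are invented) of an ordered binary decision diagram, i.e., a deterministic branching program that tests the variables in a fixed order. The quantum variants studied in the paper replace such deterministic transitions with unitary operations on a quantum state, possibly interleaved with measurements.

        # Minimal OBDD sketch: each non-terminal node tests one Boolean variable
        # and branches to a low (0) or high (1) successor; terminals carry the
        # output. Hypothetical example: f(x0, x1) = x0 AND x1, order x0 < x1.
        nodes = {
            "n0": ("x0", "reject", "n1"),   # (tested variable, low child, high child)
            "n1": ("x1", "reject", "accept"),
        }

        def eval_obdd(nodes, root, assignment):
            """Follow the unique path determined by the assignment and return
            True iff it ends in the accepting terminal."""
            node = root
            while node not in ("accept", "reject"):
                var, low, high = nodes[node]
                node = high if assignment[var] else low
            return node == "accept"

        print(eval_obdd(nodes, "n0", {"x0": 1, "x1": 1}))  # True
        print(eval_obdd(nodes, "n0", {"x0": 1, "x1": 0}))  # False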

    On the connection of probabilistic model checking, planning, and learning for system verification

    This thesis presents approaches that use techniques from the model checking, planning, and learning communities to make systems more reliable and more perspicuous. First, two heuristic search and dynamic programming algorithms are adapted so that they can check extremal reachability probabilities, expected accumulated rewards, and their bounded versions on general Markov decision processes (MDPs). Thereby, the problem space originally solvable by these algorithms is enlarged considerably. Correctness and optimality proofs for the adapted algorithms are given, and a comprehensive case study on established benchmarks shows that the implementation, called Modysh, is competitive with state-of-the-art model checkers and even outperforms them on very large state spaces. Second, Deep Statistical Model Checking (DSMC) is introduced, usable for quality assessment and learning-pipeline analysis of systems that incorporate trained decision-making agents, such as neural networks (NNs). The idea of DSMC is to use statistical model checking to assess NNs that resolve nondeterminism in systems modeled as MDPs. The versatility of DSMC is exemplified in a number of case studies on Racetrack, an MDP benchmark designed for this purpose that flexibly models the autonomous driving challenge. A comprehensive scalability study demonstrates that DSMC is a lightweight technique tackling the complexity of NN analysis in combination with the state space explosion problem.
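    For orientation, a generic sketch (not the Modysh implementation; the states, actions, and probabilities below are invented for the example) of how extremal reachability probabilities are commonly computed on an MDP: the maximal probability of eventually reaching a goal set satisfies a Bellman fixed point that maximizes over the nondeterministic actions, and it can be approximated by value iteration.

        # Value iteration for maximal reachability probabilities on a small MDP.
        # mdp[state][action] is a list of (probability, successor) pairs.
        mdp = {
            "s0": {"a": [(0.5, "goal"), (0.5, "s1")], "b": [(1.0, "s1")]},
            "s1": {"a": [(0.3, "goal"), (0.7, "sink")]},
            "goal": {},   # absorbing target state
            "sink": {},   # absorbing non-target state
        }

        def max_reach_prob(mdp, goal_states, eps=1e-8):
            """Iterate V(s) = max_a sum_{s'} P(s'|s,a) * V(s') to a fixed point,
            with V fixed to 1 on goal states and started at 0 elsewhere."""
            value = {s: (1.0 if s in goal_states else 0.0) for s in mdp}
            while True:
                delta = 0.0
                for s, actions in mdp.items():
                    if s in goal_states or not actions:
                        continue
                    best = max(sum(p * value[t] for p, t in dist)
                               for dist in actions.values())
                    delta = max(delta, abs(best - value[s]))
                    value[s] = best
                if delta < eps:
                    return value

        print(max_reach_prob(mdp, {"goal"}))  # e.g. Pmax from s0 is 0.65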

    On the power of nondeterminism and Las Vegas randomization for two-dimensional finite automata

    Abstract: The goal of this work is to investigate the computational power of nondeterminism and Las Vegas randomization for two-dimensional finite automata. The following three results are the main contribution of this paper: (i) Las Vegas (three-way) two-dimensional finite automata are more powerful than (three-way) two-dimensional deterministic ones. (ii) Three-way two-dimensional nondeterministic finite automata are more powerful than three-way two-dimensional Las Vegas finite automata. (iii) There is a strong hierarchy based on the number of computations (as a measure of the degree of nondeterminism) for three-way two-dimensional finite automata. These results contrast with the situation for one-way and two-way finite automata, where all these computation modes have the same acceptance power and differences may occur only in the sizes of the automata. Results (i) and (ii) provide the first such simultaneous acceptance separation between nondeterminism, Las Vegas, and determinism for a computing model.
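    As background on the acceptance mode (standard terminology, not a result of the paper): a Las Vegas automaton is a randomized automaton that never errs; besides accepting or rejecting it may give the neutral answer "I don't know", but the correct answer must be produced with probability at least 1/2. In the commonly used convention this reads, in LaTeX notation,

        w \in L \;\Rightarrow\; \Pr[\text{accept } w] \ge \tfrac{1}{2} \ \text{and}\ \Pr[\text{reject } w] = 0, \qquad
        w \notin L \;\Rightarrow\; \Pr[\text{reject } w] \ge \tfrac{1}{2} \ \text{and}\ \Pr[\text{accept } w] = 0.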

    Robust network computation

    Thesis (M.Eng.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005. By David Pritchard. Includes bibliographical references (p. 91-98).
    In this thesis, we present various models of distributed computation and algorithms for these models. The underlying theme is to come up with fast algorithms that can tolerate faults in the underlying network. We begin with the classical message-passing model of computation, surveying many known results. We give a new, universally optimal edge-biconnectivity algorithm for the classical model. We also give a near-optimal sublinear algorithm for identifying bridges, when all nodes are activated simultaneously. After discussing some ways in which the classical model is unrealistic, we survey known techniques for adapting the classical model to the real world. We describe a new balancing model of computation. The intent is that algorithms in this model should be automatically fault-tolerant. Existing algorithms that can be expressed in this model are discussed, including ones for clustering, maximum flow, and synchronization. We discuss the use of agents in our model, and give new agent-based algorithms for census and biconnectivity. Inspired by the balancing model, we look at two problems in more depth. First, we give matching upper and lower bounds on the time complexity of the census algorithm, and we show how the census algorithm can be used to name nodes uniquely in a faulty network. Second, we consider using discrete harmonic functions as a computational tool. These functions are a natural exemplar of the balancing model. We prove new results concerning the stability and convergence of discrete harmonic functions, and describe a method, which we call Eulerization, for speeding up convergence.
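    As a generic illustration of the balancing idea (a sketch of standard material, not the thesis's own algorithms; the graph and boundary values below are invented), a discrete harmonic function on a graph takes prescribed values on a set of boundary nodes and equals the average of its neighbors at every other node; it can be computed by repeated local averaging, the kind of local, self-correcting update that makes such computations naturally tolerant of perturbations.

        # Iterative averaging (Jacobi relaxation) for a discrete harmonic function
        # on an undirected graph: boundary nodes keep fixed values, every interior
        # node is repeatedly replaced by the average of its neighbors.
        graph = {                        # adjacency lists
            "a": ["b", "c"],
            "b": ["a", "c", "d"],
            "c": ["a", "b", "d"],
            "d": ["b", "c"],
        }
        boundary = {"a": 0.0, "d": 1.0}  # fixed boundary values

        def harmonic(graph, boundary, eps=1e-10):
            h = {v: boundary.get(v, 0.0) for v in graph}
            while True:
                new, delta = {}, 0.0
                for v, nbrs in graph.items():
                    if v in boundary:
                        new[v] = boundary[v]          # boundary stays fixed
                    else:
                        new[v] = sum(h[u] for u in nbrs) / len(nbrs)
                        delta = max(delta, abs(new[v] - h[v]))
                h = new
                if delta < eps:
                    return h

        print(harmonic(graph, boundary))  # interior values interpolate between 0 and 1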

    LIPIcs, Volume 261, ICALP 2023, Complete Volume

    LIPIcs, Volume 261, ICALP 2023, Complete Volume