
    An Algorithm for Probabilistic Alternating Simulation

    In probabilistic game structures, probabilistic alternating simulation (PA-simulation) relations preserve formulas defined in probabilistic alternating-time temporal logic with respect to the behaviour of a subset of players. We propose a partition-based algorithm for computing the largest PA-simulation; to our knowledge it is the first such algorithm that works in polynomial time, and it is obtained by extending the generalised coarsest partition problem (GCPP) to a game-based setting with mixed strategies. The algorithm has higher complexity than existing algorithms for non-probabilistic simulation and for probabilistic simulation without mixed actions, but it slightly improves the existing result for computing probabilistic simulation with respect to mixed actions. Comment: We have fixed a problem in the SOFSEM'12 conference version.
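
    As a rough illustration of the partition-based approach (not the paper's PA-simulation algorithm, which additionally handles probability distributions and mixed strategies), the following Python sketch refines an initial partition of states against one-step successor sets until it stabilises; all names and the example system are hypothetical.

    # Toy partition refinement on a finite, non-probabilistic transition system,
    # illustrating the general shape of coarsest-partition algorithms; the
    # paper's PA-simulation algorithm additionally handles probabilities and
    # mixed strategies, which are omitted here.
    def coarsest_refinement(initial_blocks, succ):
        """initial_blocks: list of sets of states (e.g. grouped by labels);
        succ: dict mapping each state to a set of successor states.
        Splits blocks against every block used as a splitter until the
        partition is stable under the one-step successor relation."""
        blocks = [set(b) for b in initial_blocks]
        states = set().union(*blocks)
        changed = True
        while changed:
            changed = False
            for splitter in [set(b) for b in blocks]:
                # States with at least one successor inside the splitter.
                pre = {s for s in states if succ.get(s, set()) & splitter}
                refined = []
                for block in blocks:
                    inside, outside = block & pre, block - pre
                    if inside and outside:
                        refined += [inside, outside]
                        changed = True
                    else:
                        refined.append(block)
                blocks = refined
        return blocks

    # Example: states 0..3, initially split by a label, successors as below.
    succ = {0: {0}, 1: {2}, 2: {2}, 3: {3}}
    print(coarsest_refinement([{0, 1}, {2, 3}], succ))   # -> [{0}, {1}, {2, 3}]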

    Finite state verifiers with constant randomness

    We give a new characterization of $\mathsf{NL}$ as the class of languages whose members have certificates that can be verified with small error in polynomial time by finite state machines that use a constant number of random bits, as opposed to its conventional description in terms of deterministic logarithmic-space verifiers. It turns out that allowing two-way interaction with the prover does not change the class of verifiable languages, and that no polynomially bounded amount of randomness is useful for constant-memory computers when used as language recognizers, or as public-coin verifiers. A corollary of our main result is that the class of outcome problems corresponding to O(log n)-space bounded games of incomplete information where the universal player is allowed a constant number of moves equals $\mathsf{NL}$. Comment: 17 pages. An improved version.
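
    One simple, hypothetical way to picture a finite-state machine that uses only a constant number of random bits (this illustrates the resource bound only, not the verifier protocol from the paper, which also reads a certificate): flip the k bits up front and then run one of 2^k deterministic finite automata on the input.

    import random

    # Toy model of a constant-randomness finite-state recognizer: k coins are
    # flipped once, selecting one of 2**k deterministic finite automata to run.
    # Illustrative only; the paper's verifiers also interact with a certificate.
    def run_dfa(dfa, word):
        """dfa = (transition dict keyed by (state, symbol), start state, accepting set)."""
        delta, state, accepting = dfa
        for symbol in word:
            state = delta[(state, symbol)]
        return state in accepting

    def constant_randomness_recognizer(dfas, word, k):
        """Use k random bits to pick one of the 2**k DFAs, then run it."""
        assert len(dfas) == 2 ** k
        index = sum(random.getrandbits(1) << i for i in range(k))
        return run_dfa(dfas[index], word)

    # Example with k = 1: two DFAs over {'a', 'b'} that track the last symbol.
    ends_in_a = ({('s', 'a'): 'a', ('s', 'b'): 'b',
                  ('a', 'a'): 'a', ('a', 'b'): 'b',
                  ('b', 'a'): 'a', ('b', 'b'): 'b'}, 's', {'a'})
    ends_in_b = (ends_in_a[0], 's', {'b'})
    print(constant_randomness_recognizer([ends_in_a, ends_in_b], "abba", 1))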

    Quantum Branching Programs and Space-Bounded Nonuniform Quantum Complexity

    In this paper, the space complexity of nonuniform quantum computations is investigated. The model chosen for this is that of quantum branching programs, which provide a graphical description of sequential quantum algorithms. In the first part of the paper, simulations between quantum branching programs and nonuniform quantum Turing machines are presented, which allow lower and upper bound results to be transferred between the two models. In the second part of the paper, different variants of quantum OBDDs are compared with their deterministic and randomized counterparts. In the third part, quantum branching programs are considered where the performed unitary operation may depend on the result of a previous measurement. For this model a simulation of randomized OBDDs and exponential lower bounds are presented. Comment: 45 pages, 3 Postscript figures. Proofs rearranged, typos corrected.
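
    For flavour, here is a standard one-qubit example in the spirit of small quantum OBDDs (a textbook-style sketch, not taken from the paper): each read of an input bit applies a fixed unitary, and acceptance is decided by a final measurement.

    import numpy as np

    # Illustrative one-qubit quantum-OBDD-style program: reading the input bits
    # in a fixed order, it rotates a single qubit by pi/p for every 1 it reads
    # and accepts on measuring |0> at the end.  The acceptance probability is 1
    # exactly when the number of 1s is a multiple of p, and below 1 otherwise.
    def rotation(theta):
        return np.array([[np.cos(theta), -np.sin(theta)],
                         [np.sin(theta),  np.cos(theta)]])

    def accept_probability(bits, p):
        state = np.array([1.0, 0.0])        # start in |0>
        gate = rotation(np.pi / p)          # unitary applied on reading a 1
        for b in bits:
            if b == 1:
                state = gate @ state
        return float(state[0] ** 2)         # probability of measuring |0>

    print(accept_probability([1, 1, 1], 3))  # ~1.0  (three 1s, p = 3)
    print(accept_probability([1, 1, 0], 3))  # ~0.25 (two 1s,   p = 3)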

    State succinctness of two-way finite automata with quantum and classical states

    {\it Two-way finite automata with quantum and classical states} (2QCFA) were introduced by Ambainis and Watrous in 2002. In this paper we study the state succinctness of 2QCFA. For any $m\in\mathbb{Z}^+$ and any $\epsilon<1/2$, we show that: (1) there is a promise problem $A^{eq}(m)$ which can be solved by a 2QCFA with one-sided error $\epsilon$ in polynomial expected running time with a constant number of quantum states (depending neither on $m$ nor on $\epsilon$) and $\mathbf{O}(\log\frac{1}{\epsilon})$ classical states, whereas the sizes of the corresponding {\it deterministic finite automata} (DFA), {\it two-way nondeterministic finite automata} (2NFA) and polynomial expected running time {\it two-way probabilistic finite automata} (2PFA) are at least $2m+2$, $\sqrt{\log m}$, and $\sqrt[3]{(\log m)/b}$, respectively; (2) there exists a language $L^{twin}(m)=\{wcw \mid w\in\{a,b\}^*\}$ over the alphabet $\Sigma=\{a,b,c\}$ which can be recognized by a 2QCFA with one-sided error $\epsilon$ in exponential expected running time with a constant number of quantum states and $\mathbf{O}(\log\frac{1}{\epsilon})$ classical states, whereas the sizes of the corresponding DFA, 2NFA and polynomial expected running time 2PFA are at least $2^m$, $\sqrt{m}$, and $\sqrt[3]{m/b}$, respectively; here $b$ is a constant. Comment: 26 pages; comments and suggestions are welcome.
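
    The single-qubit rotation trick underlying many 2QCFA constructions (going back to Ambainis and Watrous) can be sketched as follows; this illustrates the technique only, not the constructions for $A^{eq}(m)$ or $L^{twin}(m)$, and a real 2QCFA repeats the check and combines it with classical processing to push the one-sided error below $\epsilon$.

    import numpy as np

    # Single-qubit rotation trick: rotate a qubit by +theta for every 'a' and
    # by -theta for every 'b', where theta is an irrational multiple of pi.
    # After the scan, the probability of measuring |1> is zero exactly when the
    # numbers of a's and b's are equal, and strictly positive otherwise.
    THETA = np.sqrt(2) * np.pi   # irrational multiple of pi

    def reject_probability(word):
        angle = sum(THETA if c == 'a' else -THETA for c in word)
        return float(np.sin(angle) ** 2)   # amplitude of |1> is sin(total angle)

    print(reject_probability("aabb"))   # 0.0  (equal counts of a and b)
    print(reject_probability("aab"))    # > 0  (unequal counts)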

    Fuel Efficient Computation in Passive Self-Assembly

    In this paper we show that passive self-assembly in the context of the tile self-assembly model is capable of performing fuel-efficient, universal computation. The tile self-assembly model is a premier model of self-assembly in which particles are modeled by four-sided squares with glue types assigned to each tile edge. The assembly process is driven by positive and negative force interactions between glue types, allowing tile assemblies floating in the plane to combine and break apart over time. We refer to this type of assembly model as passive in that the constituent parts remain unchanged throughout the assembly process regardless of their interactions. A computationally universal system is said to be fuel efficient if the number of tiles used up per computation step is bounded by a constant. Work within this model has shown how fuel-guzzling tile systems can perform universal computation with only positive-strength glue interactions. Recent work has introduced space-efficient, fuel-guzzling universal computation with the addition of negative glue interactions and the use of a powerful non-diagonal class of glue interactions. Other recent work has shown how to achieve fuel-efficient computation within active tile self-assembly. In this paper we utilize negative interactions in the tile self-assembly model to achieve the first computationally universal passive tile self-assembly system that is both space- and fuel-efficient. In addition, we achieve this result using a limited diagonal class of glue interactions.
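
    A minimal data-structure sketch of the setting (hypothetical names and glue strengths, not the paper's construction): tiles are four-sided squares with glue labels, a strength function may assign positive or negative values to pairs of glues, and a tile attaches to an assembly when the abutting glue strengths sum to at least the temperature tau.

    from dataclasses import dataclass

    # Minimal sketch of a temperature-tau tile system with signed (possibly
    # negative) glue interactions.  Names and values are illustrative only.
    @dataclass(frozen=True)
    class Tile:
        north: str
        east: str
        south: str
        west: str

    # Glue strength function; a non-diagonal system may assign a (possibly
    # negative) strength to any unordered pair of glue labels.
    STRENGTH = {
        frozenset({'g1'}): 2,            # g1 binds itself with strength 2
        frozenset({'g1', 'g2'}): 1,      # g1 and g2 attract weakly
        frozenset({'g2', 'neg'}): -1,    # a repulsive (negative) interaction
    }

    def strength(a, b):
        return STRENGTH.get(frozenset({a, b}), 0)

    def can_attach(assembly, pos, tile, tau=2):
        """assembly: dict (x, y) -> Tile.  The tile may attach at pos if the
        glue strengths along all abutting edges sum to at least tau."""
        x, y = pos
        total = 0
        for dx, dy, mine, theirs in [(0, 1, 'north', 'south'), (1, 0, 'east', 'west'),
                                     (0, -1, 'south', 'north'), (-1, 0, 'west', 'east')]:
            neighbour = assembly.get((x + dx, y + dy))
            if neighbour is not None:
                total += strength(getattr(tile, mine), getattr(neighbour, theirs))
        return total >= tau

    seed = {(0, 0): Tile('g1', 'g2', 'x', 'x')}
    print(can_attach(seed, (0, 1), Tile('x', 'x', 'g1', 'x')))   # True:  g1/g1 = 2 >= tau
    print(can_attach(seed, (1, 0), Tile('x', 'x', 'x', 'g1')))   # False: g1/g2 = 1 < tau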

    Applying Formal Methods to Networking: Theory, Techniques and Applications

    Despite its great importance, modern network infrastructure is remarkable for the lack of rigor in its engineering. The Internet, which began as a research experiment, was never designed to handle the users and applications it hosts today. The lack of formalization of the Internet architecture meant limited abstractions and modularity, especially for the control and management planes, thus requiring a new protocol to be built from scratch for every new need. This led to an unwieldy, ossified Internet architecture resistant to any attempts at formal verification, and an Internet culture where expediency and pragmatism are favored over formal correctness. Fortunately, recent work in the space of clean-slate Internet design---especially the software-defined networking (SDN) paradigm---offers the Internet community another chance to develop the right kind of architecture and abstractions. This has also led to a great resurgence of interest in applying formal methods to the specification, verification, and synthesis of networking protocols and applications. In this paper, we present a self-contained tutorial covering the formidable amount of work that has been done in formal methods, and a survey of their applications to networking. Comment: 30 pages, submitted to IEEE Communications Surveys and Tutorials.
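
    As a toy example of the kind of property that network verification targets (the topology and rule format here are hypothetical; real tools use far richer packet and rule models), the sketch below exhaustively explores per-switch forwarding rules to decide a reachability question.

    from collections import deque

    # Toy data-plane reachability check, one of the basic properties that formal
    # network verification addresses.  Forwarding rules map a switch to a list
    # of (destination prefix, next hop) pairs; all names are illustrative.
    RULES = {
        's1': [('10.0.1.', 's2'), ('10.0.2.', 's3')],
        's2': [('10.0.1.', 'h1')],
        's3': [('10.0.2.', 'h2'), ('10.0.1.', 's1')],   # sends 10.0.1.* traffic back toward s1
    }

    def reachable(start, dst_ip):
        """Return the set of nodes a packet addressed to dst_ip can reach from start."""
        seen, queue = {start}, deque([start])
        while queue:
            node = queue.popleft()
            for prefix, nxt in RULES.get(node, []):
                if dst_ip.startswith(prefix) and nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return seen

    # Reachability property: packets for 10.0.1.5 injected at s1 reach h1 ...
    print('h1' in reachable('s1', '10.0.1.5'))    # True
    # ... and never reach h2.
    print('h2' in reachable('s1', '10.0.1.5'))    # False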