9 research outputs found

    A Polynomial Time Algorithm for Deciding Branching Bisimilarity on Totally Normed BPA

    Full text link
    Strong bisimilarity on normed BPA is polynomial-time decidable, while weak bisimilarity on totally normed BPA is NP-hard. It is natural to ask where the computational complexity of branching bisimilarity on totally normed BPA lies. This paper confirms that the problem is polynomial-time decidable. To our knowledge, in the presence of silent transitions, this is the first bisimilarity-checking algorithm on infinite-state systems that runs in polynomial time. The result also exhibits an instance in which branching bisimilarity and weak bisimilarity are both decidable yet lie in different complexity classes (unless NP = P), which was not known before. The algorithm takes the partition refinement approach, and the final implementation can be thought of as a generalization of the earlier algorithm of Czerwiński and Lasota. Unexpectedly, however, its correctness cannot be obtained directly from previous work, and the correctness proof turns out to be subtle: it depends on a carefully defined refinement operation fitted to our algorithm and on newly developed techniques that are quite different from those in previous work. Comment: 32 pages
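
    The paper's algorithm is specific to totally normed BPA with silent transitions, but the partition refinement idea it builds on can be illustrated on a finite labelled transition system. The following Haskell sketch is only a minimal finite-state illustration of that refinement loop, not the paper's algorithm; the names (`signature`, `refine`, `bisimClasses`) and the list-based LTS representation are ours.

```haskell
import qualified Data.Map.Strict as M
import qualified Data.Set as S

type State = Int
type Action = Char
type LTS = [(State, Action, State)]

-- Signature of a state w.r.t. a partition: for each outgoing action,
-- the index of the block reached in one step.
signature :: LTS -> [S.Set State] -> State -> S.Set (Action, Int)
signature lts blocks p =
  S.fromList [ (a, i)
             | (q, a, q') <- lts, q == p
             , (i, b) <- zip [0 ..] blocks, q' `S.member` b ]

-- One refinement step: split every block by the signatures of its states.
refine :: LTS -> [S.Set State] -> [S.Set State]
refine lts blocks = concatMap split blocks
  where
    split b = M.elems (M.fromListWith S.union
                [ (signature lts blocks p, S.singleton p) | p <- S.toList b ])

-- Iterate until the partition is stable; the final blocks are the
-- strong bisimilarity classes of the finite LTS.
bisimClasses :: LTS -> [State] -> [S.Set State]
bisimClasses lts states = go [S.fromList states]
  where
    go blocks =
      let blocks' = refine lts blocks
      in if length blocks' == length blocks then blocks else go blocks'
```

    For instance, on the two-transition system `[(1,'a',2),(3,'a',4)]` with states `[1,2,3,4]`, the loop stabilises with states 1 and 3 in one class and the deadlocked states 2 and 4 in another.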

    Branching Bisimilarity on Normed BPA Is EXPTIME-complete

    Full text link
    We put forward an exponential-time algorithm for deciding branching bisimilarity on normed BPA (Basic Process Algebra) systems. The decidability of branching (or weak) bisimilarity on normed BPA was a long-standing open problem, which was eventually closed by Yuxi Fu. The EXPTIME-hardness follows from a slight modification of the reduction presented by Richard Mayr. Together, these results show that the problem is EXPTIME-complete. Comment: We correct many typing errors, add several remarks and an interesting toy example

    Towards weak bisimilarity on a class of parallel processes.

    Get PDF
    A directed labelled graph may be used, at a certain level of abstraction, to represent a system's behaviour: its nodes are the possible states the system can be in, and its arrows are labelled by the actions required to move from one state to another. Processes are, for our purposes, synonymous with these labelled transition systems. With this view, a well-studied notion of behavioural equivalence is bisimilarity, where processes are bisimilar when whatever one can do, the other can match, while maintaining bisimilarity. Weak bisimilarity accommodates a notion of silent or internal action.

    A natural class of labelled transition systems is given by considering the derivations of commutative context-free grammars in Greibach Normal Form: the Basic Parallel Processes (BPP), introduced by Christensen in his PhD thesis. They represent a simple model of communication-free parallel computation, and for them bisimilarity is PSPACE-complete. Weak bisimilarity is believed to be decidable, but only partial results exist. Non-bisimilarity is trivially semidecidable on BPP (each process has finitely many next states, so the state space can be explored until a mismatch is found); the research effort in proving it fully decidable centred on semideciding the positive case. Conversely, weak bisimilarity has been known to be semidecidable for a decade, but no method for semideciding inequivalence has yet been found: the presence of silent actions allows a process to have infinitely many possible successor states, so simple exploration is no longer possible.

    Weak bisimilarity is defined coinductively, but may be approached, and even reached, by its inductively defined approximants. Game-theoretically, these change the Defender's winning condition from survival for infinitely many turns to survival for κ turns, for an ordinal κ, creating a hierarchy of relations successively closer to full weak bisimilarity. It can be seen that on any set of processes this approximant hierarchy collapses: there will always exist some κ such that the κ-th approximant coincides with weak bisimilarity. One avenue towards the semidecidability of weak non-bisimilarity is the decidability of its approximants. It is a long-standing conjecture that on BPP the weak approximant hierarchy collapses at ω × 2; if true, in order to semidecide inequivalence it would suffice to be able to decide the ω + n approximants. Again, only limited results exist: the finite approximants are known to be decidable, but no progress has been made on the ω-th approximant, and thus far the best proven lower bound of collapse is ω₁^CK (the least non-recursive ordinal). We significantly improve this bound to ω^k × 2 (for a k-variable BPP); a key part of the proof is a novel constructive version of Dickson's Lemma.

    The distances-to-disablings, or DD, functions were invented by Jancar in order to prove the PSPACE-completeness of bisimilarity on BPP. At the end of his paper is a conjecture that weak bisimilarity might be amenable to the theory, a suggestion we have taken up. We generalise and extend the DD functions, widening the subset of BPP on which weak bisimilarity is known to be computable, and creating a new means for testing inequivalence. The thesis ends with two conjectures: first, that our extended DD functions in fact capture weak bisimilarity on full BPP (a corollary of which would be to take the lower bound of approximant collapse to ω × 2); and second, that they are computable, which would enable us to semidecide inequivalence and hence would give the decidability of weak bisimilarity.
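
    To make the role of the inductively defined approximants concrete: on a finite-branching, finitely presented system the n-th (strong) approximant can be decided by plain exhaustive exploration, which is exactly the kind of search that breaks down once silent actions give a process infinitely many successors. The Haskell sketch below is such a naive checker; it is an illustration under these simplifying assumptions, not a procedure from the thesis, and the names (`succs`, `approx`) are ours.

```haskell
type State = Int
type Action = Char

-- All one-step moves available from a state of the (finite) transition system.
succs :: [(State, Action, State)] -> State -> [(Action, State)]
succs lts p = [ (a, q) | (p', a, q) <- lts, p' == p ]

-- p and q are related by the n-th strong approximant: every move of one side
-- can be matched by the other, with the successors related at level n-1.
approx :: [(State, Action, State)] -> Int -> State -> State -> Bool
approx _   0 _ _ = True
approx lts n p q =
     all (\(a, p') -> any (\(b, q') -> a == b && approx lts (n - 1) p' q') (succs lts q)) (succs lts p)
  && all (\(a, q') -> any (\(b, p') -> a == b && approx lts (n - 1) p' q') (succs lts p)) (succs lts q)
```

    Strong non-bisimilarity of finitely branching processes is semidecided by searching for some n at which `approx` fails; the difficulties addressed in the thesis begin where this scheme stops, at the ω-th weak approximant and beyond.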

    Arrows for knowledge-based circuits

    No full text
    Knowledge-based programs (KBPs) are a formalism for directly relating agents' knowledge and behaviour in a way that has proven useful for specifying distributed systems. Here we present a scheme for compiling KBPs to executable automata in finite environments, with a proof of correctness in Isabelle/HOL. We use Arrows, a functional programming abstraction, to structure a prototype domain-specific synchronous language embedded in Haskell. By adapting our compilation scheme to use symbolic representations, we can apply it to several examples of reasonable size.
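
    The abstract contains no code, but the flavour of structuring a synchronous component with Arrows can be sketched with the classic stream-function arrow from the Arrows literature. The names `SF`, `delay`, and `risingEdge` below are illustrative and not taken from the paper's DSL.

```haskell
import Prelude hiding (id, (.))
import Control.Category
import Control.Arrow

-- A synchronous component maps an input stream to an output stream, one value per tick.
newtype SF a b = SF { runSF :: [a] -> [b] }

instance Category SF where
  id          = SF id
  SF g . SF f = SF (g . f)

instance Arrow SF where
  arr f        = SF (map f)
  first (SF f) = SF (\xys -> let (xs, ys) = unzip xys in zip (f xs) ys)

-- Unit delay: output the initial value, then the input shifted by one tick.
delay :: a -> SF a a
delay v = SF (v :)

-- A tiny "circuit" built with arrow combinators: detect a rising edge on a wire.
risingEdge :: SF Bool Bool
risingEdge = (delay False &&& id) >>> arr (\(prev, b) -> b && not prev)

-- runSF risingEdge [False, True, True, False, True]
--   evaluates to [False, True, False, False, True]
```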

    New techniques for adaptive program optimization

    Get PDF
    Adaptive optimization technology is a key ingredient in modern runtime systems. This technology aims at improving performance by making optimization decisions on the basis of a program's observed behavior. Application virtual machines face different, and perhaps more compelling, issues than traditional static optimizers, as dynamic language features can force the deferral of the most effective optimizations until run time. In this thesis, we present novel ideas to improve adaptive optimization, focusing on two main problems: collecting fine-grained program profiles with low overhead to guide feedback-directed optimization, and supporting continuous optimization and deoptimization by diverting execution across dynamically generated code versions.

    We present two profiling techniques: the first works at the inter-procedural level to collect calling-context information for hot code portions, while the second captures cyclic-path profiles within a function's boundaries. Both techniques rely on efficient and elegant data structures, advancing the state of the art in both the theory and the practice of performance profiling. We then focus our attention on supporting continuous optimization through on-stack replacement (OSR) mechanisms. We devise a new OSR framework encoded entirely at the intermediate-representation level, which extends the best OSR practices with the ability to perform OSR at nearly any program location. Our techniques pave the way for aggressive optimizations and debugging techniques that were not supported by previous approaches. The main technical challenge is how to automatically generate compensation code to fix the program's state across an OSR transition between different code versions. We present a conceptual framework for OSR, distilling its essence to a core calculus with an operational semantics. Using bisimulation techniques, we describe how OSR can be correctly supported in the presence of common compiler optimizations, providing the first soundness results in this context.

    We implement our ideas in production systems such as Jikes RVM and the LLVM compiler toolchain, and evaluate their performance against a variety of prominent benchmarks. We investigate the end-to-end utility of our techniques in a series of case studies: we illustrate two possible applications of multi-iteration path profiling, and show how our OSR techniques advance the state of the art for MATLAB code optimization and for source-level debugging of optimized code. Parts of the results of this thesis have been published in PLDI, OOPSLA, CGO, and Software: Practice and Experience.
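
    The thesis's OSR framework is encoded at the LLVM intermediate-representation level; purely as a conceptual illustration, and in Haskell to keep this listing's examples in one language, the sketch below models an OSR transition as two versions of the same loop plus a `compensate` function that rebuilds the optimized version's live variables from the baseline ones. All names and the strength-reduction example are hypothetical, not taken from the thesis.

```haskell
-- Live state of the baseline version of a loop summing 4*i for i in [0..n).
data BaseState = BaseState { baseI :: Int, baseAcc :: Int } deriving Show

-- Live state of an "optimized" version in which the multiplication by 4
-- has been strength-reduced into a running variable j = 4 * i.
data OptState = OptState { optJ :: Int, optAcc :: Int } deriving Show

-- Compensation code for an OSR transition taken at the loop header:
-- reconstruct the optimized version's live variables from the baseline ones.
compensate :: BaseState -> OptState
compensate (BaseState i acc) = OptState (4 * i) acc

-- Baseline loop, instrumented so that after `budget` iterations it fires OSR:
-- it hands its state to the compensation function and continues in optimized code.
sumBase :: Int -> Int -> BaseState -> Int
sumBase n budget s@(BaseState i acc)
  | i >= n      = acc
  | budget == 0 = sumOpt n (compensate s)            -- OSR transition
  | otherwise   = sumBase n (budget - 1) (BaseState (i + 1) (acc + 4 * i))

-- Optimized loop: continues from the compensated state.
sumOpt :: Int -> OptState -> Int
sumOpt n (OptState j acc)
  | j >= 4 * n = acc
  | otherwise  = sumOpt n (OptState (j + 4) (acc + j))

-- Both paths compute the same result, regardless of where OSR fires:
-- sumBase 10 3 (BaseState 0 0) == sum [4 * i | i <- [0 .. 9]] == 180
```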

    Generalization Through the Lens of Learning Dynamics

    Full text link
    A machine learning (ML) system must learn not only to match the output of a target function on a training set, but also to generalize to novel situations in order to yield accurate predictions at deployment. In most practical applications, the user cannot exhaustively enumerate every possible input to the model; strong generalization performance is therefore crucial to the development of ML systems which are performant and reliable enough to be deployed in the real world. While generalization is well understood theoretically in a number of hypothesis classes, the impressive generalization performance of deep neural networks has stymied theoreticians. In deep reinforcement learning (RL), our understanding of generalization is further complicated by the conflict between generalization and stability in widely used RL algorithms. This thesis provides insight into generalization by studying the learning dynamics of deep neural networks in both supervised and reinforcement learning tasks. Comment: PhD thesis

    On the computational complexity of bisimulation, redux

    Get PDF
    Paris Kanellakis and the second author (Smolka) were among the first to investigate the computational complexity of bisimulation, and the first and third authors (Moller and Srba) have long-established track records in the field. Smolka and Moller have also written a brief survey about the computational complexity of bisimulation [ACM Comput. Surv. 27(2) (1995) 287]. The authors believe that the special issue of Information and Computation devoted to PCK50: Principles of Computing and Knowledge: Paris C. Kanellakis Memorial Workshop represents an ideal opportunity for an up-to-date look at the subject.