8 research outputs found

    Simulation of Two-Way Pushdown Automata Revisited

    The linear-time simulation of 2-way deterministic pushdown automata (2DPDA) by the Cook and Jones constructions is revisited. Following the semantics-based approach by Jones, an interpreter is given which, when extended with random-access memory, performs a linear-time simulation of 2DPDA. The recursive interpreter works without the dump list of the original constructions, which makes Cook's insight into linear-time simulation of exponential-time automata more intuitive and the complexity argument clearer. The simulation is then extended to 2-way nondeterministic pushdown automata (2NPDA) to provide cubic-time recognition of context-free languages. The time required to run the final construction depends on the degree of nondeterminism. The key mechanism that enables the polynomial-time simulations is the sharing of computations by memoization. Comment: In Proceedings Festschrift for Dave Schmidt, arXiv:1309.455
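
    The mechanism is compact enough to sketch. Below is a minimal Haskell rendering (ours, not the paper's interpreter) of the memoized computation of "terminators": each surface configuration (state, head position, top stack symbol) is mapped once to the configuration reached when that top symbol is finally popped. The Step type and the delta parameter are illustrative assumptions, termination of the simulated run is assumed, and the Data.Map table adds a logarithmic factor that the paper's random-access memory avoids.

        import qualified Data.Map.Strict as M
        import Control.Monad.State

        type Q       = Int            -- automaton state
        type Pos     = Int            -- input head position
        type Sym     = Char           -- stack symbol
        type Surface = (Q, Pos, Sym)  -- state, head, top of stack

        -- One machine move, abstracted: push a symbol, pop the top, or move.
        data Step = Push Sym Q Pos | Pop Q Pos | Move Q Pos

        -- 'terminator delta cfg' yields the (state, position) at which the
        -- top symbol of 'cfg' is popped, caching every surface configuration
        -- so each one is computed at most once.
        terminator :: (Surface -> Step) -> Surface
                   -> State (M.Map Surface (Q, Pos)) (Q, Pos)
        terminator delta cfg@(_, _, z) = do
          memo <- get
          case M.lookup cfg memo of
            Just r  -> return r                -- shared: reuse, don't recompute
            Nothing -> do
              r <- case delta cfg of
                     Pop q i     -> return (q, i)              -- z popped here
                     Move q i    -> terminator delta (q, i, z)
                     Push z' q i -> do
                       (q', i') <- terminator delta (q, i, z')  -- run above z'
                       terminator delta (q', i', z)             -- resume on z
              modify (M.insert cfg r)
              return r

    A run over a concrete transition function delta (hypothetical here) would start as evalState (terminator delta (0, 0, 'Z')) M.empty, with 'Z' the bottom stack marker; since there are only linearly many surface configurations, the memo table bounds the total work.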

    An in-between "implicit" and "explicit" complexity: Automata

    Implicit Computational Complexity makes two aspects implicit: it manipulates programming languages rather than models of computation, and it internalizes the bounds rather than using external measures. We survey how automata theory contributed to complexity with a model that is machine-dependent but has implicit bounds.

    Memoization for Unary Logic Programming: Characterizing PTIME

    We give a characterization of deterministic polynomial time computation based on an algebraic structure called the resolution semiring, whose elements can be understood as logic programs or sets of rewriting rules over first-order terms. More precisely, we study the restriction of this framework to terms (and logic programs, rewriting rules) using only unary symbols. We prove it is complete for polynomial time computation, using an encoding of pushdown automata. We then introduce an algebraic counterpart of the memoization technique in order to show its PTIME soundness. We finally relate our approach and complexity results to the complexity of logic programming. As an application of our techniques, we show a PTIME-completeness result for a class of logic programming queries which use only unary function symbols. Comment: Submitted to LICS 201
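
    To make the role of memoization concrete, here is a toy Haskell sketch (ours, not the paper's semiring construction) of unary logic programs read as pushdown moves, with tabling of goals. The Rule shapes and all names are illustrative assumptions, and the search is only guaranteed to terminate when finitely many goals are reachable; the paper's algebraic memoization is what secures the polynomial bound in general.

        import qualified Data.Set as S
        import Control.Monad.State

        type Pred = String
        type Fun  = String
        type Goal = (Pred, [Fun])        -- p(f1(f2(... c))) as (p, [f1, f2, ...])

        -- Rule shapes for unary programs, mirroring pushdown moves:
        data Rule = PopR  Pred Fun Pred  -- p(f(X)) :- q(X).
                  | PushR Pred Pred Fun  -- p(X)    :- q(g(X)).
                  | FactR Pred           -- p(c).

        -- Memoized ("tabled") search: each goal is explored at most once.
        prove :: [Rule] -> Goal -> State (S.Set Goal) Bool
        prove rules goal = do
          seen <- get
          if goal `S.member` seen
            then return False            -- already explored elsewhere: share it
            else do
              modify (S.insert goal)
              orM [ step goal r | r <- rules ]
          where
            step (p, fs)   (FactR p')                        = return (p == p' && null fs)
            step (p, f:fs) (PopR p' f' q) | p == p', f == f' = prove rules (q, fs)
            step (p, fs)   (PushR p' q g) | p == p'          = prove rules (q, g : fs)
            step _ _                                         = return False
            orM = foldr (\m acc -> m >>= \b -> if b then return True else acc)
                        (return False)

    A query such as evalState (prove rules ("p", ["f", "f"])) S.empty then decides provability of p(f(f(c))) in time polynomial in the number of reachable goals.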

    A sound definitional interpreter for a simply typed functional language

    In this paper, we develop, in the proof assistant Coq, a definitional interpreter and a type-checker for a simply typed functional language, and formally prove that the type-checker is sound with respect to the definitional interpreter via progress and preservation. To represent binders, we adopt a “concrete syntax” in which parameters are just names (or strings).
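
    As a point of comparison, here is a minimal sketch in Haskell (the paper itself works in Coq) of such a pair: a type-checker and a definitional interpreter for a small simply typed language with named binders. The syntax is an assumption for illustration; soundness is the statement that well-typed terms evaluate without getting stuck, which the paper establishes formally via progress and preservation.

        import qualified Data.Map as M

        data Ty   = TInt | TFun Ty Ty deriving (Eq, Show)
        data Term = Var String | Lit Int | Lam String Ty Term | App Term Term
        data Val  = VInt Int | VClo String Term (M.Map String Val)

        -- Type-checker: the type of a term under a typing context, if any.
        check :: M.Map String Ty -> Term -> Maybe Ty
        check ctx (Var x)     = M.lookup x ctx
        check _   (Lit _)     = Just TInt
        check ctx (Lam x t b) = TFun t <$> check (M.insert x t ctx) b
        check ctx (App f a)   = do
          TFun t1 t2 <- check ctx f       -- fails unless f has function type
          t1'        <- check ctx a
          if t1 == t1' then Just t2 else Nothing

        -- Definitional interpreter: evaluation in an environment of values,
        -- where parameters are plain names, as in the paper's concrete syntax.
        eval :: M.Map String Val -> Term -> Maybe Val
        eval env (Var x)     = M.lookup x env
        eval _   (Lit n)     = Just (VInt n)
        eval env (Lam x _ b) = Just (VClo x b env)
        eval env (App f a)   = do
          VClo x b cenv <- eval env f
          v             <- eval env a
          eval (M.insert x v cenv) b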

    On the Resolution Semiring

    In this thesis, we study a semiring structure with a product based on the resolution rule of logic programming. This mathematical object was initially introduced in the setting of the geometry of interaction program in order to model the cut-elimination procedure of linear logic. It provides us with an algebraic and abstract setting, while being presented in a syntactic and concrete way, in which a theoretical study of computation can be carried out.

    We first review the interactive interpretation of proof theory within this semiring, via the categorical axiomatization of the geometry of interaction approach. This interpretation establishes a way to translate functional programs into a very simple form of logic programs.

    Secondly, we consider complexity-theoretic questions: while the nilpotency problem in the semiring we study is undecidable in general, it turns out that certain restrictions allow for characterizations of (deterministic and non-deterministic) logarithmic space and (deterministic) polynomial time computation.
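
    The flavor of the product is easy to convey for the unary restriction mentioned above. The Haskell sketch below (our own encoding, not the thesis's notation) represents a unary term f(g(x)) as the word "fg" with the variable left implicit; resolving two flows then amounts to prefix matching, and the semiring sum is set union.

        import Data.List (stripPrefix)

        type Term = String          -- f(g(h(x))) encoded as "fgh"
        type Flow = (Term, Term)    -- a flow  t <- u

        -- Product of two flows: unify the right term of the first with the
        -- left term of the second.  For unary terms, unification succeeds
        -- exactly when one word is a prefix of the other, and the mgu sends
        -- the shorter word's variable to the leftover suffix.
        compose :: Flow -> Flow -> Maybe Flow
        compose (a, b) (c, d)
          | Just w <- stripPrefix c b = Just (a, d ++ w)   -- c's variable |-> w
          | Just w <- stripPrefix b c = Just (a ++ w, d)   -- b's variable |-> w
          | otherwise                 = Nothing            -- clash: product is 0

        -- Elements of the semiring are sets of flows; products are taken
        -- pointwise, keeping only the defined resolutions.
        composeSets :: [Flow] -> [Flow] -> [Flow]
        composeSets fs gs = [h | f <- fs, g <- gs, Just h <- [compose f g]]

    Nilpotency of a set of flows (the problem whose restrictions yield the space and time characterizations above) then asks whether iterating composeSets on a set eventually yields the empty set.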

    Mechanizing Abstract Interpretation

    It is important when developing software to verify the absence of undesirable behavior such as crashes, bugs and security vulnerabilities. Some settings require high assurance in verification results, e.g., for embedded software in automobiles or airplanes. To achieve high assurance in these verification results, formal methods are used to automatically construct or check proofs of their correctness. However, achieving high assurance for program analysis results is challenging, and current methods are ill suited for both complex critical domains and mainstream use.

    To verify the correctness of software we consider program analyzers---automated tools which detect software defects---and to achieve high assurance in verification results we consider mechanized verification---a rigorous process for establishing the correctness of program analyzers via computer-checked proofs.

    The key challenges to designing verified program analyzers are: (1) achieving an analyzer design for a given programming language and correctness property; (2) achieving an implementation for the design; and (3) achieving a mechanized verification that the implementation is correct w.r.t. the design. The state of the art in (1) and (2) is to use abstract interpretation: a guiding mathematical framework for systematically constructing analyzers directly from programming language semantics. However, achieving (3) in the presence of abstract interpretation has remained an open problem since the late 1990s. Furthermore, even the state of the art which achieves (3) in the absence of abstract interpretation suffers from the inability to be reused in the presence of new analyzer designs or programming language features.

    First, we solve the open problem which has prevented the combination of abstract interpretation (and in particular, calculational abstract interpretation) with mechanized verification, which advances the state of the art in designing, implementing, and verifying analyzers for critical software. We do this through a new mathematical framework, Constructive Galois Connections, which supports synthesizing specifications for program analyzers, calculating implementations from these induced specifications, and is amenable to mechanized verification.

    Finally, we introduce reusable components for implementing analyzers for a wide range of designs and semantics. We do this through two new frameworks, Galois Transformers and Definitional Abstract Interpreters. These frameworks tightly couple analyzer design decisions, implementation fragments, and verification properties into compositional components which are (target) programming-language independent and amenable to mechanized verification. Variations in the analysis design are then recovered by simply re-assembling the combination of components. Using this framework, sophisticated program analyzers can be assembled by non-experts, and the results are guaranteed to be verified by construction.
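
    To ground the terminology, here is a small Haskell sketch (ours; the dissertation works in a proof assistant and develops Constructive Galois Connections) of the basic abstract-interpretation setup: a sign domain abstracting integers, with abstract addition sound with respect to concrete addition. The names alpha, gamma, and addS are illustrative.

        data Sign = Neg | Zero | Pos | Top deriving (Eq, Show)

        -- Abstraction: the best description of a single integer.
        alpha :: Integer -> Sign
        alpha n | n < 0     = Neg
                | n == 0    = Zero
                | otherwise = Pos

        -- Concretization, as a membership test: is n covered by the sign?
        gamma :: Sign -> Integer -> Bool
        gamma Neg  n = n < 0
        gamma Zero n = n == 0
        gamma Pos  n = n > 0
        gamma Top  _ = True

        -- Abstract addition.  Soundness: if gamma a m and gamma b n, then
        -- gamma (addS a b) (m + n); a mechanized verification would prove
        -- exactly this kind of fact once per abstract operation.
        addS :: Sign -> Sign -> Sign
        addS Zero s    = s
        addS s    Zero = s
        addS Neg  Neg  = Neg
        addS Pos  Pos  = Pos
        addS _    _    = Top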