28 research outputs found

    Derivation of a Virtual Machine For Four Variants of Delimited-Control Operators


    The Weak Call-By-Value λ-Calculus is Reasonable for Both Time and Space

    We study the weak call-by-value λ-calculus as a model for computational complexity theory and establish the natural measures for time and space -- the number of beta-reductions and the size of the largest term in a computation -- as reasonable measures with respect to the invariance thesis of Slot and van Emde Boas [STOC 84]. More precisely, we show that, using those measures, Turing machines and the weak call-by-value λ-calculus can simulate each other within a polynomial overhead in time and a constant-factor overhead in space for all computations that terminate in (encodings of) 'true' or 'false'. We consider this result a solution to the long-standing open problem, explicitly posed by Accattoli [ENTCS 18], of whether the natural measures for time and space of the λ-calculus are reasonable, at least in the case of weak call-by-value evaluation. Our proof relies on a hybrid of two strategies for simulating reductions of the weak call-by-value λ-calculus on Turing machines, both of which are insufficient if taken alone. The first strategy is the most naive one, in the sense that a reduction sequence is simulated exactly as given by the reduction rules; in particular, all substitutions are executed immediately. This simulation runs within a constant overhead in space, but its overhead in time might be exponential. The second strategy is heap-based and relies on structure sharing, similar to existing compilers of eager functional languages. This strategy only has a polynomial overhead in time, but its space consumption might require an additional factor of log n, essentially due to the size of the pointers it requires. Our main contribution is the construction and verification of a space-aware interleaving of the two strategies, which is shown to yield both a constant overhead in space and a polynomial overhead in time.
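
    The abstract above fixes the number of beta steps as the time measure and the size of the largest term as the space measure. As a purely illustrative sketch (not the paper's construction; all names below are chosen here), the following OCaml program evaluates closed de Bruijn lambda-terms under weak call-by-value reduction and reports both measures.

    (* Weak call-by-value evaluation of closed de Bruijn lambda-terms,
       instrumented with the two measures discussed above: the number of
       beta steps (time) and the size of the largest term seen (space). *)
    type term =
      | Var of int               (* de Bruijn index *)
      | Lam of term
      | App of term * term

    let rec size = function
      | Var _ -> 1
      | Lam b -> 1 + size b
      | App (f, a) -> 1 + size f + size a

    (* Substitute the closed value v for index d in t. *)
    let rec subst v d = function
      | Var n -> if n = d then v else if n > d then Var (n - 1) else Var n
      | Lam b -> Lam (subst v (d + 1) b)
      | App (f, a) -> App (subst v d f, subst v d a)

    (* Weak evaluation: never reduce under a lambda.  Returns the value
       together with the beta-step count and the largest term size seen. *)
    let eval t =
      let steps = ref 0 and max_size = ref 0 in
      let rec go t =
        max_size := max !max_size (size t);
        match t with
        | Lam _ -> t
        | Var _ -> failwith "open term"
        | App (f, a) ->
            let vf = go f in
            let va = go a in
            (match vf with
             | Lam b -> incr steps; go (subst va 0 b)
             | _ -> failwith "not a function")
      in
      let v = go t in
      (v, !steps, !max_size)

    (* Example: (\x. x x) (\y. y) evaluates to \y. y in two beta steps. *)
    let () =
      let id = Lam (Var 0) in
      let t = App (Lam (App (Var 0, Var 0)), id) in
      let _, beta, space = eval t in
      Printf.printf "beta steps: %d, largest term: %d\n" beta space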

    Useful Open Call-By-Need

    This paper studies useful sharing, which is a sophisticated optimization for λ-calculi, in the context of call-by-need evaluation in the presence of open terms. Useful sharing turns out to be harder in call-by-need than in call-by-name or call-by-value, because call-by-need evaluates inside environments, making it harder to specify when a substitution step is useful. We isolate the key concepts involved and prove the correctness and the completeness of useful sharing in this setting.
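
    As a small reminder of the sharing idea underlying call-by-need (a hedged toy only, not the useful-sharing machinery or the open-terms setting of the paper; the name expensive is illustrative), the OCaml sketch below binds a variable to a memoised thunk, so the bound computation runs at most once however often the variable is used.

    (* Call-by-need sharing via OCaml's built-in memoised thunks. *)
    let evaluations = ref 0

    let expensive () =                     (* illustrative "costly" computation *)
      incr evaluations;
      6 * 7

    let () =
      let x = lazy (expensive ()) in       (* the binding is a thunk... *)
      let a = Lazy.force x in              (* ...forced on first use... *)
      let b = Lazy.force x in              (* ...and merely read afterwards *)
      Printf.printf "%d %d, evaluations: %d\n" a b !evaluations
      (* prints "42 42, evaluations: 1": the work is shared, not repeated *)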

    Mechanising Complexity Theory: The Cook-Levin Theorem in Coq


    A Type-Directed Operational Semantics For a Calculus with a Merge Operator


    A Mechanised Proof of the Time Invariance Thesis for the Weak Call-By-Value λ-Calculus

    The weak call-by-value λ-calculus L and Turing machines can simulate each other with a polynomial overhead in time. This time invariance thesis for L, where the number of β-reductions of a computation is taken as its time complexity, is the culmination of a 25-year line of research, combining work by Blelloch, Greiner, Dal Lago, Martini, Accattoli, Forster, Kunze, Roth, and Smolka. This paper presents a mechanised proof of the time invariance thesis for L, constituting the first mechanised equivalence proof between two standard models of computation covering time complexity. The mechanisation builds on an existing framework for the extraction of Coq functions to L and contributes a novel Hoare logic framework for the verification of Turing machines. The mechanised proof of the time invariance thesis establishes L as a model for future developments of mechanised computational complexity theory regarding time. It can also be seen as a non-trivial but elementary case study of time-complexity-preserving translations between a functional language and a sequential machine model. As a by-product, we obtain a mechanised many-one equivalence proof of the halting problems for L and Turing machines, which we contribute to the Coq Library of Undecidability Proofs.

    On the Multi-Language Construction

    Modern software is no longer developed in a single programming language. Instead, programmers tend to exploit cross-language interoperability mechanisms to combine code stemming from different languages, thus yielding fully-fledged multi-language programs. Whilst this approach enables developers to benefit from the strengths of each single language, it also complicates the semantics of such programs: indeed, the resulting multi-language does not match the semantics of any of the combined languages. In this paper, we broaden the boundary-functions-based approach à la Matthews and Findler to propose an algebraic framework that provides a constructive mathematical notion of multi-language able to determine its semantics. The aim of this work is to overcome the lack of a formal method (resp., model) to design (resp., represent) a multi-language, regardless of the inherent nature of the underlying languages. We show that our construction ensures the uniqueness of the semantic function (i.e., the multi-language semantics induced by the combined languages) by proving the initiality of the term model (i.e., the abstract syntax of the multi-language) in its category.
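
    As a hedged toy illustration of the boundary-function idea à la Matthews and Findler that the paper generalises (the two mini-languages and all names below are assumptions made here, not the paper's algebraic framework), the OCaml sketch glues an arithmetic language and a boolean language into one multi-language whose syntax carries explicit boundary constructors; each boundary converts the value produced by the embedded foreign term.

    (* Language A: arithmetic expressions evaluating to integers. *)
    type arith = Num of int | Add of multi_arith * multi_arith
    (* Language B: boolean expressions evaluating to booleans. *)
    and boolean = Bool of bool | And of multi_bool * multi_bool
    (* The multi-language: each syntactic category may embed the other one
       through an explicit boundary constructor. *)
    and multi_arith = A of arith | BtoA of multi_bool    (* boolean used as number *)
    and multi_bool  = B of boolean | AtoB of multi_arith (* number used as boolean *)

    let rec eval_a : multi_arith -> int = function
      | A (Num n) -> n
      | A (Add (x, y)) -> eval_a x + eval_a y
      | BtoA b -> if eval_b b then 1 else 0              (* boundary: bool -> int *)
    and eval_b : multi_bool -> bool = function
      | B (Bool b) -> b
      | B (And (x, y)) -> eval_b x && eval_b y
      | AtoB a -> eval_a a <> 0                          (* boundary: int -> bool *)

    (* Example: 1 + (true && false) evaluates to 1 in the multi-language. *)
    let () =
      let t = A (Add (A (Num 1), BtoA (B (And (B (Bool true), B (Bool false)))))) in
      Printf.printf "%d\n" (eval_a t)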

    Analysing Parallel Complexity of Term Rewriting

    We revisit parallel-innermost term rewriting as a model of parallel computation on inductive data structures and provide a corresponding notion of runtime complexity parametric in the size of the start term. We propose automatic techniques to derive both upper and lower bounds on the parallel complexity of rewriting that enable a direct reuse of existing techniques for sequential complexity. The applicability and the precision of the method are demonstrated by the relatively light effort in extending the program analysis tool AProVE and by experiments on numerous benchmarks from the literature. (Extended authors' accepted manuscript of a paper accepted for publication in the Proceedings of the 32nd International Symposium on Logic-based Program Synthesis and Transformation, LOPSTR 2022; 27 pages.)
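
    To make the notion concrete, here is a hedged OCaml sketch (an assumed toy example, not AProVE's automated analysis) of parallel-innermost rewriting for a two-rule TRS that mirrors binary trees: every innermost redex is contracted in the same parallel step, so the parallel runtime grows with the depth of the start term while the total number of single rewrite steps grows with its size.

    (* Terms of a toy TRS:  mirror(leaf) -> leaf
                            mirror(node(l, r)) -> node(mirror(r), mirror(l)) *)
    type term =
      | Leaf
      | Node of term * term
      | Mirror of term

    (* Constructor terms contain no redex. *)
    let rec is_value = function
      | Leaf -> true
      | Node (l, r) -> is_value l && is_value r
      | Mirror _ -> false

    (* Contract a single innermost redex Mirror v, with v a constructor term. *)
    let contract = function
      | Leaf -> Leaf
      | Node (l, r) -> Node (Mirror r, Mirror l)
      | Mirror _ -> assert false

    (* One parallel-innermost step: contract every innermost redex at once,
       returning the new term and how many redexes were contracted. *)
    let rec par_step = function
      | Leaf -> (Leaf, 0)
      | Node (l, r) ->
          let l', m = par_step l and r', n = par_step r in
          (Node (l', r'), m + n)
      | Mirror t ->
          if is_value t then (contract t, 1)
          else let t', n = par_step t in (Mirror t', n)

    (* Normalise, counting parallel steps and total (sequential) contractions. *)
    let complexity t =
      let rec go t par seq =
        if is_value t then (par, seq)
        else let t', n = par_step t in go t' (par + 1) (seq + n)
      in
      go t 0 0

    let rec full d = if d = 0 then Leaf else Node (full (d - 1), full (d - 1))

    let () =
      let par, seq = complexity (Mirror (full 10)) in
      (* depth-many parallel steps (11) vs size-many sequential steps (2047) *)
      Printf.printf "parallel: %d, sequential: %d\n" par seq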

    Trust, but Verify: Two-Phase Typing for Dynamic Languages

    A key challenge when statically typing so-called dynamic languages is the ubiquity of value-based overloading, where a given function can dynamically reflect upon and behave according to the types of its arguments. Thus, to establish basic types, the analysis must reason precisely about values; but in the presence of higher-order functions and polymorphism, this reasoning itself can require basic types. In this paper we address this chicken-and-egg problem by introducing the framework of two-phase typing. The first, "trust", phase performs classical, i.e. flow-, path- and value-insensitive, type checking to assign basic types to various program expressions. When the check inevitably runs into "errors" due to value-insensitivity, it wraps problematic expressions with DEAD-casts, which explicate the trust obligations that must be discharged by the second phase. The second phase uses refinement typing, a flow- and path-sensitive analysis, which decorates the first phase's types with logical predicates to track value relationships and thereby verify the casts and establish other correctness properties for dynamically typed languages.

    The (In)Efficiency of Interaction

    Evaluating higher-order functional programs through abstract machines inspired by the geometry of interaction is known to induce space efficiencies, the price being time performance that is often poorer than that obtainable with traditional, environment-based abstract machines. Although families of lambda-terms for which the former is exponentially less efficient than the latter do exist, it is currently unknown how general this phenomenon is and how far the inefficiencies can go in the worst case. We answer these questions by formulating four different well-known abstract machines inside a common definitional framework, thereby being able to give sharp results about their relative time efficiencies. We also prove that non-idempotent intersection type theories are able to precisely reflect the time performance of the interactive abstract machine, thereby showing that its time inefficiency ultimately stems from the presence of higher-order types.
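
    For contrast with the interaction-based machines discussed above, the following OCaml sketch shows a traditional environment-based machine, here a Krivine-style call-by-name machine over de Bruijn terms; it is only a minimal illustration of the environment-based side of the comparison (all names are chosen here), not one of the four machines formalised in the paper.

    type term =
      | Var of int                  (* de Bruijn index *)
      | Lam of term
      | App of term * term

    (* A closure pairs a term with the environment it was built in. *)
    type closure = Clo of term * closure list

    (* Machine state: code, environment, argument stack. *)
    let rec run t env stack =
      match t, stack with
      | Var n, _ ->
          let Clo (t', env') = List.nth env n in
          run t' env' stack                       (* look the variable up *)
      | App (f, a), _ ->
          run f env (Clo (a, env) :: stack)       (* push the argument *)
      | Lam b, c :: stack' ->
          run b (c :: env) stack'                 (* pop it into the environment *)
      | Lam _, [] -> Clo (t, env)                 (* weak head normal form *)

    (* Example: (\x. x) (\y. y) runs to the identity closure. *)
    let () =
      let id = Lam (Var 0) in
      match run (App (id, id)) [] [] with
      | Clo (Lam (Var 0), _) -> print_endline "reached \\y. y"
      | _ -> print_endline "unexpected result"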