219,306 research outputs found

    Abstract Machines for Open Call-by-Value

    The theory of the call-by-value λ-calculus relies on weak evaluation and closed terms, which are natural hypotheses in the study of programming languages. To model proof assistants, however, strong evaluation and open terms are required. Open call-by-value is the intermediate setting of weak evaluation with (possibly) open terms, on top of which Grégoire and Leroy designed one of the abstract machines of Coq. This paper provides a theory of abstract machines for the fireball calculus, the simplest presentation of open call-by-value. The literature contains machines that are either simple but inefficient, as they have an exponential overhead, or efficient but heavy, as they rely on a labeling of environments and a technical optimization. We introduce a machine that is simple and efficient: it does not use labels and it implements the fireball calculus within a bilinear overhead. Moreover, we provide a new, finer understanding of how different optimizations impact the complexity of the overhead, and evidence that the time cost model we work with is minimal.
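
    The fireball calculus itself is not spelled out in this listing, so the following OCaml fragment is only a rough, illustrative sketch of the setting: weak evaluation of possibly open call-by-value terms, where results ("fireballs") are either abstractions or inert terms such as a free variable applied to other fireballs. The naive substitution-based evaluator below is an assumption made for the sketch, not the paper's efficient machine.

    ```ocaml
    (* Minimal sketch of open call-by-value: weak evaluation on possibly
       open terms.  Results ("fireballs") are abstractions or inert terms.
       Illustrative only; this is naive substitution, not the paper's
       bilinear-overhead abstract machine. *)

    type term =
      | Var of string
      | Lam of string * term
      | App of term * term

    (* Capture-avoiding renaming is elided: this naive substitution is
       only safe when bound names do not clash with free ones. *)
    let rec subst x v = function
      | Var y -> if y = x then v else Var y
      | Lam (y, t) -> if y = x then Lam (y, t) else Lam (y, subst x v t)
      | App (t, u) -> App (subst x v t, subst x v u)

    (* Weak evaluation: never reduce under a lambda.  An application whose
       head is inert (e.g. a free variable) is itself a fireball. *)
    let rec eval (t : term) : term =
      match t with
      | Var _ | Lam _ -> t
      | App (t1, t2) ->
        let f = eval t1 in
        let a = eval t2 in
        (match f with
         | Lam (x, body) -> eval (subst x a body)   (* beta step *)
         | _ -> App (f, a))                          (* inert: open head *)
    ```

    For instance, eval (App (Var "x", Lam ("y", Var "y"))) returns the inert term unchanged instead of getting stuck, which is exactly the extra flexibility open call-by-value provides over the closed setting.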

    On the Relative Usefulness of Fireballs

    In CSL-LICS 2014, Accattoli and Dal Lago showed that there is an implementation of the ordinary (i.e. strong, pure, call-by-name) λ-calculus into models like RAM machines which is polynomial in the number of β-steps, answering a long-standing question. The key ingredient was the use of a calculus with useful sharing, a new notion whose complexity was shown to be polynomial, but whose implementation was not explored. This paper, meant to be complementary, studies useful sharing in a call-by-value scenario and from a practical point of view. We introduce the Fireball Calculus, a natural extension of call-by-value to open terms for which the problem is as hard as for the ordinary λ-calculus. We present three results. First, we adapt the solution of Accattoli and Dal Lago, improving the meta-theory of useful sharing. Second, we refine the picture by introducing the GLAMoUr, a simple abstract machine implementing the Fireball Calculus extended with useful sharing. Its key feature is that usefulness of a step is tested, surprisingly, in constant time. Third, we provide a further optimization that leads to an implementation having only a linear overhead with respect to the number of β-steps. Comment: Technical report for the LICS 2015 submission with the same title.
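
    As a very rough illustration of the usefulness idea (not the GLAMoUr itself, whose environments, labels and transitions are beyond this sketch), the OCaml fragment below captures one half of the constant-time test: a shared inert term can never create a β-redex when unshared, whereas a shared abstraction can. The environment representation and the predicate name are assumptions made for the example.

    ```ocaml
    (* Sketch of the "usefulness" idea behind useful sharing, not the
       GLAMoUr.  A machine keeps evaluated arguments shared in an
       environment; replacing a variable by an inert term can never create
       a beta-redex, so such substitution steps can be skipped and the
       result stays shared. *)

    type term =
      | Var of string
      | Lam of string * term
      | App of term * term

    (* Environment: shared, already-evaluated results. *)
    type env = (string * term) list

    (* A substitution step for x is (roughly) useful only if the shared
       term is an abstraction: only then can plugging it in produce a new
       redex.  The real test in the paper also takes into account where x
       occurs; here we only inspect the shape of the shared term, which is
       why the check is constant time. *)
    let useful (e : env) (x : string) : bool =
      match List.assoc_opt x e with
      | Some (Lam _) -> true    (* may create a redex when applied *)
      | Some _ | None -> false  (* inert or unbound: unsharing is useless *)
    ```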

    Modelling and Refinement in CODA

    This paper provides an overview of the CODA framework for modelling and refinement of component-based embedded systems. CODA is an extension of Event-B and UML-B and is supported by a plug-in for the Rodin toolset. CODA augments Event-B with constructs for component-based modelling including components, communications ports, port connectors, timed communications and timing triggers. Component behaviour is specified through a combination of UML-B state machines and Event-B. CODA communications and timing are given an Event-B semantics through translation rules. Refinement is based on Event-B refinement and allows layered construction of CODA models in a consistent way. Comment: In Proceedings Refine 2013, arXiv:1305.563
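
    CODA itself is defined as an extension of Event-B and UML-B inside the Rodin toolset, so there is no code to quote here; purely as an illustrative abstraction of the modelling vocabulary the abstract lists (all names below are invented for the example), the component/port/connector structure could be sketched as plain OCaml data types.

    ```ocaml
    (* Illustrative data model only: CODA is expressed in Event-B / UML-B,
       not OCaml.  These types merely sketch the vocabulary the abstract
       mentions: components, ports, connectors and timed communications. *)

    type direction = In | Out

    type port = { port_name : string; direction : direction }

    type component = {
      comp_name : string;
      ports     : port list;
      (* behaviour would be given by a state machine plus Event-B events *)
    }

    type connector = {
      from_port : string * string;  (* (component, out-port) *)
      to_port   : string * string;  (* (component, in-port)  *)
      delay     : int option;       (* timed communication, abstract ticks *)
    }

    type model = { components : component list; connectors : connector list }

    (* example: a sensor wired to a controller with a one-tick delay *)
    let example = {
      components = [
        { comp_name = "Sensor";     ports = [ { port_name = "out"; direction = Out } ] };
        { comp_name = "Controller"; ports = [ { port_name = "in";  direction = In  } ] };
      ];
      connectors = [
        { from_port = ("Sensor", "out"); to_port = ("Controller", "in"); delay = Some 1 };
      ];
    }
    ```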

    Classical logic, continuation semantics and abstract machines

    One of the goals of this paper is to demonstrate that denotational semantics is useful for operational issues like the implementation of functional languages by abstract machines. This is exemplified in a tutorial way by studying the case of the extensional untyped call-by-name λ-calculus with Felleisen's control operator 𝒞. We derive the transition rules for an abstract machine from a continuation semantics which appears as a generalization of the ¬¬-translation known from logic. The resulting abstract machine appears as an extension of Krivine's machine implementing head reduction. Though the result, namely Krivine's machine, is well known, our method of deriving it from continuation semantics is new and applicable to other languages (such as call-by-value variants). A further new result is that Scott's D∞-models are all instances of continuation models. Moreover, we extend our continuation semantics to Parigot's λμ-calculus, from which we derive an extension of Krivine's machine for the λμ-calculus. The relation between the continuation semantics and the abstract machines is made precise by proving computational adequacy results, employing an elegant method introduced by Pitts.
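
    For readers who have not seen it, the machine the abstract builds on, Krivine's machine for weak head reduction of call-by-name terms, can be sketched as follows. This is the textbook presentation with de Bruijn indices, not the paper's derivation from continuation semantics, and it omits the control operator 𝒞 and the λμ extension.

    ```ocaml
    (* Standard Krivine machine for weak head reduction of call-by-name
       lambda-terms, using de Bruijn indices.  Assumes closed terms; the
       control operator and the lambda-mu extension of the paper are not
       included. *)

    type term =
      | Var of int            (* de Bruijn index *)
      | Lam of term
      | App of term * term

    (* A closure pairs a term with the environment it was built in. *)
    type closure = Clo of term * env
    and env = closure list

    (* State: code, its environment, and a stack of argument closures. *)
    let rec run (t : term) (e : env) (stack : closure list) : term =
      match t, stack with
      | Var n, _ ->
        let Clo (t', e') = List.nth e n in
        run t' e' stack                    (* look up and continue *)
      | App (f, a), _ ->
        run f e (Clo (a, e) :: stack)      (* push the argument *)
      | Lam body, c :: rest ->
        run body (c :: e) rest             (* pop the argument and bind it *)
      | Lam _, [] -> t                     (* weak head normal form reached;
                                              readback of e is omitted *)

    (* e.g. run (App (Lam (Var 0), Lam (Var 0))) [] []
       evaluates (λx.x)(λx.x) to λx.x *)
    ```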

    Competitive-Ratio Approximation Schemes for Minimizing the Makespan in the Online-List Model

    We consider online scheduling on multiple machines for jobs arriving one by one, with the objective of minimizing the makespan. For any number of identical parallel or uniformly related machines, we provide a competitive-ratio approximation scheme that computes an online algorithm whose competitive ratio is arbitrarily close to the best possible competitive ratio. We also determine this value up to any desired accuracy. This is the first application of competitive-ratio approximation schemes in the online-list model, and it shows that the concept applies to different online models. We expect that it will foster further research on other online problems.
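
    The competitive-ratio approximation scheme itself is too involved to sketch here, but the online-list model it targets is easy to illustrate: jobs arrive one by one and must be assigned irrevocably before the next one is seen. The OCaml fragment below implements Graham's classical greedy list scheduling, a (2 - 1/m)-competitive baseline on m identical machines; it is only meant to fix the model, not to reproduce the paper's algorithm.

    ```ocaml
    (* Graham's greedy list scheduling: a classical baseline for the
       online-list makespan model.  Each job is assigned, on arrival, to
       the machine with the smallest current load.  On m identical
       machines this is (2 - 1/m)-competitive; it is NOT the
       competitive-ratio approximation scheme of the paper. *)

    let greedy_schedule (m : int) (jobs : float list) : float array =
      let loads = Array.make m 0.0 in
      List.iter
        (fun p ->
           (* index of the currently least loaded machine *)
           let best = ref 0 in
           for i = 1 to m - 1 do
             if loads.(i) < loads.(!best) then best := i
           done;
           loads.(!best) <- loads.(!best) +. p)
        jobs;
      loads

    let makespan loads = Array.fold_left max 0.0 loads

    (* e.g. makespan (greedy_schedule 2 [3.; 3.; 2.; 2.; 2.]) = 7.,
       while the offline optimum for these jobs is 6. *)
    ```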

    (Leftmost-Outermost) Beta Reduction is Invariant, Indeed

    Slot and van Emde Boas' weak invariance thesis states that reasonable machines can simulate each other within a polynomial overhead in time. Is the lambda-calculus a reasonable machine? Is there a way to measure the computational complexity of a lambda-term? This paper presents the first complete positive answer to this long-standing problem. Moreover, our answer is completely machine-independent and based on a standard notion in the theory of the lambda-calculus: the length of a leftmost-outermost derivation to normal form is an invariant cost model. Such a theorem cannot be proved by directly relating the lambda-calculus to Turing machines or random access machines, because of the size explosion problem: there are terms that in a linear number of steps produce an exponentially long output. The first step towards the solution is to shift to a notion of evaluation for which the length and the size of the output are linearly related. This is done by adopting the linear substitution calculus (LSC), a calculus of explicit substitutions modeled after linear logic proof nets and admitting a decomposition of leftmost-outermost derivations with the desired property. Thus, the LSC is invariant with respect to, say, random access machines. The second step is to show that the LSC is invariant with respect to the lambda-calculus. The size explosion problem seems to imply that this is not possible: having the same notions of normal form, evaluation in the LSC is exponentially longer than in the lambda-calculus. We solve this impasse by introducing a new form of shared normal form and shared reduction, deemed useful. Useful evaluation avoids those steps that only unshare the output without contributing to beta-redexes, i.e. the steps that cause the blow-up in size. The main technical contribution of the paper is indeed the definition of useful reductions and the thorough analysis of their properties. Comment: arXiv admin note: substantial text overlap with arXiv:1405.331
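
    The size explosion problem mentioned in the abstract can be made concrete. With c = λx.λy.(y x) x and t_n = c (c (… (c (λz.z)))), reducing the innermost redex first takes exactly n β-steps and produces a normal form of size Θ(2^n), while t_n itself has size linear in n. The OCaml fragment below (naive substitution, names chosen for the illustration) checks this numerically.

    ```ocaml
    (* Concrete instance of the size explosion problem.  With
       c = \x.\y.(y x) x and t_n = c (c (... (c (\z.z)))), reducing the
       innermost redex first performs n beta-steps and yields a normal
       form of size roughly 2^n.  Naive substitution is safe here because
       the substituted term is always closed. *)

    type term =
      | Var of string
      | Lam of string * term
      | App of term * term

    let rec size = function
      | Var _ -> 1
      | Lam (_, t) -> 1 + size t
      | App (t, u) -> 1 + size t + size u

    let rec subst x v = function
      | Var y -> if y = x then v else Var y
      | Lam (y, t) -> if y = x then Lam (y, t) else Lam (y, subst x v t)
      | App (t, u) -> App (subst x v t, subst x v u)

    (* one beta-step: c v -> \y. (y v) v, where v is already a value *)
    let step v =
      subst "x" v (Lam ("y", App (App (Var "y", Var "x"), Var "x")))

    (* n steps starting from the identity: the size grows like 2^n *)
    let explode n =
      let rec go i v = if i = 0 then v else go (i - 1) (step v) in
      go n (Lam ("z", Var "z"))

    (* e.g. size (explode 10) is already in the thousands, while t_10
       itself has size linear in 10. *)
    ```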