
    A Rational Deconstruction of Landin's SECD Machine with the J Operator

    Landin's SECD machine was the first abstract machine for applicative expressions, i.e., functional programs. Landin's J operator was the first control operator for functional languages, and was specified by an extension of the SECD machine. We present a family of evaluation functions corresponding to this extension of the SECD machine, using a series of elementary transformations (transformation into continuation-passing style (CPS) and defunctionalization, chiefly) and their left inverses (transformation into direct style and refunctionalization). To this end, we modernize the SECD machine into a bisimilar one that operates in lockstep with the original one but that (1) does not use a data stack and (2) uses the caller-save rather than the callee-save convention for environments. We also identify that the dump component of the SECD machine is managed in a callee-save way. The caller-save counterpart of the modernized SECD machine precisely corresponds to Thielecke's double-barrelled continuations and to Felleisen's encoding of J in terms of call/cc. We then variously characterize the J operator in terms of CPS and in terms of delimited-control operators in the CPS hierarchy. As a byproduct, we also present several reduction semantics for applicative expressions with the J operator, based on Curien's original calculus of explicit substitutions. These reduction semantics mechanically correspond to the modernized versions of the SECD machine and, to the best of our knowledge, they provide the first syntactic theories of applicative expressions with the J operator.
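    A minimal sketch (ours, in Scala, not the paper's development) of the two elementary transformations named above, applied to a toy arithmetic evaluator rather than to Landin's applicative expressions: CPS transformation makes the evaluator pass its continuation explicitly, and defunctionalization then turns those continuations into data, yielding a small abstract machine.

        // Toy language: integer literals and addition.
        sealed trait Exp
        case class Lit(n: Int) extends Exp
        case class Add(l: Exp, r: Exp) extends Exp

        object DirectStyle {
          def eval(e: Exp): Int = e match {
            case Lit(n)    => n
            case Add(l, r) => eval(l) + eval(r)
          }
        }

        object Cps {
          // CPS: every recursive call receives its continuation explicitly.
          def eval(e: Exp, k: Int => Int): Int = e match {
            case Lit(n)    => k(n)
            case Add(l, r) => eval(l, vl => eval(r, vr => k(vl + vr)))
          }
        }

        object Defunctionalized {
          // Defunctionalization: each continuation closure of Cps.eval becomes a
          // constructor, interpreted by 'continue'; eval/continue form a small
          // abstract machine.
          sealed trait Cont
          case object Halt extends Cont
          case class AddR(r: Exp, k: Cont) extends Cont   // right operand r still to evaluate
          case class AddL(vl: Int, k: Cont) extends Cont  // left value vl remembered

          def eval(e: Exp, k: Cont): Int = e match {
            case Lit(n)    => continue(k, n)
            case Add(l, r) => eval(l, AddR(r, k))
          }
          def continue(k: Cont, v: Int): Int = k match {
            case Halt         => v
            case AddR(r, k2)  => eval(r, AddL(v, k2))
            case AddL(vl, k2) => continue(k2, vl + v)
          }
          // e.g. eval(Add(Lit(1), Add(Lit(2), Lit(3))), Halt) == 6
        }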

    Type soundness proofs with definitional interpreters

    While type soundness proofs are taught in every graduate PL class, the gap between realistic languages and what is accessible to formal proofs is large. In the case of Scala, it has been shown that its formal model, the Dependent Object Types (DOT) calculus, cannot simultaneously support key metatheoretic properties such as environment narrowing and subtyping transitivity, which are usually required for a type soundness proof. Moreover, Scala and many other realistic languages lack a general substitution property. The first contribution of this paper is to demonstrate how type soundness proofs for advanced, polymorphic type systems can be carried out with an operational semantics based on high-level, definitional interpreters, implemented in Coq. We present the first mechanized soundness proofs in this style for System F<: and several extensions, including mutable references. Our proofs use only straightforward induction, which is significant, as the combination of big-step semantics, mutable references, and polymorphism is commonly believed to require coinductive proof techniques. The second main contribution of this paper is to show how DOT-like calculi emerge from straightforward generalizations of the operational aspects of F<:, exposing a rich design space of calculi with path-dependent types in between System F and DOT, which we dub the System D Square. By working directly on the target language, definitional interpreters can focus the design space and expose the invariants that actually matter at runtime. Looking at such runtime invariants is an exciting new avenue for type system design. This research was supported by NSF through awards 1553471 and 1564207.
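    A minimal sketch (ours, in Scala, not the paper's Coq development) of what an operational semantics based on a high-level, definitional interpreter looks like: a big-step, environment-passing evaluator for the untyped lambda calculus made total with a fuel counter, so non-termination shows up as running out of fuel rather than as a diverging function.

        // Terms with de Bruijn indices; the only values are closures.
        sealed trait Tm
        case class Vr(x: Int) extends Tm
        case class Lam(body: Tm) extends Tm
        case class App(f: Tm, a: Tm) extends Tm

        case class Clos(env: List[Clos], body: Tm)

        def eval(fuel: Int, env: List[Clos], t: Tm): Option[Clos] =
          if (fuel == 0) None                      // out of steps: evaluation "times out"
          else t match {
            case Vr(x)  => env.lift(x)             // None on a dangling index
            case Lam(b) => Some(Clos(env, b))
            case App(f, a) =>
              eval(fuel - 1, env, f) match {
                case Some(Clos(cenv, body)) =>
                  eval(fuel - 1, env, a) match {
                    case Some(arg) => eval(fuel - 1, arg :: cenv, body)
                    case None      => None
                  }
                case None => None
              }
          }
        // e.g. eval(100, Nil, App(Lam(Vr(0)), Lam(Vr(0)))) returns Some(identity closure)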

    Dependent Object Types

    A scalable programming language is one in which the same concepts can describe small as well as large parts. Towards this goal, Scala unifies concepts from object and module systems. In particular, objects can contain type members, which can be selected as types, called path-dependent types. Focusing on path-dependent types, we develop a type-theoretic foundation for Scala: the calculus of Dependent Object Types (DOT). We derive DOT from System F in a sequence of steps: (1) we add a lower bound to each type variable, in addition to its usual upper bound; (2) in System D, we turn each type variable into a regular term variable containing a type; (3) for a full subtyping lattice, we add intersection and union types; (4) for objects, we consolidate all values into records; (5) for objects that close over a self, we introduce a recursive type, binding a self term variable; (6) for recursive types, we first extend the theory in typing and then also in subtyping. Through this bottom-up exploration, we discover a sound, uniform yet powerful design for DOT. We devise strategies and techniques for proving soundness that scale through this iterative step-by-step process: (1) "pushback" of subtyping transitivity or subsumption, to concisely capture inversion of subtyping or typing; (2) distinction between concrete vs. abstract context variables, to resolve the tension between preservation of types vs. preservation of type abstractions; and (3), specifically for big-step semantics, a type that closes over an environment, to relate context-dependent types across closures. While ultimately we have developed sound models of DOT in both big-step and small-step operational semantics, historically the shift to big-step semantics has been helpful in focusing the requirements. In particular, by developing a novel big-step soundness proof for System F<:, calculi like System D<: emerge as straightforward generalizations, almost like removing artificial restrictions. Interesting in their own right, our type soundness techniques for definitional interpreters extend to mutable references without use of co-induction. The DOT calculus finally grounds languages like Scala in firm theory. The DOT calculus helps in finding bugs in Scala, and in better understanding feature interactions and requirements. The DOT calculus serves as a good basis for future work that studies extensions or encodings on top of the core, bridging the gap from DOT to Dotty / Scala.
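    For readers who have not seen the feature being modelled, a path-dependent type as it appears in ordinary Scala source (a small example of ours, not taken from the thesis): the type member A is selected off the term 'from', so its identity depends on which object 'from' denotes.

        trait Cell {
          type A                                   // abstract type member
          def get: A
          def set(a: A): Unit
        }

        // 'from.A' is a path-dependent type: a type selected off the term 'from'.
        def copy(from: Cell)(to: Cell { type A = from.A }): Unit =
          to.set(from.get)

        val intCell: Cell { type A = Int } = new Cell {
          type A = Int
          private var value = 0
          def get: A = value
          def set(a: A): Unit = { value = a }
        }
        // copy(intCell)(intCell) type-checks; copying between cells whose A differ does not.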

    On Computational Small Steps and Big Steps: Refocusing for Outermost Reduction

    We study the relationship between small-step semantics, big-step semantics and abstract machines, for programming languages that employ an outermost reduction strategy, i.e., languages where reductions near the root of the abstract syntax tree are performed before reductions near the leaves. In particular, we investigate how Biernacka and Danvy's syntactic correspondence and Reynolds's functional correspondence can be applied to inter-derive semantic specifications for such languages. The main contribution of this dissertation is three-fold. First, we identify that backward-overlapping reduction rules in the small-step semantics cause the refocusing step of the syntactic correspondence to be inapplicable. Second, we propose two solutions to overcome this inapplicability: backtracking and rule generalization. Third, we show how these solutions affect the other transformations of the two correspondences. Other contributions include the application of the syntactic and functional correspondences to Boolean normalization. In particular, we show how to systematically derive a spectrum of normalization functions for negational and conjunctive normalization.
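    A small illustration (ours, not the dissertation's derivation) of the Boolean-normalization setting mentioned above: negational normalization written directly as a big-step-style normalization function in Scala, with the standard negation-pushing rules applied at the root of each subterm first, i.e. outermost.

        sealed trait B
        case object BTrue extends B
        case object BFalse extends B
        case class BVar(name: String) extends B
        case class BNot(e: B) extends B
        case class BAnd(l: B, r: B) extends B
        case class BOr(l: B, r: B) extends B

        // One recursive pass pushing negations inward (negation normal form).
        def nnf(e: B): B = e match {
          case BNot(BNot(x))    => nnf(x)
          case BNot(BAnd(l, r)) => BOr(nnf(BNot(l)), nnf(BNot(r)))
          case BNot(BOr(l, r))  => BAnd(nnf(BNot(l)), nnf(BNot(r)))
          case BNot(BTrue)      => BFalse
          case BNot(BFalse)     => BTrue
          case BAnd(l, r)       => BAnd(nnf(l), nnf(r))
          case BOr(l, r)        => BOr(nnf(l), nnf(r))
          case other            => other           // BTrue, BFalse, BVar, BNot(BVar(_))
        }
        // e.g. nnf(BNot(BAnd(BVar("p"), BVar("q")))) == BOr(BNot(BVar("p")), BNot(BVar("q")))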

    Mechanizing Abstract Interpretation

    It is important when developing software to verify the absence of undesirable behavior such as crashes, bugs and security vulnerabilities. Some settings require high assurance in verification results, e.g., for embedded software in automobiles or airplanes. To achieve high assurance in these verification results, formal methods are used to automatically construct or check proofs of their correctness. However, achieving high assurance for program analysis results is challenging, and current methods are ill suited for both complex critical domains and mainstream use. To verify the correctness of software we consider program analyzers---automated tools which detect software defects---and to achieve high assurance in verification results we consider mechanized verification---a rigorous process for establishing the correctness of program analyzers via computer-checked proofs. The key challenges to designing verified program analyzers are: (1) achieving an analyzer design for a given programming language and correctness property; (2) achieving an implementation for the design; and (3) achieving a mechanized verification that the implementation is correct w.r.t. the design. The state of the art in (1) and (2) is to use abstract interpretation: a guiding mathematical framework for systematically constructing analyzers directly from programming language semantics. However, achieving (3) in the presence of abstract interpretation has remained an open problem since the late 1990s. Furthermore, even the state of the art which achieves (3) in the absence of abstract interpretation suffers from the inability to be reused in the presence of new analyzer designs or programming language features. First, we solve the open problem which has prevented the combination of abstract interpretation (and in particular, calculational abstract interpretation) with mechanized verification, which advances the state of the art in designing, implementing, and verifying analyzers for critical software. We do this through a new mathematical framework, Constructive Galois Connections, which supports synthesizing specifications for program analyzers and calculating implementations from these induced specifications, and which is amenable to mechanized verification. Finally, we introduce reusable components for implementing analyzers for a wide range of designs and semantics. We do this through two new frameworks, Galois Transformers and Definitional Abstract Interpreters. These frameworks tightly couple analyzer design decisions, implementation fragments, and verification properties into compositional components which are (target) programming-language independent and amenable to mechanized verification. Variations in the analysis design are then recovered by simply re-assembling the combination of components. Using these frameworks, sophisticated program analyzers can be assembled by non-experts, and the results are guaranteed to be verified by construction.
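    A deliberately tiny sketch (ours, far simpler than constructive Galois connections) of what constructing an analyzer directly from a language semantics looks like: a sign analysis for integer expressions in Scala, where alpha abstracts a concrete value and addHat soundly over-approximates addition, so the analyzer is just the interpreter run over the abstract domain.

        sealed trait Sign
        case object Neg  extends Sign
        case object Zero extends Sign
        case object Pos  extends Sign
        case object Top  extends Sign              // any integer

        def alpha(n: Int): Sign =                  // abstraction of one concrete value
          if (n < 0) Neg else if (n == 0) Zero else Pos

        def addHat(a: Sign, b: Sign): Sign = (a, b) match {
          case (Zero, s)  => s
          case (s, Zero)  => s
          case (Pos, Pos) => Pos
          case (Neg, Neg) => Neg
          case _          => Top                   // mixed or unknown signs
        }

        sealed trait E
        case class Num(n: Int) extends E
        case class Plus(l: E, r: E) extends E

        def analyze(e: E): Sign = e match {        // the analyzer: structural recursion over terms
          case Num(n)     => alpha(n)
          case Plus(l, r) => addHat(analyze(l), analyze(r))
        }
        // e.g. analyze(Plus(Num(2), Num(3))) == Pos; analyze(Plus(Num(2), Num(-3))) == Top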

    Predicting and validating the load-settlement behavior of large-scale geosynthetic-reinforced soil abutments using hybrid intelligent modeling

    Settlement prediction of geosynthetic-reinforced soil (GRS) abutments under service loading conditions is an arduous and challenging task for practicing geotechnical/civil engineers. Hence, in this paper, a novel hybrid artificial intelligence (AI)-based model was developed by combining an artificial neural network (ANN) with Harris hawks optimisation (HHO), that is, ANN-HHO, to predict the settlement of GRS abutments. Five other robust intelligent models, namely support vector regression (SVR), Gaussian process regression (GPR), relevance vector machine (RVM), sequential minimal optimisation regression (SMOR), and least-median square regression (LMSR), were constructed and compared to the ANN-HHO model. The predictive strength, reliability and robustness of the model were evaluated based on rigorous statistical testing, ranking criteria, a multi-criteria approach, uncertainty analysis and sensitivity analysis (SA). Moreover, the predictive veracity of the model was also substantiated against several large-scale independent experimental studies on GRS abutments reported in the scientific literature. The acquired findings demonstrated that the ANN-HHO model predicted the settlement of GRS abutments with reasonable accuracy and yielded superior performance in comparison to the counterpart models. Therefore, it can serve as one of the predictive tools employed by geotechnical/civil engineers in preliminary decision-making when investigating the in-service performance of GRS abutments. Finally, the model has been converted into a simple mathematical formulation for easy hand calculations, and it proves cost-effective and less time-consuming in comparison to experimental tests and numerical simulations.
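    An illustrative sketch only (not the paper's model, data, or derived formulation): the general shape of a "metaheuristic tunes the network weights" hybrid such as ANN-HHO, written in Scala with plain random search standing in for Harris hawks optimisation and a tiny one-hidden-layer regressor standing in for the ANN; all names and sizes are invented.

        import scala.util.Random

        // One-hidden-layer regressor: prediction = w2 . tanh(W1 x + b1) + b2
        case class Net(w1: Array[Array[Double]], b1: Array[Double],
                       w2: Array[Double], b2: Double) {
          def predict(x: Array[Double]): Double = {
            val h = w1.zip(b1).map { case (row, b) =>
              math.tanh(row.zip(x).map { case (w, xi) => w * xi }.sum + b) }
            w2.zip(h).map { case (w, hi) => w * hi }.sum + b2
          }
        }

        def mse(net: Net, xs: Array[Array[Double]], ys: Array[Double]): Double =
          xs.zip(ys).map { case (x, y) => math.pow(net.predict(x) - y, 2) }.sum / xs.length

        def randomNet(nIn: Int, nHid: Int, rng: Random): Net =
          Net(Array.fill(nHid, nIn)(rng.nextGaussian()), Array.fill(nHid)(rng.nextGaussian()),
              Array.fill(nHid)(rng.nextGaussian()), rng.nextGaussian())

        // The "optimiser" loop: keep the candidate with the lowest training error.
        // A real HHO would iteratively update a population of candidates instead of
        // sampling blindly, but the division of labour is the same.
        def fit(xs: Array[Array[Double]], ys: Array[Double], iters: Int = 5000): Net = {
          val rng = new Random(0)
          Iterator.fill(iters)(randomNet(xs.head.length, 4, rng)).minBy(mse(_, xs, ys))
        }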