34 research outputs found

    A dependently typed multi-stage calculus

    Get PDF
    Programming Languages and Systems: 17th Asian Symposium, APLAS 2019, Nusa Dua, Bali, Indonesia, December 1–4, 2019. Part of the Lecture Notes in Computer Science book series (LNCS, volume 11893) and of the Programming and Software Engineering subseries (LNPSE, volume 11893). We study a dependently typed extension of a multi-stage programming language à la MetaOCaml, which supports quasi-quotation and cross-stage persistence for manipulating code fragments as first-class values, and an evaluation construct for executing programs dynamically generated by this code manipulation. Dependent types are expected to bring to multi-stage programming the enforcement of strong invariants (beyond simple type safety) on the behavior of dynamically generated code. Such an extension is, however, not trivial, because the type system must take the stages of types (roughly speaking, the number of surrounding quotations) into account. To rigorously study the properties of such an extension, we develop λMD, an extension of Hanada and Igarashi's typed calculus λ▹% with dependent types, and prove its properties, including preservation, confluence, and strong normalization for full reduction, and progress for staged reduction. Motivated by code generators that generate code whose type depends on a value from outside the quotations, we argue for the significance of cross-stage persistence in dependently typed multi-stage programming and of certain type equivalences that are not directly derived from reduction rules.
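
    To make quasi-quotation, cross-stage persistence, and the evaluation construct concrete, here is a minimal sketch in BER MetaOCaml syntax, i.e. the simply typed setting the paper extends, not the λMD calculus itself:

        (* Staged power: [power n x] builds code for x * x * ... * 1, n factors. *)
        let rec power (n : int) (x : int code) : int code =
          if n = 0 then .<1>. else .< .~x * .~(power (n - 1) x) >.

        (* Quotation with an escape: generates code for fun x -> x * (x * 1). *)
        let square : (int -> int) code = .< fun x -> .~(power 2 .<x>.) >.

        (* Cross-stage persistence: the present-stage value [a] appears inside a quotation. *)
        let csp_example : int code = let a = 3 in .< a + 1 >.

        (* The evaluation construct: run the generated code. *)
        let sq : int -> int = Runcode.run square   (* sq 5 = 25 *)

    In λMD, unlike in plain MetaOCaml, the type of such generated code may itself mention a value from outside the quotations, which is what motivates the treatment of cross-stage persistence in the paper.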

    Novice Type Error Diagnosis with Natural Language Models

    Full text link
    Strong static type systems help programmers eliminate many errors without much of the burden of supplying type annotations. However, this flexibility makes it highly non-trivial to diagnose ill-typed programs, especially for novice programmers. Compared to classic constraint-solving and optimization-based approaches, the data-driven approach has shown great promise in identifying the root causes of type errors with higher accuracy. Instead of relying on hand-engineered features, this work explores natural language models for type error localization, which can be trained in an end-to-end fashion without requiring any features. We demonstrate that, for novice type error diagnosis, the language-model-based approach significantly outperforms the previous state-of-the-art data-driven approach. Specifically, under a more rigorous accuracy metric, our model predicts type errors correctly 62% of the time, outperforming the state-of-the-art data-driven model Nate by 11%. Furthermore, we apply structural probes to explain the performance difference between different language models. Comment: 17 pages, 8 figures.
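
    To illustrate what "root cause" means here, consider a hypothetical novice OCaml program in the spirit of examples from this literature (the names and the blame description are illustrative, not taken from the paper):

        (* The novice intends to sum a list of integers. *)
        let rec sum_list xs =
          match xs with
          | [] -> []                       (* the real mistake: should be 0 *)
          | x :: rest -> x + sum_list rest

        (* OCaml unifies the two branches and typically blames the second one
           (an int where a list was expected), but the root cause, i.e. the
           expression a novice should change, is the base case: [] should be 0.
           Data-driven localizers are trained to point at that root cause. *)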

    Scather: programming with multi-party computation and MapReduce

    Full text link
    We present a prototype of a distributed computational infrastructure, an associated high-level programming language, and an underlying formal framework that allow multiple parties to leverage their own cloud-based computational resources (capable of supporting MapReduce [27] operations) in concert with multi-party computation (MPC) to execute statistical analysis algorithms that have privacy-preserving properties. Our architecture allows a data analyst unfamiliar with MPC to (1) author an analysis algorithm that is agnostic with regard to data privacy policies, (2) use an automated process to derive algorithm implementation variants that have different privacy and performance properties, and (3) compile those implementation variants so that they can be deployed on an infrastructure that allows computations to take place locally within each participant's MapReduce cluster as well as across all the participants' clusters using an MPC protocol. We describe implementation details of the architecture, discuss and demonstrate how the formal framework enables the exploration of trade-offs between the efficiency and privacy properties of an analysis algorithm, and present two example applications that illustrate how such an infrastructure can be utilized in practice. This work was supported in part by NSF Grants #1430145, #1414119, #1347522, and #1012798.
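
    As a rough illustration of the local-versus-cross-party split described above, here is a conceptual OCaml sketch; it is not the paper's language or API, and the Mpc module and record fields are hypothetical placeholders:

        (* Hypothetical types: each party holds its own records. *)
        type record = { salary : int }
        type party  = { records : record list }

        (* Local step: an ordinary MapReduce-style aggregation inside one
           party's own cluster, revealing nothing to the other parties. *)
        let local_sum (p : party) : int =
          List.fold_left (fun acc r -> acc + r.salary) 0 p.records

        (* Cross-party step: only the local aggregates enter the MPC protocol.
           [Mpc.secret_share] and [Mpc.sum] are hypothetical placeholders. *)
        let private_total (parties : party list) : int =
          parties
          |> List.map (fun p -> Mpc.secret_share (local_sum p))
          |> Mpc.sum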

    Hatching Compositions of Low-code Templates

    Get PDF
    Acknowledgements: partially supported by grant Lisboa-01-0247-Feder-045917. Publisher copyright: © 2022 ACM. Low-code frameworks strive to simplify and speed up application development. Native support for the reuse and composition of parameterised coarse-grain components (templates) is essential to achieve these goals. OSTRICH, a rich template language for the OutSystems platform, was designed to simplify the use and creation of such templates. However, without a built-in composition mechanism, OSTRICH templates are hard to create and maintain. This paper presents a template composition mechanism and its typing and instantiation algorithms for model-driven low-code development environments. We evolve OSTRICH to support nested templates and allow the instantiation (hatching) of templates in the definition of other templates. We observe a significant increase in code-reuse potential, leading to safer evolution of applications. The present definition seamlessly extends the existing OutSystems metamodel with template constructs expressed as model annotations that maintain backward compatibility with the existing language toolchain. We present the metamodel, its annotations, and the corresponding validation and instantiation algorithms. In particular, we introduce a type-based validation procedure that ensures that using a template inside a template produces valid models. The work is validated using the OSTRICH benchmark. Our prototype is an extension of the OutSystems IDE that allows models to be annotated and used to produce new models. We also analyse which existing OutSystems sample screen templates can be improved by using and sharing nested templates.
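
    A toy analogy only (OSTRICH templates are annotations on OutSystems models, not OCaml code; the types and names below are invented for illustration): nested templates behave like parameterised functions over a model-fragment type, where one template's definition hatches another:

        (* Invented toy model of screen fragments. *)
        type widget = Label of string | Input of string | Container of widget list

        (* A leaf template, parameterised by a label. *)
        let field_template ~label : widget =
          Container [ Label label; Input label ]

        (* A nested template: its definition instantiates ("hatches")
           field_template once per column, so composition is just using one
           template inside another, and the type checker guarantees the
           result is a well-formed widget. *)
        let form_template ~columns : widget =
          Container (List.map (fun c -> field_template ~label:c) columns)

        let _ = form_template ~columns:[ "Name"; "Email" ]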

    Staged Compilation with Two-Level Type Theory

    Full text link
    The aim of staged compilation is to enable metaprogramming in a way such that we have guarantees about the well-formedness of the code output, and such that object-level and meta-level code can be mixed in a concise and convenient manner. In this work, we observe that two-level type theory (2LTT), a system originally devised for the purpose of developing synthetic homotopy theory, also serves as a system for staged compilation with dependent types. 2LTT has numerous good properties for this use case: it has a concise specification and a well-behaved model theory, and it supports a wide range of language features at both the object and the meta level. First, we give an overview of 2LTT's features and applications in staging. Then, we present a staging algorithm and prove its correctness. Our algorithm is "staging-by-evaluation", analogous to the technique of normalization-by-evaluation, in that staging is given by the evaluation of 2LTT syntax in a semantic domain. The staging algorithm together with its correctness proof constitutes a proof of strong conservativity of 2LTT over the object theory. To our knowledge, this is the first description of staged compilation which supports full dependent types and unrestricted staging for types.
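
    2LTT itself is a dependent type theory, but the object/meta split it formalizes can be glimpsed in MetaOCaml-flavoured notation; the following is an analogy only, not 2LTT syntax: meta-level functions are evaluated away during staging, leaving only object-level code.

        (* Meta-level combinator: exists only at staging time. *)
        let twice (f : int code -> int code) (x : int code) : int code = f (f x)

        (* Staging evaluates [twice], so the generated object-level code is
           literally fun x -> (x + 1) + 1, with no higher-order call left over. *)
        let add_two : (int -> int) code =
          .< fun x -> .~(twice (fun c -> .< .~c + 1 >.) .<x>.) >.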

    Modular implicits

    Get PDF
    We present modular implicits, an extension to the OCaml language for ad-hoc polymorphism inspired by Scala implicits and modular type classes. Modular implicits are based on type-directed implicit module parameters, and elaborate straightforwardly into OCaml's first-class functors. Basing the design on OCaml's modules leads to a system that naturally supports many features from other languages with systematic ad-hoc overloading, including inheritance, instance constraints, constructor classes and associated types.
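
    For flavour, here is a small example in the syntax proposed for the extension (it requires the modular-implicits compiler branch and is not accepted by stock OCaml; Show_int and Show_list are illustrative instance names):

        module type Show = sig
          type t
          val show : t -> string
        end

        (* Implicit modules are candidates for type-directed resolution. *)
        implicit module Show_int = struct
          type t = int
          let show = string_of_int
        end

        (* An implicit functor: an instance derived from another instance. *)
        implicit module Show_list {S : Show} = struct
          type t = S.t list
          let show l = "[" ^ String.concat "; " (List.map S.show l) ^ "]"
        end

        (* {S : Show} is an implicit module parameter, filled in by the compiler. *)
        let print {S : Show} (x : S.t) = print_endline (S.show x)

        let () = print [1; 2; 3]   (* resolves to Show_list {Show_int} *)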