    Transient Typechecks are (Almost) Free

    Transient gradual typing imposes run-time type tests that typically cause a linear slowdown in programs’ performance. This performance impact discourages the use of type annotations because adding types to a program makes the program slower. A virtual machine can employ standard just-in-time optimizations to reduce the overhead of transient checks to near zero. These optimizations can give gradually typed languages performance comparable to state-of-the-art dynamic languages, so programmers can add types to their code without affecting their programs’ performance.
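
    As a rough illustration of the mechanism the abstract describes, the Python sketch below shows the kind of shallow structural test a transient check performs at an annotated boundary. The `check_transient` helper and the `PointLike` annotation are invented for illustration; they are not the paper's VM machinery.

```python
# Minimal sketch of a transient type check (illustrative, not any VM's
# actual implementation). At each annotated boundary, the value is
# shallowly checked for the members the annotation promises; when the
# check passes, it is pure overhead that a JIT compiler can remove.

def check_transient(value, required_methods):
    """Shallow structural check inserted at a type-annotation boundary."""
    for name in required_methods:
        if not hasattr(value, name):
            raise TypeError(f"transient check failed: no method '{name}'")
    return value

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y
    def dist(self):
        return (self.x ** 2 + self.y ** 2) ** 0.5

def norm(p):  # conceptually: def norm(p: PointLike) -> float
    p = check_transient(p, ("dist",))  # inserted per annotation
    return p.dist()

print(norm(Point(3, 4)))  # 5.0; the passing check is the removable overhead
```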

    ECROs: Building global scale systems from sequential code

    Funding Information: We would like to thank Matteo Marra, Jim Bauwens, and the anonymous reviewers for their comments, which helped improve the paper. Kevin De Porre is funded by an SB Fellowship of the Research Foundation - Flanders (project number 1S98519N). This work was partially supported by Fundação para a Ciência e a Tecnologia - Portugal (FCT/MCTES) under grants UIDB/04516/2020, PTDC/CCI-INF/32081/2017, and LISBOA-01-0145-FEDER-032662/PTDC/CCI-INF/32662/2017.

    To ease the development of geo-distributed applications, replicated data types (RDTs) offer a familiar programming interface while ensuring state convergence, low latency, and high availability. However, RDTs are still designed exclusively by experts using ad-hoc solutions that are error-prone and result in brittle systems. Recent works statically detect conflicting operations on existing data types and coordinate those operations at runtime to guarantee convergence and preserve application invariants. However, these approaches are too conservative, imposing coordination on a large number of operations. In this work, we propose a principled approach to design and implement efficient RDTs that take application invariants into account. Developers extend sequential data types with a distributed specification, which together form an RDT. We statically analyze the specification to detect conflicts and unravel their cause. This information is then used at runtime to serialize concurrent operations safely and efficiently. Our approach derives a correct RDT from any sequential data type without changes to the data type's implementation and with minimal coordination. We implement our approach in Scala and develop an extensive portfolio of RDTs. The evaluation shows that our approach provides performance similar to conflict-free replicated data types for commutative operations, and considerably improves the performance of non-commutative operations compared to existing solutions.
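
    The core idea can be sketched roughly as follows. The paper's implementation is in Scala and derives the conflict relation by static analysis of the specification; the Python sketch below instead hard-codes a conflict relation for a toy set, purely to illustrate how a sequential data type plus a specification decides which concurrent operations need coordination.

```python
# Illustrative sketch (assumed API, not the paper's implementation) of the
# ECRO idea: pair an unmodified sequential data type with a specification
# naming which operation pairs conflict; only those trigger coordination.

class SequentialSet:
    """Plain sequential data type, unaware of replication."""
    def __init__(self):
        self.items = set()
    def add(self, x):
        self.items.add(x)
    def remove(self, x):
        self.items.discard(x)

def conflicts(op1, op2):
    # Toy specification: add(x) and remove(x) on the same element do not
    # commute. The paper derives this statically; we hard-code it here.
    (n1, a1), (n2, a2) = op1, op2
    return n1 != n2 and a1 == a2

def needs_coordination(local_op, concurrent_ops):
    # Commuting operations apply locally; conflicting ones are serialized.
    return any(conflicts(local_op, op) for op in concurrent_ops)

print(needs_coordination(("add", 1), [("remove", 2)]))  # False: commute
print(needs_coordination(("add", 1), [("remove", 1)]))  # True: coordinate
```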

    Compass: Strong and Compositional Library Specifications in Relaxed Memory Separation Logic

    Multitudes of Objects: First Implementation and Case Study for Java

    In object-oriented programs, the relationship of an object to many objects is usually implemented using indirection through a collection. This is in contrast to a relationship to one object, which is usually implemented directly. However, using collections for relationships to many objects not only means that accessing the related objects always requires going through the collection first; it also presents a lurking maintenance problem that manifests itself when a relationship needs to be changed from to-one to to-many or vice versa. Continuing our prior work on fixing this problem, we show how we have extended the Java 7 programming language with multiplicities, that is, with expressions that evaluate to a number of objects not wrapped in a container, and we report on the experience gathered using these multiplicities in a case study.
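
    The maintenance problem the abstract names is easy to reproduce. The sketch below is illustrative Python with invented class names, not the paper's Java 7 extension; it shows how switching a relationship from to-one to to-many forces every client expression to change, which is exactly what container-free multiplicities are meant to avoid.

```python
# Illustrative sketch of the to-one / to-many maintenance problem
# (invented names; the paper's multiplicity syntax is not shown here).

class Project:
    def __init__(self, budget):
        self.budget = budget

class EmployeeToOne:
    def __init__(self, project):
        self.project = project           # to-one: direct reference

class EmployeeToMany:
    def __init__(self, projects):
        self.projects = list(projects)   # to-many: collection indirection

def total_budget_v1(e):
    return e.project.budget              # direct access

def total_budget_v2(e):
    # After the change to to-many, every client must iterate a collection.
    return sum(p.budget for p in e.projects)

print(total_budget_v1(EmployeeToOne(Project(10))))                  # 10
print(total_budget_v2(EmployeeToMany([Project(10), Project(5)])))   # 15
```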

    Which of My Transient Type Checks Are Not (Almost) Free?

    One form of type checking used in gradually typed languages is transient type checking: whenever an object ‘flows’ through code with a type annotation, the object is dynamically checked to ensure it has the methods required by the annotation. Just-in-time compilation and optimisation in virtual machines can eliminate much of the overhead of run-time transient type checks. Unfortunately, this optimisation is not uniform: some type checks will significantly decrease, or even increase, a program’s performance. In this paper, we refine the so-called “Takikawa” protocol and use it to identify which type annotations have the greatest effects on performance. In particular, we show how graphing the performance of such benchmarks while varying which type annotations are present in the source code can be used to discern potential patterns in performance. We demonstrate our approach by testing the Moth virtual machine: for many of the benchmarks where Moth’s transient type checking impacts performance, we have been able to identify one or two specific type annotations that are the likely cause. Without these type annotations, the performance impact of transient type checking becomes negligible. Using our technique, programmers can optimise programs by removing expensive type checks, and VM engineers can identify new opportunities for compiler optimisation.
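
    The measurement idea behind a Takikawa-style protocol can be sketched as follows. This is a hypothetical harness, not the paper's: `ANNOTATION_SITES` and `run_benchmark` are stand-ins. Running every subset of the annotations makes an expensive individual annotation show up as a consistent gap between the configurations that include it and those that do not.

```python
# Hypothetical sketch of a Takikawa-style harness: time every configuration
# of annotation sites to attribute overhead to individual annotations.
# `run_benchmark` stands in for running the real program with only the
# given annotations checked.

import itertools
import time

ANNOTATION_SITES = ("arg", "return", "field")  # invented annotation sites

def run_benchmark(enabled):
    # Stand-in workload; sleep time grows with the number of checked sites.
    time.sleep(0.001 * (1 + len(enabled)))

for k in range(len(ANNOTATION_SITES) + 1):
    for config in itertools.combinations(ANNOTATION_SITES, k):
        start = time.perf_counter()
        run_benchmark(config)
        elapsed = time.perf_counter() - start
        print(f"{config!r}: {elapsed:.4f}s")
```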

    Towards Zero-Overhead Disambiguation of Deep Priority Conflicts

    **Context** Context-free grammars are widely used for language prototyping and implementation. They allow formalizing the syntax of domain-specific or general-purpose programming languages concisely and declaratively. However, the natural and concise way of writing a context-free grammar is often ambiguous. Therefore, grammar formalisms support extensions in the form of *declarative disambiguation rules* to specify operator precedence and associativity, solving ambiguities that are caused by the subset of the grammar that corresponds to expressions.

    **Inquiry** Implementing support for declarative disambiguation within a parser typically comes with one or more of the following limitations in practice: a lack of parsing performance, or a lack of modularity (i.e., disallowing the composition of grammar fragments of potentially different languages). The latter concern is generally addressed by scannerless generalized parsers. We aim to equip scannerless generalized parsers with novel disambiguation methods that are inherently performant, without compromising modularity and language composition.

    **Approach** In this paper, we present a novel low-overhead implementation technique for disambiguating deep associativity and priority conflicts in scannerless generalized parsers with lightweight data dependencies.

    **Knowledge** Ambiguities with respect to operator precedence and associativity arise from combining the various operators of a language. While *shallow conflicts* can be resolved efficiently by one-level tree patterns, *deep conflicts* require more elaborate techniques, because they can occur arbitrarily nested in a tree. Current state-of-the-art approaches to solving deep priority conflicts come with a severe performance overhead.

    **Grounding** We evaluated our new approach against state-of-the-art declarative disambiguation mechanisms. By parsing a corpus of popular open-source repositories written in Java and OCaml, we found that our approach yields speedups of up to 1.73x over a grammar rewriting technique when parsing programs with deep priority conflicts, with a modest overhead of 1-2% when parsing programs without deep conflicts.

    **Importance** A recent empirical study shows that deep priority conflicts are indeed widespread in real-world programs: in a corpus of popular OCaml projects on GitHub, up to 17% of the source files contain deep priority conflicts. However, no solution in the literature addresses efficient disambiguation of deep priority conflicts while supporting modular and composable syntax definitions.
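
    To make the shallow/deep distinction concrete, the sketch below works on toy parse trees (an invented representation, not the paper's data-dependent implementation). A one-level tree pattern inspects only a direct parent-child pair, so it misses a conflicting construct that sits deeper on the rightmost spine of an operand.

```python
# Illustrative contrast between shallow and deep conflict detection on toy
# parse trees (invented representation, not the paper's implementation).

class Node:
    def __init__(self, op, children=()):
        self.op, self.children = op, list(children)

LOW_PRIORITY = {"if"}  # constructs that must not end the right operand of "-"

def shallow_conflict(parent):
    # One-level tree pattern: inspects only the direct child.
    return (parent.op == "-" and parent.children
            and parent.children[-1].op in LOW_PRIORITY)

def deep_conflict(parent):
    # A deep conflict may be nested arbitrarily far down the rightmost spine.
    if parent.op != "-" or not parent.children:
        return False
    node = parent.children[-1]
    while node is not None:
        if node.op in LOW_PRIORITY:
            return True
        node = node.children[-1] if node.children else None
    return False

# Tree for "e1 - e2 + if c then e3 else e4", parsed as e1 - (e2 + if ...):
tree = Node("-", [Node("lit"), Node("+", [Node("lit"), Node("if")])])
assert not shallow_conflict(tree)  # the direct child is "+": no match
assert deep_conflict(tree)         # the "if" hides one level deeper
print("deep conflict detected:", deep_conflict(tree))
```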