
    Latte: Lightweight Aliasing Tracking for Java

    Many existing systems track aliasing and uniqueness, each with its own trade-off between expressiveness and developer effort. We propose Latte, a new approach that aims to minimize both the amount of annotations and the complexity of invariants necessary for reasoning about aliasing in an object-oriented language with mutation. Our approach only requires annotations for parameters and fields, while annotations for local variables are inferred. Furthermore, it relaxes uniqueness to allow aliasing among local variables, as long as this aliasing can be precisely determined. This enables support for destructive reads without changes to the language or its run-time semantics. Despite this simplicity, we show how this design can still be used for tracking uniqueness and aliasing in a local sequential setting, with practical applications, such as modeling a stack.
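
    As a rough illustration of the annotation discipline described above, the Java sketch below annotates parameters and fields while leaving local variables unannotated. The @Unique annotation and the Node/Stack classes are hypothetical examples invented for this sketch; they are not Latte's actual syntax.

        import java.lang.annotation.*;

        // Hypothetical annotation standing in for the parameter/field annotations
        // the abstract describes; not Latte's real syntax.
        @Target({ElementType.FIELD, ElementType.PARAMETER})
        @Retention(RetentionPolicy.SOURCE)
        @interface Unique {}

        class Node {
            @Unique Node next;   // field: must hold the only reference to its target
            int value;
        }

        class Stack {
            @Unique Node top;

            // Only the parameter carries an annotation; aliasing among locals is
            // inferred, and moving 'top' into 'fresh.next' acts as a destructive
            // read without any change to Java's run-time semantics.
            void push(int value, @Unique Node fresh) {
                fresh.value = value;
                fresh.next = this.top;
                this.top = fresh;
            }
        }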

    Flexible Object Layouts: enabling lightweight language extensions by intercepting slot access

    Programming idioms, design patterns and application libraries often introduce cumbersome and repetitive boilerplate code to a software system. Language extensions and external DSLs (domain-specific languages) are sometimes introduced to reduce the need for boilerplate code, but they also complicate the system by introducing the need for language dialects and inter-language mediation. To address this, we propose to extend the structural reflective model of the language with object layouts, layout scopes and slots. Based on the new reflective language model we can 1) provide behavioral hooks to object layouts that are triggered when the fields of an object are accessed and 2) simplify the implementation of state-related language extensions such as stateful traits. By doing this we show how many idiomatic use cases that normally require boilerplate code can be more effectively supported. We present an implementation in Smalltalk, and illustrate its usage through a series of extended examples.
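
    The paper's setting is reflective Smalltalk; as a loose cross-language sketch, the idea of slots whose reads and writes can be intercepted can be pictured with the following Java interfaces. All names here (Slot, PlainSlot, LoggingSlot) are hypothetical and only illustrate the behavioral-hook idea, not the paper's actual model.

        import java.util.Map;

        // Hypothetical slot metaobjects: every field access goes through a Slot,
        // so a layout can attach behavior to reads and writes of an object's state.
        interface Slot {
            Object read(Map<String, Object> state);
            void write(Map<String, Object> state, Object value);
        }

        class PlainSlot implements Slot {
            final String name;
            PlainSlot(String name) { this.name = name; }
            public Object read(Map<String, Object> state) { return state.get(name); }
            public void write(Map<String, Object> state, Object value) { state.put(name, value); }
        }

        // A hooked slot wraps another slot and runs extra behavior on each access,
        // the kind of extension point that would otherwise need boilerplate code.
        class LoggingSlot implements Slot {
            final Slot inner;
            LoggingSlot(Slot inner) { this.inner = inner; }
            public Object read(Map<String, Object> state) {
                System.out.println("read intercepted");
                return inner.read(state);
            }
            public void write(Map<String, Object> state, Object value) {
                System.out.println("write intercepted");
                inner.write(state, value);
            }
        }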

    Enabling Additional Parallelism in Asynchronous JavaScript Applications

    JavaScript is a single-threaded programming language, so asynchronous programming is practiced out of necessity to ensure that applications remain responsive in the presence of user input or interactions with file systems and networks. However, many JavaScript applications execute in environments that do exhibit concurrency by, e.g., interacting with multiple or concurrent servers, or by using file systems managed by operating systems that support concurrent I/O. In this paper, we demonstrate that JavaScript programmers often schedule asynchronous I/O operations suboptimally, and that reordering such operations may yield significant performance benefits. Concretely, we define a static side-effect analysis that can be used to determine how asynchronous I/O operations can be refactored so that asynchronous I/O-related requests are made as early as possible, and so that the results of these requests are awaited as late as possible. While our static analysis is potentially unsound, we have not encountered any situations where it suggested reorderings that change program behavior. We evaluate the refactoring on 20 applications that perform file- or network-related I/O. For these applications, we observe average speedups ranging between 0.99% and 53.6% for the tests that execute refactored code (8.1% on average).
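
    The essence of the refactoring is to issue independent asynchronous requests as early as possible and to await their results as late as possible. The paper targets JavaScript's async/await; the sketch below shows the same before/after pattern with Java's CompletableFuture, where readFile is a hypothetical stand-in for a file- or network-bound request.

        import java.util.concurrent.CompletableFuture;

        class ReorderingSketch {
            // Hypothetical asynchronous read; stands in for an I/O-bound request.
            static CompletableFuture<String> readFile(String path) {
                return CompletableFuture.supplyAsync(() -> "contents of " + path);
            }

            // Before: each request is awaited immediately, so the two
            // independent I/O operations run one after the other.
            static String sequential() {
                String a = readFile("a.txt").join();
                String b = readFile("b.txt").join();
                return a + b;
            }

            // After: both requests are issued up front and only awaited when
            // their results are actually needed, letting them overlap.
            static String reordered() {
                CompletableFuture<String> fa = readFile("a.txt");
                CompletableFuture<String> fb = readFile("b.txt");
                return fa.join() + fb.join();
            }
        }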

    Verification of Shared-Reading Synchronisers

    Synchronisation classes are an important building block for shared memory concurrent programs. Thus to reason about such programs, it is important to be able to verify the implementation of these synchronisation classes, considering atomic operations as the synchronisation primitives on which the implementations are built. For synchronisation classes controlling exclusive access to a shared resource, such as locks, a technique has been proposed to reason about their behaviour. This paper proposes a technique to verify implementations of both exclusive access and shared-reading synchronisers. We use permission-based Separation Logic to describe the behaviour of the main atomic operations, and the basis for our technique is formed by a specification for class AtomicInteger, which is commonly used to implement synchronisation classes in java.util.concurrent. To demonstrate the applicability of our approach, we mechanically verify the implementation of various synchronisation classes like Semaphore, CountDownLatch and Lock. Comment: In Proceedings MeTRiD 2018, arXiv:1806.0933
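
    To picture the kind of implementation being verified, here is a minimal shared-memory latch built directly on AtomicInteger's atomic operations. It is an illustrative sketch in the spirit of CountDownLatch, not the java.util.concurrent implementation or the paper's verified code.

        import java.util.concurrent.atomic.AtomicInteger;

        // Minimal latch: 'count' threads must call countDown() before await()
        // returns. Built on the atomic operations (get, compareAndSet) whose
        // specifications form the basis of the verification technique.
        class SpinLatch {
            private final AtomicInteger count;

            SpinLatch(int count) { this.count = new AtomicInteger(count); }

            void countDown() {
                int c;
                do {
                    c = count.get();
                    if (c == 0) return;                     // already released
                } while (!count.compareAndSet(c, c - 1));   // atomic decrement
            }

            void await() {
                while (count.get() > 0) {
                    Thread.onSpinWait();                    // busy-wait until zero
                }
            }
        }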

    Session Types in a Linearly Typed Multi-Threaded Lambda-Calculus

    We present a formalization of session types in a multi-threaded lambda-calculus (MTLC) equipped with a linear type system, establishing for the MTLC both type preservation and global progress. The latter (global progress) implies that the evaluation of a well-typed program in the MTLC can never reach a deadlock. As this formulated MTLC can be readily embedded into ATS, a full-fledged language with a functional programming core that supports both dependent types (of DML-style) and linear types, we obtain a direct implementation of session types in ATS. In addition, we gain immediate support for a form of dependent session types based on this embedding into ATS. Compared to various existing formalizations of session types, we see the one given in this paper as unique in its closeness to concrete implementation. In particular, we report such an implementation ready for practical use that generates Erlang code from well-typed ATS source (making use of session types), thus taking great advantage of the infrastructural support for distributed computing in Erlang. Comment: This is the original version of the paper on supporting programming with dyadic session types in ATS.

    A Semantics for the Essence of React


    Speculative Staging for Interpreter Optimization

    Interpreters have a bad reputation for having lower performance than just-in-time compilers. We present a new way of building high performance interpreters that is particularly effective for executing dynamically typed programming languages. The key idea is to combine speculative staging of optimized interpreter instructions with a novel technique of incrementally and iteratively concerting them at run-time. This paper introduces the concepts behind deriving optimized instructions from existing interpreter instructions---incrementally peeling off layers of complexity. When compiling the interpreter, these optimized derivatives will be compiled along with the original interpreter instructions. Therefore, our technique is portable by construction since it leverages the existing compiler's backend. At run-time we use instruction substitution from the interpreter's original and expensive instructions to optimized instruction derivatives to speed up execution. Our technique unites high performance with the simplicity and portability of interpreters---we report that our optimization makes the CPython interpreter up to more than four times faster, where our interpreter closes the gap to, and sometimes even outperforms, PyPy's just-in-time compiler. Comment: 16 pages, 4 figures, 3 tables. Uses CPython 3.2.3 and PyPy 1.
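
    The substitution mechanism can be pictured with a toy interpreter in which a generic instruction rewrites its own slot in the instruction stream to a cheaper, specialised derivative after executing once. The sketch is hypothetical and only illustrates run-time instruction substitution; it is not the paper's CPython-based implementation.

        // Toy interpreter sketch: a generic ADD instruction replaces itself in
        // the instruction stream with a specialised derivative, analogous to
        // substituting expensive instructions with optimized derivatives.
        interface Instruction {
            void execute(int[] stack, int sp, Instruction[] code, int pc);
        }

        class GenericAdd implements Instruction {
            public void execute(int[] stack, int sp, Instruction[] code, int pc) {
                // Expensive general case (dynamic dispatch, type checks, ...).
                stack[sp - 2] = stack[sp - 2] + stack[sp - 1];
                // Speculatively substitute the cheaper derivative at this site;
                // the derivative was compiled along with the interpreter itself.
                code[pc] = new IntAdd();
            }
        }

        class IntAdd implements Instruction {
            public void execute(int[] stack, int sp, Instruction[] code, int pc) {
                // Optimized derivative: assumes integer operands, no checks.
                stack[sp - 2] = stack[sp - 2] + stack[sp - 1];
            }
        }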

    Relating Functional and Imperative Session Types

    Imperative session types provide an imperative interface to session-typed communication. In such an interface, channel references are first-class objects with operations that change the typestate of the channel. Compared to functional session type APIs, the program structure is simpler at the surface, but typestate is required to model the current state of communication throughout. Following an early work that explored the imperative approach, a significant body of work on session types has neglected the imperative approach and opted for a functional approach that uses linear types to manage channel references soundly. We demonstrate that the functional approach subsumes the early work on imperative session types by exhibiting a typing and semantics preserving translation into a system of linear functional session types. We further show that the untyped backwards translation from the functional to the imperative calculus is semantics preserving. We restrict the type system of the functional calculus such that the backwards translation becomes type preserving. Thus, we precisely capture the difference in expressiveness of the two calculi and conclude that the lack of expressiveness in the imperative calculus is largely due to restrictions imposed by its type system. Comment: 39 pages, in submission
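
    The contrast between the two interfaces can be sketched roughly as follows: an imperative channel reference whose typestate changes in place, versus a functional interface in which each operation consumes a channel value and returns one at the next protocol type. All types and names below are hypothetical illustrations, not the calculi from the paper.

        // Imperative interface: one first-class channel reference; operations
        // mutate its typestate, which must be tracked throughout the program.
        class ImpChannel {
            enum State { CAN_SEND, CAN_RECEIVE, CLOSED }
            private State state = State.CAN_SEND;

            void send(String msg) {
                if (state != State.CAN_SEND) throw new IllegalStateException();
                state = State.CAN_RECEIVE;   // typestate advances in place
            }

            String receive() {
                if (state != State.CAN_RECEIVE) throw new IllegalStateException();
                state = State.CLOSED;
                return "reply";
            }
        }

        // Functional interface: each operation returns a channel of the next
        // protocol type; a linear type system must forbid reusing the old value.
        final class SendOnce {
            RecvOnce send(String msg) { return new RecvOnce(); }
        }
        final class RecvOnce {
            String receive() { return "reply"; }
        }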

    Mechanized Verification of a Fine-Grained Concurrent Queue from Meta's Folly Library

    We present the first formal specification and verification of the fine-grained concurrent multi-producer-multi-consumer queue algorithm from Meta's C++ library Folly of core infrastructure components. The queue is highly optimized, practical, and used by Meta in production where it scales to thousands of consumer and producer threads. We present an implementation of the algorithm in an ML-like language and formally prove that it is a contextual refinement of a simple coarse-grained queue (a property that implies that the MPMC queue is linearizable). We use the ReLoC relational logic and the Iris program logic to carry out the proof and to mechanize it in the Coq proof assistant. The MPMC queue is implemented using three modules, and our proof is similarly modular. By using ReLoC and Iris's support for modular reasoning we verify each module in isolation and compose these together. A key challenge of the MPMC queue is that it has a so-called external linearization point, which ReLoC has no support for reasoning about. Thus we extend ReLoC, both on paper and in Coq, with novel support for reasoning about external linearization points.
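
    The specification side of such a refinement, a simple coarse-grained queue protected by a single lock, can be pictured as follows. This is an illustrative Java analogue of the coarse-grained queue that the fine-grained algorithm is proved to refine, not code from Folly or from the paper's ML-like implementation.

        import java.util.ArrayDeque;
        import java.util.concurrent.locks.ReentrantLock;

        // Coarse-grained queue: every operation holds one global lock, so its
        // behaviour is trivially atomic. Contextual refinement means any client
        // observing the fine-grained MPMC queue could as well be using this.
        class CoarseQueue<T> {
            private final ArrayDeque<T> items = new ArrayDeque<>();
            private final ReentrantLock lock = new ReentrantLock();

            void enqueue(T item) {
                lock.lock();
                try { items.addLast(item); } finally { lock.unlock(); }
            }

            T dequeue() {
                lock.lock();
                try { return items.pollFirst(); } finally { lock.unlock(); }   // null if empty
            }
        }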