128 research outputs found

    A Machine-Checked, Type-Safe Model of Java Concurrency: Language, Virtual Machine, Memory Model, and Verified Compiler

    The Java programming language provides safety and security guarantees, such as type safety and its security architecture, that distinguish it from other mainstream programming languages like C and C++. In this work, we develop a machine-checked model of concurrent Java and the Java memory model and investigate the impact of concurrency on these guarantees. From the formal model, we automatically obtain an executable verified compiler to bytecode and a validated virtual machine.
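
    To make the stakes concrete, the following hand-written Java sketch (illustrative only, not taken from this work) shows the kind of unsynchronized publication whose permitted outcomes are dictated by the Java memory model; pinning down exactly which behaviours are allowed here is what a mechanised model of concurrent Java makes tractable.

```java
// A minimal data-race example: `shared` is published without synchronization,
// so the Java memory model, not program order alone, determines what a reader may see.
public class UnsafePublication {
    static class Point {
        final int x;  // final fields enjoy special JMM initialization guarantees
        int y;        // non-final: a racy reader may observe the default value 0

        Point(int x, int y) { this.x = x; this.y = y; }
    }

    static Point shared;  // no volatile, no lock: the write below races with the read

    public static void main(String[] args) throws InterruptedException {
        Thread writer = new Thread(() -> shared = new Point(1, 2));
        Thread reader = new Thread(() -> {
            Point p = shared;  // may legally be observed as null
            if (p != null) {
                // p.x is guaranteed to be 1 (final-field semantics);
                // p.y may legally be observed as 0 under the JMM.
                System.out.println(p.x + "," + p.y);
            }
        });
        writer.start();
        reader.start();
        writer.join();
        reader.join();
    }
}
```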

    Domains: safe sharing among actors

    The actor model is a concurrency model that avoids issues such as deadlocks and data races by construction, and thus facilitates concurrent programming. While it has mainly been used for expressing distributed computations, it is equally useful for modeling concurrent computations on a single shared-memory machine. In component-based software, the actor model lends itself to dividing the components naturally over different actors and using message-passing concurrency for the interaction between these components. The tradeoff is that the actor model sacrifices expressiveness and efficiency with respect to parallel access to shared state. This paper gives an overview of the disadvantages of the actor model when trying to express shared state and then formulates an extension of the actor model to address them. Our solution proposes domains and synchronization views to solve these issues without compromising the semantic properties of the actor model. Thus, the resulting concurrency model maintains deadlock-freedom and avoids low-level data races.
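
    As a point of reference, this hand-written Java sketch (illustrative only, not from the paper) shows the discipline the actor model enforces: all of an actor's state is private and is touched by a single mailbox-processing loop, so data races are ruled out by construction. What the sketch cannot express is state shared between several actors, which is exactly the gap the proposed domains and synchronization views are designed to fill.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// A counter actor: its state is private and processed by exactly one thread,
// so there are no data races by construction. Efficient parallel access to
// state shared between actors has no direct counterpart in this model.
public class CounterActor {
    private final BlockingQueue<String> mailbox = new LinkedBlockingQueue<>();
    private int count = 0;  // actor-private state

    public void send(String msg) { mailbox.offer(msg); }  // asynchronous message send

    public void run() throws InterruptedException {
        while (true) {
            String msg = mailbox.take();  // messages are processed one at a time
            if (msg.equals("inc")) {
                count++;
            } else if (msg.equals("get")) {
                System.out.println("count = " + count);
            } else if (msg.equals("stop")) {
                return;
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        CounterActor actor = new CounterActor();
        Thread loop = new Thread(() -> {
            try { actor.run(); } catch (InterruptedException ignored) { }
        });
        loop.start();
        actor.send("inc");
        actor.send("inc");
        actor.send("get");   // prints "count = 2"
        actor.send("stop");
        loop.join();
    }
}
```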

    Towards Implicit Parallel Programming for Systems

    Multi-core processors require a program to be decomposable into independent parts that can execute in parallel in order to scale performance with the number of cores. But parallel programming is hard, especially when the program requires state, which many system programs use for optimization, for example a cache to reduce disk I/O. Most prevalent parallel programming models do not support a notion of state and require the programmer to synchronize state access manually, i.e., outside the realm of an associated optimizing compiler. This prevents the compiler from introducing parallelism automatically and requires the programmer to optimize the program manually. In this dissertation, we propose a programming language/compiler co-design that provides a new programming model for implicit parallel programming with state and a compiler that can optimize the program for parallel execution. We define the notion of a stateful function along with its composition and control structures. An example implementation of a highly scalable server shows that stateful functions integrate smoothly into existing programming language concepts, such as object-oriented programming and programming with structs. Our programming model is also highly practical and allows existing code bases to be adapted gradually. As a case study, we implemented a new data processing core for the Hadoop Map/Reduce system to overcome existing performance bottlenecks. Our lambda-calculus-based compiler automatically extracts parallelism without changing the program's semantics. We added further domain-specific, semantics-preserving transformations that reduce I/O calls for microservice programs. The runtime representation of a program is a dataflow graph that can be executed in parallel, performs concurrent I/O, and allows for non-blocking live updates.
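
    As an illustration of the execution model only (a hand-written sketch, not code from the dissertation), the following Java program approximates what a compiled pipeline of stateful functions looks like at runtime: each stage owns its state exclusively, stages communicate only through queues, and together they form a small dataflow graph whose stages run in parallel.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Two pipeline stages connected by a queue: a stateless tokenizer feeding a
// stateful word counter. The counter's map is touched by one thread only, so
// no manual synchronization of the state is needed.
public class StatefulPipeline {
    public static void main(String[] args) throws InterruptedException {
        final BlockingQueue<String> toCounter = new LinkedBlockingQueue<>();
        final String POISON = "__done__";  // end-of-stream marker

        // Stage 2: a stateful function whose state is the word-count map.
        Thread counter = new Thread(() -> {
            Map<String, Integer> counts = new HashMap<>();  // stage-private state
            try {
                for (String w = toCounter.take(); !w.equals(POISON); w = toCounter.take()) {
                    counts.merge(w, 1, Integer::sum);
                }
            } catch (InterruptedException ignored) { }
            System.out.println(counts);
        });
        counter.start();

        // Stage 1: a stateless tokenizer running on the main thread.
        for (String line : new String[] { "to be or not to be" }) {
            for (String word : line.split(" ")) {
                toCounter.put(word);
            }
        }
        toCounter.put(POISON);
        counter.join();
    }
}
```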

    Behavioural Types: from Theory to Tools

    This book presents research produced by members of COST Action IC1201: Behavioural Types for Reliable Large-Scale Software Systems (BETTY), a European research network that was funded from October 2012 to October 2016. The technical theme of BETTY was the use of behavioural type systems in programming languages to specify and verify properties of programs beyond the traditional use of type systems to describe data processing. A significant area within behavioural types is session types, which concerns the use of type-theoretic techniques to describe communication protocols so that static typechecking or dynamic monitoring can verify that protocols are implemented correctly. This is closely related to the topic of choreography, in which system design starts from a description of the overall communication flows. Another area is behavioural contracts, which describe the obligations of interacting agents in a way that enables blame to be attributed to the agent responsible for a failed interaction. Type-theoretic techniques can also be used to analyse potential deadlocks due to cyclic dependencies between inter-process interactions. BETTY was organised into four Working Groups: (1) Foundations; (2) Security; (3) Programming Languages; (4) Tools and Applications. Working Groups 1–3 produced “state-of-the-art reports”, which were originally intended to take snapshots of the field at the time the network started but grew into substantial survey articles including much research carried out during the network [1–3]. The situation for Working Group 4 was different. When the network started, the community had produced relatively few implementations of programming languages or tools. One of the aims of the network was to encourage more implementation work, and this was a great success. The community as a whole has developed a greater interest in putting theoretical ideas into practice. The sixteen chapters in this book describe systems that were either completely developed, or substantially extended, during BETTY. The total of 41 co-authors represents a significant proportion of the active participants in the network (around 120 people who attended at least one meeting). The book is a report on the new state of the art created by BETTY in the area of Working Group 4, and the title “Behavioural Types: from Theory to Tools” summarises the trajectory of the community during the last four years. The book begins with two tutorials by Atzei et al. on contract-oriented design of distributed systems. Chapter 1 introduces the CO2 contract specification language and the Diogenes toolchain. Chapter 2 describes how timing constraints can be incorporated into the framework and checked with the CO2 middleware. Part of the CO2 middleware is a monitoring system, and the theme of monitoring continues in the next two chapters. In Chapter 3, Attard et al. present detectEr, a runtime monitoring tool for Erlang programs that allows correctness properties to be expressed in Hennessy-Milner logic. In Chapter 4, which is the first chapter about session types, Neykova and Yoshida describe a runtime verification framework for Python programs. Communication protocols are specified in the Scribble language, which is based on multiparty session types. The next three chapters deal with choreographic programming. In Chapter 5, Debois and Hildebrandt present a toolset for working with dynamic condition response (DCR) graphs, which are a graphical formalism for choreography.
    Chapter 6, by Lange et al., continues the graphical theme with ChorGram, a tool for synthesising global graphical choreographies from collections of communicating finite-state automata. Giallorenzo et al., in Chapter 7, consider runtime adaptation. They describe AIOCJ, a choreographic programming language in which runtime adaptation is supported with a guarantee that it doesn’t introduce deadlocks or races. Deadlock analysis is important in other settings too, and there are two more chapters about it. In Chapter 8, Padovani describes the Hypha tool, which uses a type-based approach to check deadlock-freedom and lock-freedom of systems modelled in a form of pi-calculus. In Chapter 9, Garcia and Laneve present a tool for analysing deadlocks in Java programs; this tool, called JaDA, is based on a behavioural type system. The next three chapters report on projects that have added session types to functional programming languages in order to support typechecking of communication-based code. In Chapter 10, Orchard and Yoshida describe an implementation of session types in Haskell, and survey several approaches to typechecking the linearity conditions required for safe session implementation. In Chapter 11, Melgratti and Padovani describe an implementation of session types in OCaml. Their system uses runtime linearity checking. In Chapter 12, Lindley and Morris describe an extension of the web programming language Links with session types; their work contrasts with the previous two chapters in being less constrained by an existing language design. Continuing the theme of session types in programming languages, the next two chapters describe two approaches based on Java. Hu’s work, presented in Chapter 13, starts with the Scribble description of a multiparty session type and generates an API in the form of a collection of Java classes, each class containing the communication methods that are available in a particular state of the protocol. Dardha et al., in Chapter 14, also start with a Scribble specification. Their StMungo tool generates an API as a single class with an associated typestate specification to constrain sequences of method calls. Code that uses the API can be checked for correctness with the Mungo typechecker. Finally, there are two chapters about programming with the MPI libraries. Chapter 15, by Ng and Yoshida, uses an extension of Scribble, called Pabble, to describe protocols that are parametric in the number of runtime roles. From a Pabble specification they generate C code that uses MPI for communication and is guaranteed correct by construction. Chapter 16, by Ng et al., describes the ParTypes framework for analysing existing C+MPI programs with respect to protocols defined in an extension of Scribble. We hope that the book will serve a useful purpose as a report on the activities of COST Action IC1201 and as a survey of programming languages and tools based on behavioural types.
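
    To give a flavour of the typestate-style APIs discussed in Chapters 13 and 14, here is a hand-written Java miniature (the names are invented; nothing here is generated by Scribble or StMungo, or checked by Mungo): each protocol state is modelled as a class and each communication method returns the next state, so calling operations out of order becomes a compile-time type error.

```java
// Each protocol state is a class offering only the operations legal in that
// state; the return type of each method is the successor state.
public class TwoStepProtocol {
    // State 1: only sending the request is allowed.
    static final class AwaitingRequest {
        AwaitingResponse sendRequest(String payload) {
            System.out.println("-> request: " + payload);  // stand-in for a real send
            return new AwaitingResponse();
        }
    }

    // State 2: only receiving the response is allowed.
    static final class AwaitingResponse {
        Done receiveResponse() {
            System.out.println("<- response");  // stand-in for a real receive
            return new Done();
        }
    }

    // Terminal state: no further communication methods.
    static final class Done { }

    public static void main(String[] args) {
        new AwaitingRequest()
            .sendRequest("hello")
            .receiveResponse();
        // new AwaitingRequest().receiveResponse();  // would not compile: wrong state
    }
}
```

    One limitation of this plain-Java encoding, and one reason the linearity checks surveyed in Chapters 10 and 11 matter, is that nothing prevents a caller from keeping a reference to an old state object and using it twice.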
