174 research outputs found

    The parallel event loop model and runtime: a parallel programming model and runtime system for safe event-based parallel programming

    Recent trends in programming models for server-side development have shown an increasing popularity of event-based single-threaded programming models based on the combination of dynamic languages such as JavaScript and event-based runtime systems for asynchronous I/O management such as Node.js. Reasons for the success of such models are the simplicity of the single-threaded event-based programming model as well as the growing popularity of the Cloud as a deployment platform for Web applications. Unfortunately, the popularity of single-threaded models comes at the price of performance and scalability, as single-threaded event-based models present limitations when parallel processing is needed, and traditional approaches to concurrency such as threads and locks do not play well with event-based systems. This dissertation proposes a programming model and a runtime system to overcome such limitations by extending single-threaded event-based applications with support for speculative parallel execution. The model, called Parallel Event Loop, has the goal of bringing parallel execution to the domain of single-threaded event-based programming without relaxing the main characteristics of the single-threaded model, therefore providing developers with the impression of a safe, single-threaded runtime. Rather than supporting only pure single-threaded programming, however, the parallel event loop can also be used to derive safe, high-level parallel programming models characterized by strong compatibility with single-threaded runtimes. We describe three distinct implementations of speculative runtimes enabling the parallel execution of event-based applications. The first is a pessimistic runtime system that uses locks to implement speculative parallelization. The second and the third are based on two distinct optimistic runtimes using software transactional memory. Each of the implementations supports the parallelization of applications written in an asynchronous single-threaded programming style, and each of them enables applications to benefit from parallel execution.
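
    A generic sketch (not the thesis's runtime) may help picture the single-threaded event-loop style that the Parallel Event Loop model preserves: handlers are queued and, from the developer's perspective, observed as if they run one at a time, which is the guarantee the speculative runtimes keep while executing handlers in parallel behind the scenes. The EventLoop class and handler bodies below are invented for illustration, written in Kotlin.

```kotlin
// Minimal single-threaded event loop: handlers are enqueued and drained in order.
// Illustrative only; a speculative runtime would run these handlers in parallel
// while preserving the single-threaded observable behavior.
class EventLoop {
    private val queue = ArrayDeque<() -> Unit>()

    // Register an event handler (analogous to scheduling a callback).
    fun emit(handler: () -> Unit) = queue.addLast(handler)

    // Drain the queue: each handler observes the effects of all previous ones.
    fun run() {
        while (queue.isNotEmpty()) queue.removeFirst().invoke()
    }
}

fun main() {
    val loop = EventLoop()
    var counter = 0
    repeat(3) { i ->
        loop.emit { counter += 1; println("handled event $i, counter = $counter") }
    }
    loop.run()   // no locks needed: handlers never observe interleaved updates
}
```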

    Fully asynchronous Java APIs for web applications

    Applications for cloud environments present several non-functional requirements, such as execution time, responsiveness, and scalability. In order to address these requirements, Application Programming Interfaces (APIs) should be asynchronous, non-blocking, and cancelable. APIs for cloud environments work as a middleman between the servers/resources and the calls of client applications. An asynchronous, non-blocking API means that requests to the API are executed concurrently, returning immediately after firing the request and before the work is completed. Cancelable means that we are able to interrupt the work started by a request before it finishes (e.g., in case it takes more than a certain amount of time), freeing its corresponding resources. Resources may be other APIs, files, or databases. konkconsulting, the company where this dissertation work will take place, already has extensive experience with C# and frameworks that address the challenges of cancellation in that ecosystem. However, they now need to develop similar solutions for Java/Kotlin, and the transition has proved to be challenging due to the lack of an equivalent standard framework like the one found in C#. Kotlin is a relatively new programming language that can be compiled to run on the Java Virtual Machine. It is also interoperable with Java. The objective of this dissertation is to find a Java/Kotlin framework that can better accommodate the implementation of an API with the mentioned capabilities. The framework will be selected from the set of available open-source tools and shall then be extended to provide the required functionality. Kotlin offers a set of functionalities, such as coroutines, which will be used extensively in the implementation.
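
    As an illustration of the asynchronous, non-blocking, and cancelable style described above, the following Kotlin coroutines sketch fires a request, awaits the result without blocking a thread, and cancels the work when it exceeds a time budget. The fetchReport function and the timeout values are hypothetical; this is a minimal sketch under those assumptions, not the framework selected or extended in the dissertation.

```kotlin
import kotlinx.coroutines.*

// Hypothetical downstream call standing in for a slow resource (another API,
// a file, or a database).
suspend fun fetchReport(id: Int): String {
    delay(2_000)               // suspends the coroutine without blocking a thread
    return "report-$id"
}

fun main() = runBlocking {
    // Asynchronous and non-blocking: the call returns a handle immediately.
    val request: Deferred<String> = async { fetchReport(42) }

    try {
        // Cancelable: give up if the work takes longer than the time budget.
        val report = withTimeout(500) { request.await() }
        println(report)
    } catch (e: TimeoutCancellationException) {
        request.cancel()       // interrupt the started work and free its resources
        println("request cancelled after exceeding its time budget")
    }
}
```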

    Static detection of anomalies in transactional memory programs

    Dissertation presented at the Faculdade de Ciências e Tecnologia da Universidade Nova de Lisboa to obtain the degree of Master in Informatics Engineering. Transactional Memory (TM) is an approach to concurrent programming based on the transactional semantics borrowed from database systems. In this paradigm, a transaction is a sequence of actions that may execute in a single logical instant, as though it were the only one being executed at that moment. Unlike concurrent systems based on locks, TM does not enforce that a single thread is performing the guarded operations. Instead, as in database systems, transactions execute concurrently, and the effects of a transaction are undone in case of a conflict, as though it never happened. The advantages of TM are an easier and less error-prone programming model, and a potential increase in scalability and performance. In spite of these advantages, TM is still a young and immature technology, and has yet to become an established programming model. It still lacks the paraphernalia of tools and standards which we have come to expect from a widely used programming paradigm. Testing and analysis techniques and algorithms for TM programs are also just starting to be addressed by the scientific community, making this leading research work in many of these aspects. This work is aimed at statically identifying possible runtime anomalies in TM programs. We address both low-level data races in TM programs and high-level anomalies resulting from incorrect splitting of transactions. We have defined and implemented an approach to detect low-level data races in TM programs by converting all the memory transactions into monitor-protected critical regions, synchronized on a newly generated global lock. To validate the approach, we have applied our tool to a set of tests, adapted from the literature, that contain well-documented errors. We have also defined and implemented a new approach to the static detection of high-level concurrency anomalies in TM programs. This new approach works by conservatively tracing transactions and matching the interference between each consecutive pair of transactions against a set of defined anomaly patterns. Once again, the approach was validated with well-documented tests adapted from the literature.
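
    The lock-based encoding described above can be pictured with a short sketch: every atomic block is conservatively rewritten into a critical region guarded by a single, newly generated global lock, so that reasoning developed for lock-based data-race detection applies to the transactional code. The names (GlobalTxLock, atomic, sharedCounter) are invented for illustration, and the sketch is in Kotlin rather than the tool's actual input language.

```kotlin
// The single, newly generated global lock that guards every encoded transaction.
object GlobalTxLock

// Stand-in for a TM "atomic" construct, encoded as a monitor-protected region.
inline fun <T> atomic(block: () -> T): T = synchronized(GlobalTxLock) { block() }

var sharedCounter = 0

// Accesses inside the encoded region are totally ordered by the global lock.
fun deposit(amount: Int) = atomic { sharedCounter += amount }

// Access outside any transaction: if it ran concurrently with deposit, it would
// be the kind of low-level data race the static analysis is meant to flag.
fun unguardedRead(): Int = sharedCounter

fun main() {
    deposit(10)
    println(unguardedRead())
}
```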

    Domains: safe sharing among actors

    The actor model is a concurrency model that avoids issues such as deadlocks and data races by construction, and thus facilitates concurrent programming. While it has mainly been used for expressing distributed computations, it is equally useful for modeling concurrent computations in a single shared-memory machine. In component-based software, the actor model lends itself to dividing the components naturally over different actors and using message-passing concurrency for the interaction between these components. The tradeoff is that the actor model sacrifices expressiveness and efficiency with respect to parallel access to shared state. This paper gives an overview of the disadvantages of the actor model when trying to express shared state and then formulates an extension of the actor model to solve these issues. Our solution proposes domains and synchronization views to solve the issues without compromising the semantic properties of the actor model. Thus, the resulting concurrency model maintains deadlock-freedom and avoids low-level data races.
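
    To make the trade-off concrete, the sketch below (in Kotlin coroutines, with invented names; it is not the paper's formal model) shows the baseline actor discipline: the shared dictionary is confined to a single actor, so every lookup pays a message round-trip instead of reading the state in parallel, which is the cost that domains and synchronization views are designed to reduce.

```kotlin
import kotlinx.coroutines.*
import kotlinx.coroutines.channels.Channel

// Messages understood by the dictionary actor.
sealed interface Msg
data class Put(val key: String, val value: Int) : Msg
data class Get(val key: String, val reply: CompletableDeferred<Int?>) : Msg

// A simple channel-based actor: the mutable map is confined to one coroutine,
// so there are no data races, but all access is serialized through its inbox.
fun CoroutineScope.dictionaryActor(): Channel<Msg> {
    val inbox = Channel<Msg>(Channel.UNLIMITED)
    launch {
        val state = mutableMapOf<String, Int>()
        for (msg in inbox) when (msg) {
            is Put -> state[msg.key] = msg.value
            is Get -> msg.reply.complete(state[msg.key])
        }
    }
    return inbox
}

fun main() = runBlocking {
    val dict = dictionaryActor()
    dict.send(Put("answer", 42))

    val reply = CompletableDeferred<Int?>()
    dict.send(Get("answer", reply))     // even a read needs a message round-trip
    println(reply.await())              // prints 42

    dict.close()                        // lets the actor coroutine terminate
}
```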

    Caching, crashing & concurrency - verification under adverse conditions

    The formal development of large-scale software systems is a complex and time-consuming effort. Generally, its main goal is to prove the functional correctness of the resulting system. This goal becomes significantly harder to reach when the verification must be performed under adverse conditions. When aiming for a realistic system, the implementation must be compatible with the “real world”: it must work with existing system interfaces, cope with uncontrollable events such as power cuts, and offer competitive performance by using mechanisms like caching or concurrency. The Flashix project is an example of such a development, in which a fully verified file system for flash memory has been developed. The project is a long-term team effort and resulted in a sequential, functionally correct and crash-safe implementation after its first project phase. This thesis continues the work by performing modular extensions to the file system with performance-oriented mechanisms that mainly involve caching and concurrency, always considering crash-safety. As a first contribution, this thesis presents a modular verification methodology for destructive heap algorithms. The approach simplifies the verification by separating reasoning about specifics of heap implementations, like pointer aliasing, from the reasoning about conceptual correctness arguments. The second contribution of this thesis is a novel correctness criterion for crash-safe, cached, and concurrent file systems. A natural criterion for crash-safety is defined in terms of system histories, matching the behavior of fine-grained caches using complex synchronization mechanisms that reorder operations. The third contribution comprises methods for verifying functional correctness and crash-safety of caching mechanisms and concurrency in file systems. A reference implementation for crash-safe caches of high-level data structures is given, and a strategy for proving crash-safety is demonstrated and applied. A compatible concurrent implementation of the top layer of file systems is presented, using a mechanism for the efficient management of fine-grained file locking, and a concurrent version of garbage collection is realized. Both concurrency extensions are proven to be correct by applying atomicity refinement, a methodology for proving linearizability. Finally, this thesis contributes a new iteration of executable code for the Flashix file system. With the efficiency extensions introduced with this thesis, Flashix covers all performance-oriented concepts of realistic file system implementations and achieves competitiveness with state-of-the-art flash file systems.

    Framework for Real-time collaboration on extensive Data Types using Strong Eventual Consistency

    Real-time collaboration is a special case of collaboration where users work on the same artefact simultaneously and are aware of each other's changes in real time. Shared data should remain available and consistent while dealing with its physically distributed aspect. Strong Consistency is one approach that enforces a total order of operations using mechanisms such as locking; this, however, introduces a bottleneck. In the last decade, algorithms for concurrency control have been studied with the goal of keeping all replicas convergent without locking or synchronization. Operational Transformation and Conflict-free Replicated Data Types (CRDTs) are widely used to achieve this purpose. However, the complexity of these strategies makes them hard to integrate into large software, such as modeling editors, especially for complex data types like graphs. Current implementations only integrate linear data, such as text. In this thesis, we present CollabServer, a framework for building collaborative environments. It features a CRDT implementation for complex data types such as graphs and makes it possible to build other data structures.
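
    As a generic illustration of the CRDT idea (not CollabServer's graph implementation), the Kotlin sketch below shows a grow-only counter: each replica increments only its own slot, and replicas converge by taking element-wise maxima, so no locking or total order of operations is needed.

```kotlin
// Grow-only counter CRDT: state is a per-replica count, merge is element-wise max.
data class GCounter(val replicaId: String, val counts: Map<String, Int> = emptyMap()) {
    // Local update: a replica only ever increments its own slot.
    fun increment(): GCounter =
        copy(counts = counts + (replicaId to ((counts[replicaId] ?: 0) + 1)))

    // Merge is commutative, associative, and idempotent, so replicas converge
    // regardless of the order in which they exchange states.
    fun merge(other: GCounter): GCounter =
        copy(counts = (counts.keys + other.counts.keys).associateWith { key ->
            maxOf(counts[key] ?: 0, other.counts[key] ?: 0)
        })

    fun value(): Int = counts.values.sum()
}

fun main() {
    val a = GCounter("replicaA").increment().increment()   // concurrent local updates
    val b = GCounter("replicaB").increment()
    println(a.merge(b).value())   // 3
    println(b.merge(a).value())   // 3: same result, no coordination required
}
```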

    Language Design for Reactive Systems: On Modal Models, Time, and Object Orientation in Lingua Franca and SCCharts

    Reactive systems play a crucial role in the embedded domain. They continuously interact with their environment, handle concurrent operations, and are commonly expected to provide deterministic behavior to enable application in safety-critical systems. In this context, language design is a key aspect, since carefully tailored language constructs can aid in addressing the challenges faced in this domain, as illustrated by the various concurrency models that prevent the known pitfalls of regular threads. Today, many languages exist in this domain and often provide unique characteristics that make them specifically fit for certain use cases. This thesis revolves around two distinctive languages: the actor-oriented polyglot coordination language Lingua Franca and the synchronous statecharts dialect SCCharts. While they take different approaches to providing reactive modeling capabilities, they share clear similarities in their semantics and complement each other in design principles. This thesis analyzes and compares key design aspects in the context of these two languages. For three particularly relevant concepts, it provides and evaluates lean and seamless language extensions that are carefully aligned with the fundamental principles of the underlying language. Specifically, Lingua Franca is extended toward coordinating modal behavior, while SCCharts receives a timed automaton notation with an efficient execution model using dynamic ticks and an extension toward the object-oriented modeling paradigm.

    Kindly Bent to Free Us

    Systems programming often requires the manipulation of resources like file handles, network connections, or dynamically allocated memory. Programmers need to follow certain protocols to handle these resources correctly. Violating these protocols causes bugs ranging from type mismatches and data races to use-after-free errors and memory leaks. These bugs often lead to security vulnerabilities. While statically typed programming languages guarantee type soundness and memory safety by design, most of them do not address issues arising from improper handling of resources. An important step towards handling resources is the adoption of linear and affine types that enforce single-threaded resource usage. However, the few languages supporting such types require heavy type annotations. We present Affe, an extension of ML that manages linearity and affinity properties using kinds and constrained types. In addition, Affe supports the exclusive and shared borrowing of affine resources, inspired by features of Rust. Moreover, Affe retains the defining features of the ML family: it is an impure, strict, functional expression language with complete principal type inference and type abstraction. Affe does not require any linearity annotations in expressions and supports common functional programming idioms.

    Retrofitting Typestates into Rust

    As software becomes more prevalent in our lives, bugs are able to cause significant disruption. Thus, preventing them becomes a priority when trying to develop dependable systems. While reducing their occurrence to zero is infeasible, existing approaches are able to eliminate certain subsets of bugs. Rust is a systems programming language that addresses memory-related bugs by design, eliminating bugs like use-after-free. To achieve this, Rust leverages the type system along with information about object lifetimes, allowing the compiler to keep track of objects throughout the program and to check for memory misuse. While preventing memory-related bugs goes a long way in software security, other categories of bugs remain in Rust. One of them is Application Programming Interface (API) misuse, where the developer does not respect constraints put in place by an API, resulting in the program crashing. Typestates elevate state to the type level, allowing the enforcement of API constraints at compile time, relieving the developer of the burden of keeping track of the possible computation states at runtime, and preventing possible API misuse during development. While Rust does not support typestates by design, the type system is powerful enough to express and validate them. I propose a new macro-based approach to deal with typestates in Rust; this approach provides an embedded Domain-Specific Language (DSL) which allows developers to express typestates using only existing Rust syntax. Furthermore, Rust's macro system is leveraged to extract a state machine out of the typestate specification and then perform compile-time checks over the specification. Afterwards, we leverage Rust's type system to check protocol compliance. The DSL avoids workflow bloat by requiring nothing but a Rust compiler and the library itself.
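
    The typestate idea itself, lifting state to the type level so that protocol violations become compile-time errors, can be sketched with ordinary phantom-type generics. The Kotlin example below (with invented Door/DoorState names) illustrates only the general technique and is unrelated to the Rust macro DSL proposed in this thesis.

```kotlin
// States of the protocol, lifted to the type level as phantom type parameters.
sealed interface DoorState
object Opened : DoorState
object Closed : DoorState

// The state parameter S carries no runtime data; it only constrains which
// operations the compiler accepts on a given value.
class Door<S : DoorState>(val label: String)

fun newDoor(label: String): Door<Closed> = Door(label)   // doors start closed
fun Door<Closed>.open(): Door<Opened> = Door(label)      // only a closed door can be opened
fun Door<Opened>.close(): Door<Closed> = Door(label)     // only an opened door can be closed
fun Door<Opened>.walkThrough() = println("walking through the $label door")

fun main() {
    val door = newDoor("front")
    door.open().walkThrough()   // compiles: the protocol is respected
    // door.walkThrough()       // rejected at compile time: Door<Closed> has no walkThrough
}
```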