8 research outputs found

    Programming and Timing Analysis of Parallel Programs on Multicores

    Multicore processors provide better power-performance trade-offs than single-core processors. Consequently, they are rapidly penetrating market segments that are both safety-critical and hard real-time in nature. However, designing time-predictable embedded applications on multicores remains a considerable challenge. This paper proposes the ForeC language for the deterministic parallel programming of embedded applications on multicores. ForeC extends C with a minimal set of constructs adopted from synchronous languages. To guarantee the worst-case performance of ForeC programs, we offer a very precise reachability-based timing analyzer. To the best of our knowledge, this is the first attempt at the efficient and deterministic parallel programming of multicores using a synchronous C variant. Experimentation with large multicore programs revealed an average over-estimation of only 2% for the computed worst-case execution times (WCETs). By reducing our representation of the program's state space, we reduced the analysis time for the largest program (with 43,695 reachable states) by a factor of 342, to only 7 seconds.
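The core idea of a reachability-based WCET analysis can be sketched in a few lines: enumerate the reachable states of the program and take the maximum accumulated cost over all execution paths. The following C sketch is purely illustrative (the state graph, costs, and function names are invented here; ForeC's actual analyzer is far more involved).

```c
#define STATES 4

/* cost[i][j] >= 0: the transition from state i to state j takes
 * cost[i][j] cycles; -1 means there is no such transition.
 * States are assumed to be numbered in topological order. */
static int wcet(int cost[STATES][STATES]) {
    int worst[STATES] = {0};   /* worst-case time to reach each state */
    for (int i = 0; i < STATES; i++)
        for (int j = i + 1; j < STATES; j++)
            if (cost[i][j] >= 0 && worst[i] + cost[i][j] > worst[j])
                worst[j] = worst[i] + cost[i][j];
    return worst[STATES - 1];  /* worst-case time to the final state */
}
```

Reducing the analysis time, as the abstract describes, then amounts to shrinking the set of states this maximization has to visit.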

    Real-Time Ticks for Synchronous Programming

    We address the problem of synchronous programs that cannot be easily executed in a classical time-triggered or event-triggered execution loop. We propose a novel approach, referred to as dynamic ticks, that reconciles the semantic timing abstraction of the synchronous approach with the desire to give the application fine-grained control over its real-time behavior. The main idea is to allow the application to dynamically specify its own wake-up times rather than ceding that control to the environment. As we illustrate in this paper, synchronous languages such as Esterel are already well equipped for this; no language extensions are needed. All that is required is a rather minor adjustment of the way the tick function is called.
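The adjustment can be pictured with a small driver loop in C: instead of a fixed-period timer, the environment lets each reaction choose its own next wake-up time. This is a minimal sketch under invented names (`tick`, `run`, the simulated clock); it is not Esterel's actual generated interface.

```c
#include <stdint.h>

typedef uint64_t time_ms;

static time_ms now_ms = 0;      /* simulated clock, in milliseconds */

/* Hypothetical tick function: runs one synchronous reaction and
 * returns the absolute time at which it wants to be woken up next. */
static time_ms tick(time_ms current) {
    /* ... one reaction of the synchronous program would run here ... */
    return current + 10;        /* e.g. request a wake-up 10 ms later */
}

/* Dynamic-tick driver loop: the environment sleeps until the wake-up
 * time chosen by the application itself, rather than a fixed period. */
static void run(int reactions) {
    for (int i = 0; i < reactions; i++) {
        time_ms wakeup = tick(now_ms);
        now_ms = wakeup;        /* real system: sleep until 'wakeup' */
    }
}
```

In a real deployment, the advance of `now_ms` would be replaced by an absolute-time sleep such as POSIX `clock_nanosleep` with `TIMER_ABSTIME`.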

    Practical causality handling for synchronous languages

    The synchronous principle is a well-established paradigm for reconciling concurrency with determinism. A key step is to establish at compile time that a program or model is causal, which basically means that there exists a schedule that obeys the rules put down by the language. This rules out surprises at run time; however, in practice it can be rather cumbersome for the developer to cure causality problems, in particular as programs/models get more complex. We here propose to tackle this issue in two ways. Firstly, we propose to enrich the scheduling regime allowed by the language to consider not only data dependencies, but also explicit scheduling directives that operate on statements or coarser scheduling units. These directives may be used by the developer, or also by model-to-model transformations within the compiler. Secondly, we propose to enhance programming/modeling environments to guide the developer in finding causality issues. Specifically, we propose dedicated causality views that highlight data dependencies involved in scheduling conflicts, and structure-based editing to efficiently add scheduling directives. We illustrate our proposals for the SCCharts language. An Eclipse-based implementation built on the KIELER framework is available as open source.
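The causality check itself reduces to a classic question: does the dependency graph over scheduling units admit a topological order? A minimal C sketch of that check (a simple fixed-point variant of Kahn's algorithm, with invented names; real compilers work on much richer graphs):

```c
#include <stdbool.h>

#define N 4  /* number of scheduling units */

/* dep[i][j] != 0 means unit j must be scheduled before unit i.
 * A program is causal iff this graph is acyclic, i.e. some schedule
 * obeying all dependencies exists. */
static bool is_causal(int dep[N][N]) {
    int indegree[N] = {0};
    bool done[N] = {false};
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            if (dep[i][j]) indegree[i]++;
    int scheduled = 0;
    bool progress = true;
    while (progress) {                     /* repeatedly schedule ready units */
        progress = false;
        for (int i = 0; i < N; i++) {
            if (!done[i] && indegree[i] == 0) {
                done[i] = true;
                scheduled++;
                progress = true;
                for (int k = 0; k < N; k++)  /* release units waiting on i */
                    if (dep[k][i]) indegree[k]--;
            }
        }
    }
    return scheduled == N;  /* all units schedulable => no causality cycle */
}
```

A scheduling directive, in this picture, simply adds an edge to `dep` that the data dependencies alone would not impose.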

    Code generation for multi-phase tasks on a multi-core distributed memory platform

    Ensuring temporal predictability of real-time systems on a multi-core platform is difficult, mainly due to hard-to-predict delays related to shared access to the main memory. Task models in which computation phases and communication phases are separated (such as the PRedictable Execution Model, PREM) have been proposed both to mitigate these delays and to make them easier to analyze. In this paper we present a compilation process, part of the Prelude compiler, that automatically translates a high-level synchronous data-flow system specification into a PREM-compliant C program. By automating the production of the PREM-compliant C code, low-level implementation concerns related to task communication become the responsibility of the compiler, which saves tedious and error-prone development effort.
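The phase separation the abstract refers to can be sketched in C as a task whose shared-memory traffic is confined to explicit copy-in and copy-out phases. This is a hand-written illustration of the PREM idea, not output of the Prelude compiler; all names are invented.

```c
#include <string.h>

#define BUF 8

static int local_in[BUF], local_out[BUF];  /* task-private storage */

/* Memory phase: copy all inputs into private storage in one burst. */
static void read_phase(const int *shared_in) {
    memcpy(local_in, shared_in, sizeof local_in);
}

/* Execution phase: touches only private memory, so it suffers no
 * contention on the shared bus and is easy to bound in isolation. */
static void compute_phase(void) {
    for (int i = 0; i < BUF; i++)
        local_out[i] = local_in[i] * 2;
}

/* Memory phase: publish the results in one burst. */
static void write_phase(int *shared_out) {
    memcpy(shared_out, local_out, sizeof local_out);
}

static void run_task(const int *in, int *out) {
    read_phase(in);     /* contended, but bounded and schedulable */
    compute_phase();    /* contention-free */
    write_phase(out);   /* contended, but bounded and schedulable */
}
```

A scheduler can then interleave the memory phases of different cores so that at most one core accesses main memory at a time.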

    Modal Reactors

    Complex software systems often feature distinct modes of operation, each designed to handle a particular scenario that may require the system to respond in a certain way. Breaking down system behavior into mutually exclusive modes and discrete transitions between modes is a commonly used strategy to reduce implementation complexity and promote code readability. However, such capabilities often come in the form of self-contained domain-specific languages or language-specific frameworks. The work in this paper aims to bring the advantages of modal models to mainstream programming languages, by following the polyglot coordination approach of Lingua Franca (LF), in which verbatim target code (e.g., C, C++, Python, TypeScript, or Rust) is encapsulated in composable reactive components called reactors. Reactors can form a dataflow network, are triggered by timed as well as sporadic events, execute concurrently, and can be distributed across nodes on a network. With modal models in LF, we introduce a lean extension to the concept of reactors that enables the coordination of reactive tasks based on modes of operation. The implementation of modal reactors outlined in this paper generalizes to any LF-supported language with only modest modifications to the generic runtime system.
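The essence of a modal model — mutually exclusive modes with discrete transitions between them — can be sketched in plain C as follows. This is only a conceptual illustration; the actual Lingua Franca syntax and runtime are different, and all names here are invented.

```c
typedef enum { MODE_NORMAL, MODE_FAULT } op_mode;
typedef enum { EV_TICK, EV_ERROR, EV_RESET } event;

typedef struct {
    op_mode mode;   /* exactly one mode is active at any time */
    int ticks;      /* behavior belonging to the NORMAL mode */
} reactor;

/* Only the reaction of the active mode runs; events that the active
 * mode does not handle are simply ignored. */
static void react(reactor *r, event e) {
    switch (r->mode) {
    case MODE_NORMAL:
        if (e == EV_ERROR)      r->mode = MODE_FAULT;  /* transition */
        else if (e == EV_TICK)  r->ticks++;
        break;
    case MODE_FAULT:
        if (e == EV_RESET)      r->mode = MODE_NORMAL; /* transition */
        /* ticks are not counted while the fault mode is active */
        break;
    }
}
```

Modal reactors add to this picture the coordination questions the paper addresses, e.g. what happens to timers and pending events of a mode while it is inactive.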

    Interactive Model-Based Compilation: A Modeller-Driven Development Approach

    There is a growing tendency for using domain-specific languages, which help domain experts to stay focussed on abstract problem solutions. It is important to carefully design these languages and tools, which fundamentally perform model-to-model transformations. The quality of both usually decides the effectiveness of the subsequent development and therefore the quality of the final applications. However, as the complexity and safety requirements of modern systems grow, it becomes increasingly burdensome to create highly customized languages and difficult to provide reasonable overviews within these tools. This thesis introduces a new interactive model-based compilation methodology. Compilations for arbitrary model-to-model transformations are themselves described as models. They can be instantiated for particular inputs, e.g. a program, to create concrete compilation runs, which return the result of that compilation. The compilation instance is interactively observable. Intermediate results serve as new inputs and as documentation. They can be used to create highly customized views and facilitate understandability. This methodology guides modellers from the start of the compilation to the final result so that they can interactively refine their models. The methodology has been implemented and validated as the KIELER Compiler (KiCo) and is available as part of the KIELER open-source project. It is used to implement the current reference compiler for the SCCharts language, a statecharts dialect designed for specifying safety-critical reactive systems based on a synchronous model of computation. The interactive model-based compilation approach was key to the rapid prototyping of three different compilation strategies, as well as new language extensions, variations and closely related languages. The results are verified with benchmarks, which are again modelled using the same approach and technology. The usability of the SCCharts language and the KiCo tooling is documented with long-term surveys and real-life industrial, academic and teaching examples.
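The idea of a compilation run whose intermediate results remain observable can be sketched as a chain of model-to-model transformations that keeps every intermediate model around. This is a toy illustration under invented names (a plain `int` stands in for a real model type); it is not KiCo's actual API.

```c
#define MAX_STEPS 4

typedef int model;                       /* stand-in for a real model type */
typedef model (*transformation)(model);

typedef struct {
    model intermediate[MAX_STEPS + 1];   /* input plus one result per step */
    int steps;
} compilation_run;

/* Example transformations, e.g. expanding extended features and then
 * lowering to the target representation (purely symbolic here). */
static model expand(model m) { return m * 2; }
static model lower(model m)  { return m + 1; }

/* Instantiating the compilation for an input yields a concrete run in
 * which every intermediate result stays observable and can itself serve
 * as a new input or as documentation. */
static compilation_run compile(model input, const transformation *t, int n) {
    compilation_run run = { .steps = n };
    run.intermediate[0] = input;
    for (int i = 0; i < n; i++)
        run.intermediate[i + 1] = t[i](run.intermediate[i]);
    return run;
}
```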

    Concurrency and Distribution in Reactive Programming

    Distributed Reactive Programming is a paradigm for implementing distributed interactive applications modularly and declaratively. Applications are defined as dynamic distributed dataflow graphs of reactive computations that depend upon each other, similar to formulae in spreadsheets. A runtime library propagates input changes through the dataflow graph, recomputing the results of affected dependent computations while adapting the dataflow graph topology to changing dependency relations on the fly. Reactive Programming has been shown to improve code quality, program comprehension and maintainability over modular interactive application designs based on callbacks. Part of what makes Reactive Programming easy to use is its synchronous change propagation semantics: Changes are propagated such that no computation can ever observe an only partially updated state of the dataflow graph, i.e., each input change together with all dependent recomputations appears to be instantaneous. Traditionally, in local single-threaded applications, synchronous semantics are achieved through glitch freedom consistency: a recomputation may be executed only after all its dependencies have been recomputed. In distributed applications though, this established approach breaks in two ways. First, glitch freedom implies synchronous semantics only if change propagations execute in isolation. Distributed applications are inherently exposed to concurrency in that multiple threads may propagate different changes concurrently. Ensuring isolation between concurrent change propagations requires the integration of additional algorithms for concurrency control. Second, applications’ dataflow graphs are spread across multiple hosts. Therefore, distributed reactive programming requires algorithms for both glitch freedom and concurrency control that are decentralized, i.e., function without access to shared memory. 
A comprehensive survey of related prior work shows that none has solved this issue so far. This dissertation introduces FrameSweep, the first algorithm to provide synchronous propagation semantics for concurrent change propagation over dynamic distributed dataflow graphs. FrameSweep provides isolation and glitch freedom through a combination of fine-grained concurrency control algorithms for linearizability and serializability from databases with several aspects of change propagation algorithms from Reactive Programming and Automatic Incrementalization. The correctness of FrameSweep is formally proven through the application of multiversion concurrency control theory. FrameSweep decentralizes all its algorithmic components and can therefore be integrated into any reactive programming library in a syntactically transparent manner: applications can continue to use the same syntax for Reactive Programming as before, without requiring any adaptations of their code. FrameSweep therefore provides the exact same semantics as traditional local single-threaded Reactive Programming through the exact same syntax. As a result, FrameSweep proves by example that interactive applications can reap all the benefits of Reactive Programming even if they are concurrent or distributed. A comprehensive empirical evaluation measures through benchmarks how FrameSweep's performance and scalability are affected by a multitude of factors, e.g., thread contention, dataflow topology, dataflow topology changes, or distribution topology. It shows that FrameSweep's performance compares favorably to alternative scheduling approaches with weaker guarantees in local applications. An existing Reactive Programming application is migrated to FrameSweep to empirically verify the claims of semantic and syntactic transparency.
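Glitch freedom, the key consistency notion above, can be illustrated with a tiny static dataflow graph a → b, a → c, b → c: recomputing nodes in topological order guarantees that c never observes a new a together with a stale b. A minimal single-threaded sketch in C (invented names; FrameSweep itself handles dynamic, distributed, concurrently updated graphs):

```c
/* Dataflow: b = a + 1, c = a + b.
 * A "glitch" would be recomputing c before b, so that c briefly
 * combines the new a with the old b. */
static int a, b, c;

static void recompute_b(void) { b = a + 1; }
static void recompute_c(void) { c = a + b; }

/* Glitch-free propagation: recompute the dependencies of c first,
 * then c, so no computation ever sees a partially updated graph. */
static void set_a(int v) {
    a = v;
    recompute_b();
    recompute_c();
}
```

The distributed setting breaks this simple picture in exactly the two ways the abstract names: concurrent propagations must be isolated from each other, and the topological ordering must be enforced without shared memory.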

    The ForeC Synchronous Deterministic Parallel Programming Language for Multicores

    Cyber-physical systems (CPSs) are embedded systems that are tightly integrated with their physical environment. The correctness of a CPS depends on the output of its computations and on the timeliness of completing the computations. This paper proposes the ForeC language for the deterministic parallel programming of CPS applications on multi-core execution platforms. ForeC's synchronous semantics is designed to greatly simplify the understanding and debugging of parallel programs. ForeC allows programmers to express many forms of parallel patterns while ensuring that programs are amenable to static timing analysis. One of ForeC's main innovations is its shared variable semantics, which provides thread isolation and deterministic thread communication. Through benchmarking, we demonstrate that ForeC can achieve better parallel performance than Esterel, a widely used synchronous language for concurrent safety-critical systems, and OpenMP, a popular desktop solution for parallel programming. We demonstrate that the worst-case execution time of ForeC programs can be estimated precisely.
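A shared variable semantics of the kind described — thread isolation during a tick, deterministic merging at the tick boundary — can be sketched conceptually in C. This simplified model (sequentialized "threads", invented names) only illustrates the copy-and-combine idea; it is not ForeC's actual definition.

```c
#define NTHREADS 2

static int shared = 0;

/* Programmer-supplied combine function that merges the threads'
 * copies deterministically at the end of the tick. */
static int combine(int x, int y) { return x + y; }

/* Two example thread bodies: each receives the value of the shared
 * variable as it was at the START of the tick, so neither can observe
 * the other's in-flight writes (thread isolation). */
static int add_one(int v) { return v + 1; }
static int add_two(int v) { return v + 2; }

static void tick(int (*body[NTHREADS])(int)) {
    int copy[NTHREADS];
    for (int i = 0; i < NTHREADS; i++)
        copy[i] = body[i](shared);     /* all threads see the old value */
    int result = copy[0];
    for (int i = 1; i < NTHREADS; i++)
        result = combine(result, copy[i]);
    shared = result;                   /* one deterministic merge per tick */
}
```

Because every thread reads the same snapshot and the copies are merged by one combine function, the outcome does not depend on thread scheduling, which is what makes the communication deterministic.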