4 research outputs found

    Grounding Synchronous Deterministic Concurrency in Sequential Programming

    In this report, we introduce an abstract interval domain I(D; P) and associated fixed point semantics for reasoning about concurrent and sequential variable accesses within a synchronous cycle-based model of computation. The interval domain captures must (lower bound) and cannot (upper bound) information to approximate the synchronisation status of variables, consisting of a value status D and an init status P. We use this domain for a new behavioural definition of Berry’s causality analysis for Esterel. This gives a compact and uniform understanding of Esterel-style constructiveness for shared-memory multi-threaded programs. Using this new domain-theoretic characterisation, we show that Berry’s constructive semantics is a conservative approximation of the recently proposed sequentially constructive (SC) model of computation. We prove that every Berry-constructive program is sequentially constructive, i.e., deterministic and deadlock-free under sequentially admissible scheduling. This gives, for the first time, a natural interpretation of Berry-constructiveness for mainstream imperative programming in terms of scheduling, where previous results were cast in terms of synchronous circuits. It also opens the door to a direct mapping of Esterel’s signal mechanism into Boolean variables that can be set and reset arbitrarily within a tick. We illustrate the practical usefulness of this mapping by discussing how signal reincarnation is handled efficiently by this transformation, whose complexity is linear in program size.
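
    The must/cannot interval idea can be made concrete with a small, self-contained sketch. The Python toy model below is not the report's I(D; P) domain (it ignores the init status P and sequential accesses entirely), and its program representation of guarded signal emissions, as well as all helper names, are illustrative assumptions. It only shows how a monotone fixed-point iteration over (must, can) intervals decides signal statuses, and how constructiveness amounts to every interval collapsing to a single point.

        # Toy must/cannot analysis (hypothetical model, not the report's I(D; P)
        # domain): each signal is approximated by an interval (must, can) with
        # must <= can, where must = 1 means "known to be emitted" and can = 0
        # means "cannot be emitted". A program is a set of guarded emissions.
        from typing import Dict, List, Tuple

        Guard = Tuple[str, str]   # ("pos", s) or ("neg", s)
        Rule = Tuple[Guard, str]  # (guard, signal emitted when the guard holds)
        Env = Dict[str, Tuple[int, int]]

        def guard_must_hold(g: Guard, env: Env) -> bool:
            # A positive guard must hold once its signal must be present;
            # a negative guard must hold once its signal cannot be present.
            pol, s = g
            must, can = env[s]
            return must == 1 if pol == "pos" else can == 0

        def guard_can_hold(g: Guard, env: Env) -> bool:
            pol, s = g
            must, can = env[s]
            return can == 1 if pol == "pos" else must == 0

        def must_can_fixpoint(signals: List[str], rules: List[Rule]) -> Env:
            # Start from the least informative interval: not known present,
            # not known absent. Iterate monotonically until nothing changes.
            env: Env = {s: (0, 1) for s in signals}
            changed = True
            while changed:
                changed = False
                for s in signals:
                    must = any(t == s and guard_must_hold(g, env) for g, t in rules)
                    can = any(t == s and guard_can_hold(g, env) for g, t in rules)
                    if (int(must), int(can)) != env[s]:
                        env[s] = (int(must), int(can))
                        changed = True
            return env

        def is_constructive(env: Env) -> bool:
            # Constructive in this toy model iff every interval has collapsed
            # to a single point, i.e. every signal's status is decided.
            return all(must == can for must, can in env.values())

        # "if a then emit b", with nothing emitting a: a settles to absent
        # (0, 0), hence b does too, and the program is accepted.
        env = must_can_fixpoint(["a", "b"], [(("pos", "a"), "b")])
        print(env, is_constructive(env))  # {'a': (0, 0), 'b': (0, 0)} True

    In this toy model, a cyclic program such as "if not a then emit a" is left with the undecided interval (0, 1) for a at the fixed point, mirroring how non-constructive programs are rejected.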

    Clock Refinement in Imperative Synchronous Languages

    A huge number of computational models and programming languages have been proposed for describing embedded systems. In contrast to traditional sequential programming languages, they address the requirements of embedded systems directly: support for concurrent computation and periodic interaction with the environment are only some of the features they offer. Synchronous languages are one class of languages for the development of embedded systems; they follow the fundamental principle that execution is divided into a sequence of logical steps, and each step adopts the simplification that the computation of the outputs finishes as soon as the inputs are available. This rigorous abstraction leads to well-defined deterministic parallel composition in general, and to deterministic abortion and suspension in imperative synchronous languages in particular. These key features make it possible to translate programs to both hardware and software, and formal verification techniques such as model checking can be applied easily. Besides these advantages, imperative synchronous languages also have drawbacks. Over-synchronization is caused by parallel threads that must synchronize at every execution step, even if they do not communicate, because the synchronization is implicitly forced by the control flow. This thesis considers the idea of clock refinement, which introduces several abstraction layers for communication and synchronization in addition to the existing single-clock abstraction. Clocks can be refined by several independent clocks, so that a controlled amount of asynchrony between subsequent synchronization points can be exploited by compilers. The clock declarations form a tree, and clocks can be declared within the threads of a parallel statement, which allows independent computations based on these clocks without synchronizing the threads; the synchronous abstraction is nevertheless kept at each level of refinement. Clock refinement is introduced in this thesis as an extension of the imperative synchronous language Quartz: new program statements allow a new clock to be defined as a refinement of an existing one and a step to be finished with respect to a particular clock. Examples illustrate how the new statements interact with the existing ones before the semantics of the extension is formally defined. Furthermore, the thesis presents a compilation algorithm that translates programs to an intermediate format and the intermediate format to a hardware description. The advantages of the new modeling feature are finally evaluated on examples.
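
    To make the refinement idea concrete, here is a minimal, hypothetical Python model of a clock tree; it is not Quartz syntax, and the class and function names are illustrative assumptions rather than constructs from the thesis. Each refined clock subdivides a step of its parent, and two threads only have to synchronize at steps of the lowest common ancestor of their respective clocks.

        # Hypothetical model of a clock refinement tree (not Quartz syntax).
        from dataclasses import dataclass, field
        from typing import List, Optional

        @dataclass
        class Clock:
            name: str
            parent: Optional["Clock"] = None
            children: List["Clock"] = field(default_factory=list)

            def refine(self, name: str) -> "Clock":
                # Declare a new clock whose steps subdivide one step of `self`.
                child = Clock(name, parent=self)
                self.children.append(child)
                return child

            def ancestors(self) -> List["Clock"]:
                chain: List[Clock] = []
                clock: Optional[Clock] = self
                while clock is not None:
                    chain.append(clock)
                    clock = clock.parent
                return chain

        def synchronization_clock(a: Clock, b: Clock) -> Clock:
            # Lowest common ancestor in the clock tree: the coarsest clock at
            # which threads driven by `a` and `b` still have to meet.
            ancestors_of_a = a.ancestors()
            for clock in b.ancestors():
                if any(clock is c for c in ancestors_of_a):
                    return clock
            raise ValueError("clocks belong to different trees")

        # The single-clock abstraction, refined independently inside two
        # parallel threads.
        base = Clock("clk")
        left = base.refine("clk_left")    # declared in the left thread
        right = base.refine("clk_right")  # declared in the right thread

        # Steps of clk_left and clk_right need no mutual synchronization; both
        # threads still meet at every step of the shared base clock.
        print(synchronization_clock(left, right).name)  # -> clk

    Choosing the lowest common ancestor as the synchronization point reflects the abstract's claim that threads computing on independently declared subclocks need not synchronize with each other, while the synchronous abstraction is preserved at every coarser level.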
