
    An Abstraction-Refinement Theory for the Analysis and Design of Real-Time Systems

    Component-based and model-based reasoning are key concepts for addressing the increasing complexity of real-time systems. Bounding abstraction theories make it possible to create efficiently analyzable models that can be used to give temporal or functional guarantees on non-deterministic and non-monotone implementations. Likewise, bounding refinement theories make it possible to create implementations that adhere to temporal or functional properties of specification models. For systems in which jitter plays a major role, both best-case and worst-case bounding models are needed. In this paper we present a bounding abstraction-refinement theory for real-time systems. Compared to the state-of-the-art TETB refinement theory, our theory is less restrictive with respect to the automatic lifting of properties from component to graph level, and it supports not only temporal worst-case refinement but, evenhandedly, temporal and functional, best-case and worst-case abstraction and refinement.
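
    As a concrete (and intentionally simplified) illustration of what best-case/worst-case bounding means, the Python sketch below checks whether an observed event trace of an implementation stays inside the envelope of a periodic-with-jitter abstraction; the names and the periodic model are invented here and are not the paper's formalism.

```python
# Illustrative sketch of bounding abstraction (hypothetical names, not the
# paper's theory): an implementation trace refines a bounding abstraction if
# every observed event time lies within the [best-case, worst-case] envelope
# predicted by the abstract model.

def within_bounds(event_times, best_case, worst_case):
    """The k-th event must occur no earlier than best_case(k)
    and no later than worst_case(k)."""
    return all(best_case(k) <= t <= worst_case(k)
               for k, t in enumerate(event_times))

# A strictly periodic abstraction with period 10 and jitter 2 (made-up numbers).
PERIOD, JITTER = 10, 2

def best(k):
    return k * PERIOD              # earliest admissible occurrence of event k

def worst(k):
    return k * PERIOD + JITTER     # latest admissible occurrence of event k

trace = [0, 11, 21, 30, 42]        # observed event times of an implementation
print(within_bounds(trace, best, worst))   # True: the trace refines the abstraction
```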

    Verification of Branching-Time and Alternating-Time Properties for Exogenous Coordination Models

    Information and communication systems enter an increasing number of areas of daily life. Our reliance and dependence on the functioning of such systems is rapidly growing, together with the costs and impact of system failures. At the same time, the complexity of hardware and software systems is reaching new limits as modern hardware architectures become more and more parallel, dynamic, and heterogeneous. These trends demand a closer integration of formal methods and systems engineering to show the correctness of complex systems within the design phase of large projects. The goal of this thesis is to introduce a formal holistic approach for modeling, analysis, and synthesis of parallel systems that potentially addresses complex system behavior at any layer of the hardware/software stack. Due to the complexity of modern hardware and software systems, we aim for a hierarchical modeling framework that allows the behavior of a parallel system to be specified at various levels of abstraction and that facilitates designing complex systems in an iterative refinement procedure, in which more detailed behavior is added successively to the system description. In this context, the major challenge is to provide modeling formalisms that are expressive enough to address all of the above issues and are at the same time amenable to the application of formal methods for proving that the system behavior conforms to its specification. In particular, we are interested in specification formalisms for which formal verification techniques can be applied such that the underlying model checking problems remain decidable within reasonable time and space bounds. The presented work relies on an exogenous modeling approach that allows a clear separation of coordination and computation and provides an operational semantic model to which formal methods such as model checking are well suited and applicable. The channel-based exogenous coordination language Reo is used as the modeling formalism, as it supports hierarchical modeling in an iterative top-down refinement procedure. It facilitates reusability, exchangeability, and heterogeneity of components and forms the basis for applying formal verification methods. At the same time, Reo has a clear formal semantics based on automata, which serves as the foundation for applying formal methods such as model checking. In this thesis, new modeling languages are presented that allow complex systems to be specified in terms of Reo and automata models, which form the basis for a holistic approach to modeling, verification, and synthesis of parallel systems. The second main contribution of this thesis is tailored branching-time and alternating-time temporal logics as well as corresponding model checking algorithms. The thesis includes results on the theoretical complexity of the underlying model checking problems as well as practical results. For the latter, the presented approach has been implemented in the symbolic verification tool set Vereofy. The implementation within Vereofy and the evaluation of the branching-time and alternating-time model checker constitute the third main contribution of this thesis.
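
    Reo's automata-based semantics can be pictured with a toy product of constraint automata. The sketch below composes a FIFO1 channel with a Sync channel by joining transitions that agree on shared port names; data constraints are omitted, and the encoding is an illustration rather than Vereofy's actual data structures.

```python
# Toy constraint automata and their product, as a simplified illustration of
# Reo's automata-based semantics (not Vereofy's internals). A transition is
# (source, frozenset_of_fired_ports, target); data constraints are omitted.

def states(ts):
    return {q for (q, _, _) in ts} | {q for (_, _, q) in ts}

def product(ts1, names1, ts2, names2):
    """Synchronize transitions that agree on shared port names; transitions
    touching no shared port interleave while the other automaton stands still."""
    result = set()
    for (q1, n1, p1) in ts1:
        for (q2, n2, p2) in ts2:
            if n1 & names2 == n2 & names1:        # agreement on shared ports
                result.add(((q1, q2), n1 | n2, (p1, p2)))
        if not (n1 & names2):                     # first automaton fires alone
            result |= {((q1, q2), n1, (p1, q2)) for q2 in states(ts2)}
    for (q2, n2, p2) in ts2:
        if not (n2 & names1):                     # second automaton fires alone
            result |= {((q1, q2), n2, (q1, p2)) for q1 in states(ts1)}
    return result

# A Reo FIFO1 channel from port A to port B: accept on A, later release on B.
fifo = {("empty", frozenset({"A"}), "full"),
        ("full",  frozenset({"B"}), "empty")}
# A Sync channel from port B to port C: B and C always fire together.
sync = {("s", frozenset({"B", "C"}), "s")}

for t in sorted(product(fifo, {"A", "B"}, sync, {"B", "C"}), key=str):
    print(t)
# Two product transitions result: data enters the buffer on A alone, and the
# buffered datum later leaves with B and C firing synchronously.
```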

    Foundations for Safety-Critical on-Demand Medical Systems

    In current medical practice, therapy is delivered in critical-care environments (e.g., the ICU) by clinicians who manually coordinate sets of medical devices: the clinicians monitor patient vital signs and then reconfigure devices (e.g., infusion pumps) as needed. Unfortunately, the current state of practice is both burdensome on clinicians and error prone. Recently, clinicians have been speculating whether medical devices supporting "plug & play interoperability" would make it easier to automate current medical workflows and thereby reduce medical errors, reduce costs, and reduce the burden on overworked clinicians. This type of plug & play interoperability would allow clinicians to attach devices to a local network and then run software applications to create a new medical system "on-demand" that automates clinical workflows by automatically coordinating those devices via the network. Plug & play devices would let clinicians build new medical systems compositionally. Unfortunately, safety is not a compositional property in general. For example, two independently "safe" devices may interact in unsafe ways. Indeed, even the definition of "safe" may differ between two device types. In this dissertation we propose a framework and define conditions that permit reasoning about the safety of plug & play medical systems. The framework includes a logical formalism that permits formal reasoning about the safety of many device combinations at once, as well as a platform that actively prevents unintended timing interactions between devices or applications that share a resource such as a network or CPU. We describe the various pieces of the framework, report some experimental results, and show how the pieces work together to enable the safety assessment of plug & play medical systems via two case studies.
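
    The following toy sketch illustrates the kind of closed-loop coordination such on-demand systems are meant to automate: a supervisory application pauses an infusion pump when pulse-oximeter readings fall below a threshold. The device models, threshold, and safety invariant are invented for illustration and are not taken from the dissertation.

```python
# Toy closed-loop coordination of two "plug & play" devices, purely
# illustrative; device names, threshold, and the invariant are invented.

SPO2_FLOOR = 90  # hypothetical oxygen-saturation threshold (percent)

class InfusionPump:
    def __init__(self):
        self.running = True
    def pause(self):
        self.running = False

class PulseOximeter:
    def __init__(self, readings):
        self.readings = iter(readings)
    def read(self):
        return next(self.readings)

def supervise(pump, oximeter, steps):
    """Supervisory app: pause the pump whenever SpO2 drops below the floor."""
    for _ in range(steps):
        spo2 = oximeter.read()
        if spo2 < SPO2_FLOOR and pump.running:
            pump.pause()
        # Safety invariant: the pump never runs while SpO2 is below the floor.
        assert (not pump.running) or spo2 >= SPO2_FLOOR

supervise(InfusionPump(), PulseOximeter([97, 95, 88, 86]), steps=4)
```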

    Polyhedral+Dataflow Graphs

    This research presents an intermediate compiler representation that is designed for optimization and emphasizes the temporary storage requirements and execution schedule of a given computation to guide optimization decisions. The representation is expressed as a dataflow graph that describes computational statements and data mappings within the polyhedral compilation model. The targeted applications include both regular and irregular scientific domains. The intermediate representation can be integrated into existing compiler infrastructures. A specification language, implemented as a domain-specific language in C++, describes the graph components and the transformations that can be applied. The visual representation allows users to reason about optimizations. Graph variants can be translated into source code or other representations. The language, intermediate representation, and associated transformations have been applied to improve the performance of differential equation solvers, sparse matrix operations, tensor decomposition, and structured multigrid methods.
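
    The sketch below conveys the basic shape of such a representation: statements carry polyhedral iteration domains, and edges carry access mappings between producers and consumers. The actual specification language is a C++ DSL; the Python names used here are invented purely for illustration.

```python
# Hypothetical sketch of a polyhedral+dataflow intermediate representation:
# nodes are statements with iteration domains, edges are data mappings.
# All names are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class Statement:
    name: str
    domain: str     # polyhedral iteration domain, e.g. "{[i] : 0 <= i < N}"
    body: str       # computation performed at each point of the domain

@dataclass
class DataflowGraph:
    statements: list = field(default_factory=list)
    edges: list = field(default_factory=list)   # (producer, consumer, access map)

    def add(self, stmt):
        self.statements.append(stmt)
        return stmt

    def connect(self, producer, consumer, access):
        self.edges.append((producer.name, consumer.name, access))

# A two-statement producer/consumer kernel: y[i] = f(x[i]); z[i] = g(y[i]).
graph = DataflowGraph()
s0 = graph.add(Statement("S0", "{[i] : 0 <= i < N}", "y[i] = f(x[i])"))
s1 = graph.add(Statement("S1", "{[i] : 0 <= i < N}", "z[i] = g(y[i])"))
graph.connect(s0, s1, "{[i] -> [i]}")   # identity access map: fusion is legal

# A fusion variant would merge S0 and S1 into one loop nest, shrinking the
# temporary array y[] to a scalar before code generation.
```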

    Estimation and Optimization of the Performance of Polyhedral Process Networks

    A system-level design methodology such as Daedalus provides designers with a forward synthesis flow for automated design, programming, and implementation of multiprocessor systems-on-chip. Daedalus employs the polyhedral process network model of computation to represent applications. These networks are automatically derived from sequential C code. A forward synthesis flow greatly increases designer productivity. Still, the designer needs to perform a time-consuming forward synthesis step to learn whether a network satisfies his performance constraints. Furthermore, it is not trivial to select a set of transformations and transformation parameters for a network such that performance requirements are met. A forward synthesis flow thus solves only part of a design problem, as it does not provide fast feedback on the transformations a designer should apply to meet his performance constraints. This dissertation introduces different performance estimation techniques for polyhedral process networks. The most promising technique is the profiling-based cprof technique that works directly on the sequential application code. This makes cprof an easy-to-use, robust, and fast technique, without the need to derive a polyhedral process network. This dissertation then discusses four transformations and analyzes factors that affect the efficacy of each transformation.
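
    A rough feel for this kind of performance estimation is given by the sketch below, which bounds the steady-state throughput of a pipelined process network by its slowest process. This is a generic back-of-the-envelope estimate with invented numbers, not the cprof technique itself.

```python
# Back-of-the-envelope performance estimate for a pipelined process network:
# in steady state, throughput is bounded by the slowest process, and the fill
# latency by the sum of stage times. Generic estimate with invented numbers,
# not the cprof technique described in the dissertation.

stage_times_ns = {"read": 120, "filter": 340, "transform": 210, "write": 90}

bottleneck = max(stage_times_ns, key=stage_times_ns.get)
throughput = 1e9 / stage_times_ns[bottleneck]      # tokens per second
fill_latency = sum(stage_times_ns.values())        # ns until the first result

print(f"bottleneck stage : {bottleneck}")
print(f"throughput       : {throughput:.0f} tokens/s")
print(f"fill latency     : {fill_latency} ns")
```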

    A design methodology for compositional high-level synthesis of communication-centric SoCs

    Systems-on-chip are increasingly designed at the system level by combining synthesizable IP components that operate concurrently while interacting through communication channels. CAD-tool vendors support this System-Level Design approach with high-level synthesis tools and libraries of interface primitives implementing the communication protocols. These interfaces absorb timing differences in the hardware-component implementations, thus enabling compositional design. However, they also introduce new challenges in terms of functional correctness and performance optimization. We propose a methodology that combines performance analysis and optimization algorithms to automatically address the issues that SoC designers may accidentally introduce when assembling components that are specified at the system level.
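
    The interface primitives referred to here are typified by a bounded FIFO channel with valid/ready back-pressure, which lets components with different timing be composed without losing data. The sketch below simulates such a channel between a fast producer and a slow consumer; the names and sizes are invented, not taken from the paper.

```python
# Minimal sketch of a bounded FIFO channel with valid/ready back-pressure,
# the kind of interface primitive that absorbs timing differences between
# components. Names and sizes are invented for illustration.

from collections import deque

class Channel:
    def __init__(self, depth):
        self.buf = deque()
        self.depth = depth
    def ready(self):                 # producer may push this cycle
        return len(self.buf) < self.depth
    def valid(self):                 # consumer may pop this cycle
        return len(self.buf) > 0
    def push(self, item):
        assert self.ready()
        self.buf.append(item)
    def pop(self):
        assert self.valid()
        return self.buf.popleft()

# A fast producer (1 token/cycle) feeding a slow consumer (1 token / 2 cycles):
# the channel stalls the producer instead of dropping data.
ch, produced, consumed = Channel(depth=2), 0, []
for cycle in range(10):
    if ch.ready():
        ch.push(produced)
        produced += 1
    if cycle % 2 == 1 and ch.valid():
        consumed.append(ch.pop())
print(consumed)   # tokens arrive in order, with no loss, despite the rate mismatch
```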

    Dynamic Partitioning in Linear Relation Analysis. Application to the Verification of Synchronous Programs

    We apply linear relation analysis [CH78, HPR97] to the verification of declarative synchronous programs [Hal98]. In this approach, state partitioning plays an important role: on one hand, the precision of the results highly depends on the fineness of the partitioning; on the other hand, an overly detailed partitioning may result in an exponential explosion of the analysis. In this paper we propose to consider very general partitions of the state space and to dynamically select a suitable partitioning according to the property to be proved. The presented approach is quite general and can be applied to other abstract interpretations.
    Keywords and Phrases: Abstract Interpretation, Partitioning, Linear Relation Analysis, Reactive Systems, Program Verification.
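
    The flavor of the underlying abstract interpretation can be shown on a tiny example. The sketch below analyzes the loop `x = 0; while x <= 9: x += 1` with widening, using the simpler interval domain in place of the convex polyhedra of linear relation analysis to keep the example short; it is an illustration of the general technique, not of the paper's partitioning method.

```python
# Tiny abstract-interpretation example in the spirit of linear relation
# analysis, but over the interval domain instead of convex polyhedra.
# Analyzed loop: x = 0; while x <= 9: x += 1.

import math

def join(a, b):
    return (min(a[0], b[0]), max(a[1], b[1]))

def widen(old, new):
    """Classic interval widening: unstable bounds jump to infinity so the
    fixpoint iteration terminates."""
    lo = old[0] if new[0] >= old[0] else -math.inf
    hi = old[1] if new[1] <= old[1] else math.inf
    return (lo, hi)

loop_head = (0, 0)                                   # abstract value of x at the loop head
while True:
    body_in = (loop_head[0], min(loop_head[1], 9))   # filter by the guard x <= 9
    body_out = (body_in[0] + 1, body_in[1] + 1)      # effect of x += 1
    new_head = join((0, 0), body_out)                # initial state joined with back edge
    widened = widen(loop_head, new_head)
    if widened == loop_head:
        break
    loop_head = widened

exit_state = (max(loop_head[0], 10), loop_head[1])   # filter by the exit guard x > 9
print("loop head :", loop_head)    # (0, inf) after widening
print("loop exit :", exit_state)   # (10, inf); a narrowing pass would refine this to (10, 10)
```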

    Review of System Design Frameworks

    In the last decade, the enormous growth of the semiconductor industry, the ever-increasing complexity of digital embedded systems, and strong market competition demanding fast time-to-market and low design cost have made conventional design methods increasingly inadequate. In response, a new design flow, model-based system design, has emerged; it relies on high-level abstraction models, heavy design automation, and extensive component reuse to increase productivity and cope with market pressure. This thesis reviews ten recent high-level academic system design frameworks and tools that support the model-based design flow, namely System-on-Chip Environment (SCE), Embedded System Environment (ESE), Metropolis, Daedalus, SystemCoDesigner (SCD), xPilot, GAUT, No-Instruction-Set Computer (NISC), Formal System Design (ForSyDe), and Ptolemy II. These tools are then compared in various aspects comprising objective, technique, implementation, and capability. Following that, three design-flow frameworks, namely ESE, Daedalus, and SystemCoDesigner, are evaluated hands-on for real-world usage, performance, and practicality. The frameworks and tools implementing the model-based design flow all show promising results. Modelling tools (ForSyDe and Ptolemy II) can sufficiently capture a wide range of complicated modern systems, while high-level synthesis tools (xPilot, GAUT, and NISC) produce better design qualities in terms of area, power, and cost compared with traditional approaches. Case studies of the design-flow frameworks (SCE, ESE, Metropolis, Daedalus, and SCD) show that the model-based method significantly reduces development time and facilitates the system design process. However, most of these tools and frameworks are still incomplete and at an experimental stage, and considerable work remains before the method can be put into practice.