Predictable real-time software synthesis
Formal theories for real-time systems (such as timed process algebra, timed automata and timed Petri nets) have been highly successful in modeling concurrent timing behavior and in analyzing real-time properties. However, due to the ineliminable timing differences between a model and its realization, synthesizing a software realization from a model in a predictable way is still a challenging research topic. In this article, we tackle this problem by solving a set of sub-problems. The solution is based on the theoretical results for property prediction proposed in Huang et al. (2003, Real-time property preservation in approximations of timed systems. In: Proceedings of the 1st ACM and IEEE International Conference on Formal Methods and Models for Codesign. IEEE Computer Society, Los Alamitos, pp 163–171) and Huang (2005, Predictability in real-time system design. PhD thesis, Eindhoven University of Technology, The Netherlands), where quantitative property relations are established between two absolutely/relatively close real-time systems. In essence, this theory implies that if two systems are close, they enjoy similar properties. These results cannot be directly applied in practice, though, because a model and its realization typically have infinitely large absolute and relative timing differences. We show that this infinite time gap can be bridged through a sequence of carefully constructed intermediate time domains. The property-prediction results can then be applied to any pair of adjacent time domains in the sequence. Consequently, real-time properties of the implementation can be predicted from the model. We propose two parameterized hypotheses to characterize the timing differences in the sequence and to guide a correctness-preserving design process. It is shown that these hypotheses can be incorporated in a concrete tool set. We demonstrate the feasibility of the predictable synthesis approach through the design of a railroad crossing system.
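The flavour of the underlying property-prediction results, and of the bridging argument, can be sketched as follows. The notation below (an epsilon-closeness relation and a shift of timing bounds) is purely illustrative and is not the precise formulation of Huang et al.

    % Illustrative sketch only; not the exact theorems of Huang et al. (2003) / Huang (2005).
    % If S and S' are absolutely \epsilon-close, a timing bound established with an
    % \epsilon margin in S can be predicted to hold in S':
    S \approx_{\epsilon} S' \;\wedge\; S \models \Diamond_{[a+\epsilon,\,b-\epsilon]}\varphi
      \;\Longrightarrow\; S' \models \Diamond_{[a,\,b]}\varphi
    % The infinite timing gap between the model S_0 and the realization S_n is bridged
    % by intermediate time domains, property prediction being applied to each adjacent
    % pair, so that margins accumulate over the chain:
    S_0 \approx_{\epsilon_1} S_1 \approx_{\epsilon_2} \cdots \approx_{\epsilon_n} S_n,
      \qquad \text{accumulated margin } \textstyle\sum_{i=1}^{n}\epsilon_i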
Branching-time property preservation between real-time systems
In the past decades, many formal frameworks (e.g. timed automata and temporal logics) and techniques (e.g. model checking and theorem proving) have been proposed to model a real-time system and to analyze real-time properties of the model. However, due to the existence of ineliminable timing differences between the model and its realization, real-time properties verified in the model often cannot be preserved in its realization. In this paper, we propose a branching representation (the timed state tree) to specify the timing behavior of a system, based on which we prove that real-time properties represented by Timed CTL∗ (TCTL∗ for short) formulas can be preserved between two neighboring real-time systems. This paper extends the results in [1][2], so that a larger class of real-time properties can be preserved between real-time systems.
Third Dutch model checking day, Eindhoven, November 7, 2001: proceedings
This report contains the preliminary proceedings of the third Dutch Model Checking Day, held on 7 November 2001 at the Technische Universiteit Eindhoven.

Model checking is an automatic technique for verifying hardware and software systems. The advances in this research area over the past few years have led to a significant improvement of model checking tools. Successful applications of model checking have been reported in the verification of a wide variety of systems, such as complex sequential circuit designs and communication protocols. An important piece of evidence of the great practical potential of model checking is the development of in-house model checking tools within major companies in the information and telecommunication industry.

The objective of the Model Checking Day was to bring together researchers and practitioners from academia and industry who are interested in model checking. The presentations featured both practical and theoretical advances in the area, including new techniques and methodologies as well as experience with their application in various areas, such as embedded systems, communication protocols, hardware components and production processes. In addition, the Model Checking Day provided an opportunity to exchange experiences and to discuss new ideas and the latest developments in the area.

These proceedings contain contributions related to the presentations on this day; details are given in the table of contents. The Model Checking Day received generous support from the Formal Methods Group of the Technische Universiteit Eindhoven and the research school IPA (Institute for Programming research and Algorithmics). I would like to thank the members of the program committee, Dragan Bosnacki (TU/e Computer Science), Leszek Holenderski (Philips Research) and Jeroen Voeten (TU/e Electrical Engineering), and the secretary Elize Russell (TU/e Computer Science), for all their work.
A Model Driven Approach for Mechatronic Systems
Software design is one of the most challenging tasks in the design of a mechatronic system. On the one hand, it has to provide solutions to deal with the concurrency and timeliness issues of the system. On the other hand, it has to glue the different disciplines of the system (such as software, control and mechanics) together as a whole. In this paper, we propose a model-driven approach to design the software part of a mechatronic system, which consists of two major parts: systematic modeling and correctness-preserving synthesis. The modeling stage is divided into four steps, each focusing on a different aspect (such as concurrency, multiple disciplines and timeliness) of the system. In particular, we propose a set of handshake patterns to capture the concurrent aspect of the system. These patterns help designers build an adequate top-level model efficiently. Furthermore, they separate the system into a set of concurrent components, each of which can be further refined independently. Subsequently, the multidisciplinary and real-time aspects of the system are naturally specified and analyzed in a series of refinements. After the important aspects of the system are specified and analyzed in a unified model, a software implementation is automatically synthesized from the model, the correctness of which is ensured by construction. The effectiveness of the proposed approach is illustrated by a complex production cell system.
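The handshake patterns themselves are expressed in the paper's own modelling formalism; the following is only a minimal, hypothetical Python sketch of the rendezvous-style request/acknowledge idea that such a pattern captures between two concurrent components.

    # Minimal, hypothetical sketch of a request/acknowledge handshake between two
    # concurrent components; the actual handshake patterns of the paper are defined
    # in its modelling formalism, not in Python.
    import queue
    import threading

    class Channel:
        """Rendezvous-style channel: a request completes only once it is acknowledged."""
        def __init__(self):
            self.req = queue.Queue(maxsize=1)
            self.ack = queue.Queue(maxsize=1)

        def call(self, msg):           # used by the initiating component
            self.req.put(msg)          # send the request
            return self.ack.get()      # block until the peer acknowledges

        def serve(self, handler):      # used by the accepting component
            msg = self.req.get()       # block until a request arrives
            self.ack.put(handler(msg))

    def controller(chan):
        for cmd in ["move_left", "pick", "move_right", "drop"]:
            print("controller:", cmd, "->", chan.call(cmd))

    def device(chan, n_commands=4):
        for _ in range(n_commands):
            chan.serve(lambda cmd: "done(" + cmd + ")")

    chan = Channel()
    threads = [threading.Thread(target=controller, args=(chan,)),
               threading.Thread(target=device, args=(chan,))]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

Each side only knows the channel, so the two components can be refined independently, which is the separation the patterns aim for.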
Y-chart based system design: a discussion on approaches
Embedded systems are a source of technology that facilitates our modern lifestyle. In order to do so, they tend to increase in complexity as well as to integrate into our day-to-day activities. To meet the market's expectations on technological improvement, time-to-market objectives for introducing innovative embedded systems are shorter than ever. Over the last decade, model-based design has been a subject of great interest as a means to accelerate the design of embedded systems. The Y-chart paradigm is a principal approach to model-based embedded system design. Despite the simplicity and conciseness of this paradigm, it has been implemented in several different ways by various methodologies. This variety in implementation designs is due to the particular emphasis a methodology puts on the different steps of the paradigm (application modeling, platform modeling, mapping, analysis and synthesis). This article explores this variety by examining and comparing three Y-chart based design methodologies: Metropolis, the Distributed Operation Layer incorporating Modular Performance Analysis, and the Y-chart variant of the Software/Hardware Engineering methodology. These methodologies have been chosen because they cover a broad domain of applications, have been developed over a relatively long period of time, and are representative of typical Y-chart approaches. Moreover, these implementations of the paradigm present interesting design approaches which are worth comparing.
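Read schematically, the paradigm iterates over mappings of an application model onto a platform model, analyses each, and feeds results back until a satisfactory mapping is synthesized. The skeleton below is purely illustrative and is not taken from Metropolis, DOL/MPA or SHE.

    # Purely illustrative skeleton of the Y-chart iteration; the concrete models,
    # mapping spaces and analyses differ per methodology and are not reproduced here.
    def y_chart(application, platform, candidate_mappings, analyse, good_enough):
        """Return the first mapping whose analysis results meet the requirements,
        or (None, None) to signal that application, platform or mappings need revision."""
        for mapping in candidate_mappings:
            results = analyse(application, platform, mapping)  # performance analysis
            if good_enough(results):
                return mapping, results                        # proceed to synthesis
        return None, None                                      # feedback loop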
Timing prediction for service-based applications mapped on Linux-based multi-core platforms
We develop a model-based approach to predict timing of service-based software applications on Linux-based multi-core platforms for alternative mappings (affinity and priority settings). Service-based applications consist of communicating sequential (Linux) processes. These processes execute functions (also called services), but can only execute them one at a time. Models are inferred automatically from execution traces to enable timing optimization of existing (legacy) systems. Our approach relies on a linear progress approximation of functions. We compute the expected share of each function based on the mapping (affinity and priority) parameters and the functions that are currently active. We validate our models by carrying out a controlled lab experiment consisting of a multi-process pipelined application mapped in different ways on a quad-core Intel i7 processor. A broad class of affinity and priority settings is fundamentally unpredictable due to Linux binding policies. We show that predictability can be achieved if the platform is partitioned into disjoint clusters of cores such that i) each process is bound to such a cluster, ii) processes with non-real-time priorities are bound to singleton clusters, and iii) all processes bound to a non-singleton cluster have different real-time priorities. For mappings using singleton clusters with niceness priorities only, our model predicts execution latencies (for each pipeline iteration) with errors of less than 5% relative to the measured execution times. For mappings using a non-singleton cluster (with different real-time priorities), relative errors of less than 2% are obtained. When real-time and niceness priorities are mixed, we predict with errors of 7%.
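A much-simplified, hypothetical sketch of the expected-share idea is given below for functions sharing one cluster of cores under weight-based (e.g. niceness-derived) scheduling; the paper's actual model additionally covers real-time priorities, affinity masks and the Linux binding policies mentioned above.

    # Hypothetical, much-simplified sketch: active functions bound to one cluster
    # share its cores in proportion to a weight, each capped at one full core.
    def expected_shares(active_weights, n_cores):
        """Map {function: weight} to {function: expected fraction of one core}."""
        shares = {f: 0.0 for f in active_weights}
        remaining = dict(active_weights)
        capacity = float(n_cores)
        # Water-filling: a function cannot use more than one core, so leftover
        # capacity is redistributed over the remaining functions by weight.
        while remaining and capacity > 1e-9:
            total_w = sum(remaining.values())
            grant = {f: min(1.0 - shares[f], capacity * w / total_w)
                     for f, w in remaining.items()}
            for f, g in grant.items():
                shares[f] += g
            capacity -= sum(grant.values())
            remaining = {f: w for f, w in remaining.items()
                         if shares[f] < 1.0 - 1e-9}
        return shares

    def predicted_latency(work_seconds, share):
        """Linear progress approximation: execution time scales with 1 / share."""
        return work_seconds / share if share > 0 else float("inf")

    # Three equally weighted functions on a 2-core cluster each get 2/3 of a core,
    # so a function with 1 s of work is predicted to take 1.5 s.
    shares = expected_shares({"decode": 1.0, "filter": 1.0, "encode": 1.0}, 2)
    print(shares, predicted_latency(1.0, shares["decode"]))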
Exploiting specification modularity to prune the optimization-space of manufacturing systems
In this paper we address the makespan optimization of industrial-sized manufacturing systems. We introduce a framework which specifies functional system requirements in a compositional way and automatically computes makespan-optimal solutions respecting these requirements. We show the optimization problem to be NP-hard. To scale towards systems of industrial complexity, we propose a novel approach based on a subclass of compositional requirements which we call constraints. We prove that these constraints always prune the worst-case optimization space, thereby increasing the odds of finding an optimal solution (with respect to the additional constraints). We demonstrate the applicability of the framework on an industrial-sized manufacturing system.
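A toy, hypothetical illustration of the pruning idea follows: candidate solutions are priority orders for a simple two-machine list scheduler, and both the functional requirements and the additional constraints are modelled as precedence pairs. The paper's compositional requirement language and optimization algorithm are considerably richer.

    # Toy, hypothetical illustration of constraint-based pruning of a makespan
    # search; all operation names, durations and precedences are made up.
    from itertools import permutations

    durations    = {"load": 2, "process": 5, "inspect": 3, "unload": 1}
    requirements = {("load", "process"), ("process", "unload")}  # functional requirements
    constraints  = {("inspect", "unload")}                       # additional constraint

    def respects(order, precedences):
        pos = {op: i for i, op in enumerate(order)}
        return all(pos[a] < pos[b] for a, b in precedences)

    def makespan(order, machines=2, precedences=requirements | constraints):
        free = [0.0] * machines          # time at which each machine becomes idle
        done = {}                        # completion time per operation
        for op in order:
            ready = max([done[a] for a, b in precedences if b == op], default=0.0)
            m = min(range(machines), key=lambda i: free[i])
            start = max(free[m], ready)
            done[op] = start + durations[op]
            free[m] = done[op]
        return max(done.values())

    candidates = [p for p in permutations(durations) if respects(p, requirements)]
    pruned     = [p for p in candidates if respects(p, constraints)]
    best = min(pruned, key=makespan)
    print(len(candidates), "candidates,", len(pruned), "left after pruning;",
          "best order:", best, "makespan:", makespan(best))

Because every added constraint can only remove candidate orders, the pruned search space is a subset of the original one, mirroring the worst-case pruning argument of the paper.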