
    Hybrid Multiresolution Simulation & Model Checking: Network-On-Chip Systems

    Designers employ a variety of modeling theories and methodologies to create functional models of discrete network systems. These dynamical models are evaluated using verification and validation techniques throughout incremental design stages. Models created for these systems should directly represent their growing complexity with respect to composition and heterogeneity. Similar to software engineering practices, incremental model design is required for complex system design. As a result, models at early increments are significantly simpler relative to real systems. While experimenting (verification or validation) on models at early increments is computationally less demanding, the results of these experiments are less trustworthy and less rewarding. At any increment of design, a set of tools and techniques is required for controlling the complexity of models and experimentation. A complex system such as a Network-on-Chip (NoC) may benefit from incremental design stages. Current design methods for NoC rely on multiple models developed using various modeling frameworks. It is useful to develop frameworks that can formalize the relationships among these models. Fine-grain models are derived from their coarse-grain counterparts. Moreover, validation and verification capability at various design stages, enabled through disciplined model conversion, is very beneficial. In this research, Multiresolution Modeling (MRM) is used for system-level design of NoC. MRM aids in creating a family of models at different levels of scale and complexity with well-formed relationships. In addition, a variant of the Discrete Event System Specification (DEVS) formalism is proposed which supports model checking. Hierarchical models of Network-on-Chip components may be created at different resolutions, while each model can be validated using discrete-event simulation and verified via state exploration. System property expressions are defined in the DEVS language and developed as Transducers which can be applied seamlessly for model checking and simulation purposes. The multiresolution modeling and the verification and validation capabilities of this framework complement one another: MRM manages the scale and complexity of models, which in turn can reduce V&V time and effort, and conversely V&V helps ensure the correctness of models at multiple resolutions. This framework is realized by extending the DEVS-Suite simulator and its applicability is demonstrated on exemplar NoC models. Dissertation/Thesis: Doctoral Dissertation, Computer Science, 201
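    The following minimal Python sketch illustrates the shape of the approach described above; it is not the DEVS-Suite API, and all class and method names are assumptions made for illustration. A coarse-resolution atomic model evolves through external and internal transitions, while a Transducer-style observer checks a latency property that could drive either simulation-based validation or state exploration.

        # Hypothetical sketch of a DEVS-style atomic model plus a property transducer.
        # Illustrative only; this does not reproduce the DEVS-Suite API.

        class RouterModel:
            """Coarse-resolution NoC router: queues a flit, then forwards it."""
            def __init__(self):
                self.phase, self.queue, self.sigma = "idle", [], float("inf")

            def ext_transition(self, flit):            # delta_ext: a flit arrives
                self.queue.append(flit)
                self.phase, self.sigma = "busy", 2.0   # forwarding takes 2 time units

            def output(self):                          # lambda: emitted just before delta_int
                return ("forwarded", self.queue[0])

            def int_transition(self):                  # delta_int: forwarding completed
                self.queue.pop(0)
                self.sigma = 2.0 if self.queue else float("inf")
                self.phase = "busy" if self.queue else "idle"

        class LatencyTransducer:
            """Property observer: every flit must be forwarded within a deadline."""
            def __init__(self, deadline):
                self.deadline, self.sent, self.violations = deadline, {}, []

            def on_input(self, time, flit):
                self.sent[flit] = time

            def on_output(self, time, event):
                _, flit = event
                if time - self.sent.pop(flit) > self.deadline:
                    self.violations.append((flit, time))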

    A formal framework for specification-based embedded real-time system engineering

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2008. Includes bibliographical references (v. 2, p. 517-545). The increasing size and complexity of modern software-intensive systems present novel challenges when engineering high-integrity artifacts within aggressive budgetary constraints. Among these challenges, ensuring confidence in the engineered system, through validation and verification activities, represents the high-cost item on many projects. The expensive nature of engineering high-integrity systems using traditional approaches can be partly attributed to the lack of analysis facilities during the early phases of the lifecycle, causing the validation and verification activities to begin too late in the engineering lifecycle. Other challenges include the management of complexity, opportunities for reuse without compromising confidence, and the ability to trace system features across lifecycle phases. The use of models as a specification mechanism provides an approach to mitigate complexity through abstraction. Furthermore, if the specification approach has formal underpinnings, the use of models can be leveraged to automate engineering activities such as formal analysis and test case generation. The research presented in this thesis proposes an engineering framework which addresses the high cost of validation and verification activities through specification-based system engineering. More specifically, the framework provides an integrated approach to embedded real-time system engineering which incorporates specification, simulation, formal verification, and test-case generation. The framework aggregates the state of the art in individual software engineering disciplines to provide an end-to-end approach to embedded real-time system engineering. The key aspects of the framework include:
    * A novel specification language, the Timed Abstract State Machine (TASM) language, which extends the theory of Abstract State Machines (ASM). The TASM language is a literate formal specification language which can be applied at multiple levels of abstraction and which can express the three key aspects of embedded real-time systems - function, time, and resources.
    * Automated verification capabilities achieved through the integration of mature analysis engines, namely the UPPAAL tool suite and the SAT4J SAT solver. The verification capabilities provided by the framework include completeness and consistency verification, model checking, execution time analysis, and resource consumption analysis.
    * Bi-directional traceability of model features across levels of abstraction and lifecycle phases. Traceability is achieved syntactically through archetypical refinement types; each refinement type provides correctness criteria which, if met, guarantee semantic integrity through the refinement.
    * Automated test case generation capabilities for unit testing, integration testing, and regression testing. Unit test cases are generated to achieve TASM specification coverage through the rule coverage criterion. Integration test case generation is achieved through the hierarchical composition of unit test cases. Regression test case generation is achieved by leveraging the bi-directional traceability of model features.
    The framework is implemented into an integrated tool suite, the TASM toolset, which incorporates the UPPAAL tool suite and the SAT4J SAT solver. The toolset and framework are evaluated through experimentation on three industrial case studies - an automated manufacturing system, a "drive-by-wire" system used at a major automotive manufacturer, and a scripting environment used on the International Space Station. By Martin Ouimet. Ph.D.
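    As a rough illustration of the three ingredients listed for the TASM language - function, time, and resources - the Python sketch below encodes a machine step whose rules carry a guard, an update, a duration interval, and a resource annotation. This is not TASM syntax; the rule names, state variables, and interpreter structure are assumptions made for the example. Rule-coverage unit testing then amounts to exercising every rule at least once.

        # Hypothetical sketch of a timed-abstract-state-machine step (not TASM syntax).
        import random

        RULES = [
            # (name, guard, update, duration interval, resource annotation)
            ("start_pump", lambda s: s["mode"] == "off",
             lambda s: {**s, "mode": "on"}, (1.0, 2.0), {"power": 5}),
            ("stop_pump", lambda s: s["mode"] == "on",
             lambda s: {**s, "mode": "off"}, (1.0, 1.0), {"power": 0}),
        ]

        def step(state, now):
            """Fire the first enabled rule; return new state, completion time, resources, rule name."""
            for name, guard, update, (lo, hi), resources in RULES:
                if guard(state):
                    duration = random.uniform(lo, hi)   # any duration in the annotated interval
                    return update(state), now + duration, resources, name
            return state, now, {}, None                 # no rule enabled: the machine stutters

        state, now = {"mode": "off"}, 0.0
        state, now, used, fired = step(state, now)
        print(fired, state, round(now, 2), used)        # e.g. start_pump {'mode': 'on'} 1.47 {'power': 5}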

    Contention in multicore hardware shared resources: Understanding of the state of the art

    The real-time systems community has over the years devoted considerable attention to the impact on execution timing that arises from contention on access to hardware shared resources. The relevance of this problem has been accentuated with the arrival of multicore processors. From the state of the art on the subject, there appears to be considerable diversity in the understanding of the problem and in the “approach” to solve it. This sparseness makes it difficult for any reader to form a coherent picture of the problem and solution space. This paper draws a tentative taxonomy in which each known approach to the problem can be categorised based on its specific goals and assumptions. Postprint (published version)
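    Many of the surveyed approaches can be read as instantiating, under different assumptions about the hardware and the access model, a bound of the following schematic shape (a generic formula given here for orientation, not one taken from the paper), in which the execution time measured in isolation is inflated by the worst-case delay suffered on each shared resource:

        WCET_{shared} \le WCET_{isolation} + \sum_{r \in R} n_r \cdot d_r^{max}

    where n_r is the maximum number of accesses the task makes to shared resource r and d_r^{max} is the worst-case per-access contention delay on r; approaches then differ mainly in how n_r and d_r^{max} are bounded or enforced.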

    Mathematics in Software Reliability and Quality Assurance

    This monograph concerns the mathematical aspects of software reliability and quality assurance and consists of 11 technical papers in this emerging area. Included are the latest research results related to formal methods and design, automatic software testing, software verification and validation, coalgebra theory, automata theory, hybrid systems, and software reliability modeling and assessment.
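    As a representative example of the software reliability modeling mentioned above (chosen here only for illustration, not as the content of any particular chapter), a classical non-homogeneous Poisson process growth model such as the Goel-Okumoto model expresses the expected number of faults detected by time t as

        m(t) = a\,(1 - e^{-b t})

    where a is the expected total number of faults eventually detected and b is the per-fault detection rate; fitting a and b to observed failure data supports the kind of quantitative reliability assessment discussed in such work.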

    A non-intrusive fault tolerant framework for mission critical real-time systems

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2005. Includes bibliographical references (p. 85-87). The need for dependable real-time systems for embedded applications is growing and, at the same time, so is the amount of functionality required from these systems. As testing can only show the presence of errors, not their absence, higher levels of system dependability may be provided by the implementation of mechanisms that can protect the system from faults. We present a framework for the development of fault-tolerant mission-critical real-time systems that provides a structure for flexible, efficient and deterministic design. The framework leverages three key knowledge domains: firstly, a software concurrency model, the Ada Ravenscar Profile, which guarantees deterministic behavior; secondly, the design of a hardware scheduler, the RavenHaRT kernel, which further provides deadlock-free inter-task communication management; and finally, the design of a hardware execution-time monitor, the Monitoring Chip, which provides non-intrusive error detection. To increase service dependability, we propose a fault tolerance strategy that uses multiple operating modes to provide system-level handling of timing errors. The hierarchical set of operating modes offers different gracefully degraded levels of guaranteed service. This approach relies on the elements of the framework discussed above and is illustrated through a sample case study of a generic navigation system. By Sébastien Gorelov. S.M.
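    A simplified Python sketch of the timing-error-handling idea described above; the actual Monitoring Chip is a non-intrusive hardware component, so the names, budgets, and mode set below are assumptions made only to illustrate the mode-degradation strategy.

        # Hypothetical software illustration of execution-time monitoring with
        # gracefully degraded operating modes (the real monitor is hardware).
        import time

        MODES = ["full_service", "degraded", "safe_hold"]    # hierarchical operating modes

        class ExecutionTimeMonitor:
            def __init__(self, budgets):
                self.budgets = budgets        # task name -> execution-time budget in seconds
                self.mode_index = 0

            def run(self, name, task):
                start = time.perf_counter()
                result = task()
                elapsed = time.perf_counter() - start
                if elapsed > self.budgets[name]:             # timing error detected
                    self.mode_index = min(self.mode_index + 1, len(MODES) - 1)
                return result

            @property
            def mode(self):
                return MODES[self.mode_index]

        monitor = ExecutionTimeMonitor({"navigation_update": 0.005})
        monitor.run("navigation_update", lambda: sum(range(10_000)))
        print(monitor.mode)                                  # "full_service" unless the budget was exceeded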

    CONTREX: Design of embedded mixed-criticality CONTRol systems under consideration of EXtra-functional properties

    The increasing processing power of today’s HW/SW platforms leads to the integration of more and more functions in a single device. Additional design challenges arise when these functions share computing resources and belong to different criticality levels. CONTREX complements current activities in the area of predictable computing platforms and segregation mechanisms with techniques to consider the extra-functional properties, i.e., timing constraints, power, and temperature. CONTREX enables energy-efficient and cost-aware design through analysis and optimization of these properties with regard to application demands at different criticality levels. This article presents an overview of the CONTREX European project, its main innovative technology (extension of a model-based design approach, functional and extra-functional analysis with executable models, and run-time management), and the final results of three industrial use cases from different domains (avionics, automotive, and telecommunication). The work leading to these results has received funding from the European Community’s Seventh Framework Programme FP7/2007-2011 under grant agreement no. 611146.

    Realizability of embedded controllers: from hybrid models to correct implementations

    An embedded controller is a reactive device (e.g., a suitable combination of hardware and software components) that is embedded in a dynamical environment and has to react to environment changes in real time. Embedded controllers are widely adopted in many contexts of modern life, from automotive to avionics, from consumer electronics to medical equipment. Noticeably, the correctness of such controllers is crucial. When designing and verifying an embedded controller, the need often arises to model the controller together with its surrounding environment. The nature of the obtained system is hybrid because of the inclusion of both discrete-event (i.e., controller) and continuous-time (i.e., environment) processes, whose dynamics cannot be characterized faithfully using either a discrete or a continuous model only. Systems of this kind are named cyber-physical systems (CPS) or hybrid systems. Different types of models may be used to describe hybrid systems, and they focus on different objectives: detailed models are excellent for simulation but not suitable for verification, high-level models are excellent for verification but not convenient for refinement, and so forth. Among all these models, hybrid automata (HA) [8, 77] have been proposed as a powerful formalism for the design, simulation and verification of hybrid systems. In particular, a hybrid automaton represents discrete-event processes by means of finite state machines (FSM), whereas continuous-time processes are represented by real-valued variables whose dynamics are specified by ordinary differential equations (ODEs) or their generalizations (e.g., differential inclusions). Unfortunately, when the high-level model of the hybrid system is a hybrid automaton, several difficulties must be overcome in order to automate the refinement phase of the design flow, because of the classical semantics of hybrid automata. In fact, hybrid automata can be considered perfect and instantaneous devices. They adopt a notion of time and evaluation of continuous variables based on dense sets of values (usually R, i.e., the reals). Thus, they can sample the state (i.e., the value assignments of the variables) of the hybrid system at any instant in such a dense set. Further, they are capable of instantaneously evaluating guard constraints or reacting to incoming events by performing changes in the operating mode of the hybrid system without any delay. While these aspects are convenient at the modeling level, any model of an embedded controller that relies for its correctness on such precision and instantaneity cannot be implemented by any hardware/software device, no matter how fast it is. In other words, the controller is un-realizable, i.e., un-implementable.
    This thesis proposes a complete methodology and a framework that allow one to derive, from hybrid automata proved correct in the hybrid domain, correct realizable models of embedded controllers and the related discrete implementations. In a realizable model, the controller samples the state of the environment at periodic discrete time instants which, typically, are fixed by the clock frequency of the processor implementing the controller. The state of the environment consists of the current values of the relevant variables as observed by the sensors. These values are digitized with finite precision and reported to the controller, which may decide to switch the operating mode of the environment. In such a case, the controller generates suitable output signals that, once transmitted to the actuators, will effect the desired change in the operating mode. It is worth noting that the sensors report the current values of the variables and the actuators effect changes in the rates of evolution of the variables with bounded delays.
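    The contrast between the two semantics can be made concrete with a small Python sketch of a realizable controller cycle; the constants, the guard, and the mode names are invented for illustration and are not taken from the thesis. The environment is sampled only at clock ticks, readings are digitized with finite precision, and a mode switch takes effect only after a bounded actuation delay.

        # Hypothetical sketch of a realizable (sampled, finite-precision, delayed) controller.

        PERIOD = 0.01       # sampling period fixed by the processor clock, in seconds
        QUANTUM = 0.05      # sensor quantization step
        ACT_DELAY = 0.002   # bounded sensing-to-actuation delay, in seconds
        THRESHOLD = 10.0    # guard of the original hybrid automaton, e.g. "x >= 10"

        def digitize(value):
            """Finite-precision reading reported by the sensor."""
            return round(value / QUANTUM) * QUANTUM

        def control_cycle(true_value, mode, now):
            """One periodic cycle: sample, digitize, evaluate the guard, schedule actuation."""
            reading = digitize(true_value)
            if mode == "heating" and reading >= THRESHOLD:
                return "cooling", now + ACT_DELAY    # the switch becomes effective only after the delay
            return mode, None

        mode, switch_time = control_cycle(10.02, "heating", now=0.0)
        print(mode, switch_time)   # the guard is detected at a tick, not instantaneously;
                                   # the next sample would be taken at now + PERIOD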

    Modélisation à haut niveau d'abstraction pour les systèmes embarqués (High-level abstraction modeling for embedded systems)

    Modern embedded systems have reached a level of complexity such that it is no longer possible to wait for the first physical prototypes to validate choices on the integration of hardware and software components. It is necessary to use models early in the design flow. The work presented in this document contributes to the state of the art in several domains. First, we present verification techniques based on abstract interpretation and SMT solving for programs written in general-purpose languages like C, C++ or Java. Then, we use verification tools on models written in SystemC at the transaction level (TLM). Several approaches are presented, most of them using compilation techniques specific to SystemC to turn the models into a format usable by existing tools. The second part of the document deals with non-functional properties of models: timing performance, power consumption and temperature. In the context of TLM, we show how functional models can be enriched with non-functional information. Finally, we present contributions to the modular performance analysis (MPA) with real-time calculus (RTC) framework. We describe several ways to connect RTC to more expressive formalisms like timed automata and the synchronous language Lustre. These connections raise the problem of causality, which is defined formally and solved with a new causality-closure algorithm.
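    For the MPA/RTC part, the quantities of interest are the standard Real-Time Calculus bounds computed from an upper arrival curve alpha^u and a lower service curve beta^l (quoted here as the textbook results, not as formulas specific to this work):

        delay   \le \sup_{t \ge 0} \inf\{\, \tau \ge 0 : \alpha^u(t) \le \beta^l(t + \tau) \,\}
        backlog \le \sup_{t \ge 0} \bigl( \alpha^u(t) - \beta^l(t) \bigr)

    Here alpha^u(t) bounds the number of events that can arrive in any window of length t and beta^l(t) bounds the service guaranteed in any such window; connecting these analytic curves to timed automata or Lustre programs is where the causality question mentioned above arises.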

    Timed model-based programming : executable specifications for robust mission-critical sequences

    Thesis (Sc. D.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2003. Includes bibliographical references (p. 195-204). There is growing demand for high-reliability embedded systems that operate robustly and autonomously in the presence of tight real-time constraints. For robotic spacecraft, robust plan execution is essential during time-critical mission sequences, due to the very short time available for recovery from anomalies. Traditional approaches to encoding these sequences can lead to brittle behavior under off-nominal execution conditions, due to the high level of complexity in the control specification required to manage the complex spacecraft system interactions. This work describes timed model-based programming, a novel approach for encoding and robustly executing mission-critical spacecraft sequences. The timed model-based programming approach addresses the issues of sequence complexity and unanticipated low-level system interactions by allowing control programs to directly read or write "hidden" states of the plant, that is, states that are not directly observable or controllable. It is then the responsibility of the program's execution kernel to map between hidden states and the plant sensors and control variables. This mapping is performed automatically by a deductive controller using a common-sense plant model, freeing the programmer from the error-prone process of reasoning through a complex set of interactions under a range of possible failure situations. Time is central to the execution of mission-critical sequences; a robust executive must consider time in its control and behavior models, in addition to reactively managing complexity. In timed model-based programming, control programs express goals and constraints in terms of both system state and time. Plant models capture the underlying behavior of the system components, including nominal and off-nominal modes, probabilistic transitions, and timed effects such as state transition latency. The contributions of this work are threefold. First, a semantic specification of the timed model-based programming approach is provided. The execution semantics of a timed model-based program are defined in terms of legal state evolutions of a physical plant, represented as a factored Partially Observable Semi-Markov Decision Process. The second contribution is the definition of graphical and textual languages for encoding timed control programs and plant models. The adoption of a visual programming paradigm allows timed model-based programs to be specified and readily inspected by the systems engineers in charge of designing the mission-critical sequences. The third contribution is the development of a Timed Model-based Executive, which takes as input a timed control program and executes it, using timed plant models to track states, diagnose faults and generate control actions. The Timed Model-based Executive has been implemented and demonstrated on a representative spacecraft scenario for Mars entry, descent and landing. By Michel Donald Ingham. Sc.D.
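    An illustrative Python fragment of the kind of plant model the executive reasons over; the component, probabilities, and latencies below are invented for the example and do not come from the thesis. Each component has nominal and off-nominal modes, commanded transitions are probabilistic and carry a latency before taking effect, and the executive tracks a belief state over modes rather than a single known state.

        # Hypothetical plant-model fragment: modes, probabilistic transitions, latencies.
        # All values are invented for illustration.

        VALVE_MODEL = {
            "modes": ["closed", "open", "stuck_closed"],
            "transitions": {
                # (mode, command) -> list of (probability, next_mode, latency_seconds)
                ("closed", "open_cmd"): [(0.98, "open", 2.0), (0.02, "stuck_closed", 0.0)],
                ("open", "close_cmd"): [(1.00, "closed", 2.0)],
            },
        }

        def propagate_belief(belief, command, model):
            """Propagate a belief state (mode -> probability) through one commanded step."""
            next_belief = {}
            for mode, p in belief.items():
                outcomes = model["transitions"].get((mode, command), [(1.0, mode, 0.0)])
                for q, nxt, _latency in outcomes:
                    next_belief[nxt] = next_belief.get(nxt, 0.0) + p * q
            return next_belief

        print(propagate_belief({"closed": 1.0}, "open_cmd", VALVE_MODEL))
        # {'open': 0.98, 'stuck_closed': 0.02} - a fault hypothesis the executive can diagnose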

    Verification and validation of UML and SysML based systems engineering design models

    In this thesis, we address the issue of model-based verification and validation of systems engineering design models expressed using UML/SysML. The main objectives are to assess the design from its structural and behavioral perspectives and to enable a qualitative as well as a quantitative appraisal of its conformance with respect to its requirements and a set of desired properties. To this end, we elaborate a heretofore unattempted unified approach composed of three well-established techniques: model checking, static analysis, and software engineering metrics. These techniques are synergistically combined so that they yield a comprehensive and enhanced assessment. Furthermore, we propose to extend this approach with performance analysis and probabilistic assessment of SysML activity diagrams. Thus, we devise an algorithm that systematically maps these diagrams into their corresponding probabilistic models encoded using the specification language of the probabilistic symbolic model checker PRISM. Moreover, we define a first-of-its-kind probabilistic calculus, namely the activity calculus, dedicated to capturing the essence of SysML activity diagrams and its underlying operational semantics in terms of Markov decision processes. Furthermore, we propose a formal syntax and operational semantics for the input language of PRISM. Finally, we mathematically prove the soundness of our translation algorithm with respect to the devised operational semantics using a simulation preorder defined upon Markov decision processes.
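    The following small Python fragment illustrates, under assumed names and probabilities, the kind of object such a translation produces: a probabilistic decision node of an activity diagram becomes a state of a Markov decision process whose actions carry probability distributions over successor states. It is not the thesis's algorithm or the PRISM encoding itself.

        # Hypothetical MDP fragment for one probabilistic decision node of an activity
        # diagram (illustrative only; not the thesis's translation or PRISM input).

        MDP = {
            # state -> {action -> {successor_state: probability}}
            "decision": {"retry": {"process": 0.9, "error": 0.1},
                         "abort": {"final": 1.0}},
            "process":  {"finish": {"final": 1.0}},
            "error":    {"retry": {"process": 0.9, "error": 0.1}},
            "final":    {},
        }

        def check_distributions(mdp):
            """Sanity check: every action's outgoing probabilities sum to one."""
            for state, actions in mdp.items():
                for action, dist in actions.items():
                    assert abs(sum(dist.values()) - 1.0) < 1e-9, (state, action)

        check_distributions(MDP)
        print("well-formed MDP with", len(MDP), "states")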