
    Analysis as first-class citizens – an application to Architecture Description Languages

    Architecture Description Languages (ADLs) support the modeling and analysis of systems through model transformation and exploration. Various contributions have proposed model-based frameworks that bring verification capabilities to designers and have illustrated the benefits for overall system quality. Model-level analyses are usually performed as an exogenous, unidirectional and semantically weak transformation towards a third-party model. We claim that such a process can be incomplete and/or inefficient, because the gathered results lead to evolution of the primary model. This is particularly problematic for the design of Distributed Real-Time Embedded (DRE) systems, which have to tackle many concerns such as time, security or safety. In this paper, we argue that analysis should no longer be considered a side step in the design process but should instead be embedded as a first-class citizen in the model itself. We review several standardized architecture description languages that consider analysis as a goal. As an element of solution, we introduce current work on the definition of a language dedicated to the analysis of models within the scope of one particular ADL, namely the Architecture Analysis and Design Language (AADL).
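
A toy sketch may help picture the difference between the usual exogenous analysis and an analysis embedded in the model. The Python fragment below is purely illustrative (the paper targets AADL and a dedicated analysis language, not Python); the class names, latency figures and flow are hypothetical. The point is only that the analysis and its budget live inside the same model the designer edits, so its results feed back into that model directly.

```python
# Hypothetical sketch: an analysis attached to the architecture model itself,
# rather than run as an external transformation. Names (Component, Flow,
# LatencyAnalysis) are illustrative, not part of any AADL tool API.
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    latency_ms: float          # assumed per-component processing latency

@dataclass
class Flow:
    name: str
    path: list                 # ordered list of Component

@dataclass
class LatencyAnalysis:
    name: str
    budget_ms: float

    def evaluate(self, model):
        results = {}
        for flow in model.flows:
            total = sum(c.latency_ms for c in flow.path)
            results[flow.name] = (total, total <= self.budget_ms)
        return results

@dataclass
class Model:
    flows: list = field(default_factory=list)
    analyses: list = field(default_factory=list)   # analyses live in the model

    def run_analyses(self):
        return {a.name: a.evaluate(self) for a in self.analyses}

# Example: the analysis result stays attached to the model the designer evolves.
sensing = Flow("sensing", [Component("sensor", 2.0),
                           Component("filter", 5.0),
                           Component("actuator", 1.5)])
model = Model(flows=[sensing],
              analyses=[LatencyAnalysis("e2e_latency", budget_ms=10.0)])
print(model.run_analyses())   # {'e2e_latency': {'sensing': (8.5, True)}}
```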

    Formal verification of automotive embedded UML designs

    Software applications increasingly dominate safety-critical domains, that is, domains where the failure of an application could impact human lives. Software safety was overlooked for quite some time, but more attention is now directed to this area due to the exponential growth of embedded software applications. Software systems have continuously faced challenges in managing the complexity associated with functional growth, keeping systems flexible so that they can be easily modified, scaling solutions across several product lines, ensuring quality and reliability, and detecting defects early in the design phases. AUTOSAR was established to develop open standards that address these challenges. ISO 26262, the automotive functional safety standard, aims to ensure the functional safety of automotive systems by providing requirements and processes that govern the software lifecycle. Each functional system needs to be classified in terms of safety goals, risks and an Automotive Safety Integrity Level (ASIL: A, B, C or D), with ASIL D denoting the most stringent level. As the risk of the system increases, the ASIL increases and the standard mandates more stringent methods to ensure safety. ISO 26262 mandates that systems classified as ASIL C or D use walkthroughs, semi-formal verification, inspection, control flow analysis, data flow analysis, static code analysis and semantic code analysis to verify software unit design and implementation. Ensuring specification compliance via formal methods has long remained an academic endeavor; several factors discourage their adoption in industry, a major one being the complexity of using them. In the automotive domain, specification compliance still relies for the most part on traceability matrices, human reviews, and testing performed either on the actual production software or at the simulation level. ISO 26262 recommends, although not strongly, the use of formal notations for automotive systems whose failure carries high risk, yet the industry still relies heavily on semi-formal notations such as UML, which keeps specification compliance dependent on manual processes and testing effort.
In this research, we propose a framework in which UML finite state machines are compiled into formal notations, specification requirements are mapped into theorems over the formal model, and SAT/SMT solvers are used to validate the compliance of the implementation with the specification. The framework enables semi-formal verification of AUTOSAR UML designs via an automated formal backbone, allowing automotive software to meet the ISO 26262 formal verification guidelines for ASIL C and D unit design and implementation. Semi-formal UML finite state machines are automatically compiled into the Symbolic Analysis Laboratory (SAL) formal notation, requirements captured in the UML design are compiled automatically into theorems, and model checkers are run against the compiled formal model and theorems to detect counterexamples that violate the requirements of the UML model. Semi-formal verification of the design lets us uncover, at design time, issues that were previously detected only in the testing and production stages.
The methodology is applied to several automotive systems to show how the framework automates the verification of UML-based designs, the de facto standard for automotive system design, through an implicit formal methodology while hiding the drawbacks that have discouraged the industry from using formal methods. Additionally, the framework automates the ISO 26262 system design verification guideline, which would otherwise be carried out via error-prone manual approaches.
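
As a rough illustration of the pipeline described above, the sketch below encodes a tiny state machine as a transition relation and checks a safety requirement by bounded unrolling with an SMT solver. It uses the Z3 Python bindings rather than SAL, and the state machine, depth and requirement are all invented; it only mirrors the shape of the compile-requirements-to-theorems-and-model-check step.

```python
# Minimal bounded-model-checking sketch in the spirit of the framework described
# above. The thesis compiles UML state machines to SAL; here Z3 (pip install
# z3-solver) is used instead, purely for illustration. The state machine and the
# requirement are hypothetical.
from z3 import Int, Solver, Or, And, sat

# States of a toy controller: 0 = IDLE, 1 = ACTIVE, 2 = FAULT
IDLE, ACTIVE, FAULT = 0, 1, 2
K = 10  # unrolling depth

def transition(s, s_next):
    """Transition relation of the (hypothetical) state machine."""
    return Or(And(s == IDLE,   Or(s_next == IDLE, s_next == ACTIVE)),
              And(s == ACTIVE, Or(s_next == ACTIVE, s_next == IDLE, s_next == FAULT)),
              And(s == FAULT,  s_next == FAULT))

states = [Int(f"s_{i}") for i in range(K + 1)]
solver = Solver()
solver.add(states[0] == IDLE)                      # initial state
for i in range(K):
    solver.add(transition(states[i], states[i + 1]))

# Requirement (theorem), negated: "FAULT is never reached from IDLE".
# A sat answer yields a counterexample trace, as the model checker would.
solver.add(Or(*[s == FAULT for s in states]))
if solver.check() == sat:
    m = solver.model()
    print("counterexample:", [m[s].as_long() for s in states])
else:
    print("requirement holds up to depth", K)
```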

    Traceability of Requirements and Software Architecture for Change Management

    Present-day software systems are increasingly complex. Their requirements change continuously and new requirements emerge frequently. New and/or modified requirements are integrated with the existing ones, and adaptations are made to the architecture and source code of the system. This process of integrating new or modified requirements and adapting the software system is called change management. The size and complexity of software systems make change management costly and time consuming. To reduce the cost of changes, it is important to apply change management as early as possible in the software development cycle. Requirements traceability is considered crucial in change management for establishing and maintaining consistency between software development artifacts. It is the ability to link requirements back to stakeholders’ rationales and forward to corresponding design artifacts, code, and test cases. When changes to the requirements of the software system are proposed, the impact of these changes on other requirements, design elements and source code should be traced in order to determine which parts of the software system have to be changed. Determining the impact of changes on development artifacts is called change impact analysis. Change impact analysis is applicable to many development artifacts, such as requirements documents, detailed design, source code and test cases. Our focus is change impact analysis in requirements and software architecture, and the need for it is observed in both. When a change is introduced to a requirement, the requirements engineer needs to find out whether any other requirement related to the changed requirement is impacted. After determining the impacted requirements, the software architect needs to identify the impacted architectural elements by tracing the changed requirements to the software architecture. Manually tracing impacted requirements and architectural elements from the changed requirements is hard, expensive and error-prone. Tools and approaches such as IBM Rational RequisitePro and DOORS automate change impact analysis, but in most of them traces are just simple relations whose semantics is not considered. Due to this lack of semantics, all requirements and architectural elements directly or indirectly traced from the changed requirement become candidate impacts, and the requirements engineer has to inspect all of them to identify the actual changes, if any.
In this thesis we address the following problems that arise in performing change impact analysis for requirements and software architecture. The first is the explosion of impacts in requirements after a change in requirements. In practice, requirements documents are often textual artifacts with an implicit structure, and most of the relations among requirements are not given explicitly. Most tools and approaches lack a precise definition of these relations. Due to the missing semantics of requirements relations, change impact analysis may produce a high number of false positive and false negative impacted requirements, and a requirements engineer may have to analyze all requirements in the document for a single change, which risks neglecting the actual impact of the change. The second problem is manual, expensive and error-prone trace establishment. Considerable research has been devoted to relating requirements and design artifacts with source code; less attention has been paid to relating Requirements (R) with Architecture (A) by using well-defined semantics of traces. Designing an architecture based on requirements is a problem-solving process that relies on human experience and creativity, and is mainly manual. The software architect may need to manually assign traces between R&A. Manual trace assignment is time-consuming, expensive and error-prone, and the assigned traces might be incomplete and invalid. The third problem is the explosion of impacts in software architecture after a change in requirements. Due to the lack of semantics of traces between R&A, change impact analysis may produce a high number of false positive and false negative impacted architectural elements, and a software architect may have to analyze all architectural elements in the architecture for a single requirements change.
In this thesis we propose an approach that reduces the explosion of impacts in R&A. The approach employs semantic information of traces and is supported by tools. We consider that every relation between software development artifacts, or between elements in these artifacts, can play the role of a trace for a certain traceability purpose such as change impact analysis. We choose Model Driven Engineering (MDE) as the solution platform for our approach. MDE provides a uniform treatment of software artifacts (e.g. requirements documents, software design and test documents) as models, and enables the use of different formalisms to reason about development artifacts described as models. To give an explicit structure to requirements documents and to treat requirements, architecture and traces in a uniform way, we use metamodels and models with formally defined semantics.
The thesis provides the following contributions. The first is a modeling language for the definition of requirements models with formal semantics. The language is defined according to MDE principles by defining a metamodel and is based on a survey of the most commonly found requirements types and relation types. With this language, the requirements engineer can explicitly specify the requirements and the relations among them. The semantics of these entities is given in First-Order Logic (FOL) and supports two activities: first, new relations among requirements can be inferred from the initial set of relations; second, requirements models can be automatically checked for consistency of the relations. The Tool for Requirements Inferencing and Consistency Checking (TRIC) was developed to support both activities. The defined semantics is used in a technique for change impact analysis in requirements models.
The second contribution is a change impact analysis technique for requirements based on the semantics of requirements relations and of requirements change types. The technique addresses the explosion of impacts in requirements that occurs when the semantics of requirements relations is missing. A classification of requirements changes based on the structure of a textual requirement is given and formalized, and the semantics of the change types is expressed in FOL. We support three activities for impact analysis. First, the requirements engineer proposes changes according to the change classification before implementing the actual changes. Second, the requirements engineer identifies the propagation of the changes to related requirements; the change alternatives in the propagation are determined based on the semantics of change types and requirements relations. Third, possible contradicting changes are identified. We extend TRIC to support these activities: the tool automatically determines the change propagation paths, checks the consistency of the changes, and suggests alternatives for implementing the change.
The third contribution is a technique that establishes traces between R&A by using architecture verification and the semantics of traces. Since it is hard, expensive and error-prone to establish these traces manually, we present an approach that establishes them by using architecture verification together with the semantics of requirements relations and traces. We use a trace metamodel with commonly used trace types whose semantics is formalized in FOL. Software architectures are expressed in the Architecture Analysis and Design Language (AADL), which is provided with a formal semantics expressed in Maude; the Maude tool set allows simulation and verification of architectures. The first way to establish traces is to use architecture verification techniques. A given requirement is reformulated as a property in terms of the architecture, the architecture is executed, and a state space is produced; this execution simulates the behavior of the system at the architectural level. The property derived from the requirement is checked by the Maude model checker, and traces are generated between the requirement and the architectural components used in the verification of the property. The second way to establish traces is to use the requirements relations together with the semantics of traces. Requirements relations are reflected in the connections among the traced architectural elements based on the semantics of traces, so new traces are inferred from existing traces by using requirements relations. We use the semantics of requirements relations and traces both to generate/validate traces and to generate/validate requirements relations. Tool support is provided for (1) generation/validation of traces by using requirements relations and/or verification of the architecture, and (2) generation/validation of requirements relations by using traces.
The fourth contribution is a change impact analysis technique for software architecture based on architecture verification and the semantics of traces between R&A. The software architect needs to identify the impacted architectural elements after a requirements change. The technique is semi-automatic and requires the participation of the software architect. It has two parts. The first part identifies the architectural elements that implement the system properties to which the proposed requirements changes are introduced; using the formal semantics of requirements relations and traces, we identify which parts of the software architecture are impacted by a proposed change in requirements, and we have extended TRIC to determine the candidate impacted architectural elements. The second part proposes possible changes to the software architecture when it does not satisfy the new and/or changed requirements. This part is based on architecture verification: if the requirements are not satisfied, the verification produces a counterexample, which is used together with a classification of architectural changes to propose changes to the software architecture. These changes produce a new version of the architecture that possibly satisfies the new or changed requirements.
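
The flavour of semantics-aware impact propagation can be sketched in a few lines. The fragment below is not TRIC and does not encode the thesis's FOL semantics; the relation types, propagation rules and requirements are placeholders chosen only to show how relation semantics restricts which requirements are reported as impacted.

```python
# Illustrative sketch of semantics-aware change impact propagation, in the spirit
# of TRIC. The relation types and propagation rules below are simplified
# placeholders, not the FOL semantics defined in the thesis.
from collections import defaultdict, deque

# Requirements relations: (source, relation_type, target)
relations = [
    ("R1", "refines",   "R2"),   # R1 refines R2
    ("R2", "requires",  "R3"),   # R2 requires R3
    ("R4", "conflicts", "R2"),
    ("R5", "refines",   "R6"),   # unrelated cluster
]

# Hypothetical rule: which relation types propagate impact, and in which direction.
PROPAGATES = {
    "refines":   "both",      # changing either side may impact the other
    "requires":  "forward",   # a change to the required target impacts the source
    "conflicts": "both",
}

def impacted(changed_req):
    """Return requirements reachable from `changed_req` via propagating relations."""
    graph = defaultdict(set)
    for src, rel, dst in relations:
        direction = PROPAGATES.get(rel)
        if direction in ("forward", "both"):
            graph[dst].add(src)   # impact flows from target back to source
        if direction == "both":
            graph[src].add(dst)
    seen, queue = set(), deque([changed_req])
    while queue:
        r = queue.popleft()
        for nxt in graph[r] - seen - {changed_req}:
            seen.add(nxt)
            queue.append(nxt)
    return seen

print(impacted("R2"))   # {'R1', 'R4'}: R3, R5 and R6 are not flagged under these rules
```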

    NASA SBIR abstracts of 1991 phase 1 projects

    The objectives of 301 projects placed under contract by the Small Business Innovation Research (SBIR) program of the National Aeronautics and Space Administration (NASA) are described. These projects were selected competitively from among proposals submitted to NASA in response to the 1991 SBIR Program Solicitation. The basic document consists of edited, non-proprietary abstracts of the winning proposals submitted by small businesses. The abstracts are presented under the 15 technical topics within which Phase 1 proposals were solicited. Each project was assigned a sequential identifying number, from 001 to 301, in order of its appearance in the body of the report. Appendixes are included that provide additional information about the SBIR program and permit cross-referencing of the 1991 Phase 1 projects by company name, location by state, principal investigator, NASA Field Center responsible for managing each project, and NASA contract number.

    Automated Validation of State-Based Client-Centric Isolation with TLA+

    Clear consistency guarantees on data are paramount for the design and implementation of distributed systems. When implementing distributed applications, developers require approaches to verify the data consistency guarantees of an implementation choice. Crooks et al. define a state-based and client-centric model of database isolation. This paper formalizes this state-based model in TLA+, reproduces their examples, and shows how to model check runtime traces and algorithms with this formalization. The formalized TLA+ model enables semi-automatic model checking of different implementation alternatives for transactional operations and allows checking of conformance to isolation levels. We reproduce examples from the original paper and confirm the isolation guarantees of the combination of the well-known 2-phase locking and 2-phase commit algorithms. Using model checking, this formalization can also help find bugs in incorrect specifications. This improves the feasibility of automated checking of isolation guarantees in synthesized synchronization implementations and provides an environment for experimenting with new designs.
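
The client-centric idea can be illustrated with a deliberately small brute-force check. The sketch below is not the TLA+ specification from the paper; the transactions and the permutation search are invented, and it only covers the serializability commit test (each transaction reads from the state produced by the transactions ordered before it).

```python
# Rough, brute-force illustration of the state-based, client-centric view of
# isolation (Crooks et al.): an execution is serializable if some total order of
# the committed transactions lets every transaction read from the state produced
# by the transactions ordered before it. The paper's TLA+ spec does this with
# model checking; this sketch only mimics the idea on tiny traces.
from itertools import permutations

# A transaction is (reads, writes): reads maps key -> value observed,
# writes maps key -> value installed. Initially every key reads as None.
t1 = ({"x": None},          {"x": 1})   # reads x=None, writes x=1
t2 = ({"x": 1},             {"y": 2})   # reads x=1,    writes y=2
t3 = ({"x": 1, "y": 2},     {"x": 3})   # reads x=1, y=2, writes x=3

def serializable(transactions):
    for order in permutations(transactions):
        state, ok = {}, True
        for reads, writes in order:
            # every read must match the current state (missing key reads as None)
            if any(state.get(k) != v for k, v in reads.items()):
                ok = False
                break
            state.update(writes)         # apply the transaction's writes
        if ok:
            return True
    return False

print(serializable([t1, t2, t3]))        # True: the order t1, t2, t3 satisfies every read
```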

    Optimization of the Operations of the Electrical Auxiliary System of a Handysize Bulk Carrier

    The shipping industry carries 80% of worldwide commerce. Furthermore, it is the most efficient mode of transportation in terms of emissions of carbon dioxide (CO2) per ton-kilometer of transported cargo. However, this research shows that a handysize dry bulk carrier emits at least the equivalent of 650 light cars per day per ship. Indeed, merchant ships need electricity to operate heavy equipment such as their deck cranes, and also for essential and crew services such as lighting, cooking, computers, laundry, etc. This energy consumption can be compared to that of a small town. Historically, the electricity of such a ship has always been generated by on-board diesel generators (DG). Yet a more efficient and environmentally friendly way to supply electricity would be to connect the ship to the onshore electrical grid. This process is referred to as shore power, cold ironing (CI), alternative marine power (AMP), onshore power supply (OPS) or shore-to-ship (SSP). Many ways exist to make this connection, but they can be extremely expensive and come with many challenges. This document presents an extensive literature review of shipping decarbonization and proposes shore power as the next step toward green shipping. A strengths, weaknesses, opportunities and threats (SWOT) analysis of shore power is discussed, and a policy impact study on a dry bulk carrier shows that the Carbon Intensity Indicator (CII) of the International Maritime Organization (IMO) could be reduced by 7.8% with shore power. A multi-objective study was then carried out to identify the best solutions for eliminating the ship’s emissions in port. The case study, based on real load profiles, revealed that a low-voltage connection combined with a small 60 kWh battery can eliminate the ship’s emissions in port. This solution would reduce emissions by 5.5 tonnes of CO2 per day per ship for a capital expenditure (CAPEX) of 323 k$. The cable management system could also be installed on shore rather than on the ship, halving the cost for shipowners. A techno-economic analysis shows that the project would have a payback period of 15 years, which could be reduced to 6 years if the project were subsidized at 50%. Finally, the potential of shore power to reduce greenhouse gas (GHG) emissions on the St. Lawrence and Great Lakes trade route is significant, and the affordable cost of electricity in Quebec further increases the benefits of shore power. While the future of maritime decarbonization is not yet defined, shore power is a measure that can be implemented now and that is certain to reduce the CO2 emissions of merchant shipping.
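
To make the carbon-intensity argument concrete, the fragment below sketches the arithmetic of the IMO Carbon Intensity Indicator (attained CII = annual CO2 emissions divided by capacity times distance travelled). All numeric inputs are placeholder values except the 5.5 t CO2/day in-port figure quoted above, so the printed reduction is illustrative and is not the thesis's 7.8% result.

```python
# Hypothetical illustration of why shore power lowers the IMO Carbon Intensity
# Indicator: attained CII = annual CO2 (grams) / (capacity in DWT * distance in
# nautical miles). Ship size, distance, port days and at-sea emissions below are
# made-up placeholders, not the thesis's data.

def attained_cii(co2_tonnes_per_year: float, dwt: float, distance_nm: float) -> float:
    """Attained CII in gCO2 per DWT-nautical-mile."""
    return co2_tonnes_per_year * 1e6 / (dwt * distance_nm)

dwt, distance_nm = 35_000, 40_000   # hypothetical handysize ship and yearly distance
at_sea_co2 = 9_000                  # tonnes CO2/year underway (placeholder)
in_port_co2 = 5.5 * 120             # 5.5 t/day from the abstract, 120 port days assumed

baseline = attained_cii(at_sea_co2 + in_port_co2, dwt, distance_nm)
with_shore_power = attained_cii(at_sea_co2, dwt, distance_nm)   # in-port emissions removed
print(f"baseline CII: {baseline:.2f} g/DWT-nm")
print(f"with shore power: {with_shore_power:.2f} g/DWT-nm "
      f"({100 * (1 - with_shore_power / baseline):.1f}% lower)")
```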

    Data-driven resiliency assessment of medical cyber-physical systems

    Advances in computing, networking, and sensing technologies have resulted in the ubiquitous deployment of medical cyber-physical systems in various clinical and personalized settings. The increasing complexity and connectivity of such systems, the tight coupling between their cyber and physical components, and the inevitable involvement of human operators in supervision and control have introduced major challenges in ensuring system reliability, safety, and security. This dissertation takes a data-driven approach to resiliency assessment of medical cyber-physical systems. Driven by large-scale studies of real safety incidents involving medical devices, we develop techniques and tools for (i) deeper understanding of incident causes and measurement of their impacts, (ii) validation of system safety mechanisms in the presence of realistic hazard scenarios, and (iii) preemptive real-time detection of safety hazards to mitigate adverse impacts on patients. We present a framework for automated analysis of structured and unstructured data from public FDA databases on medical device recalls and adverse events. This framework allows characterization of the safety issues originating from computer failures in terms of fault classes, failure modes, and recovery actions. We develop an approach for constructing ontology models that enable automated extraction of safety-related features from unstructured text. The proposed ontology model is defined based on device-specific human-in-the-loop control structures in order to facilitate the systems-theoretic causality analysis of adverse events. Our large-scale analysis of FDA data shows that medical devices are often recalled because of failure to identify all potential safety hazards, use of safety mechanisms that have not been rigorously validated, and limited capability in real-time detection and automated mitigation of hazards. To address these problems, we develop a safety hazard injection framework for experimental validation of safety mechanisms in the presence of accidental failures and malicious attacks. To reduce the test space for safety validation, this framework uses systems-theoretic accident causality models in order to identify the critical locations within the system at which to target software fault injection. For mitigation of safety hazards at run time, we present a model-based analysis framework that estimates the consequences of control commands sent from the software to the physical system through real-time computation of the system’s dynamics, and preemptively detects whether a command is unsafe before its adverse consequences manifest in the physical system. The proposed techniques are evaluated on a real-world cyber-physical system for robot-assisted minimally invasive surgery and are shown to be more effective than existing methods in identifying system vulnerabilities and deficiencies in safety mechanisms as well as in preemptive detection of safety hazards caused by malicious attacks.
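
The preemptive-detection idea can be pictured with a toy example: simulate the physical effect of a command over a short horizon and reject it if any predicted state leaves the safe region. The single-joint model, limits and horizon below are hypothetical; the dissertation applies this to a surgical robot with far richer dynamics.

```python
# Toy sketch of preemptive detection of unsafe control commands by forward
# simulation of the physical dynamics, in the spirit of the framework described
# above. Model, limits and horizon are invented placeholders.

DT = 0.01          # simulation step (s)
HORIZON = 0.5      # look-ahead window (s)
POS_LIMIT = 1.0    # safe workspace boundary (rad), assumed

def simulate(pos: float, vel_cmd: float, horizon: float = HORIZON):
    """Integrate a trivial kinematic model forward and yield predicted positions."""
    steps = int(horizon / DT)
    for _ in range(steps):
        pos += vel_cmd * DT
        yield pos

def command_is_safe(pos: float, vel_cmd: float) -> bool:
    """Reject the command if any predicted state leaves the safe region."""
    return all(abs(p) <= POS_LIMIT for p in simulate(pos, vel_cmd))

# The supervisor checks each command before it reaches the actuators.
print(command_is_safe(pos=0.90, vel_cmd=0.1))   # True: stays within the boundary
print(command_is_safe(pos=0.90, vel_cmd=0.5))   # False: would cross the limit within 0.5 s
```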

    Surgeon Training in Telerobotic Surgery via a Hardware-in-the-Loop Simulator


    Explanation of the Model Checker Verification Results

    Whenever new requirements are introduced for a system, the correctness and consistency of the system specification must be verified, which is often done manually in industrial settings. One viable option for overcoming the disadvantages of this manual analysis is to employ contract-based design, which can automate the verification process to determine whether the refinements of top-level requirements are consistent. Verification can thus be performed iteratively to ensure the system’s correctness and consistency in the face of any change in the specifications. Nevertheless, it is still challenging to deploy formal approaches in industry because of their lack of usability and the difficulty of interpreting verification results. For instance, if the model checker identifies an inconsistency during verification, it generates a counterexample while also indicating that the given input specifications are inconsistent. Here, the formidable challenge is to comprehend the generated counterexample, which is often lengthy, cryptic, and complex. Furthermore, it is the engineer’s responsibility to identify the inconsistent specification among a potentially huge set of specifications. This PhD thesis proposes a counterexample explanation approach for formal methods that simplifies and encourages their use by presenting user-friendly explanations of the verification results. The proposed approach identifies the relevant information in the verification result and explains it in the form of a natural-language statement.
The counterexample explanation approach extracts relevant information by identifying the inconsistent specifications among the set of specifications, as well as the erroneous states and variables in the counterexample. The approach is evaluated using two methods: (1) evaluation on different application examples, and (2) a user study in the form of a one-group pretest-posttest experiment.
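
A miniature version of the explanation step might look as follows. The trace format, the specifications and the wording of the message are invented for illustration; the thesis operates on actual model-checker counterexamples and contract-based specifications.

```python
# Simplified sketch of the counterexample-explanation idea: locate which
# specification a counterexample trace violates, at which step, and which
# variables are involved, then phrase it as a short message.

# A counterexample trace as produced (conceptually) by a model checker:
trace = [
    {"request": False, "grant": False},
    {"request": True,  "grant": False},
    {"request": True,  "grant": False},   # grant should have followed the request
]

# Specifications as named predicates over consecutive states (prev, curr).
specs = {
    "S1: a request is granted in the next step":
        lambda prev, curr: (not prev["request"]) or curr["grant"],
    "S2: grant is never given without a request":
        lambda prev, curr: (not curr["grant"]) or prev["request"],
}

def explain(trace, specs):
    for name, holds in specs.items():
        for step, (prev, curr) in enumerate(zip(trace, trace[1:]), start=1):
            if not holds(prev, curr):
                changed = [v for v in curr if curr[v] != prev[v]] or list(curr)
                return (f"Specification '{name}' is violated at step {step}; "
                        f"look at variable(s): {', '.join(changed)}.")
    return "No violated specification found in the counterexample."

print(explain(trace, specs))
```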