177 research outputs found

    PRISE: An Integrated Platform for Research and Teaching of Critical Embedded Systems

    In this paper, we present PRISE, an integrated workbench for research and teaching of critical embedded systems at ISAE, the French Institute for Space and Aeronautics Engineering. PRISE is built around state-of-the-art technologies for the engineering of space and avionics systems. It aims at demonstrating key aspects of the critical, real-time embedded systems used in the transport industry, and at validating new scientific contributions for the engineering of software functions. PRISE combines embedded platforms, simulation platforms, and modeling tools, and is available for both research and teaching. Being built around widely used commercial and open-source software, PRISE aims to become a reference platform for our teaching and research activities at ISAE.

    Mapping AADL to Petri Net Tool-Sets Using PNML Framework

    The Architecture Analysis and Design Language (AADL) has been utilized to specify and verify non-functional properties of the Real-Time Embedded Systems (RTES) used in critical application systems. Examples of such critical application systems include medical devices, nuclear power plants, and aerospace and financial systems. Using AADL, an engineer is able to analyze the quality of a system. For example, a developer can perform performance analyses such as end-to-end flow analysis to guarantee that system components have the resources required to meet the timing requirements relevant to their communications. The critical issue in developing and deploying safety-critical systems is how to validate the expected level of quality (e.g., safety, performance, security) and of functionality (capabilities) at the design level. Currently, the core AADL is extensively applied to analyze and verify the quality of the RTES embedded in safety-critical applications. However, the notation lacks the formal semantics needed to reason about the logical properties (e.g., deadlock, livelock) and capabilities of safety-critical systems. The objective of this research is to augment AADL with existing formal semantics and supporting tools in a manner that allows these properties to be verified automatically. Toward this goal, we exploit the Petri Net Markup Language (PNML), a standard that acts as an intermediate language between different classes of Petri nets. Using PNML, we interface AADL with different classes of Petri nets, which support different types of tools and reasoning. The justification for using PNML is that the framework provides a context in which interoperability and exchangeability among different models of a system, specified by different types of Petri nets, are possible. The contributions of our work include a set of mappings and mapping rules between AADL and PNML. To show the feasibility of our approach, a fragment of a real-time embedded system, namely a Cruise Control System, has been used.
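
    The concrete mapping rules are given in the paper rather than in this abstract; as a rough, hypothetical illustration of the general idea, the Python sketch below maps a single invented AADL thread to a place/transition pair and serializes it with PNML vocabulary (pnml, net, page, place, transition, arc). The net type URI is intended to follow the PNML 2009 place/transition grammar; the mapping itself is illustrative only and is not the paper's rule set.

        # Illustrative sketch: map one hypothetical AADL thread to a minimal
        # place/transition net serialized as PNML. Not the paper's mapping rules.
        import xml.etree.ElementTree as ET

        PTNET = "http://www.pnml.org/version-2009/grammar/ptnet"

        def thread_to_pnml(thread_name: str) -> str:
            pnml = ET.Element("pnml")
            net = ET.SubElement(pnml, "net", id=f"net_{thread_name}", type=PTNET)
            page = ET.SubElement(net, "page", id="page0")
            # One place models the thread being ready, one transition its dispatch.
            ET.SubElement(page, "place", id=f"{thread_name}_ready")
            ET.SubElement(page, "transition", id=f"{thread_name}_dispatch")
            ET.SubElement(page, "arc", id=f"arc_{thread_name}",
                          source=f"{thread_name}_ready",
                          target=f"{thread_name}_dispatch")
            return ET.tostring(pnml, encoding="unicode")

        print(thread_to_pnml("cruise_control"))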

    Traceability of Requirements and Software Architecture for Change Management

    Software systems are becoming more and more complex. The requirements of software systems change continuously and new requirements emerge frequently. New and/or modified requirements are integrated with the existing ones, and adaptations to the architecture and source code of the system are made. The process of integrating new or modified requirements and adapting the software system is called change management. The size and complexity of software systems make change management costly and time consuming. To reduce the cost of changes, it is important to apply change management as early as possible in the software development cycle. Requirements traceability is considered crucial in change management for establishing and maintaining consistency between software development artifacts. It is the ability to link requirements back to stakeholders’ rationales and forward to corresponding design artifacts, code, and test cases. When changes to the requirements of the software system are proposed, the impact of these changes on other requirements, design elements and source code should be traced in order to determine the parts of the software system to be changed. Determining the impact of changes on parts of the development artifacts is called change impact analysis. Change impact analysis is applicable to many development artifacts, such as requirements documents, detailed design, source code and test cases. Our focus is change impact analysis in requirements and software architecture.

    The need for change impact analysis is observed in both requirements and software architecture. When a change is introduced to a requirement, the requirements engineer needs to find out whether any other requirement related to the changed requirement is impacted. After determining the impacted requirements, the software architect needs to identify the impacted architectural elements by tracing the changed requirements to the software architecture. It is hard, expensive and error prone to manually trace impacted requirements and architectural elements from the changed requirements. There are tools and approaches that automate change impact analysis, such as IBM Rational RequisitePro and DOORS. In most of these tools, traces are just simple relations and their semantics is not considered. Due to this lack of semantics, all requirements and architectural elements directly or indirectly traced from the changed requirement are treated as candidate impacts. The requirements engineer has to inspect all these candidate impacted requirements and architectural elements to identify the actual changes, if there are any.

    In this thesis we address the following problems which arise in performing change impact analysis for requirements and software architecture.

    Explosion of impacts in requirements after a change in requirements. In practice, requirements documents are often textual artifacts with implicit structure. Most of the relations among requirements are not given explicitly, and most tools and approaches lack a precise definition of the relations among requirements. Due to this lack of semantics of requirements relations, change impact analysis may produce a high number of false positive and false negative impacted requirements. A requirements engineer may have to analyze all requirements in the requirements document for a single change. This may result in neglecting the actual impact of a change.

    Manual, expensive and error prone trace establishment. Considerable research has been devoted to relating requirements and design artifacts with source code. Less attention has been paid to relating Requirements (R) with Architecture (A) by using well-defined semantics of traces. Designing an architecture based on requirements is a problem-solving process that relies on human experience and creativity, and is mainly manual. The software architect may need to manually assign traces between R&A. Manual trace assignment is time-consuming, expensive and error prone, and the assigned traces might be incomplete and invalid.

    Explosion of impacts in software architecture after a change in requirements. Due to the lack of semantics of traces between R&A, change impact analysis may produce a high number of false positive and false negative impacted architectural elements. A software architect may have to analyze all architectural elements in the architecture for a single requirements change.

    In this thesis we propose an approach that reduces the explosion of impacts in R&A. The approach employs semantic information of traces and is supported by tools. We consider that every relation between software development artifacts, or between elements in these artifacts, can play the role of a trace for a certain traceability purpose such as change impact analysis. We choose Model Driven Engineering (MDE) as the solution platform for our approach. MDE provides a uniform treatment of software artifacts (e.g. requirements documents, software design and test documents) as models. It also enables using different formalisms to reason about development artifacts described as models. To give an explicit structure to requirements documents and to treat requirements, architecture and traces in a uniform way, we use metamodels and models with formally defined semantics. The thesis provides the following contributions.

    A modeling language for the definition of requirements models with formal semantics. The language is defined according to MDE principles by defining a metamodel. It is based on a survey of the most commonly found requirements types and relation types. With this language, the requirements engineer can explicitly specify the requirements and the relations among them. The semantics of these entities is given in First Order Logic (FOL) and allows two activities. First, new relations among requirements can be inferred from the initial set of relations. Second, requirements models can be automatically checked for consistency of the relations. The Tool for Requirements Inferencing and Consistency Checking (TRIC) was developed to support both activities. The defined semantics is used in a technique for change impact analysis in requirements models.

    A change impact analysis technique for requirements using the semantics of requirements relations and requirements change types. The technique aims at solving the problem of the explosion of impacts in requirements when the semantics of requirements relations is missing. The technique uses the formal semantics of requirements relations and requirements change types. A classification of requirements changes based on the structure of a textual requirement is given and formalized, and the semantics of the requirements change types is based on FOL. We support three activities for impact analysis. First, the requirements engineer proposes changes according to the change classification before implementing the actual changes. Second, the requirements engineer identifies the propagation of the changes to related requirements. The change alternatives in the propagation are determined based on the semantics of change types and requirements relations. Third, possible contradicting changes are identified. We extend TRIC with support for these activities. The tool automatically determines the change propagation paths, checks the consistency of the changes, and suggests alternatives for implementing the change.

    A technique that provides trace establishment between R&A by using architecture verification and the semantics of traces. It is hard, expensive and error prone to manually establish traces between R&A. We present an approach that provides trace establishment by using architecture verification together with the semantics of requirements relations and traces. We use a trace metamodel with commonly used trace types, and the semantics of traces is formalized in FOL. Software architectures are expressed in the Architecture Analysis and Design Language (AADL), which is provided with a formal semantics expressed in Maude. The Maude tool set allows simulation and verification of architectures. The first way to establish traces is to use architecture verification techniques. A given requirement is reformulated as a property in terms of the architecture. The architecture is executed and a state space is produced; this execution simulates the behavior of the system at the architectural level. The property derived from the requirement is checked by the Maude model checker, and traces are generated between the requirement and the architectural components used in the verification of the property. The second way to establish traces is to use the requirements relations together with the semantics of traces. Requirements relations are reflected in the connections among the traced architectural elements based on the semantics of traces; therefore, new traces are inferred from existing traces by using requirements relations. We use the semantics of requirements relations and traces both to generate/validate traces and to generate/validate requirements relations. There is tool support for our approach. The tool provides the following: (1) generation/validation of traces by using requirements relations and/or verification of the architecture, and (2) generation/validation of requirements relations by using traces.

    A change impact analysis technique for software architecture using architecture verification and the semantics of traces between R&A. The software architect needs to identify the impacted architectural elements after a requirements change. We present a change impact analysis technique for software architecture using architecture verification and the semantics of traces. The technique is semi-automatic and requires the participation of the software architect. Our technique has two parts. The first part is to identify the architectural elements that implement the system properties to which the proposed requirements changes are introduced. By having the formal semantics of requirements relations and traces, we identify which parts of the software architecture are impacted by a proposed change in requirements. We have extended TRIC for determining candidate impacted architectural elements. The second part of our technique is to propose possible changes for the software architecture when the software architecture does not satisfy the new and/or changed requirements. The technique is based on architecture verification: the output of verification is a counterexample if the requirements are not satisfied. The counterexample is used with a classification of architectural changes in order to propose changes in the software architecture. These changes produce a new version of the architecture that possibly satisfies the new or changed requirements.
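
    The FOL semantics of the relation types is defined in the thesis itself, not in this abstract; the Python sketch below is only a generic, hypothetical illustration of change-impact propagation over typed requirements relations, not TRIC's actual rules.

        # Hypothetical illustration of change-impact propagation over typed
        # requirements relations; the relation types and the propagation rule
        # are invented for the example and are not TRIC's FOL semantics.
        from collections import defaultdict

        # (source, relation, target): R2 refines R1, R3 requires R2, R4 conflicts with R2.
        RELATIONS = [
            ("R2", "refines", "R1"),
            ("R3", "requires", "R2"),
            ("R4", "conflicts", "R2"),
        ]

        # Assumed rule: changes propagate against 'refines' and 'requires' edges;
        # 'conflicts' would only trigger a consistency check.
        PROPAGATING = {"refines", "requires"}

        def candidate_impacts(changed: str) -> set[str]:
            incoming = defaultdict(set)
            for src, rel, tgt in RELATIONS:
                if rel in PROPAGATING:
                    incoming[tgt].add(src)
            impacted, frontier = set(), {changed}
            while frontier:
                req = frontier.pop()
                new = incoming[req] - impacted
                impacted |= new
                frontier |= new
            return impacted

        print(candidate_impacts("R1"))  # {'R2', 'R3'}: candidates to inspect, not confirmed impacts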

    Developing a distributed electronic health-record store for India

    The DIGHT project addresses the problem of building a scalable and highly available information store for the Electronic Health Records (EHRs) of the over one billion citizens of India.

    Debugging Techniques for Locating Defects in Software Architectures

    The explicit design of the architecture for a software product is a well-established part of development projects. As software architecture descriptions become larger and more complex, defects are more likely to be present in them. Studies have shown that a defect in the software architecture that has propagated to the development phase is very expensive to fix. To prevent such propagation of defects, this research proposes to provide debugging support for software architecture design. Debugging is commonly used in programming languages to effectively find the cause of a failure and locate the error in order to provide a fix. The same should be possible for software architectures, to debug architecture failures. Without debugging support, the software architect is unable to quickly locate and determine the source of an error. In our work, we define a process for debugging software architectures and provide analysis techniques to locate defects in a software architecture that fails to meet functional and non-functional requirements. We have implemented the techniques and provide an evaluation based on examples using an industry-standard architecture definition language, the Architecture Analysis and Design Language (AADL).
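
    The analysis techniques themselves are only named in this abstract; purely as a generic illustration of failure localization over a set of architectural elements (not the approach of this work), a bisection-style sketch under the assumption of a single defective element might look as follows.

        # Generic illustration: locate a single defective element by bisection,
        # assuming a fails(subset) check that reports whether a candidate
        # sub-architecture still exhibits the failure. Hypothetical names.
        from typing import Callable, Sequence

        def locate_defect(elements: Sequence[str],
                          fails: Callable[[Sequence[str]], bool]) -> str:
            candidates = list(elements)
            while len(candidates) > 1:
                half = candidates[: len(candidates) // 2]
                candidates = half if fails(half) else candidates[len(half):]
            return candidates[0]

        # Toy usage: the defect is assumed to be in 'flight_manager'.
        components = ["sensor", "flight_manager", "actuator", "display"]
        print(locate_defect(components, lambda subset: "flight_manager" in subset))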

    Clafer: Lightweight Modeling of Structure, Behaviour, and Variability

    Embedded software is growing fast in size and complexity, leading to an intimate mixture of complex architectures and complex control. Consequently, software specification requires modeling both the structure and the behaviour of systems. Unfortunately, existing languages do not integrate these aspects well, usually prioritizing one of them; it is common to develop a separate language for each of these facets. In this paper, we contribute Clafer: a small language that attempts to tackle this challenge. It combines rich structural modeling with state-of-the-art behavioural formalisms. We are not aware of any other modeling language that seamlessly combines these facets common to system and software modeling. We show how Clafer, in a single unified syntax and semantics, allows capturing feature models (variability), component models, discrete control models (automata), and variability encompassing all these aspects. The language is built on top of first-order logic with quantifiers over basic entities (for modeling structure) combined with linear temporal logic (for modeling behaviour). On top of this semantic foundation we build a simple but expressive syntax, enriched with carefully selected syntactic expansions that cover hierarchical modeling, associations, automata, scenarios, and Dwyer's property patterns. We evaluate Clafer using a power window case study and by comparing it against other notations that substantially overlap with its scope (SysML, AADL, Temporal OCL and Live Sequence Charts), discussing the benefits and perils of using a single notation for the purpose.
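
    The abstract names the semantic foundation (first-order logic with quantifiers over basic entities, combined with linear temporal logic) but gives no formula; the following is a hypothetical example, written in plain logic notation rather than Clafer syntax, of the kind of mixed structural and temporal constraint this foundation supports.

        % Illustrative only (invented predicates): a structural constraint
        % (each car has at most one power-window controller) paired with a
        % temporal property (a window commanded up eventually closes unless
        % an obstacle is detected).
        \forall c \in \mathit{Car}.\; \big|\{\, w \mid \mathit{controller}(w, c) \,\}\big| \le 1
        \qquad
        \mathbf{G}\,\big(\mathit{cmdUp} \rightarrow \mathbf{F}\,(\mathit{closed} \lor \mathit{obstacle})\big)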

    A Decision Support System for Selecting Between Designs for Dynamic Software Product Lines

    When commissioning a system, a myriad of potential designs can successfully fulfill the system's goals. Deciding among the candidate designs requires an understanding of how each design affects the system's quality attributes and how much effort is needed to realize it. The difficulty of the process compounds if the system to be designed includes dynamic run-time self-adaptivity, the ability of the system to modify its own architecture at run-time in response to either external or internal stimuli, as the type and location of the dynamic self-adaptivity within the architecture must be co-decided. In this proposal, we introduce a Decision Support System containing a new Dynamic Software Product Line-centric cost/effort estimation technique, the Structured Intuitive Model for Dynamic Adaptive System Economics (SIMDASE), that allows system designers/architects to select the most appropriate design for systems whose candidates can be structured as a Dynamic Software Product Line. We focus on using the Decision Support System to select designs for a system where at least one component is a low-level embedded system for use within the Internet of Things (IoT), particularly embedded systems whose purpose is to exist as things (either intelligent sensors or actuators). The Decision Support System we introduce is a multi-step process that begins with a high-level system architecture generated from the system requirements and goals. Candidate designs that can fulfill all goals/requirements of the high-level architecture are selected. Each design is then annotated using SIMDASE so that the effort, risk, cost and return on investment expected from realizing each design can be compared in order to select the best design for a given organization.
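
    SIMDASE itself is not described in this abstract; purely as an illustration of the kind of comparison such a Decision Support System performs, the sketch below scores candidate designs on hypothetical effort, risk, cost and return-on-investment annotations with made-up weights.

        # Hypothetical illustration of comparing annotated candidate designs.
        # The attributes, weights and scoring rule are invented and are not SIMDASE.
        CANDIDATES = {
            "design_A": {"effort": 120, "risk": 0.3, "cost": 50_000, "roi": 1.8},
            "design_B": {"effort": 90,  "risk": 0.5, "cost": 42_000, "roi": 1.5},
        }

        WEIGHTS = {"effort": -0.2, "risk": -100.0, "cost": -0.0005, "roi": 40.0}

        def score(annotations: dict) -> float:
            # Higher is better: ROI rewards, effort/risk/cost penalize.
            return sum(WEIGHTS[key] * value for key, value in annotations.items())

        best = max(CANDIDATES, key=lambda name: score(CANDIDATES[name]))
        print(best, {name: round(score(a), 1) for name, a in CANDIDATES.items()})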

    Model-based compositional verification approaches and tools development for cyber-physical systems

    Model-based design for embedded real-time systems utilizes verifiable, reusable components and proper architectures to deal with the verification scalability problem caused by state explosion. In this thesis, we address verification approaches for both low-level individual component correctness and high-level system correctness, which are equally important under this scheme. Three prototype tools were developed, implementing our approaches and algorithms accordingly. For component-level design-time verification, we developed a symbolic verifier, LhaVrf, for the reachability verification of concurrent linear hybrid automata (LHA). It is unique in translating a hybrid automaton into a transition system that preserves the discrete transition structure, possesses no continuous dynamics, and preserves the reachability of discrete states. Afterward, model checking is interleaved in a counterexample-fragment-based specification relaxation framework. We next present a simulation-based, bounded-horizon reachability analysis approach for the reachability verification of systems modeled by hybrid automata (HA) on a run-time basis. This framework applies a dynamic, on-the-fly, repartition-based error propagation control method with the mild requirement of Lipschitz continuity on the continuous dynamics. Its novel features allow state-triggered discrete jumps and provide an eventually constant over-approximation error bound for incrementally stable dynamics. These approaches are implemented in our prototype verifier called HS3V. Once the component properties are established, the next step is to establish the system-level properties through compositional verification. We present our work on the role and integration of quantifier elimination (QE) for property composition and verification. In our approach, we derive, in a single step, the strongest system property from the given component properties for both time-independent and time-dependent scenarios. The system initial condition can also be composed and, alongside the strongest system property, is used to verify a postulated system property through induction. These approaches are implemented in our prototype tool called ReLIC.
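
    A schematic rendering of the quantifier-elimination idea described above, for two components that share only an internal signal u (this is the standard shape of such a composition, not the tool's exact formulation), is:

        % Components C1, C2 with properties P1(x, u) and P2(u, y), where u is
        % internal. The strongest system-level property over the external
        % variables x, y is obtained by eliminating u:
        P_{\mathrm{sys}}(x, y) \;\equiv\; \exists u.\; P_1(x, u) \wedge P_2(u, y)
        % QE turns the right-hand side into a quantifier-free formula, which is
        % then checked against a postulated system property by induction:
        P_{\mathrm{sys}}(x, y) \;\Rightarrow\; P_{\mathrm{postulated}}(x, y)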