
    ADGS-2100 Adaptive Display and Guidance System Window Manager Analysis

    Recent advances in modeling languages have made it feasible to formally specify and analyze the behavior of large system components. Synchronous data flow languages such as Lustre, SCR, and RSML-e are particularly well suited to this task, and commercial versions of these tools, such as SCADE and Simulink, are growing in popularity among designers of safety-critical systems, largely due to their ability to automatically generate code from the models. At the same time, advances in formal analysis tools have made it practical to formally verify important properties of these models to ensure that design defects are identified and corrected early in the lifecycle. This report describes how these tools have been applied to the ADGS-2100 Adaptive Display and Guidance System Window Manager being developed by Rockwell Collins Inc. This work demonstrates how formal methods can be easily and cost-effectively used to remove defects early in the design cycle.
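
    To make the synchronous data flow idea concrete, here is a minimal Python sketch, not the report's actual models (which were written in languages such as Lustre and checked with commercial tools): a toy two-pane window manager is stepped as a clocked function of its inputs, and a bounded exhaustive search plays the role of the model checker, verifying a simple "no duplicated display" invariant. The arbitration rule and all names are hypothetical.

```python
# Illustrative sketch only; the arbitration rule below is invented.
from itertools import product

def step(state, request):
    """One synchronous step of a toy two-pane window manager.

    state   : application currently displayed on each pane (2-tuple)
    request : (pane, app) display request arriving this clock tick
    """
    pane, app = request
    panes = list(state)
    panes[pane] = app
    # Toy rule: never show the same app on both panes; the other pane
    # falls back to a default page if it would duplicate the request.
    other = 1 - pane
    if panes[other] == app:
        panes[other] = "default"
    return tuple(panes)

def check_invariant(depth=4):
    """Bounded exploration of all input sequences up to `depth` steps,
    checking that no app (other than the default page) is duplicated."""
    apps = ["nav", "terrain", "default"]
    requests = list(product([0, 1], apps))
    frontier = {("default", "default")}
    for _ in range(depth):
        nxt = set()
        for state in frontier:
            for req in requests:
                s2 = step(state, req)
                assert s2[0] != s2[1] or s2[0] == "default", f"violated: {s2}"
                nxt.add(s2)
        frontier = nxt
    print("invariant holds up to depth", depth)

check_invariant()
```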

    Integration of safety risk assessment techniques into requirement elicitation

    Incomplete and incorrect requirements may cause safety-related software systems to fail to achieve their safety goals. It is crucial to ensure software safety by identifying proper software safety requirements during the requirements elicitation activity. Practitioners apply various Safety Risk Assessment Techniques (SRATs) to identify, analyze, and assess safety risk. Nevertheless, there is a lack of guidance on how appropriate SRATs and safety processes can be integrated into the requirements elicitation activity to bridge the gap between safety and requirements engineering practices. In this research, we propose an Integration Framework that integrates safety activities and techniques into the existing requirements elicitation activity.

    A Methodology for the Design and Verification of Globally Asynchronous/Locally Synchronous Architectures

    Recent advances in model checking have made it practical to formally verify the correctness of many complex synchronous systems (i.e., systems driven by a single clock). However, many computer systems are implemented by asynchronously composing several synchronous components, where each component has its own clock and these clocks are not synchronized. Formal verification of such Globally Asynchronous/Locally Synchronous (GA/LS) architectures is a much more difficult task. In this report, we describe a methodology for developing and reasoning about such systems. This approach allows a developer to start from an ideal system specification and refine it along two axes. Along one axis, the system can be refined one component at a time towards an implementation. Along the other axis, the behavior of the system can be relaxed to produce a more cost-effective but still acceptable solution. We illustrate this process by applying it to the synchronization logic of a Dual Flight Guidance System, evolving the system from an ideal case in which the components do not fail and communicate synchronously to one in which the components can fail and communicate asynchronously. For each step, we show how the system requirements have to change if the system is to be implemented, and prove through model checking that each implementation meets the revised system requirements.
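
    The flavor of the relaxation step can be sketched in a few lines of Python (an illustrative assumption, not the report's formal models): two flight guidance sides run on unsynchronized clocks and exchange status through one-place mailboxes, and the ideal requirement "exactly one side is active" is relaxed to "never both active", which randomly sampled interleavings then probe.

```python
# Hypothetical sketch: asynchronous composition via random interleaving.
import random

def run(steps=10_000, seed=0):
    rng = random.Random(seed)
    active = {"left": True, "right": False}
    mailbox = {"left": None, "right": None}  # last status heard from peer
    failed_left = False

    for _ in range(steps):
        side = rng.choice(["left", "right"])      # unsynchronized clocks
        peer = "right" if side == "left" else "left"
        if side == "left":
            if not failed_left and rng.random() < 0.001:
                failed_left = True                # injected failure
                active["left"] = False
        else:
            # The right side activates only after hearing the left is down.
            if mailbox["right"] == "inactive":
                active["right"] = True
        # Announce own status; an unread message is simply overwritten.
        mailbox[peer] = "active" if active[side] else "inactive"
        assert not (active["left"] and active["right"]), "both sides active"

    print("relaxed requirement held on all sampled interleavings")

run()
```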

    Improving knowledge about the risks of inappropriate uses of geospatial data by introducing a collaborative approach in the design of geospatial databases

    The increased availability of geospatial information is, nowadays, a reality that many organizations, and even the general public, are trying to turn into financial benefit; the reusability of datasets is now a viable option that may help organizations achieve cost savings. The quality of these datasets may vary and be debatable depending on the usage context, and the issue of geospatial data misuse becomes all the more important given the disparity between the expertise levels of geospatial data end-users. Managing the risks of geospatial data misuse has been the subject of several studies over the past fifteen years. In this context, several approaches have been proposed to address these risks: some are preventive, while others are palliative and manage the risk after its consequences have occurred; however, these approaches are often based on ad hoc, non-systemic initiatives. Thus, during the design process of a geospatial database, risk analysis is not always carried out in accordance with the principles of requirements engineering or with the guidance and recommendations of ISO standards. In this thesis, we hypothesize that it is possible to define a new preventive approach for identifying and analyzing the risks associated with inappropriate uses of geospatial data. We believe that the expertise and knowledge held by experts (i.e., geospatial IT experts), as well as by professional users of geospatial data acting in their institutional roles (i.e., application-domain experts), are key elements in assessing the risks of geospatial data misuse; hence the importance of enriching that knowledge. We therefore review the design process for geospatial databases and propose a collaborative, user-centric approach to requirements analysis in which expert and professional users are involved in a collaborative process that supports the a priori identification of inappropriate use cases. Then, drawing on research in risk analysis, we propose a systematic integration of risk analysis, using the Delphi technique, into the design process of geospatial databases. Finally, still within a collaborative approach, an ontological risk repository is proposed to enrich knowledge about the risks of data misuse and to disseminate this knowledge to designers, developers, and end-users. The approach is implemented on a web platform to put the concepts into practice and demonstrate its feasibility.
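
    As a rough illustration of the consensus arithmetic behind a Delphi round (the 1-9 rating scale and the stopping threshold below are assumptions for illustration, not prescriptions from the thesis):

```python
# Hypothetical sketch of a Delphi-style consensus check on expert ratings.
from statistics import quantiles

def delphi_round(ratings):
    """Summarize one round of expert risk ratings on a 1-9 scale."""
    q1, q2, q3 = quantiles(ratings, n=4)
    return {"median": q2, "iqr": q3 - q1}

def has_consensus(ratings, max_iqr=1.0):
    """A common stopping rule: consensus once the interquartile range
    of the panel's ratings is small enough."""
    return delphi_round(ratings)["iqr"] <= max_iqr

round1 = [3, 7, 8, 5, 9, 4, 7]   # wide disagreement among the panel
round2 = [6, 7, 7, 6, 7, 6, 7]   # after feedback, ratings converge
print(has_consensus(round1))      # False: run another round
print(has_consensus(round2))      # True: record the median as the risk level
```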

    Testing Strategies for Model-Based Development

    This report presents an approach for testing artifacts generated in a model-based development process. The approach divides the traditional testing process into two parts: requirements-based testing (validation testing), which determines whether the model implements the high-level requirements, and model-based testing (conformance testing), which determines whether the code generated from a model is behaviorally equivalent to the model. The goals of the two processes differ significantly, and this report explores suitable testing metrics and automation strategies for each. To support requirements-based testing, we define novel objective requirements coverage metrics similar to existing specification and code coverage metrics. For model-based testing, we briefly describe automation strategies and examine the fault-finding capability of different structural coverage metrics using tests automatically generated from the model.
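
    A minimal sketch of the conformance-testing half of the approach, with assumed interfaces (the report targets real modeling languages and auto-generated code, not Python): the same input vectors are fed to the model and to the "generated code", and the first behavioral divergence is reported.

```python
# Hypothetical harness; model_step/code_step stand in for real artifacts.
def conformance_test(model_step, code_step, test_vectors):
    """Feed identical input sequences to the model and to the generated
    code, and flag the first step at which their outputs diverge."""
    for n, inputs in enumerate(test_vectors):
        m_state, c_state = None, None
        for t, inp in enumerate(inputs):
            m_state, m_out = model_step(m_state, inp)
            c_state, c_out = code_step(c_state, inp)
            if m_out != c_out:
                return f"test {n} diverges at step {t}: {m_out} != {c_out}"
    return "model and code agree on all test vectors"

# Toy stand-ins: a saturating counter modeled twice, once with an
# intentionally seeded bug in the 'generated code' version.
def model_step(state, inp):
    s = 0 if state is None else state
    s = min(s + inp, 10)
    return s, s

def code_step(state, inp):
    s = 0 if state is None else state
    s = min(s + inp, 9)          # seeded bug: wrong saturation bound
    return s, s

print(conformance_test(model_step, code_step, [[1, 1], [5, 5, 5]]))
```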

    Formal mission specification and execution mechanisms for unmanned aircraft systems

    Unmanned Aircraft Systems (UAS) are rapidly gaining attention due to the increasing potential of their applications in the civil domain. UAS can provide great value in environmental applications, during emergency situations, as monitoring and surveillance tools, and as communication relays, among other uses. In general, they are especially well suited for the so-called D-cube operations (Dirty, Dull or Dangerous).

    Most current commercial solutions, if not remotely piloted, rely on waypoint-based flight control systems for their navigation and are unable to coordinate UAS flight with payload operation. Therefore, automation capabilities and the ability of the system to operate autonomously are very limited. Motivators that make autonomy an important requirement include limited bandwidth, limits on the long-term attention spans of human operators, and faster access to sensed data (which also results in better reaction times), as well as the benefits of reducing operator workload and training requirements. Other requirements we believe are key to the success of UAS in the civil domain are reconfigurability and cost-effectiveness. An affordable platform should thus be able to operate in different application scenarios with reduced human intervention.

    To increase the capabilities of UAS and satisfy these requirements, we propose adding flight plan and mission management layers on top of a commercial off-the-shelf flight control system. By doing so, a high level of autonomy can be achieved while taking advantage of available technologies and avoiding huge investments. Reconfiguration is made possible by separating flight and mission execution from their specification. The flight and mission management components presented in this thesis integrate into a wider hardware/software architecture being developed by the ICARUS research group. This architecture follows a service-oriented approach in which UAS subsystems are connected through a common networking infrastructure; components can be added to and removed from the network in order to adapt the system to the target mission.

    The first contribution of this thesis is a flight specification language that enables the description of the flight plan in terms of legs. Legs provide a higher level of abstraction than plain waypoints, since they specify not only a destination but also the trajectory to be followed to reach it. The leg concept is extended with additional constructs that enable the specification of alternative routes, repetition, and the generation of complex trajectories from a reduced number of parameters. A Flight Plan Manager (FPM) service has been developed that is responsible for the execution of the flight plan. Since the underlying flight control system is still waypoint based, additional intermediate waypoints are automatically generated to adjust the flight to the desired trajectory. To coordinate UAS flight and payload operation, a Mission Manager (MMa) service has also been developed. The MMa is able to adapt payload operation to the current flight phase, but it can also act on the FPM and modify the flight plan to better adapt it to the mission needs. To specify UAS behavior, instead of designing a new language, we propose using an in-development standard for the specification of state machines called State Chart XML (SCXML).

    Finally, the proposed specification and execution elements are validated with two example missions executed in a simulation environment. The first mission mimics the procedures required for inspecting navigation aids and shows the UAS performance in a complex flight scenario; only the FPM is involved. The second example combines operation of the FPM with the MMa: the mission consists in detecting hotspots in a given area after a hypothetical wildfire. This second simulation shows how the MMa is able to modify the flight plan in order to adapt the trajectory to the mission needs; in particular, a figure-eight pattern is flown over each of the dynamically detected potential hotspots.
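
    How a leg might be lowered to plain waypoints can be sketched as follows; the leg types, the lemniscate parametrization of the figure-eight, and the sampling density are illustrative assumptions, not the FPM's actual algorithm.

```python
# Hypothetical leg-to-waypoint expansion for a waypoint-based autopilot.
import math

def expand_leg(leg):
    """Turn one leg description into a list of (x, y) waypoints."""
    if leg["type"] == "straight":
        return [leg["from"], leg["to"]]
    if leg["type"] == "figure_eight":
        # Sample a lemniscate around the leg's centre: the pattern flown
        # over each detected hotspot in the second example mission.
        cx, cy = leg["centre"]
        r, n = leg["radius"], 16
        pts = []
        for k in range(n + 1):
            t = 2 * math.pi * k / n
            d = 1 + math.sin(t) ** 2
            pts.append((cx + r * math.cos(t) / d,
                        cy + r * math.sin(t) * math.cos(t) / d))
        return pts
    raise ValueError(f"unknown leg type: {leg['type']}")

plan = [
    {"type": "straight", "from": (0.0, 0.0), "to": (100.0, 0.0)},
    {"type": "figure_eight", "centre": (150.0, 0.0), "radius": 30.0},
]
waypoints = [wp for leg in plan for wp in expand_leg(leg)]
print(f"{len(waypoints)} waypoints generated")
```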

    IEEE/NASA Workshop on Leveraging Applications of Formal Methods, Verification, and Validation

    This volume contains the Preliminary Proceedings of the 2005 IEEE ISoLA Workshop on Leveraging Applications of Formal Methods, Verification, and Validation, with a special track on the theme of Formal Methods in Human and Robotic Space Exploration. The workshop was held on 23-24 September 2005 at the Loyola College Graduate Center, Columbia, MD, USA. The idea behind the workshop arose from the experience and feedback of ISoLA 2004, the 1st International Symposium on Leveraging Applications of Formal Methods, held in Paphos, Cyprus, in October-November 2004. ISoLA 2004 served the need of providing a forum for developers, users, and researchers to discuss issues related to the adoption and use of rigorous tools and methods for the specification, analysis, verification, certification, construction, testing, and maintenance of systems from the point of view of their different application domains.

    Building dependability arguments for software intensive systems

    Ph.D. thesis by Robert Morrison Seater, Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009. Includes bibliographical references (p. 301-308).

    A method is introduced for structuring and guiding the development of end-to-end dependability arguments. The goal is to establish high-level requirements of complex software-intensive systems, especially properties that cross-cut the normal functional decomposition. The resulting argument documents and validates the justification of system-level claims by tracing them down to component-level substantiation, such as automatic code analysis or cryptographic proofs. The method is evaluated on case studies drawn from the Burr Proton Therapy Center, operating at Massachusetts General Hospital, and from the Prêt à Voter cryptographic voting system, developed at the University of Newcastle.
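
    The shape of such an argument can be sketched as a small tree of claims whose leaves must carry component-level evidence; the claims and evidence labels below are hypothetical, loosely echoing the proton-therapy case study.

```python
# Hypothetical sketch of a dependability-argument data structure.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Claim:
    text: str
    evidence: Optional[str] = None        # substantiation for leaf claims
    subclaims: list["Claim"] = field(default_factory=list)

def unsupported(claim):
    """Return every leaf claim that lacks substantiation, so gaps in the
    argument surface before the system-level claim is accepted."""
    if not claim.subclaims:
        return [] if claim.evidence else [claim.text]
    return [c for sub in claim.subclaims for c in unsupported(sub)]

argument = Claim("patient never receives an overdose", subclaims=[
    Claim("dose request is validated", evidence="static analysis of checker"),
    Claim("beam shuts off on fault", subclaims=[
        Claim("watchdog fires within 5 ms", evidence="timing tests"),
        Claim("shutter hardware interlock works"),   # gap: no evidence yet
    ]),
])
print(unsupported(argument))   # ['shutter hardware interlock works']
```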

    Applying patterns in embedded systems design for managing quality attributes and their trade-offs

    Embedded systems comprise one of the most important types of software-intensive systems, as they are pervasive and used in daily life more than any other type, e.g., in cars or in electrical appliances. When these systems operate under hard constraints whose violation can lead to catastrophic events, the system is classified as a critical embedded system (CES). The quality attributes related to these hard constraints are named critical quality attributes (CQAs). For example, the performance of the software for cruise control or self-driving in a car is critical, as it can potentially relate to harming human lives. Despite the growing body of knowledge on engineering CESs, there is still a lack of approaches that can support their design while managing CQAs and their trade-offs with non-critical ones (e.g., maintainability and reusability). To address this gap, the state of research and practice on designing CESs and managing quality trade-offs was explored, approaches to improve their design were identified, and the merit of these approaches was empirically investigated. When designing software, one common approach is to organize its components according to well-known structures named design patterns. However, these patterns may be avoided in some classes of systems, such as CESs, as they are sometimes associated with a detriment to CQAs. In short, the findings reported in the thesis suggest that, when applicable, design patterns can promote CQAs while supporting the management of trade-offs. The thesis also reports on a phenomenon, namely pattern grime, and on factors that can influence the extent of the observed benefits.
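
    For a concrete feel of "organizing components according to design patterns" in this setting, here is a toy State-pattern sketch of a cruise-control mode manager (hypothetical classes, not taken from the thesis); the pattern centralizes safety-relevant mode transitions at the cost of an extra indirection, which is exactly the kind of CQA trade-off the thesis studies.

```python
# Hypothetical State-pattern sketch for a critical embedded component.
class Mode:
    def on_brake(self, ctx):
        ctx.mode = Off()                  # every mode yields to the brake
    def throttle(self, speed, target):
        return 0.0                        # default: command no throttle

class Off(Mode):
    pass

class Engaged(Mode):
    def throttle(self, speed, target):
        # Simple proportional control; the gain is a placeholder.
        return max(0.0, 0.1 * (target - speed))

class CruiseControl:
    def __init__(self):
        self.mode = Engaged()
        self.target = 100.0
    def tick(self, speed):
        return self.mode.throttle(speed, self.target)
    def brake_pressed(self):
        self.mode.on_brake(self)

cc = CruiseControl()
print(cc.tick(90.0))    # 1.0 while engaged
cc.brake_pressed()      # safety-relevant transition handled in one place
print(cc.tick(90.0))    # 0.0 once off
```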

    Specification-Based Prototyping for Embedded Systems

    Specification of software for safety-critical, embedded computer systems has been widely addressed in the literature. To achieve the high level of confidence in a specification's correctness necessary in many applications, manual inspections, formal verification, and simulation must be used in concert. Researchers have successfully addressed issues in inspection and verification; however, results in the areas of execution and simulation of specifications have not made as large an impact as desired. In this paper we present an approach to specification-based prototyping which addresses this issue. It combines the advantages of rigorous formal specifications and rapid systems prototyping. The approach lets us refine a formal executable model of the system requirements to a detailed model of the software requirements. Throughout this refinement process, the specification is used as a prototype of the proposed software. Thus, we guarantee that the formal specification of the system is always consistent with the observed behavior of the prototype. The approach is supported by the Nimbus environment, a framework that allows the formal specification to execute while interacting with software models of its embedding environment or even the physical environment itself (hardware-in-the-loop simulation).
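
    The execution loop idea can be sketched as follows, with a toy executable specification and plant model standing in for the real artifacts (the actual Nimbus framework executes formal specifications; everything below is an assumption for illustration).

```python
# Hypothetical sketch: an executable requirements model driven in closed
# loop against a software model of its embedding environment.
def spec_step(state, sensors):
    """Executable requirements model: command heat when below setpoint."""
    heat_on = sensors["temp"] < state["setpoint"]
    return state, {"heat": heat_on}

def environment_step(env, actuators, dt=1.0):
    """Software model of the environment; could be swapped for real
    hardware-in-the-loop I/O without touching the specification."""
    env["temp"] += (1.5 if actuators["heat"] else -0.5) * dt
    return env

state, env = {"setpoint": 20.0}, {"temp": 15.0}
for t in range(10):
    state, acts = spec_step(state, {"temp": env["temp"]})
    env = environment_step(env, acts)
print(f"temperature after 10 steps: {env['temp']:.1f}")
```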