    A system-theoretic assurance framework for safety-driven systems engineering

    The complexity of safety-critical systems is continuously increasing. To create safe systems despite this complexity, system development requires a strong integration of system design and safety activities. Model-based approaches are a promising choice for this integration: they help to handle complexity through abstraction, automation, and reuse and are applied to design, analyze, and assure systems. In practice, however, there is often a disconnect between model-based design and safety activities, and recent approaches become available in model-based frameworks only after a delay. As a result, the advantages of the models are often not fully utilized. This article therefore proposes a framework that integrates recent approaches for system design (model-based systems engineering), safety analysis (system-theoretic process analysis), and safety assurance (goal structuring notation). The framework is implemented in the Systems Modeling Language (SysML), with a focus on the connection between the safety analysis and safety assurance activities. It is shown how the model-based integration enables tool assistance for the systematic creation, analysis, and maintenance of safety artifacts. The framework is demonstrated on the system design, safety analysis, and safety assurance of a collision avoidance system for aircraft.
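    One maintenance check such an integrated framework can automate is traceability between safety analysis and safety assurance artifacts. The following is an illustrative sketch only (the class names, identifiers, and hazards are invented for this example, not taken from the paper's implementation): hazards from the safety analysis are linked to assurance goals, and hazards that no goal argues over are flagged automatically.

```python
# Hypothetical traceability model: flag hazards that lack an assurance goal.
# All identifiers and example data are assumptions for illustration.
from dataclasses import dataclass, field

@dataclass
class Hazard:
    ident: str           # e.g. "H-1"
    description: str

@dataclass
class Goal:
    ident: str           # e.g. "G-1"
    claim: str
    addresses: list = field(default_factory=list)  # hazard idents this goal argues over

def unassured_hazards(hazards, goals):
    """Return hazards that no goal traces to -- a simple maintenance check."""
    covered = {h for g in goals for h in g.addresses}
    return [h for h in hazards if h.ident not in covered]

hazards = [Hazard("H-1", "Aircraft violates minimum separation"),
           Hazard("H-2", "Avoidance manoeuvre causes loss of control")]
goals = [Goal("G-1", "H-1 is mitigated by the resolution advisory logic", ["H-1"])]

print([h.ident for h in unassured_hazards(hazards, goals)])  # ['H-2']
```

    In a model-based setting, the same query would run over the SysML model elements rather than plain Python objects, but the underlying check is the same set difference.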

    From Operational Design Domain to Runtime Monitoring of AI-Based Aviation Systems

    To integrate autonomy and machine learning into the next generation of systems for urban air mobility and unmanned aircraft, it must be shown that these functions can be integrated and operated safely. One prerequisite is that a machine-learning function is only used when it can be expected to operate safely within the specified environmental conditions. This paper presents a model-based approach for defining the Operational Design Domain (ODD). The ODD formalises the environmental conditions and system states expected during operation. From the formal model, the ODD specification is transformed into a specification for the runtime monitoring language RTLola. This makes it possible to use the RTLola runtime monitoring framework to check log files for ODD violations and, later, to supervise the ODD during operation and in flight, with the goal of automating the supervision and making it more user-friendly. The approach is demonstrated in two separate use cases, where an ODD model is created, exported, and transformed into a monitoring specification, and it is validated against a set of log files from the use cases.
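    The role the monitor plays can be sketched in a few lines. This is a minimal illustration under assumed data, not the paper's tooling or RTLola syntax: the ODD is reduced here to numeric bounds on two invented environmental signals, and log records are checked offline for values that leave those bounds.

```python
# Hypothetical ODD as numeric bounds per signal; signal names and limits are
# assumptions for illustration, not taken from the described use cases.
ODD = {
    "wind_speed_mps": (0.0, 12.0),
    "visibility_m":   (1000.0, float("inf")),
}

def odd_violations(log):
    """Return (record index, signal) pairs where a logged value leaves the ODD."""
    violations = []
    for i, record in enumerate(log):
        for signal, (lo, hi) in ODD.items():
            value = record.get(signal)
            if value is not None and not (lo <= value <= hi):
                violations.append((i, signal))
    return violations

log = [{"wind_speed_mps": 5.0,  "visibility_m": 5000.0},
       {"wind_speed_mps": 14.2, "visibility_m": 5000.0}]  # wind exceeds bound

print(odd_violations(log))  # [(1, 'wind_speed_mps')]
```

    An actual RTLola specification would express such bounds as stream properties with trigger conditions, which is what the described transformation from the ODD model generates.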