
    Towards Identifying and closing Gaps in Assurance of autonomous Road vehicleS - a collection of Technical Notes Part 1

    This report provides an introduction and overview of the Technical Topic Notes (TTNs) produced in the Towards Identifying and closing Gaps in Assurance of autonomous Road vehicleS (TIGARS) project. These notes aim to support the development and evaluation of autonomous vehicles. Part 1 addresses: Assurance Overview and Issues; Resilience and Safety Requirements; Open Systems Perspective; and Formal Verification and Static Analysis of ML Systems. Part 2 addresses: Simulation and Dynamic Testing; Defence in Depth and Diversity; Security-Informed Safety Analysis; and Standards and Guidelines.

    Cyber LOPA: An Integrated Approach for the Design of Dependable and Secure Cyber Physical Systems

    Safety risk assessment is an essential process to ensure a dependable Cyber-Physical System (CPS) design. Traditional risk assessment considers only physical failures. For modern CPS, failures caused by cyber attacks are on the rise. The latest research efforts focus on safety-security lifecycle integration and on expanding risk-assessment modeling formalisms to incorporate security failures. The interaction between safety and security and its impact on the overall system design, as well as the reliability loss resulting from ignoring security failures, are among the overlooked research questions. This paper addresses these questions by presenting a new safety design method named Cyber Layer Of Protection Analysis (CLOPA) that extends the existing LOPA framework to include failures caused by cyber attacks. The proposed method provides a rigorous mathematical formulation that expresses quantitatively the trade-off between designing a highly reliable versus a highly secure CPS. We further propose a co-design lifecycle process that integrates the safety and security risk assessment processes. We evaluate the proposed CLOPA approach and the integrated lifecycle on a practical case study of a process reactor controlled by an industrial control testbed, and provide a comparison between the proposed CLOPA and current LOPA risk assessment practice.
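
    To make the LOPA-style arithmetic concrete, the sketch below shows, under simplifying assumptions, how a conventional layer-of-protection calculation might be extended with an attack-success term for programmable layers. It is not the CLOPA formulation from the paper; the independence assumption, function names, and numbers are invented purely for illustration.

```python
# Illustrative sketch only: a generic LOPA-style frequency calculation extended
# with a cyber term. The actual CLOPA formulation differs; the additive model
# and all numbers below are assumptions for illustration.

def layer_effective_pfd(pfd_random: float, p_attack_success: float) -> float:
    """Effective probability that a protection layer fails on demand, combining
    random failure and a successful cyber attack (assumed independent, so
    P(fail) = 1 - (1 - pfd)(1 - p_attack))."""
    return 1.0 - (1.0 - pfd_random) * (1.0 - p_attack_success)

def mitigated_event_frequency(initiating_freq: float,
                              layers: list[tuple[float, float]]) -> float:
    """Classic LOPA product rule: scenario frequency = initiating event
    frequency times the product of each layer's (effective) PFD."""
    freq = initiating_freq
    for pfd_random, p_attack in layers:
        freq *= layer_effective_pfd(pfd_random, p_attack)
    return freq

# Hypothetical scenario: one initiating event per year, two protection layers.
layers = [(1e-2, 0.0),    # purely mechanical layer, no cyber exposure
          (1e-2, 5e-3)]   # programmable (SIS) layer with an assumed attack term
print(mitigated_event_frequency(1.0, layers))  # safety-only LOPA would give 1e-4
```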

    Assuring Safety and Security

    Large technological systems produce new capabilities that allow innovative solutions to social, engineering and environmental problems. This trend is especially important in the safety-critical systems (SCS) domain, where we simultaneously aim to do more with the systems whilst reducing the harm they might cause. Even with the increased uncertainty created by these opportunities, SCS still need to be assured against safety and security risk and, in many cases, certified before use. A large number of approaches and standards have emerged; however, challenges remain related to technical risk, such as identifying inter-domain risk interactions, developing safety-security causal models, and understanding the impact of new risk information. In addition, there are socio-technical challenges that undermine technical risk activities and act as a barrier to co-assurance; these include insufficient processes for risk acceptance, unclear responsibilities, and a lack of legal, regulatory and organisational structure to support safety-security alignment. A new approach is required. The Safety-Security Assurance Framework (SSAF) is proposed here as a candidate solution. SSAF is based on the new paradigm of independent co-assurance, that is, keeping the disciplines separate but having synchronisation points where required information is exchanged. SSAF comprises three parts: the Conceptual Model defines the underlying philosophy, and the Technical Risk Model (TRM) and Socio-Technical Model (STM) consist of processes and models for the technical risk and socio-technical aspects of co-assurance. Findings from a partial evaluation of SSAF using case studies reveal that the approach has some utility in creating inter-domain relationship models and identifying socio-technical gaps for co-assurance. The original contribution to knowledge presented in this thesis is the novel approach to co-assurance that uses synchronisation points, an explicit representation of a technical risk argument that argues over interaction risks, and a confidence argument that explicitly considers co-assurance socio-technical factors.

    Integrating Safety, Security and Human Factors Engineering in Rail Infrastructure Design and Evaluation.

    With the emerging dependency on the rail industry, there are growing concerns about how to make this critical infrastructure more adaptable in this technological era of cyber attacks. Currently, rail infrastructure is built around safety and human factors, but one important factor that has received less attention is cyber security. In order to satisfy the security needs of rail stakeholders, there is a need to bring this knowledge together in the form of a design framework that combines safety and human factors with cyber security. The research problem this PhD thesis addresses is how the process techniques and tool support available in safety, security and human factors engineering can be integrated to provide design solutions for rail infrastructure. This PhD thesis claims that the proposed design framework is an exemplar, making three significant contributions. Firstly, it identifies the integration of concepts between safety, security and human factors engineering. Secondly, based on this integration, it provides an integrated design framework in which Integrating Requirements and Information Security (IRIS), use-case-specification-informed Task Analysis (TA) using Cognitive Task Analysis (CTA) and Hierarchical Task Analysis (HTA), and Human Factors Analysis and Classification System (HFACS) frameworks are used to inform Systems-Theoretic Process Analysis (STPA). This integrated design framework is tool-supported using the open-source Computer Aided Integrating Requirements and Information Security (CAIRIS) platform. Thirdly, the proposed design framework, in the form of process techniques and tool support, is applied to rail infrastructure to determine safe, secure and usable design solutions. This PhD thesis is validated by applying the design framework to three case studies. In the first, a preliminary evaluation is carried out by applying the framework to the ‘Polish Tram Incident’, where inter-dependencies between safety, security and human factors engineering are present. In the second, the results are used to inform TA in a use-case specification format by prototyping the role of the European Railway Traffic Management System (ERTMS) Signaller, giving human factors experts a chance to work in collaboration with safety and security design experts. In the final case study, STPA is implemented on the ‘Cambrian Railway Incident’ with the support of representative rail stakeholders from Ricardo Rail.

    Conceptualising automated driving shared control hazard causes

    The motivation for this research was the realisation that the introduction of greater vehicle automation would not only change the task of driving but would also potentially change how vehicles are developed and how their safety is assured. A practice-based workshop identified many Automated Driving (AD) safety assurance challenges involving different levels of human-machine control. These challenges include an increase in the size and complexity of AD safety analyses, a need to re-examine the notion of controllability in the context of shared control, and the need to conceptualise the vehicle as a system of systems. To begin addressing these challenges and to answer the research question “how can the safety of AD be assured under different levels of shared control?”, this research has created three products: a vehicle model and behavioural competency taxonomy that allow AD shared control to be conceptualised, a concrete hazard analysis method for analysing AD shared control hazard causes, and a corresponding safety case argument pattern. A series of case studies evaluates these research products. The case studies use contemporary AD vehicle features with varying levels of automation. The evaluation of the driver assistance, partial automation and conditional automation cases was completed by the author. Complementing these is the analysis of a highly automated vehicle system, undertaken with the engineering team from Oxbotica. Considered together, these case studies establish the research products as a proof-of-concept hazard analysis method for AD shared control. Further evaluation work is needed to test the viability of the method as an engineering tool for use by automotive practitioners working in a product development environment.

    Considerations in Assuring Safety of Increasingly Autonomous Systems

    Recent technological advances have accelerated the development and application of increasingly autonomous (IA) systems in civil and military aviation. IA systems can provide automation of complex mission tasks, ranging across reduced crew operations, air-traffic management, and unmanned, autonomous aircraft, with most applications calling for collaboration and teaming among humans and IA agents. IA systems are expected to provide benefits in terms of safety, reliability, efficiency, affordability, and previously unattainable mission capability. There is also a potential for improving safety by removing human errors. There are, however, several challenges in the safety assurance of these systems due to their highly adaptive and non-deterministic behavior, and vulnerabilities due to potential divergence of airplane state awareness between the IA system and humans. These systems must deal with external sensors and actuators, and they must respond in time commensurate with the activities of the system in its environment. One of the main challenges is that safety assurance, which currently relies upon authority transfer from an autonomous function to a human to mitigate safety concerns, will need to address mitigation by automation in a collaborative dynamic context. These challenges have a fundamental, multidimensional impact on the safety assurance methods, system architecture, and V&V capabilities to be employed. The goal of this report is to identify relevant issues to be addressed in these areas, the potential gaps in current safety assurance techniques, and critical questions that would need to be answered to assure the safety of IA systems. We focus on a scenario of reduced crew operation in which an IA system is employed that reduces, changes or eliminates a human's role in the transition from two-pilot operations.

    Joint Modelling of Safety and Security for Risk Assessment in Cyber-Physical Systems

    Cyber-physical systems (CPS) are systems that embed programmable components in order to control a physical process or infrastructure. CPS are now widely used in industries such as energy, aeronautics, automotive, medical devices and chemicals. Among the variety of existing CPS, SCADA (Supervisory Control And Data Acquisition) systems offer the necessary means to control and supervise critical infrastructures. Their failure or malfunction can have adverse consequences for the system and its environment. SCADA systems used to be isolated and built on simple components and proprietary standards. They now increasingly integrate information and communication technologies (ICT) in order to facilitate supervision and control of the industrial process and to reduce operating costs. This trend makes SCADA systems more complex and exposes them to cyber-attacks that exploit vulnerabilities already present in the ICT components. Such attacks can reach critical components within the system and alter its functioning, causing safety harm. Throughout this dissertation, we associate safety with accidental risks originating from the system, and security with malicious risks, with a focus on cyber-attacks. In this context of industrial systems supervised by modern SCADA systems, safety and security requirements and risks converge and can interact. A joint risk analysis covering both safety and security aspects is therefore necessary to identify these interactions and optimize risk management. In this thesis, we first give a comprehensive survey of existing approaches that consider both safety and security issues for industrial systems, and highlight their shortcomings with respect to four criteria we believe essential for a good model-based approach: formal, automatic, qualitative and quantitative, and robust (i.e. easily accommodating changes to the system in the model). Next, we propose a new model-based approach for joint safety and security risk analysis, S-cube (SCADA Safety and Security modeling), that satisfies all of these criteria. The S-cube approach enables formal modelling of CPS and yields the associated qualitative and quantitative risk analysis. Thanks to graphical modeling, S-cube takes the system architecture as input and easily accommodates different hypotheses about it. It then automatically generates the safety and security risk scenarios likely to occur on this architecture that lead to a given undesirable event, together with an estimation of their probabilities. The S-cube approach is based on a knowledge base that describes the typical components of industrial architectures, encompassing the information, process control and instrumentation levels. This knowledge base has been built upon a taxonomy of attacks and failure modes and a hierarchical top-down reasoning mechanism, and has been implemented using the Figaro modeling language and its associated tools. To build the model of a system, the user only has to describe graphically the physical and functional (software and data flow) architectures of the system. Combining the knowledge base with the system architecture produces a dynamic state-based model: a continuous-time Markov chain (CTMC). Because of the combinatorial explosion of states, this CTMC cannot be built exhaustively, but it can be explored in two ways: by searching for sequences leading to an undesirable event, or by Monte Carlo simulation. This yields both qualitative and quantitative results. We finally illustrate the S-cube approach on a realistic case study, a pumped-storage hydroelectric plant, to show its ability to yield a holistic analysis encompassing safety and security risks on such a system. We investigate the results obtained in order to identify potential safety and security interactions and give recommendations.
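
    As a rough illustration of the kind of exploration described above (and not the S-cube/Figaro implementation itself), the sketch below runs a Monte Carlo simulation over a toy continuous-time Markov chain that mixes a random-failure transition with an assumed cyber-attack transition, and estimates the probability of reaching an undesirable event within a mission time. All states and rates are invented.

```python
# Minimal sketch: Gillespie-style Monte Carlo exploration of a toy CTMC that
# combines an accidental failure path with an assumed cyber-attack path.

import random

# transitions[state] = list of (target_state, rate per hour)
transitions = {
    "nominal":     [("degraded", 1e-4),       # random component failure
                    ("compromised", 5e-5)],   # assumed successful cyber attack
    "degraded":    [("undesirable", 1e-3)],
    "compromised": [("undesirable", 1e-2)],   # attacker disables protection
    "undesirable": [],                        # absorbing state
}

def simulate(mission_hours: float, rng: random.Random) -> bool:
    """Return True if the undesirable event occurs within the mission time."""
    state, t = "nominal", 0.0
    while transitions[state]:
        total_rate = sum(rate for _, rate in transitions[state])
        t += rng.expovariate(total_rate)        # time to next transition
        if t > mission_hours:
            return False
        # choose the next state with probability proportional to its rate
        u, acc = rng.random() * total_rate, 0.0
        for target, rate in transitions[state]:
            acc += rate
            if u <= acc:
                state = target
                break
    return state == "undesirable"

rng = random.Random(42)
runs = 100_000
hits = sum(simulate(8760.0, rng) for _ in range(runs))   # one year of operation
print(f"estimated probability of undesirable event: {hits / runs:.4f}")
```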

    Hazard Relation Diagrams - Definition and Evaluation

    The development process of safety-critical, software-intensive embedded systems is characterized by the need to identify, as early as possible during safety assessment, so-called hazards which in operation may lead to harm in the form of death or injury to humans, or damage to or destruction of external systems. In order to improve the safety of the system in operation, mitigations are conceived for each hazard and documented during requirements engineering by means of hazard-mitigating requirements. These hazard-mitigating requirements must be adequate in the sense that they specify the functionality required by the stakeholders and render the system sufficiently safe in operation with regard to the identified hazards. The adequacy of hazard-mitigating requirements is determined during requirements validation. Yet, validating the adequacy of hazard-mitigating requirements is burdened by the fact that hazards and contextual information about hazards are a work product of safety assessment, while hazard-mitigating requirements are a work product of requirements engineering. These work products are typically poorly integrated, such that the information needed to determine the adequacy of hazard-mitigating requirements is not available to stakeholders during validation. In consequence, there is a risk that inadequate hazard-mitigating requirements remain undetected and the system is falsely considered sufficiently safe. In this dissertation, an approach was developed which visualizes hazards, contextual information about hazards, hazard-mitigating requirements, and their specific dependencies in graphical models, and hence renders this information accessible to stakeholders during validation. In addition, an automated approach to generate these graphical models was developed and prototypically implemented. Moreover, the benefits of using these graphical models during validation of hazard-mitigating requirements were investigated and established by means of four detailed empirical experiments. This dissertation thus contributes to integrating the work products of safety assessment and requirements engineering, with the purpose of supporting the validation of the adequacy of hazard-mitigating requirements.
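
    As a loose illustration of the kind of hazard-to-requirement traceability such graphical models make visible (this is not the dissertation's Hazard Relation Diagram notation or its generation tool), the sketch below links hazards and their context to hazard-mitigating requirements and flags hazards without a documented mitigation. All identifiers and texts are invented.

```python
# Illustrative sketch only: a minimal data structure relating hazards to
# hazard-mitigating requirements, with a trivial completeness check a reviewer
# might run before validation. All identifiers below are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Hazard:
    hid: str
    description: str
    context: str                     # contextual information from safety assessment
    mitigating_reqs: list[str] = field(default_factory=list)   # requirement IDs

requirements = {
    "REQ-12": "The controller shall command a safe stop within 200 ms of a sensor fault.",
    "REQ-27": "The operator shall be alerted audibly and visually on loss of brake pressure.",
}

hazards = [
    Hazard("H-01", "Unintended acceleration", "vehicle in manual mode, low speed",
           mitigating_reqs=["REQ-12"]),
    Hazard("H-02", "Loss of braking", "wet road, high speed",
           mitigating_reqs=[]),       # no mitigation captured yet
]

# Simple traceability report: which hazards lack a documented mitigation?
for h in hazards:
    linked = [requirements[r] for r in h.mitigating_reqs if r in requirements]
    status = "OK" if linked else "MISSING MITIGATION"
    print(f"{h.hid} ({h.description}; context: {h.context}): {status}")
```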

    Supporting model based safety and security assessment of high assurance systems

    Doctor of Philosophy, Department of Computer Science, John M Hatcliff. Modern embedded systems are more complex than ever due to intricate interaction with the physical world in a system environment and sophisticated software in a resource-constrained context. Cyber attacks on software-reliant and networked safety-critical systems require security aspects to be considered from the system's inception. Model-Based Development (MBD) has been an effective development practice because its abstraction mechanism hides the complicated lower-level details of software and hardware components. Standards play an essential role in embedded development to ensure the safety of users and the environment. In safety-critical domains like avionics, automotive, and medical devices, standards provide best practices and consistent approaches across the community. The Architecture Analysis and Design Language (AADL) is a standardized modeling language that includes patterns reflecting best architectural practices inspired by multiple safety-critical domains. The work described in this dissertation comprises contributions that support a model analysis framework for AADL aimed at helping developers design and assure safety and security requirements and demonstrate system conformance to specific categories of standards. The first contribution is Awas, an open-source framework for performing reachability analysis on AADL models annotated with information flow annotations at varying degrees of detail. The framework provides highly scalable interactive visualizations of flows with dynamic querying capabilities, and a simple domain-specific language to ease posing queries that check information flow properties in the model. The second contribution is a process for integrating the risk management tasks of ISO 14971, the primary risk management standard in the medical device domain, with AADL modeling, specifically with AADL's error modeling (EM) of fault and error propagations. This work uses an open-source patient-controlled analgesic (PCA) pump, the largest open-source AADL model, to illustrate the integration of the risk management process with AADL, and provides the first mapping of AADL EM to ISO 14971 concepts. It also provides industry engineers, academic researchers, and regulators with a complex example that can be used to investigate methodologies and methods for integrating MBD and risk management. The third contribution is a technique to model and analyze security properties such as confidentiality, authentication, and resource partitioning within AADL models. This effort comprises an AADL annex language to model multi-level security domains and classify system elements and data using those domains, and a tool to infer security levels and check for information leaks. The annex language and tools are evaluated and integrated into the AADL development environment for a seamless workflow.
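
    To illustrate the flavour of reachability analysis over information flows (this is not the Awas framework or its query DSL), the sketch below runs a plain breadth-first search over an invented component-level flow graph and reports one path, if any, from a sensitive source to a sink of concern.

```python
# Minimal sketch: breadth-first reachability over a component-level
# information-flow graph, the kind of query such tools automate on AADL models.
# Component names and edges are invented for illustration.

from collections import deque

# flows[component] = components that directly receive information from it
flows = {
    "patient_sensor":   ["pump_controller"],
    "pump_controller":  ["infusion_pump", "maintenance_log"],
    "maintenance_log":  ["hospital_network"],   # assumed unintended exposure path
    "infusion_pump":    [],
    "hospital_network": [],
}

def reachable(source: str, sink: str) -> list[str] | None:
    """Return one information-flow path from source to sink, or None."""
    queue, parents = deque([source]), {source: None}
    while queue:
        node = queue.popleft()
        if node == sink:
            path = []
            while node is not None:
                path.append(node)
                node = parents[node]
            return list(reversed(path))
        for nxt in flows.get(node, []):
            if nxt not in parents:
                parents[nxt] = node
                queue.append(nxt)
    return None

# Query: can sensitive sensor data reach the hospital network?
print(reachable("patient_sensor", "hospital_network"))
```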