
    Safety Analysis Concept and Methodology for EDDI development (Initial Version)

    Executive Summary: This deliverable describes the proposed safety analysis concept and accompanying methodology to be defined in the SESAME project. Three overarching challenges to the development of safe and secure multi-robot systems are identified — complexity, intelligence, and autonomy — and in each case we review state-of-the-art techniques that can be used to address them and explain how we intend to integrate them as part of the key SESAME safety and security concept, the EDDI.

    The challenge of complexity is largely addressed by means of compositional model-based safety analysis techniques that can break down the complexity into more manageable parts. This applies both to scale — modelling systems hierarchically and embedding local failure logic at the component level — and to tasks, where different safety-related tasks (including not just analysis but also requirements allocation and assurance case generation) can be handled by the same set of models. All of this can be combined with the existing DDI concept to create models — EDDIs — that store all of the necessary information to support a gamut of design-time safety processes.

    Against the challenge of intelligence, we propose a trio of techniques: SafeML and Uncertainty Wrappers for estimating the confidence of a given classification, which can be used as a form of reliability measure, and SMILE for explainability purposes. By enabling us to measure and explain the reliability of ML decision making, we can integrate ML behaviour as part of a wider system safety model, e.g. as one input into a fault tree or Bayesian network. In addition to providing valuable feedback during training, testing, and verification, this allows the EDDI to perform runtime safety monitoring of ML components.

    The EDDI itself is therefore our primary solution to the twin challenges of autonomy and openness. Using the ConSert approach as a foundation, EDDIs can be made to operate cooperatively as part of a distributed system, issuing and receiving guarantees on the basis of their internal executable safety models to collectively achieve tasks in a safe and secure manner. Finally, a simple methodology is defined to show how the relevant techniques can be applied as part of the EDDI concept throughout the safety development lifecycle.
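    The runtime-monitoring idea in this summary can be illustrated with a small hypothetical example (the function names and threshold below are assumptions for illustration, not the SESAME/EDDI tooling): a statistical-distance confidence estimate over incoming data, in the spirit of SafeML, gates whether an ML classification is trusted or a safe fallback is used, and the resulting flag could serve as one input to a fault tree or Bayesian network.

        # Minimal illustrative sketch (hypothetical names; not the SESAME/EDDI code):
        # a distance-based confidence estimate over the classifier's inputs, in the
        # spirit of SafeML, gating whether the ML decision is trusted at runtime.
        import numpy as np
        from scipy.stats import ks_2samp  # Kolmogorov-Smirnov two-sample test

        CONFIDENCE_THRESHOLD = 0.05  # assumed acceptance level; would be calibrated offline

        def confidence(train_features: np.ndarray, runtime_window: np.ndarray) -> float:
            """Mean KS p-value between training data and the current runtime window,
            per feature; higher means the runtime data looks like the training data."""
            p_values = [
                ks_2samp(train_features[:, i], runtime_window[:, i]).pvalue
                for i in range(train_features.shape[1])
            ]
            return float(np.mean(p_values))

        def monitor_step(model, train_features, runtime_window, fallback):
            """Accept the ML output only when confidence is high enough; otherwise
            trigger the fallback and report an 'ML unreliable' event that a fault
            tree or Bayesian network in the wider safety model could consume."""
            score = confidence(train_features, runtime_window)
            if score < CONFIDENCE_THRESHOLD:
                return fallback(runtime_window), True   # True = unreliable-ML event raised
            return model.predict(runtime_window), False

    In a fuller design the boolean event raised here would presumably feed the executable safety model directly rather than simply being returned to the caller.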

    Model-connected safety cases

    Regulatory authorities require justification that safety-critical systems exhibit acceptable levels of safety. Safety cases are traditionally documents which allow the exchange of information between stakeholders and communicate the rationale of how safety is achieved via a clear, convincing and comprehensive argument and its supporting evidence. In the automotive and aviation industries, safety cases have a critical role in the certification process and their maintenance is required throughout a system’s lifecycle. Safety-case-based certification is typically handled manually, and the increase in scale and complexity of modern systems renders it impractical and error prone.

    Several contemporary safety standards have adopted a safety-related framework that revolves around a concept of generic safety requirements, known as Safety Integrity Levels (SILs). Following these guidelines, safety can be justified through satisfaction of SILs. Careful examination of these standards suggests that, despite the noticeable differences, there are converging aspects. This thesis elicits the common elements found in safety standards and defines a pattern for the development of safety cases for cross-sector application. It also establishes a metamodel that connects parts of the safety case with the target system architecture and model-based safety analysis methods. This enables the semi-automatic construction and maintenance of safety arguments that help mitigate problems related to manual approaches. Specifically, the proposed metamodel incorporates system modelling, failure information, model-based safety analysis and optimisation techniques to allocate requirements in the form of SILs. The system architecture and the allocated requirements, along with a user-defined safety argument pattern which describes the target argument structure, enable the instantiation algorithm to automatically generate the corresponding safety argument.

    The idea behind model-connected safety cases stemmed from a critical literature review on safety standards and practices related to safety cases. The thesis presents the method, and the implemented framework, in detail and showcases the different phases and outcomes via a simple example. It then applies the method to a case study based on the Boeing 787’s brake system and evaluates the resulting argument against certain criteria, such as scalability. Finally, contributions compared to traditional approaches are laid out.
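    As a rough illustration of the instantiation step described above (a minimal sketch with invented class names and pattern text, not the thesis's implemented framework), the following shows how a per-component argument pattern could be populated from an architecture model annotated with allocated SILs and failure information:

        # Illustrative sketch only (invented classes and pattern text, not the
        # thesis framework): instantiating a simple per-component argument
        # pattern from an architecture model with allocated SILs.
        from dataclasses import dataclass, field

        @dataclass
        class Component:
            name: str
            sil: int                                            # SIL allocated by a (not shown) optimisation step
            failure_modes: list = field(default_factory=list)   # from model-based safety analysis

        GOAL_PATTERN = "Goal: {name} satisfies its allocated SIL {sil} requirements"

        def instantiate_argument(architecture: list) -> list:
            """Walk the architecture and emit one goal per component plus one
            sub-goal per failure mode, mimicking pattern-based instantiation."""
            goals = []
            for comp in architecture:
                goals.append(GOAL_PATTERN.format(name=comp.name, sil=comp.sil))
                for fm in comp.failure_modes:
                    goals.append(f"  Sub-goal: failure mode '{fm}' of {comp.name} is mitigated")
            return goals

        # Hypothetical component, purely for demonstration.
        brake = Component("BrakeControlUnit", sil=3, failure_modes=["omission of braking"])
        for line in instantiate_argument([brake]):
            print(line)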

    Challenges in the Safety-Security Co-Assurance of Collaborative Industrial Robots

    The coordinated assurance of interrelated critical properties, such as system safety and cyber-security, is one of the toughest challenges in critical systems engineering. In this chapter, we summarise approaches to the coordinated assurance of safety and security. Then, we highlight the state of the art and recent challenges in human-robot collaboration in manufacturing from both a safety and a security perspective. We conclude with a list of procedural and technological issues to be tackled in the coordinated assurance of collaborative industrial robots.
    Comment: 23 pages, 4 tables, 1 figure

    Security Risk Management for the Internet of Things

    In recent years, the rising complexity of Internet of Things (IoT) systems has increased their potential vulnerabilities and introduced new cybersecurity challenges. In this context, state-of-the-art methods and technologies for security risk assessment have prominent limitations when it comes to large-scale, cyber-physical and interconnected IoT systems. Risk assessments for modern IoT systems must be frequent, dynamic and driven by knowledge about both cyber and physical assets. Furthermore, they should be more proactive, more automated, and able to leverage information shared across IoT value chains.

    This book introduces a set of novel risk assessment techniques and their role in the IoT security risk management process. Specifically, it presents architectures and platforms for end-to-end security, including their implementation based on the edge/fog computing paradigm. It also highlights machine learning techniques that boost the automation and proactiveness of IoT security risk assessments. Furthermore, blockchain solutions for open and transparent sharing of IoT security information across the supply chain are introduced. Frameworks for privacy awareness, along with technical measures that enable privacy risk assessment and boost GDPR compliance, are also presented. Likewise, the book illustrates novel solutions for security certification of IoT systems, along with techniques for IoT security interoperability.

    In the coming years, IoT security will be a challenging yet very exciting journey for IoT stakeholders, including security experts, consultants, security research organizations and IoT solution providers. The book provides knowledge and insights about where we stand on this journey. It also attempts to develop a vision for the future and to help readers start their IoT security efforts on the right foot.
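    As a generic illustration of the dynamic, asset-driven risk assessment the book calls for (a sketch using the common likelihood-times-impact formulation, not a technique reproduced from the book), a per-asset risk score can be recomputed whenever new evidence about a cyber or physical asset arrives:

        # Generic sketch (not taken from the book): per-asset risk as
        # likelihood x impact, recomputed as new evidence arrives so the
        # assessment stays frequent and dynamic.
        from dataclasses import dataclass

        @dataclass
        class Asset:
            name: str
            kind: str          # "cyber" or "physical"
            impact: float      # consequence of compromise, scaled 0..1
            likelihood: float  # current estimated threat likelihood, 0..1

        def risk(asset: Asset) -> float:
            return asset.impact * asset.likelihood

        def update_likelihood(asset: Asset, evidence: float, weight: float = 0.3) -> None:
            """Blend fresh evidence (e.g. a new vulnerability score or an anomaly
            alert from an edge node) into the running likelihood estimate."""
            asset.likelihood = (1 - weight) * asset.likelihood + weight * evidence

        # Hypothetical asset, purely for demonstration.
        gateway = Asset("edge-gateway", "cyber", impact=0.8, likelihood=0.2)
        update_likelihood(gateway, evidence=0.9)   # e.g. a relevant vulnerability was disclosed
        print(f"{gateway.name}: risk = {risk(gateway):.2f}")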

    Trust and transparency in an age of surveillance

    Investigating the theoretical and empirical relationships between transparency and trust in the context of surveillance, this volume argues that neither transparency nor trust provides a simple and self-evident path for mitigating the negative political and social consequences of state surveillance practices. Dominant in both the scholarly literature and public debate is the conviction that transparency can promote better-informed decisions, provide greater oversight, and restore trust damaged by the secrecy of surveillance. The contributions to this volume challenge this conventional wisdom by considering how relations of trust and policies of transparency are modulated by underlying power asymmetries, sociohistorical legacies, economic structures, and institutional constraints. They study trust and transparency as embedded in specific sociopolitical contexts to show how, under certain conditions, transparency can become a tool of social control that erodes trust, while mistrust - rather than trust - can sometimes offer the most promising approach to safeguarding rights and freedom in an age of surveillance.

    The first book addressing the interrelationship of trust, transparency, and surveillance practices, this volume will be of interest to scholars and students of surveillance studies and will appeal to an interdisciplinary audience given the contributions from political science, sociology, philosophy, law, and civil society.

    Dagstuhl Reports : Volume 1, Issue 2, February 2011

    Online Privacy: Towards Informational Self-Determination on the Internet (Dagstuhl Perspectives Workshop 11061): Simone Fischer-Hübner, Chris Hoofnagle, Kai Rannenberg, Michael Waidner, Ioannis Krontiris and Michael Marhöfer
    Self-Repairing Programs (Dagstuhl Seminar 11062): Mauro Pezzé, Martin C. Rinard, Westley Weimer and Andreas Zeller
    Theory and Applications of Graph Searching Problems (Dagstuhl Seminar 11071): Fedor V. Fomin, Pierre Fraigniaud, Stephan Kreutzer and Dimitrios M. Thilikos
    Combinatorial and Algorithmic Aspects of Sequence Processing (Dagstuhl Seminar 11081): Maxime Crochemore, Lila Kari, Mehryar Mohri and Dirk Nowotka
    Packing and Scheduling Algorithms for Information and Communication Services (Dagstuhl Seminar 11091): Klaus Jansen, Claire Mathieu, Hadas Shachnai and Neal E. Young

    Formal methods and digital systems validation for airborne systems

    This report has been prepared to supplement a forthcoming chapter on formal methods in the FAA Digital Systems Validation Handbook. Its purpose is as follows: to outline the technical basis for formal methods in computer science; to explain the use of formal methods in the specification and verification of software and hardware requirements, designs, and implementations; to identify the benefits, weaknesses, and difficulties in applying these methods to digital systems used on board aircraft; and to suggest factors for consideration when formal methods are offered in support of certification. These latter factors assume the context for software development and assurance described in RTCA document DO-178B, 'Software Considerations in Airborne Systems and Equipment Certification,' Dec. 1992
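    As a toy illustration of the kind of exhaustive guarantee formal verification aims at (an invented mode-switch example, not drawn from the report or DO-178B), the sketch below checks a safety invariant over every state and input combination of a tiny controller rather than over a sampled set of test cases:

        # Toy example (invented; not from the report): exhaustively checking a
        # safety invariant of a tiny mode-switch controller over every state and
        # input combination, the kind of total coverage formal verification
        # provides in contrast to sampling behaviours through testing.
        from itertools import product

        MODES = ("manual", "autopilot")

        def step(mode: str, pilot_override: bool, sensor_ok: bool) -> str:
            """Transition relation: autopilot may only engage or stay engaged when
            the sensor is healthy and the pilot has not overridden it."""
            if pilot_override or not sensor_ok:
                return "manual"
            return "autopilot"

        def invariant(mode: str, pilot_override: bool) -> bool:
            # Safety property: autopilot is never engaged against a pilot override.
            return not (mode == "autopilot" and pilot_override)

        # Check the invariant on the post-state of every possible pre-state/input pair.
        violations = [
            (mode, override, sensor)
            for mode, override, sensor in product(MODES, (False, True), (False, True))
            if not invariant(step(mode, override, sensor), override)
        ]
        assert not violations, f"invariant violated for: {violations}"
        print("invariant holds for all 8 state/input combinations")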