20 research outputs found

    Verification of Sigmoidal Artificial Neural Networks using iSAT

    This paper presents an approach for verifying the behaviour of nonlinear Artificial Neural Networks (ANNs) found in cyber-physical safety-critical systems. We implement a dedicated interval constraint propagator for the sigmoid function into the SMT solver iSAT and compare this approach with a compositional approach encoding the sigmoid function by basic arithmetic features available in iSAT and an approximating approach. Our experimental results show that the dedicated and the compositional approach clearly outperform the approximating approach. Throughout all our benchmarks, the dedicated approach showed an equal or better performance compared to the compositional approach. (Comment: In Proceedings SNR 2021, arXiv:2207.0439)
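    The core of a dedicated interval propagator for the sigmoid can be sketched compactly: because the logistic function is strictly monotone increasing, the exact image of an input interval is obtained by evaluating the function at the two endpoints. The sketch below is an illustration of this idea in Python, not the iSAT implementation; the function names are our own.

    ```python
    import math

    def sigmoid(x: float) -> float:
        # Numerically stable logistic function 1 / (1 + exp(-x))
        if x >= 0:
            return 1.0 / (1.0 + math.exp(-x))
        e = math.exp(x)
        return e / (1.0 + e)

    def sigmoid_interval(lo: float, hi: float) -> tuple[float, float]:
        # sigmoid is strictly monotone increasing, so the exact image
        # of [lo, hi] under sigmoid is [sigmoid(lo), sigmoid(hi)].
        # (A sound solver would additionally round the bounds outward.)
        return sigmoid(lo), sigmoid(hi)
    ```

    A compositional encoding, by contrast, would decompose sigmoid into exp, addition, and division and propagate intervals through each operator separately, which generally yields looser bounds than the one-step monotonicity argument above.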

    DLR Institute of Systems Engineering for Future Mobility - Technical Trustworthiness as a Basis for Highly Automated and Autonomous Systems

    The newly established Institute of Systems Engineering for Future Mobility within the German Aerospace Center opened its doors at the beginning of 2022. Emerging from the former OFFIS Division Transportation after a two-year transition phase, the new institute can draw on more than thirty years of experience in the research field of safety-critical systems. With the transition to the DLR, the institute's new research roadmap focuses on technical trustworthiness for highly automated and autonomous systems. Within this field, the institute will develop new concepts, methods, and tools to support the integration and assurance of technical trustworthiness for automated and autonomous systems during their whole lifecycle – from the early development through verification, validation, and operation to updates of the systems in the field.

    Towards Safe and Sustainable Autonomous Vehicles Using Environmentally-Friendly Criticality Metrics

    This paper presents a mathematical analysis of several criticality metrics used for evaluating the safety of autonomous vehicles (AVs) and also proposes novel environmentally-friendly metrics with the aim of facilitating their selection by future researchers who want to evaluate both the safety and the environmental impact of AVs. First, we investigate whether the criticality metrics which are used to quantify the severity of critical situations in autonomous driving are well-defined and work as intended. In some cases, the well-definedness or the intendedness of the metrics will be apparent, but in other cases, we will present mathematical demonstrations of these properties as well as alternative novel formulas. Additionally, we also present details regarding optimality. Second, we propose several novel environmentally-friendly metrics as well as a novel environmentally-friendly criticality metric that combines safety and environmental impact in a car-following scenario. Third, we discuss the possibility of applying these criticality metrics in artificial intelligence (AI) training such as reinforcement learning (RL), where they can be used as penalty terms such as negative reward components. Finally, we propose a way to apply some of the metrics in a simple car-following scenario and show in our simulation that AVs powered by petrol emitted the most carbon emissions (54.92 g of CO2), followed closely by diesel-powered AVs (54.67 g of CO2) and then by grid-electricity-powered AVs (31.16 g of CO2). Meanwhile, the AVs powered by electricity from a green source, such as solar energy, had 0 g of CO2 emissions, encouraging future researchers and the industry to develop more actively sustainable methods and metrics for powering and evaluating the safety and environmental impact of AVs using green energy.
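    One way a combined safety-and-environment score for a car-following scenario could look is sketched below. This is a minimal illustration, not the paper's formulation: it uses time headway as the (established) safety component, a per-fuel CO2 emission factor as the environmental component, and purely illustrative weights.

    ```python
    def time_headway(gap_m: float, follower_speed_mps: float) -> float:
        # Time headway: gap to the lead vehicle divided by follower speed;
        # smaller values indicate a more critical car-following situation.
        if follower_speed_mps <= 0:
            return float("inf")
        return gap_m / follower_speed_mps

    def combined_criticality(gap_m: float, speed_mps: float,
                             co2_g_per_km: float,
                             w_safety: float = 1.0,
                             w_env: float = 1.0) -> float:
        # Hypothetical combined score: inverse headway (safety) plus a
        # scaled emission factor (environment). Weights and scaling are
        # illustrative assumptions, not taken from the paper.
        thw = time_headway(gap_m, speed_mps)
        safety_term = 0.0 if thw == float("inf") else 1.0 / thw
        env_term = co2_g_per_km / 100.0
        return w_safety * safety_term + w_env * env_term
    ```

    Under such a score, a petrol-powered AV would always rank worse than an electric one in an otherwise identical following situation, which matches the ordering reported in the simulation.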

    Towards a Congruent Interpretation of Traffic Rules for Automated Driving - Experiences and Challenges

    The homologation of automated driving systems for public roads requires a rigorous safety case. Regulations of the United Nations demand to demonstrate the compliance of the developed system with local traffic rules. Hence, evidence for this has to be delivered by means of formal proofs, online monitoring, and other verification techniques in the safety case. In order for such methods to be applicable, traffic rules have to be made machine-interpretable. However, that pursuit is highly challenging. This work reports on our practical experiences regarding the formalization of a non-trivial part of the German road traffic act. We identify a central issue when formalizing traffic rules within a development process, coined the congruence problem, which is concerned with the semantic equality of the legal and system interpretation of traffic rules. As our main contribution, we delineate potential challenges arising from the congruence problem, hence impeding a congruent yet formal interpretation of traffic rules. Finally, we aim to initiate discussions by highlighting steps to partially address these challenges.
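    To make the idea of a machine-interpretable traffic rule concrete, one could encode a rule as an executable predicate over observable quantities. The sketch below encodes the informal German "halber Tacho" rule of thumb (keep at least half the speedometer reading in km/h as distance in metres); it is our own illustration, not the formalization from the paper, and it also illustrates the congruence problem: the legal text demands a distance "sufficient to stop in time", which this simple numeric predicate only approximates.

    ```python
    def safe_distance_ok(gap_m: float, speed_kmh: float) -> bool:
        # Illustrative encoding of the "halber Tacho" rule of thumb:
        # the gap in metres should be at least half the speed in km/h.
        # A system interpretation like this may not be semantically
        # equal to the legal interpretation (the congruence problem).
        return gap_m >= 0.5 * speed_kmh
    ```

    A monitor built on such predicates could check rule compliance online, but the gap between the crisp predicate and the open-textured legal wording is exactly the kind of challenge the paper discusses.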

    A REFERENCE ARCHITECTURE OF HUMAN CYBER-PHYSICAL SYSTEMS – PART III: SEMANTIC FOUNDATIONS

    The design and analysis of multi-agent human cyber-physical systems in safety-critical or industry-critical domains calls for an adequate semantic foundation capable of exhaustively and rigorously describing all emergent effects in the joint dynamic behavior of the agents that are relevant to their safety and well-behavior. We present such a semantic foundation. This framework extends beyond previous approaches by extending the agent-local dynamic state beyond state components under direct control of the agent and belief about other agents (as previously suggested for understanding cooperative as well as rational behavior) to agent-local evidence and belief about the overall cooperative, competitive, or coopetitive game structure. We argue that this extension is necessary for rigorously analyzing systems of human cyber-physical systems because humans are known to employ cognitive replacement models of system dynamics that are both non-stationary and potentially incongruent. These replacement models induce visible and potentially harmful effects on their joint emergent behavior and the interaction with cyber-physical system components.

    A REFERENCE ARCHITECTURE FOR HUMAN CYBER PHYSICAL SYSTEMS - PART II: FUNDAMENTAL DESIGN PRINCIPLES FOR HUMAN-CPS INTERACTION

    As automation increases qualitatively and quantitatively in safety-critical human cyber-physical systems, it is becoming more and more challenging to increase the probability or ensure that human operators still perceive key artefacts and comprehend their roles in the system. In the companion paper, we proposed an abstract reference architecture capable of expressing all classes of system-level interactions in human cyber-physical systems. Here we demonstrate how this reference architecture supports the analysis of levels of communication between agents and helps to identify the potential for misunderstandings and misconceptions. We then develop a metamodel for safe human machine interaction. To this end, we ask what type of information exchange must be supported on what level so that humans and systems can cooperate as a team, what is the criticality of exchanged information, what are timing requirements for such interactions, and how can we communicate highly critical information in a limited time frame in spite of the many sources of a distorted perception. We highlight shared stumbling blocks and illustrate shared design principles, which rest on established ontologies specific to particular application classes. In order to overcome the partial opacity of internal states of agents, we anticipate a key role of virtual twins of both human and technical cooperation partners for designing a suitable communication.

    A REFERENCE ARCHITECTURE OF HUMAN CYBER PHYSICAL SYSTEMS PART I: CONCEPTUAL STRUCTURE

    We propose a reference architecture of safety-critical or industry-critical human cyber-physical systems (CPSs) capable of expressing essential classes of system-level interactions between CPS and humans relevant for the societal acceptance of such systems. To reach this quality gate, the expressivity of the model must go beyond classical viewpoints such as operational, functional, architectural views and views used for safety and security analysis. The model does so by incorporating elements of such systems for mutual introspections in situational awareness, capabilities, and intentions in order to enable a synergetic, trusted relation in the interaction of humans and CPSs, which we see as a prerequisite for their societal acceptance. The reference architecture is represented as a metamodel incorporating conceptual and behavioral semantic aspects. We illustrate the key concepts of the metamodel with examples from smart grids, cooperative autonomous driving, and crisis management.