
    Designing Trustworthy Autonomous Systems

    The design of autonomous systems is challenging, and ensuring their trustworthiness can have different meanings, such as: i) ensuring consistency and completeness of the requirements through a correct elicitation and formalization process; ii) ensuring that requirements are correctly mapped to system implementations so that no system behavior ever violates them; iii) maximizing the reuse of available components and subsystems in order to cope with design complexity; and iv) ensuring correct coordination of the system with its environment. Several techniques have been proposed over the years to cope with specific problems; however, a holistic design framework that, leveraging existing tools and methodologies, practically supports the analysis and design of autonomous systems is still missing. This thesis explores the problem of building trustworthy autonomous systems from different angles. We analyze how current formal verification approaches can provide assurances: 1) for the requirements corpus itself, by formalizing requirements as assume/guarantee contracts to detect incompleteness and conflicts; 2) for the reward function used to train the system, so that the requirements are not misinterpreted; 3) for the execution of the system, by run-time monitoring and enforcement of invariants; 4) for the coordination of the system with other external entities in a system-of-systems scenario; and 5) for system behaviors, by automatically synthesizing a policy that is correct.
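
    As an illustration of point 1, here is a minimal sketch of one way requirements could be encoded as assume/guarantee contracts and checked for conflicts by exhaustive enumeration over finite domains. The Contract class, the rover-style requirements, and the variable domains are hypothetical examples, not taken from the thesis.

```python
from itertools import product

# Minimal assume/guarantee contract sketch over finite variable domains.
# A contract pairs an assumption with a guarantee, each a predicate over a
# state dictionary. Two contracts conflict if no state satisfies both
# guarantees while both assumptions hold.

class Contract:
    def __init__(self, name, assume, guarantee):
        self.name = name
        self.assume = assume        # predicate: state -> bool
        self.guarantee = guarantee  # predicate: state -> bool

def states(domains):
    """Enumerate every state over the given finite variable domains."""
    names = list(domains)
    for values in product(*(domains[n] for n in names)):
        yield dict(zip(names, values))

def conflicting(c1, c2, domains):
    """True if no state can satisfy both guarantees under both assumptions."""
    return not any(
        c1.guarantee(s) and c2.guarantee(s)
        for s in states(domains)
        if c1.assume(s) and c2.assume(s)
    )

# Illustrative requirements for a hypothetical rover speed controller.
domains = {"obstacle": [True, False], "speed": [0, 1, 2]}
r1 = Contract("stop_near_obstacle",
              assume=lambda s: s["obstacle"],
              guarantee=lambda s: s["speed"] == 0)
r2 = Contract("keep_moving",
              assume=lambda s: s["obstacle"],
              guarantee=lambda s: s["speed"] >= 1)

print(conflicting(r1, r2, domains))  # True: the two requirements clash
```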
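    Point 3, run-time monitoring and enforcement, can likewise be sketched compactly: a monitor wraps an untrusted controller and overrides any action that would violate a safety invariant. The speed-limit invariant, the always-accelerating controller, and the clamping fallback below are illustrative assumptions, not the thesis's actual enforcement mechanism.

```python
# Minimal runtime-enforcement sketch: check a proposed action against a
# safety invariant and override it before it is applied to the system.

SPEED_LIMIT = 2  # invariant: speed must never exceed this bound (assumed)

def controller(state):
    """Hypothetical untrusted controller: always accelerates."""
    return state["speed"] + 1

def enforce(proposed_speed):
    """Clamp the proposed action so the invariant is preserved."""
    return min(proposed_speed, SPEED_LIMIT)

state = {"speed": 0}
for step in range(5):
    state["speed"] = enforce(controller(state))
    assert state["speed"] <= SPEED_LIMIT  # invariant holds at every step
    print(step, state["speed"])
```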

    Runtime Safety Analysis for Safe Reconfiguration

    Modern technical systems are increasingly built to exhibit self-x properties such as self-healing or self-optimization, which require adaptation at runtime. This is true even for embedded or mechatronic systems, which often operate in safety-critical environments where the effects of adaptation on safety must be analyzed carefully. However, not all parameters needed for safety analyses, e.g., the concrete system architecture, are known at design time; consequently, safety analyses need to be executed at runtime. Current approaches to runtime safety analysis typically react to anomalies that have already occurred in the system, so unsafe system states cannot be excluded completely. We present a runtime safety analysis that prevents system states with an unacceptable risk before they occur. For this, we generate the reachable component structures at runtime and analyze them with respect to risk. The system is then modified such that component structures with an unacceptable risk are no longer reachable and are thus prevented.
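
    A rough sketch of the analysis loop this abstract outlines, under assumed inputs: component structures with precomputed risk scores and reconfiguration transitions between them. Edges leading to structures whose risk exceeds a threshold are pruned, so those structures become unreachable. The structures, scores, threshold, and pruning strategy are all illustrative; the paper's actual risk model is not reproduced here.

```python
# Prune reconfiguration transitions so that component structures with an
# unacceptable risk can never be reached from the current structure.

RISK = {"S0": 0.01, "S1": 0.02, "S2": 0.30}          # assumed risk scores
TRANSITIONS = {"S0": ["S1", "S2"], "S1": ["S2"], "S2": []}
ACCEPTABLE = 0.10                                     # assumed risk threshold

def safe_transitions(transitions, risk, threshold):
    """Drop every reconfiguration edge whose target risk is unacceptable."""
    return {src: [dst for dst in dsts if risk[dst] <= threshold]
            for src, dsts in transitions.items()}

def reachable(transitions, start):
    """Structures reachable from `start` via the remaining edges."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(transitions[node])
    return seen

pruned = safe_transitions(TRANSITIONS, RISK, ACCEPTABLE)
print(reachable(pruned, "S0"))  # {'S0', 'S1'}: high-risk S2 is unreachable
```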