647 research outputs found

    Minimum Violation Control Synthesis on Cyber-Physical Systems under Attacks

    Cyber-physical systems are conducting increasingly complex tasks, which are often modeled using formal languages such as temporal logic. The system's ability to perform the required tasks can be curtailed by malicious adversaries that mount intelligent attacks. At present, however, synthesis in the presence of such attacks has received limited research attention. In particular, the problem of synthesizing a controller when the required specifications cannot be satisfied completely due to adversarial attacks has not been studied. In this paper, we focus on the minimum violation control synthesis problem under linear temporal logic constraints for a stochastic, finite-state, discrete-time system in the presence of an adversary. A minimum violation control strategy is one that satisfies the most important tasks defined by the user while violating the less important ones. We model the interaction between the controller and the adversary as a concurrent Stackelberg game and formulate a nonlinear program whose solution yields the optimal control policy. To reduce the computational effort, we develop a heuristic algorithm that solves the problem efficiently, and we demonstrate the proposed approach on a numerical case study.
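    A minimal sketch of the leader-commitment idea behind a Stackelberg formulation, reduced to a one-shot matrix game solved with one linear program per candidate adversary best response. The payoff matrices R_ctrl and R_adv, the game size, and the use of scipy are illustrative assumptions; the paper's actual formulation is a nonlinear program over a stochastic game with LTL-derived violation costs, which this toy example does not replicate.

    import numpy as np
    from scipy.optimize import linprog

    # Toy payoffs: rows are controller actions, columns are adversary actions.
    R_ctrl = np.array([[3.0, 1.0],
                       [2.0, 4.0]])   # controller's reward (e.g., negated violation cost)
    R_adv  = np.array([[1.0, 2.0],
                       [3.0, 1.0]])   # adversary's reward

    n_ctrl, n_adv = R_ctrl.shape
    best_value, best_strategy = -np.inf, None
    for j in range(n_adv):                        # assume the adversary best-responds with column j
        c = -R_ctrl[:, j]                         # maximize controller payoff -> minimize its negation
        # The adversary must weakly prefer column j over every other column.
        A_ub = np.array([R_adv[:, k] - R_adv[:, j] for k in range(n_adv) if k != j])
        res = linprog(c, A_ub=A_ub, b_ub=np.zeros(len(A_ub)),
                      A_eq=np.ones((1, n_ctrl)), b_eq=[1.0],
                      bounds=[(0, None)] * n_ctrl, method="highs")
        if res.success and -res.fun > best_value:
            best_value, best_strategy = -res.fun, res.x

    print("controller commitment:", best_strategy, "value:", best_value)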

    Formal Methods for Autonomous Systems

    Formal methods refer to rigorous, mathematical approaches to system development and have played a key role in establishing the correctness of safety-critical systems. The main building blocks of formal methods are models and specifications, which are analogous to behaviors and requirements in system design and give us the means to verify and synthesize system behaviors with formal guarantees. This monograph provides a survey of the current state of the art on applications of formal methods in the autonomous systems domain. We consider correct-by-construction synthesis under various formulations, including closed-system, reactive, and probabilistic settings. Beyond synthesizing systems in known environments, we address uncertainty and use formal methods to bound the behavior of systems that employ learning. Further, we examine the synthesis of systems with monitoring, a mitigation technique for ensuring that once a system deviates from expected behavior, it knows a way of returning to normalcy. We also show how to overcome some limitations of formal methods themselves with learning. We conclude with future directions for formal methods in reinforcement learning, uncertainty, privacy, explainability of formal methods, and regulation and certification.
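    As a small illustration of the monitoring idea mentioned above, the sketch below implements a runtime monitor for a single safety invariant over an event trace. The event names, the invariant, and the Verdict type are assumptions chosen for illustration, not the API of any particular formal-methods tool surveyed in the monograph.

    from enum import Enum

    class Verdict(Enum):
        OK = 0
        VIOLATION = 1

    class SafetyMonitor:
        """Checks the invariant: never issue an actuation command while a fault flag is raised."""
        def __init__(self):
            self.fault_active = False

        def step(self, event: str) -> Verdict:
            if event == "fault_raised":
                self.fault_active = True
            elif event == "fault_cleared":
                self.fault_active = False
            elif event == "actuate" and self.fault_active:
                return Verdict.VIOLATION   # deviation detected: a mitigation/recovery routine would run here
            return Verdict.OK

    monitor = SafetyMonitor()
    trace = ["actuate", "fault_raised", "actuate", "fault_cleared", "actuate"]
    print([monitor.step(e).name for e in trace])   # ['OK', 'OK', 'VIOLATION', 'OK', 'OK']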

    Designing Trustworthy Autonomous Systems

    The design of autonomous systems is challenging, and ensuring their trustworthiness can have different meanings, such as i) ensuring consistency and completeness of the requirements through a correct elicitation and formalization process; ii) ensuring that requirements are correctly mapped to system implementations so that system behaviors never violate those requirements; iii) maximizing the reuse of available components and subsystems in order to cope with design complexity; and iv) ensuring correct coordination of the system with its environment. Several techniques have been proposed over the years to cope with specific problems. However, a holistic design framework that, by leveraging existing tools and methodologies, practically supports the analysis and design of autonomous systems is still missing. This thesis explores the problem of building trustworthy autonomous systems from different angles. We analyze how current approaches to formal verification can provide assurances: 1) to the requirements corpus itself, by formalizing requirements with assume/guarantee contracts to detect incompleteness and conflicts; 2) to the reward function used to train the system, so that the requirements are not misinterpreted; 3) to the execution of the system, through run-time monitoring and enforcement of certain invariants; 4) to the coordination of the system with other external entities in a system-of-systems scenario; and 5) to system behaviors, by automatically synthesizing a correct policy.
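    A minimal sketch of item 1) above: encoding two assume/guarantee contracts as propositional implications and asking an SMT solver whether both guarantees can hold in a scenario where both assumptions fire. The variables, the two example contracts, and the use of the z3-solver package are illustrative assumptions, not the framework developed in the thesis.

    from z3 import Bool, Implies, Not, Solver, sat

    obstacle = Bool("obstacle_detected")
    cruise   = Bool("cruise_active")
    brake    = Bool("brake_engaged")

    # Contract 1: if an obstacle is detected, the brake must engage.
    c1 = Implies(obstacle, brake)
    # Contract 2: if cruise control is active, the brake must not engage.
    c2 = Implies(cruise, Not(brake))

    s = Solver()
    s.add(c1, c2, obstacle, cruise)   # scenario in which both assumptions hold
    if s.check() == sat:
        print("contracts are consistent in this scenario:", s.model())
    else:
        print("conflict: the guarantees cannot both be met when both assumptions hold")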

    Safety-Guaranteed Task Planning for Bipedal Navigation in Partially Observable Environments

    Bipedal robots are becoming more capable as basic hardware and control challenges are overcome; however, reasoning about safety at the task and motion planning levels has been largely underexplored. This thesis takes key steps toward guaranteeing safe locomotion in cluttered environments in the presence of humans or other dynamic obstacles by designing a hierarchical task planning framework that incorporates safety guarantees at each level. The layered planning framework is composed of a coarse, high-level symbolic navigation planner and a lower-level local action planner. A belief abstraction at the global navigation planning level enables belief estimation of non-visible dynamic obstacle states and guarantees navigation safety through collision avoidance. Both planning layers employ linear temporal logic (LTL) for a reactive game synthesis between the robot and its environment, while incorporating lower-level safe locomotion keyframe policies into the formal task specification design. The high-level symbolic navigation planner is further extended to leverage the capabilities of a heterogeneous multi-agent team to resolve environment assumption violations that appear at runtime. Modifications in the navigation planner, in conjunction with a coordination layer, allow each agent to guarantee immediate safety and eventual task completion in the presence of an assumption violation, provided another agent exists that can resolve the violation, e.g., a closed door that a more dexterous agent can open. The planning framework leverages the expressive nature and formal guarantees of LTL to generate provably correct controllers for complex robotic systems. The use of belief-space planning for dynamic obstacle belief tracking, together with heterogeneous robot capabilities that allow agents to assist one another when environment assumptions are violated, reduces the conservativeness traditionally associated with using formal methods for robot planning.
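    A coarse, set-valued version of the belief abstraction described above: the belief over a non-visible dynamic obstacle is propagated worst-case along an adjacency relation and then pruned by what the robot currently observes. The grid, adjacency map, and visibility model are illustrative assumptions, not the planner's actual abstraction.

    from typing import Dict, Set

    Cell = int
    adjacency: Dict[Cell, Set[Cell]] = {          # tiny corridor of four cells
        0: {0, 1}, 1: {0, 1, 2}, 2: {1, 2, 3}, 3: {2, 3},
    }

    def propagate(belief: Set[Cell]) -> Set[Cell]:
        """Worst-case obstacle motion: it may stay put or move to any adjacent cell."""
        return set().union(*(adjacency[c] for c in belief))

    def update(belief: Set[Cell], visible: Set[Cell], observed: Set[Cell]) -> Set[Cell]:
        """Predict one step, then prune cells the robot can see but where no obstacle appears."""
        return {c for c in propagate(belief) if c not in visible or c in observed}

    belief = {3}                                            # obstacle last seen in cell 3
    belief = update(belief, visible={2}, observed=set())    # cell 2 is observed empty
    print(sorted(belief))                                   # -> [3]: cell 2 is cleared, cell 3 remains possible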