
    Linear logic as a tool for planning under temporal uncertainty

    A typical AI planning problem is to construct a plan of actions for a controller so that, starting from a given initial situation, it reaches one of a set of final situations. The plans, and the related winning strategies, are finite in the case of a finite number of states and a finite number of instant actions. The situation becomes much more complex when we deal with planning under temporal uncertainty caused by actions with delayed effects. Here we introduce a tree-based formalism to express plans, or winning strategies, in finite state systems in which actions may have quantitatively delayed effects. Since the delays are non-deterministic and continuous, infinite branching is needed to display all possible delays. Nevertheless, under reasonable assumptions, we show that the infinite winning strategies which may arise in this context can be captured by finite plans. The above planning problem is specified in logical terms within a Horn fragment of affine logic. Among other things, the advantage of the linear logic approach is that we can easily capture ‘preemptive/anticipative’ plans (in which a new action β may be taken at some moment within the running time of an action α being carried out, in order to be prepared before completion of action α). In this paper we propose a comprehensive and adequate logical model of strong planning under temporal uncertainty which addresses infinity concerns. In particular, we establish a direct correspondence between linear logic proofs and plans, or winning strategies, for actions with quantitative delayed effects.
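
    As an illustration only, and not the paper's formalism, the sketch below shows a tree-shaped plan for actions with delayed effects in which branching over a continuum of delays is grouped into finitely many intervals; the names Action, DelayBranch and PlanNode are hypothetical.

        from dataclasses import dataclass, field
        from typing import List, Optional

        @dataclass
        class Action:
            name: str
            min_delay: float   # earliest time the delayed effect can arrive
            max_delay: float   # latest time the delayed effect can arrive

        @dataclass
        class DelayBranch:
            lo: float                # this branch handles observed delays in [lo, hi)
            hi: float
            then: "PlanNode"         # what the controller does in that case

        @dataclass
        class PlanNode:
            action: Optional[Action]                        # None marks a final situation
            branches: List[DelayBranch] = field(default_factory=list)

        # The delay of `heat` ranges over a continuum, but the plan needs only finitely
        # many branches: one per interval of delays that calls for the same response.
        heat = Action("heat", min_delay=2.0, max_delay=10.0)
        vent = Action("vent", min_delay=0.5, max_delay=1.0)
        plan = PlanNode(heat, [
            DelayBranch(2.0, 5.0, PlanNode(None)),           # effect arrives early: done
            DelayBranch(5.0, 10.0, PlanNode(vent, [          # effect arrives late: also vent
                DelayBranch(0.5, 1.0, PlanNode(None)),
            ])),
        ])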

    Stochastic Robustness Interval for Motion Planning with Signal Temporal Logic

    In this work, we present a novel robustness measure for continuous-time stochastic trajectories with respect to Signal Temporal Logic (STL) specifications. We show the soundness of the measure and develop a monitor for reasoning about partial trajectories. Using this monitor, we introduce an STL sampling-based motion planning algorithm for robots under uncertainty. Given a minimum robustness requirement, this algorithm finds satisfying motion plans; alternatively, the algorithm also optimizes for the measure. We prove probabilistic completeness and asymptotic optimality, and demonstrate the effectiveness of our approach on several case studies.
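
    For background, the following is a minimal sketch of the standard deterministic STL robustness degree that quantitative measures like the one above build on; the paper's stochastic robustness interval and monitor are more involved, and the function names here are illustrative.

        def rob_pred(f, trace, t):
            """Robustness of an atomic predicate f(x) >= 0 at time index t."""
            return f(trace[t])

        def rob_always(sub, trace, t, a, b):
            """Robustness of G_[a,b] phi: worst case of phi over the window."""
            return min(sub(trace, k) for k in range(t + a, min(t + b + 1, len(trace))))

        def rob_eventually(sub, trace, t, a, b):
            """Robustness of F_[a,b] phi: best case of phi over the window."""
            return max(sub(trace, k) for k in range(t + a, min(t + b + 1, len(trace))))

        # Example: "for the next 5 steps, keep |x| at least 0.5 away from zero".
        trace = [(0.9,), (0.8,), (0.7,), (0.9,), (1.1,), (1.0,)]
        dist = lambda tr, k: rob_pred(lambda x: abs(x[0]) - 0.5, tr, k)
        print(rob_always(dist, trace, 0, 0, 5))   # positive margin => satisfied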

    Formal methods paradigms for estimation and machine learning in dynamical systems

    Formal methods are widely used in engineering to determine whether a system exhibits a certain property (verification) or to design controllers that are guaranteed to drive the system to achieve a certain property (synthesis). Most existing techniques require a large amount of accurate information about the system in order to be successful. The methods presented in this work can operate with significantly less prior information. In the domain of formal synthesis for robotics, the assumptions of perfect sensing and perfect knowledge of system dynamics are unrealistic. To address this issue, we present control algorithms that use active estimation and reinforcement learning to mitigate the effects of uncertainty. In the domain of cyber-physical system analysis, we relax the assumption that the system model is known and identify system properties automatically from execution data. First, we address the problem of planning the path of a robot under temporal logic constraints (e.g. "avoid obstacles and periodically visit a recharging station") while simultaneously minimizing the uncertainty about the state of an unknown feature of the environment (e.g. locations of fires after a natural disaster). We present synthesis algorithms and evaluate them via simulation and experiments with aerial robots. Second, we develop a new specification language for tasks that require gathering information about and interacting with a partially observable environment, e.g. "Maintain localization error below a certain level while also avoiding obstacles." Third, we consider learning temporal logic properties of a dynamical system from a finite set of system outputs. For example, given maritime surveillance data we wish to find the specification that corresponds only to those vessels that are deemed law-abiding. Algorithms for performing off-line supervised and unsupervised learning and on-line supervised learning are presented. Finally, we consider the case in which we want to steer a system with unknown dynamics to satisfy a given temporal logic specification. We present a novel reinforcement learning paradigm to solve this problem. Our procedure gives "partial credit" for executions that almost satisfy the specification, which can lead to faster convergence rates and produce better solutions when the specification is not satisfiable.
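
    A hedged sketch of the "partial credit" idea mentioned at the end: score a rollout by a quantitative, robustness-style degree rather than a 0/1 satisfaction bit, so that near-misses still guide the search. The toy dynamics, specification and policies below are made up for illustration and are not the thesis's algorithms.

        import random

        def step(x, a):
            return x + a          # 1-D toy dynamics: action is +1 or -1

        def robustness_eventually_reach(trace, goal=5.0):
            # Quantitative "partial credit" for F(x >= goal): how close did the run get?
            return max(x - goal for x in trace)

        def rollout(policy, horizon=20):
            x, trace = 0.0, []
            for _ in range(horizon):
                x = step(x, policy())
                trace.append(x)
            return robustness_eventually_reach(trace)

        # Two candidate stochastic policies; the partial-credit return lets us rank
        # them even when neither reaches the goal on every run.
        lazy  = lambda: random.choice([+1.0, -1.0])
        eager = lambda: +1.0 if random.random() < 0.9 else -1.0
        for name, pol in [("lazy", lazy), ("eager", eager)]:
            avg = sum(rollout(pol) for _ in range(100)) / 100
            print(name, round(avg, 2))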

    From Uncertainty Data to Robust Policies for Temporal Logic Planning

    We consider the problem of synthesizing robust disturbance feedback policies for systems performing complex tasks. We formulate the tasks as linear temporal logic specifications and encode them into an optimization framework via mixed-integer constraints. Both the system dynamics and the specifications are known but affected by uncertainty. The distribution of the uncertainty is unknown; however, realizations can be obtained. We introduce a data-driven approach where the constraints are fulfilled for a set of realizations and provide probabilistic generalization guarantees as a function of the number of considered realizations. We use separate chance constraints for the satisfaction of the specification and operational constraints. This allows us to quantify their violation probabilities independently. We compute disturbance feedback policies as solutions of mixed-integer linear or quadratic optimization problems. By using feedback we can exploit information about past realizations and provide feasibility for a wider range of situations compared to static input sequences. We demonstrate the proposed method on two robust motion-planning case studies for autonomous driving.
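
    As a rough illustration of the data-driven ingredient only, here is a minimal scenario-style sketch, assuming numpy and cvxpy are available: it synthesizes an affine disturbance feedback policy u_t = k_t + sum_{s<t} K[t,s] * w_s that keeps a scalar state inside a box for every sampled disturbance realization. The actual paper encodes linear temporal logic specifications with mixed-integer constraints and provides probabilistic generalization guarantees, none of which is reproduced here.

        import numpy as np
        import cvxpy as cp

        T, N = 6, 30                               # horizon, number of sampled realizations
        rng = np.random.default_rng(0)
        W = rng.uniform(-0.3, 0.3, size=(N, T))    # disturbance realizations

        k = cp.Variable(T)                         # feedforward part of the policy
        K = cp.Variable((T, T))                    # feedback gains (only strictly lower part is used)
        constraints, cost = [], 0
        for i in range(N):
            x = 0.0                                # scalar dynamics x_{t+1} = x_t + u_t + w_t
            for t in range(T):
                u = k[t] + sum(K[t, s] * W[i, s] for s in range(t))   # causal feedback
                x = x + u + W[i, t]
                constraints += [x >= -0.5, x <= 0.5]                  # stand-in for the spec
                cost += cp.square(u)
        prob = cp.Problem(cp.Minimize(cost / N), constraints)
        prob.solve()
        print("status:", prob.status, " cost:", round(prob.value, 3))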

    Negotiating the Probabilistic Satisfaction of Temporal Logic Motion Specifications

    We propose a human-supervised control synthesis method for a stochastic Dubins vehicle such that the probability of satisfying a specification given as a formula in a fragment of Probabilistic Computational Tree Logic (PCTL) over a set of environmental properties is maximized. Under some mild assumptions, we construct a finite approximation for the motion of the vehicle in the form of a tree-structured Markov Decision Process (MDP). We introduce an efficient algorithm, which exploits the tree structure of the MDP, for synthesizing a control policy that maximizes the probability of satisfaction. For the proposed PCTL fragment, we define specification update rules that guarantee the increase (or decrease) of the satisfaction probability. We introduce an incremental algorithm for synthesizing an updated MDP control policy that reuses the initial solution. The initial specification can be updated, using the rules, until the supervisor is satisfied with both the updated specification and the corresponding satisfaction probability. We propose an offline and an online application of this method.
    Comment: 9 pages, 4 figures; the results in this paper were presented without proofs at the IEEE/RSJ International Conference on Intelligent Robots and Systems, November 3-7, 2013, at Tokyo Big Sight, Japan.
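
    As a side illustration of why the tree structure helps (not the paper's actual construction), the maximum probability of satisfying a reachability-style property on a tree-structured MDP can be computed in a single bottom-up pass; the node encoding below is hypothetical.

        def max_sat_prob(node):
            """node = ('leaf', satisfies_spec) or
                      ('choice', {action: [(prob, child), ...]})"""
            kind, payload = node
            if kind == 'leaf':
                return (1.0 if payload else 0.0), None
            best_p, best_a = -1.0, None
            for action, outcomes in payload.items():
                p = sum(pr * max_sat_prob(child)[0] for pr, child in outcomes)
                if p > best_p:
                    best_p, best_a = p, action
            return best_p, best_a

        leaf_ok, leaf_bad = ('leaf', True), ('leaf', False)
        tree = ('choice', {
            'turn_left':  [(0.8, leaf_ok), (0.2, leaf_bad)],
            'turn_right': [(0.6, leaf_ok), (0.4, ('choice', {'stop': [(1.0, leaf_ok)]}))],
        })
        print(max_sat_prob(tree))   # (1.0, 'turn_right'): the right branch can always recover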

    Qualitative Analysis of POMDPs with Temporal Logic Specifications for Robotics Applications

    We consider partially observable Markov decision processes (POMDPs), a standard framework in robotics applications for modeling the uncertainties present in the real world, together with temporal logic specifications. All specifications in linear-time temporal logic (LTL) can be expressed as parity objectives. We study the qualitative analysis problem for POMDPs with parity objectives, which asks whether there is a controller (policy) ensuring that the objective holds with probability 1 (almost-surely). While the qualitative analysis of POMDPs with parity objectives is undecidable, recent results show that when restricted to finite-memory policies the problem is EXPTIME-complete. While the problem is intractable in theory, we present a practical approach to solve the qualitative analysis problem. We designed several heuristics to deal with the exponential complexity, and have applied our implementation to a number of well-known POMDP examples from robotics applications. Our results provide the first practical approach to solving the qualitative analysis of robot motion planning with LTL properties in the presence of uncertainty.
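
    One standard ingredient in qualitative POMDP analysis is to track the support of the belief (the set of states the system might currently be in), which is finite even though beliefs themselves are continuous. The snippet below is a hedged sketch of that idea only; the paper's parity-objective machinery and heuristics are not reproduced, and the transition/observation encoding is hypothetical.

        def next_support(support, action, observation, trans, obs):
            """trans[s][action] = set of possible successor states,
               obs[s] = observation emitted in state s."""
            return frozenset(
                s2
                for s in support
                for s2 in trans[s][action]
                if obs[s2] == observation
            )

        trans = {'s0': {'go': {'s1', 's2'}}, 's1': {'go': {'s1'}}, 's2': {'go': {'s2'}}}
        obs = {'s0': 'start', 's1': 'corridor', 's2': 'corridor'}
        print(next_support(frozenset({'s0'}), 'go', 'corridor', trans, obs))
        # -> both 's1' and 's2' remain possible after observing 'corridor'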

    Toward Specification-Guided Active Mars Exploration for Cooperative Robot Teams

    As a step towards achieving autonomy in space exploration missions, we consider a cooperative robotics system consisting of a copter and a rover. The goal of the copter is to explore an unknown environment so as to maximize knowledge relevant to a science mission, expressed in linear temporal logic, that is to be executed by the rover. We model environmental uncertainty as a belief space Markov decision process and formulate the problem as a two-step stochastic dynamic program that we solve in a way that leverages the decomposed nature of the overall system. We demonstrate in simulations that the robot team makes intelligent decisions in the face of uncertainty.
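
    A minimal sketch, with entirely made-up numbers, of the two-step structure described above: the copter first chooses what to scout, and the rover's expected mission value is then evaluated under the belief updated by what the copter might observe.

        prior = {'crater': 0.5, 'ridge': 0.8}         # P(region is traversable)
        rover_value = {'crater': 10.0, 'ridge': 6.0}  # science value if the rover uses it

        def rover_expected_value(belief):
            # Step 2 (rover): plan against the current belief by picking the best region.
            return max(belief[r] * rover_value[r] for r in belief)

        def copter_value_of_scouting(region):
            # Step 1 (copter): scouting resolves `region` to traversable or not, and the
            # rover then replans with the updated belief in each outcome.
            p = prior[region]
            belief_yes = dict(prior, **{region: 1.0})
            belief_no = dict(prior, **{region: 0.0})
            return p * rover_expected_value(belief_yes) + (1 - p) * rover_expected_value(belief_no)

        best = max(prior, key=copter_value_of_scouting)
        print(best, round(copter_value_of_scouting(best), 2))   # which region to scout first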