5 research outputs found

    Regular Policies in Abstract Dynamic Programming

    We consider challenging dynamic programming models in which the associated Bellman equation and the value and policy iteration algorithms commonly exhibit complex and even pathological behavior. Our analysis is based on the new notion of regular policies. These are policies that are well-behaved with respect to value and policy iteration, and are patterned after proper policies, which are central in the theory of stochastic shortest path problems. We show that the optimal cost function over regular policies may have favorable value and policy iteration properties that the optimal cost function over all policies need not have. We accordingly develop a unifying methodology to address long-standing analytical and algorithmic issues in broad classes of undiscounted models, including stochastic and minimax shortest path problems, as well as positive cost, negative cost, risk-sensitive, and multiplicative cost problems.
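
    As a rough illustration of the value iteration machinery that regular and proper policies are meant to keep well behaved, the following minimal Python sketch runs value iteration on a small, hypothetical stochastic shortest path instance. The transition data, costs, and function names are illustrative assumptions, not taken from the paper:

        import numpy as np

        # Toy SSP instance: states 0-2 are non-terminal, state 3 is the cost-free
        # absorbing terminal state, so every stationary policy here behaves like a
        # "proper" policy that reaches the terminal state with probability one.
        n_states, n_actions = 4, 2
        costs = np.array([[2.0, 0.5],
                          [1.0, 3.0],
                          [0.5, 1.0],
                          [0.0, 0.0]])                       # terminal state is cost-free

        P = np.zeros((n_actions, n_states, n_states))        # P[a, s, s']
        P[0] = [[0.1, 0.6, 0.2, 0.1],
                [0.0, 0.2, 0.5, 0.3],
                [0.0, 0.0, 0.3, 0.7],
                [0.0, 0.0, 0.0, 1.0]]
        P[1] = [[0.3, 0.3, 0.3, 0.1],
                [0.1, 0.1, 0.4, 0.4],
                [0.0, 0.1, 0.2, 0.7],
                [0.0, 0.0, 0.0, 1.0]]

        def value_iteration(P, costs, tol=1e-8, max_iter=10_000):
            """Repeatedly apply T J(s) = min_a [ c(s,a) + sum_s' P(s'|s,a) J(s') ]."""
            J = np.zeros(costs.shape[0])
            for _ in range(max_iter):
                Q = costs + np.einsum("asx,x->sa", P, J)     # Q[s, a]
                J_new = Q.min(axis=1)
                if np.max(np.abs(J_new - J)) < tol:
                    break
                J = J_new
            return J, Q.argmin(axis=1)

        J_star, greedy_policy = value_iteration(P, costs)
        print("optimal cost-to-go:", J_star, "greedy policy:", greedy_policy)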

    Deep Reinforcement Learning for Event-Triggered Control

    Event-triggered control (ETC) methods can achieve high-performance control with significantly fewer samples than the usual time-triggered methods. These frameworks are often based on a mathematical model of the system and on specific designs of the controller and event trigger. In this paper, we show how deep reinforcement learning (DRL) algorithms can be leveraged to simultaneously learn control and communication behavior from scratch, and we present a DRL approach that is particularly suitable for ETC. To our knowledge, this is the first work to apply DRL to ETC. We validate the approach on multiple control tasks and compare it to model-based event-triggering frameworks. In particular, we demonstrate that, unlike many model-based ETC designs, it can be applied straightforwardly to nonlinear systems.
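
    A minimal sketch of the resource-aware idea described above, assuming a generic plant simulator that exposes reset() and step(u) -> (obs, reward, done): the agent's action is split into a control input and a binary transmit decision, the previous input is held when no transmission occurs, and each transmission is penalized, so a standard DRL algorithm can learn control and communication jointly. The wrapper name, penalty, and interface are illustrative assumptions, not the paper's implementation:

        class EventTriggeredWrapper:
            """Wrap a plant simulator so a DRL agent jointly learns what input to
            apply and when it is worth transmitting that input to the actuator."""

            def __init__(self, env, comm_penalty=0.1):
                self.env = env                      # assumed to expose reset() and step(u)
                self.comm_penalty = comm_penalty    # price charged per transmission
                self.last_u = 0.0                   # input held by the actuator (zero-order hold)

            def reset(self):
                self.last_u = 0.0
                return self.env.reset()

            def step(self, action):
                u, communicate = action             # control input and binary trigger decision
                if communicate:
                    self.last_u = u                 # new input reaches the actuator
                obs, reward, done = self.env.step(self.last_u)
                if communicate:
                    reward -= self.comm_penalty     # discourage unnecessary communication
                return obs, reward, done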

    Combined Robust Optimal Design, Path and Motion Planning for Unmanned Aerial Vehicle Systems Subject to Uncertainty

    Unmanned system performance depends heavily both on how the system is planned to be operated and on the design of the unmanned system itself, and both aspects can be strongly affected by uncertainty. This dissertation presents methods for simultaneously optimizing both of these aspects of an unmanned system when it is subject to uncertainty. This simultaneous optimization under uncertainty of unmanned system design and planning is demonstrated in the context of optimizing the design and flight path of an unmanned aerial vehicle (UAV) subject to an unknown set of wind conditions. The dissertation explores optimizing the path of the UAV down to the level of determining flight trajectories that account for the UAV's dynamics (motion planning) while simultaneously optimizing its design. Uncertainty is considered from the robust (no known probability distribution) standpoint, with the capability to account for a general set of uncertain parameters that affect the UAV's performance. New methods are investigated for solving motion planning problems for UAVs and are applied to the problem of mitigating the risk posed by UAVs flying over inhabited areas. A new approach to solving robust optimization problems is developed, which uses a combination of random sampling and worst-case analysis. This robust optimization approach is shown to solve robust optimization problems efficiently, even where existing robust optimization methods fail. A new approach for robust optimal motion planning that considers a “black-box” uncertainty model is developed based on the new robust optimization approach, and it is shown to perform better under uncertainty than methods that do not use a “black-box” uncertainty model. A new method is developed for solving design and path planning optimization problems for unmanned systems with discrete (graph-based) path representations, which is then extended to motion planning problems. This design and motion planning approach is used within the new robust optimization approach to solve a robust design and motion planning optimization problem for a UAV. Results comparing these methods against a design study using a design of experiments (DOE) show that the proposed methods can be less computationally expensive than existing methods for design and motion planning problems.
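
    A minimal sketch of the sample-then-worst-case loop described above: the design is optimized against a growing finite set of uncertainty scenarios, and after each design update a black-box search (here plain random sampling) looks for the wind realization that degrades the current design the most, which is then added to the scenario set. The toy performance model, parameter ranges, and function names are illustrative assumptions, not the dissertation's code:

        import numpy as np

        rng = np.random.default_rng(0)

        def performance(design, wind):
            # hypothetical black-box cost: deviation from a nominal design plus a
            # wind-dependent term (stand-in for a UAV flight-performance model)
            return (design - 1.0) ** 2 + wind * design ** 2

        def optimize_against(scenarios, candidates):
            # choose the candidate whose worst cost over the known scenarios is smallest
            worst = [max(performance(d, w) for w in scenarios) for d in candidates]
            return candidates[int(np.argmin(worst))]

        def find_worst_case(design, n_samples=200, wind_range=(0.0, 0.5)):
            # random-sampling worst-case search, treating performance() as a black box
            samples = rng.uniform(*wind_range, size=n_samples)
            costs = [performance(design, w) for w in samples]
            return samples[int(np.argmax(costs))]

        candidates = np.linspace(0.0, 2.0, 41)   # coarse grid over a single design variable
        scenarios = [0.0]                        # start from the nominal (no-wind) scenario
        for _ in range(5):
            design = optimize_against(scenarios, candidates)
            scenarios.append(find_worst_case(design))

        print("robust design ~", design, "against scenarios", np.round(scenarios, 3))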