2 research outputs found

    Inverse optimal homography-based visual servo control via an uncalibrated camera

    No full text

    Safety-aware model-based reinforcement learning using barrier transformation

    The ability to learn and execute optimal control policies safely is critical to the realization of complex autonomy, especially where task restarts are not available and/or when the systems are safety-critical. Safety requirements are often expressed in terms of state and/or control constraints. Methods such as barrier transformation and control barrier functions have been successfully used for safe learning in systems under state and/or control constraints, in conjunction with model-based reinforcement learning to learn the optimal control policy. However, existing barrier-based safe learning methods rely on fully known models and full state feedback. In this thesis, two safe model-based reinforcement learning techniques are developed. The first utilizes a novel filtered concurrent learning method to realize simultaneous learning and control in the presence of model uncertainties for safety-critical systems; the second utilizes a novel dynamic state estimator to realize simultaneous learning and control for safety-critical systems with a partially observable state. The applicability of the developed techniques is demonstrated through simulations, and to illustrate their effectiveness, comparative simulations are presented wherever alternative methods exist to solve the problem under consideration. The thesis concludes with a discussion of the limitations of the developed techniques. Extensions of the developed techniques are also proposed, along with possible approaches to achieve them.
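    To make the control-barrier-function idea mentioned above concrete, the sketch below shows a minimal safety filter on a hypothetical one-dimensional single integrator (this toy system, its bound `x_max`, and the gain `alpha` are illustrative assumptions, not the systems or methods studied in the thesis). With dynamics ẋ = u and state constraint x ≤ x_max, the barrier function h(x) = x_max − x is positive on the safe set, and enforcing ḣ + αh ≥ 0 reduces to the simple control bound u ≤ αh(x):

    ```python
    # Hypothetical 1-D illustration of a control-barrier-function safety
    # filter: single integrator x_dot = u, state constraint x <= x_max.

    def cbf_filter(x, u_nominal, x_max=1.0, alpha=2.0):
        """Minimally modify u_nominal so the CBF condition holds.

        For x_dot = u and h(x) = x_max - x, we have h_dot = -u, so the
        condition h_dot + alpha*h >= 0 reduces to u <= alpha*h(x).
        Clipping the nominal input to that bound keeps h(x) >= 0.
        """
        h = x_max - x
        return min(u_nominal, alpha * h)

    def simulate(x0=0.0, u_nominal=1.0, dt=0.01, steps=500):
        """Drive the integrator toward the constraint under the filter."""
        x = x0
        for _ in range(steps):
            u = cbf_filter(x, u_nominal)
            x += dt * u  # forward-Euler step of x_dot = u
        return x
    ```

    Running `simulate()` pushes the state toward `x_max` under a constant nominal input, but the filtered input shrinks as h(x) → 0, so the state approaches the bound from below without crossing it. Barrier-based safe learning methods apply this same invariance condition while a reinforcement-learning controller is being trained, rather than around a fixed nominal policy.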