
    Topology-Guided Path Integral Approach for Stochastic Optimal Control in Cluttered Environment

    This paper addresses planning and control of robot motion under uncertainty, formulated as a continuous-time, continuous-space stochastic optimal control problem, by developing a topology-guided path integral control method. The path integral control framework, which forms the backbone of the proposed method, rewrites the Hamilton-Jacobi-Bellman equation as a statistical inference problem; the resulting inference problem is solved by a sampling procedure that computes the distribution of controlled trajectories around the trajectory generated by the passive dynamics. For motion control of robots in a highly cluttered environment, however, this sampling can easily be trapped in a local minimum unless the sample size is very large, since the global optimality of local minima depends on the degree of uncertainty. Thus, a homology-embedded sampling-based planner that identifies many (potentially local-minimum) trajectories in different homology classes is developed to aid the sampling process. In combination with a receding-horizon application of the optimal control, the proposed method produces dynamically feasible and collision-free motion plans without being trapped in a local minimum. Numerical examples on a synthetic toy problem and on quadrotor control in a complex obstacle field demonstrate the validity of the proposed method.
    Comment: arXiv admin note: text overlap with arXiv:1510.0534
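The sampling step at the heart of path integral control — weighting sampled trajectories by the exponential of their negative cost and averaging the control perturbations — can be sketched as follows. This is a minimal, illustrative version (the function names, the softmin weighting, and the array shapes are our own assumptions; the paper's full method adds the topology-guided, homology-aware sampling on top of this):

```python
import numpy as np

def path_integral_weights(costs, lam=1.0):
    """Softmin weights over sampled trajectory costs: w_i proportional to exp(-S_i / lam)."""
    s = np.asarray(costs, dtype=float)
    s = s - s.min()                  # shift for numerical stability
    w = np.exp(-s / lam)
    return w / w.sum()

def path_integral_update(u_nominal, noise, costs, lam=1.0):
    """One path-integral iteration: move the nominal control sequence toward a
    cost-weighted average of the sampled control perturbations."""
    w = path_integral_weights(costs, lam)
    # noise has shape (K samples, T steps, m inputs); contract over the sample axis
    return u_nominal + np.tensordot(w, noise, axes=(0, 0))
```

Low-cost samples dominate the average, so the nominal control drifts toward the best-performing perturbations — which is exactly why, in a cluttered environment, all samples falling in one homology class can trap the update in a local minimum.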

    Graceful Navigation for Mobile Robots in Dynamic and Uncertain Environments.

    The ability to navigate in everyday environments is a fundamental and necessary skill for any autonomous mobile agent that is intended to work with human users. The presence of pedestrians and other dynamic objects, however, makes the environment inherently dynamic and uncertain. To navigate in such environments, an agent must reason about the near future and make an optimal decision at each time step so that it can move safely toward the goal. Furthermore, for any application intended to carry passengers, it must also be able to move smoothly and comfortably, and the robot behavior needs to be customizable to match the preferences of individual users. Despite decades of progress in the field of motion planning and control, this remains a difficult challenge with existing methods. In this dissertation, we show that safe, comfortable, and customizable mobile robot navigation in dynamic and uncertain environments can be achieved via stochastic model predictive control. We view the problem of navigation in dynamic and uncertain environments as a continuous decision-making process, where an agent with short-term predictive capability reasons about its situation and makes an informed decision at each time step. The problem of robot navigation in dynamic and uncertain environments is formulated as an on-line, finite-horizon policy and trajectory optimization problem under uncertainty. With our formulation, planning and control become fully integrated, which allows direct optimization of the performance measure. Furthermore, with our approach the problem becomes easy to solve, which allows our algorithm to run in real time on a single core of a typical laptop with off-the-shelf optimization packages. The work presented in this thesis extends the state-of-the-art in analytic control of mobile robots, sampling-based optimal path planning, and stochastic model predictive control.
We believe that our work is a significant step toward safe and reliable autonomous navigation that is acceptable to human users.
PhD, Mechanical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
http://deepblue.lib.umich.edu/bitstream/2027.42/120760/1/jongjinp_1.pd
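The continuous decision-making process described above — plan over a finite horizon, execute the first decision, observe, and re-plan — is the generic receding-horizon loop. A minimal sketch with a hypothetical one-dimensional system (the planner and dynamics here are toy stand-ins, not the dissertation's stochastic MPC formulation):

```python
def receding_horizon_control(x0, plan, step, n_steps):
    """Generic receding-horizon loop: plan over a finite horizon from the
    current state, apply only the first control, then re-plan."""
    x = x0
    trace = [x]
    for _ in range(n_steps):
        u_seq = plan(x)           # finite-horizon trajectory optimization
        x = step(x, u_seq[0])     # execute the first control only
        trace.append(x)
    return trace

# Toy 1-D example: drive toward a goal with a saturated proportional "planner".
goal = 5.0
plan = lambda x: [max(-1.0, min(1.0, goal - x))] * 10   # horizon of 10 steps
step = lambda x, u: x + u
trace = receding_horizon_control(0.0, plan, step, 8)
```

Re-planning at every step is what lets the agent react to newly observed pedestrian motion rather than committing to a stale trajectory.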

    Receding-horizon motion planning of quadrupedal robot locomotion

    Quadrupedal robots are designed to offer efficient and robust mobility on uneven terrain. This thesis investigates combining numerical optimization and machine learning methods to achieve interpretable kinodynamic planning of natural and agile locomotion. The proposed algorithm, called Receding-Horizon Experience-Controlled Adaptive Legged Locomotion (RHECALL), uses nonlinear programming (NLP) with learned initialization to produce long-horizon, high-fidelity, terrain-aware, whole-body trajectories. RHECALL has been implemented and validated on the ANYbotics ANYmal B and C quadrupeds on complex terrain. The proposed optimal control problem formulation uses the single-rigid-body dynamics (SRBD) model and adopts a direct collocation transcription method which enables the discovery of aperiodic contact sequences. To generate reliable trajectories, we propose fast-to-compute analytical costs that leverage the discretization and terrain-dependent kinematic constraints. To extend the formulation to receding-horizon planning, we propose a segmentation approach with asynchronous centre of mass (COM) and end-effector timings and a heuristic initialization scheme which reuses the previous solution. We integrate real-time 2.5D perception data for online foothold selection. Additionally, we demonstrate that a learned stability criterion can be incorporated into the planning framework. To accelerate the convergence of the NLP solver to locally optimal solutions, we propose data-driven initialization schemes trained using supervised and unsupervised behaviour cloning. We demonstrate the computational advantage of the schemes and the ability to leverage the latent space to reconstruct dynamic segments of plans which are several seconds long. Finally, in order to apply RHECALL to quadrupeds with significant leg inertias, we derive the more accurate lumped-leg single-rigid-body dynamics (LL-SRBD) and centroidal dynamics (CD) models and their first-order partial derivatives.
    To facilitate intuitive usage of costs, constraints and initializations, we parameterize these models by Euclidean-space variables. We show that the models can shape the rotational inertia of the robot, which offers the potential to further improve agility.
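The direct collocation transcription mentioned above converts the continuous dynamics into algebraic "defect" constraints that the NLP solver drives to zero. A minimal sketch using trapezoidal collocation (the function name and the single-integrator example are illustrative assumptions, not RHECALL's actual implementation, which uses the SRBD model):

```python
import numpy as np

def trapezoidal_defects(x, u, f, h):
    """Defect constraints of trapezoidal direct collocation:
    d_k = x[k+1] - x[k] - h/2 * (f(x[k], u[k]) + f(x[k+1], u[k+1])).
    A dynamically feasible trajectory makes every defect zero."""
    return np.array([
        x[k + 1] - x[k] - 0.5 * h * (f(x[k], u[k]) + f(x[k + 1], u[k + 1]))
        for k in range(len(x) - 1)
    ])
```

Because both states and controls at every knot point are decision variables, the solver is free to discover contact sequences (here, control patterns) that were not prescribed in advance — the property the thesis exploits for aperiodic gaits.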

    Reinforcement Learning: A Survey

    This paper surveys the field of reinforcement learning from a computer-science perspective. It is written to be accessible to researchers familiar with machine learning. Both the historical basis of the field and a broad selection of current work are summarized. Reinforcement learning is the problem faced by an agent that learns behavior through trial-and-error interactions with a dynamic environment. The work described here has a resemblance to work in psychology, but differs considerably in the details and in the use of the word "reinforcement." The paper discusses central issues of reinforcement learning, including trading off exploration and exploitation, establishing the foundations of the field via Markov decision theory, learning from delayed reinforcement, constructing empirical models to accelerate learning, making use of generalization and hierarchy, and coping with hidden state. It concludes with a survey of some implemented systems and an assessment of the practical utility of current methods for reinforcement learning.
    Comment: See http://www.jair.org/ for any accompanying file
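Two of the central issues the survey names — the exploration/exploitation trade-off and learning from delayed reinforcement — are concretely embodied by tabular Q-learning with epsilon-greedy action selection. A minimal sketch on a hypothetical chain MDP (the environment and hyperparameters are our own illustrative choices):

```python
import random

def q_learning(n_states, n_actions, step, episodes=500,
               alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning: epsilon-greedy exploration plus a temporal-difference
    update that propagates delayed reward back through the state space."""
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            if rng.random() < eps:
                a = rng.randrange(n_actions)                        # explore
            else:
                a = max(range(n_actions), key=lambda i: Q[s][i])    # exploit
            s2, r, done = step(s, a)
            target = r if done else r + gamma * max(Q[s2])
            Q[s][a] += alpha * (target - Q[s][a])                   # TD update
            s = s2
    return Q

# Toy chain MDP: action 0 moves left, action 1 moves right;
# reward 1 only on reaching state 4 (delayed reinforcement).
def chain_step(s, a):
    s2 = min(4, s + 1) if a == 1 else max(0, s - 1)
    return s2, (1.0 if s2 == 4 else 0.0), s2 == 4

Q = q_learning(5, 2, chain_step)
```

The discount factor gamma is what carries the single terminal reward back to early states, so the greedy policy learns to move right long before any reward is visible.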

    Sensor Fusion and Obstacle Avoidance for an Unmanned Ground Vehicle

    In recent years, the capabilities and potential value of unmanned autonomous systems (UAS) to perform an extensive variety of missions have significantly increased. It is well understood that there are various challenges associated with the realization of autonomous operations in complex urban environments. These difficulties include the requirement for precision guidance and control in potentially GPS-denied conditions as well as the need to sense and avoid stationary and moving obstructions within the scene. The small size of some of these vehicles restricts the size, weight and power consumption of the sensor payload and onboard computational processing that such a vehicle can accommodate. This thesis analyzes the development and implementation of terrain mapping, path planning and control algorithms on an unmanned ground vehicle. Data from GPS, IMU and LIDAR sensors are fused in order to compute and update a dense 3D point cloud that is used by an implicit terrain algorithm to provide detailed mathematical representations of complex 3D structures generally found in urban environments. A receding-horizon path planning algorithm is employed to adaptively produce a kinematically feasible path for the unmanned ground vehicle. This path planning algorithm incorporates obstacle avoidance constraints and provides a set of waypoints to be followed by the unmanned ground vehicle. A waypoint controller is designed and implemented to enable the vehicle to follow the waypoints from the path planner. Open-loop experiments are provided with an unmanned ground vehicle in order to demonstrate terrain generation with real sensor data. Closed-loop results are then presented for a simulated ground vehicle in order to demonstrate the performance of the receding-horizon path planning and control algorithms using the terrain map generated from the open-loop experiments.
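One simple way to turn a fused 3D point cloud into a terrain representation a planner can query is a 2.5D elevation grid. A minimal sketch (the binning scheme, parameters, and function name are illustrative assumptions, not the thesis's implicit-terrain algorithm):

```python
import numpy as np

def elevation_map(points, cell=0.5, extent=10.0):
    """Bin (x, y, z) points into a 2.5D grid, keeping the maximum height
    per cell -- a common terrain representation built from fused LIDAR data."""
    n = int(extent / cell)
    grid = np.full((n, n), -np.inf)   # -inf marks cells with no returns
    for x, y, z in points:
        i, j = int(x // cell), int(y // cell)
        if 0 <= i < n and 0 <= j < n:
            grid[i, j] = max(grid[i, j], z)
    return grid
```

Keeping the per-cell maximum is a conservative choice for obstacle avoidance: the planner treats each cell as at least as tall as the highest return seen in it.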

    Learning vision-based agile flight: From simulation to the real world


    Formal methods paradigms for estimation and machine learning in dynamical systems

    Formal methods are widely used in engineering to determine whether a system exhibits a certain property (verification) or to design controllers that are guaranteed to drive the system to achieve a certain property (synthesis). Most existing techniques require a large amount of accurate information about the system in order to be successful. The methods presented in this work can operate with significantly less prior information. In the domain of formal synthesis for robotics, the assumptions of perfect sensing and perfect knowledge of system dynamics are unrealistic. To address this issue, we present control algorithms that use active estimation and reinforcement learning to mitigate the effects of uncertainty. In the domain of cyber-physical system analysis, we relax the assumption that the system model is known and identify system properties automatically from execution data. First, we address the problem of planning the path of a robot under temporal logic constraints (e.g. "avoid obstacles and periodically visit a recharging station") while simultaneously minimizing the uncertainty about the state of an unknown feature of the environment (e.g. locations of fires after a natural disaster). We present synthesis algorithms and evaluate them via simulation and experiments with aerial robots. Second, we develop a new specification language for tasks that require gathering information about and interacting with a partially observable environment, e.g. "Maintain localization error below a certain level while also avoiding obstacles." Third, we consider learning temporal logic properties of a dynamical system from a finite set of system outputs. For example, given maritime surveillance data we wish to find the specification that corresponds only to those vessels that are deemed law-abiding. Algorithms for performing off-line supervised and unsupervised learning and on-line supervised learning are presented.
    Finally, we consider the case in which we want to steer a system with unknown dynamics to satisfy a given temporal logic specification. We present a novel reinforcement learning paradigm to solve this problem. Our procedure gives "partial credit" for executions that almost satisfy the specification, which can lead to faster convergence rates and produce better solutions when the specification is not satisfiable.
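The "partial credit" idea — scoring how close an execution comes to satisfying a temporal logic specification — is commonly formalized as quantitative robustness. A minimal sketch for a reach specification (this is a standard robustness definition offered for illustration, not necessarily the authors' exact formulation):

```python
import math

def reach_robustness(traj, goal, radius):
    """Robustness of the spec 'eventually reach the goal region':
    max over time of (radius - distance to goal). Positive iff the spec
    is satisfied; near-misses score higher than far misses, giving a
    graded 'partial credit' signal a learner can follow."""
    return max(radius - math.dist(p, goal) for p in traj)
```

Using this score as a reward turns the binary satisfied/violated outcome into a smooth gradient of progress, which is what enables faster convergence and sensible behavior even when the specification is unsatisfiable.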