
    Real-Time Motion Planning of Legged Robots: A Model Predictive Control Approach

    We introduce a real-time, constrained, nonlinear Model Predictive Control approach for the motion planning of legged robots. The proposed approach uses a constrained optimal control algorithm known as SLQ. We improve the efficiency of this algorithm by introducing a multi-processing scheme for estimating the value function in its backward pass, which has traditionally been computed as a single process. The resulting parallel SLQ algorithm can optimize over longer time horizons without a proportional increase in computation time. As a result, our MPC algorithm can generate optimized trajectories for the next few phases of the motion within only a few milliseconds, outperforming the state of the art by at least one order of magnitude. The performance of the approach is validated on a quadruped robot generating dynamic gaits such as trotting.
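    As a rough illustration of the parallelization idea in this abstract, the Python sketch below splits the Riccati-style backward recursion of a linear-quadratic subproblem across horizon segments, each seeded with the previous sweep's value-function estimate at its boundary. The matrices A, B, Q, R, the fixed-point sweeps over boundary values, and all function names are assumptions for illustration, not the authors' implementation.

    # Minimal sketch of a parallelized backward pass; illustrative only.
    import numpy as np
    from concurrent.futures import ProcessPoolExecutor

    def riccati_segment(args):
        """One backward Riccati sweep over a horizon segment, from a terminal seed."""
        A, B, Q, R, P_seed, length = args
        P = P_seed
        for _ in range(length):
            K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # LQ feedback gain
            P = Q + A.T @ P @ (A - B @ K)                      # value-function Hessian
        return P

    def parallel_backward_pass(A, B, Q, R, horizon, n_segments=4, sweeps=3):
        seg_len = horizon // n_segments
        seeds = [Q.copy() for _ in range(n_segments)]  # initial boundary guesses
        with ProcessPoolExecutor() as pool:
            for _ in range(sweeps):  # fixed-point iteration on the boundary values
                jobs = [(A, B, Q, R, seeds[k], seg_len) for k in range(n_segments)]
                results = list(pool.map(riccati_segment, jobs))
                # segment k+1's start value becomes segment k's terminal seed
                seeds = results[1:] + [Q.copy()]
        return results[0]  # value-function Hessian at the start of the horizon

    if __name__ == "__main__":
        n, m = 4, 2
        rng = np.random.default_rng(0)
        A = np.eye(n) + 0.01 * rng.standard_normal((n, n))
        B = 0.1 * rng.standard_normal((n, m))
        print(parallel_backward_pass(A, B, np.eye(n), np.eye(m), horizon=400).round(2))

    Because the segments only couple through their boundary seeds, each sweep runs in time proportional to the horizon divided by the number of workers, which is the mechanism behind optimizing longer horizons without a proportional increase in computation time.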

    Steering the Smart Grid

    Increasing energy prices and the greenhouse effect have led to greater awareness of the energy efficiency of electricity supply. In recent years, many technologies and optimization methodologies have been developed to increase efficiency, maintain grid stability, and support the large-scale introduction of renewable sources. In previous work, we showed the effectiveness of our three-step methodology for reaching these objectives, consisting of 1) offline prediction, 2) offline planning, and 3) online scheduling in combination with MPC. In this paper we analyse the best structure for distributing the steering signals in the third step. Simulations show that pricing signals work as well as on/off signals, but pricing signals are more general. Individual pricing signals per house perform better when prediction errors are small, while one global steering signal for a group of houses performs better when prediction errors are larger. The best hierarchical structure is to use consumption patterns on all levels except the lowest level and to derive the pricing signals in the lowest node of the tree.
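    A minimal sketch of the hierarchy favoured in this abstract: consumption patterns are summed bottom-up on all upper levels, and only the lowest node converts its pattern into per-house pricing signals. The proportional price rule, class names, and field names are assumptions for illustration, not the paper's controller.

    # Illustrative sketch only; assumed names and a toy pricing rule.
    from dataclasses import dataclass, field

    @dataclass
    class Node:
        children: list = field(default_factory=list)        # sub-nodes; empty for a house
        predicted_load: list = field(default_factory=list)  # predicted kW per time slot

    def aggregate(node):
        """Upper levels exchange consumption patterns: sum the children bottom-up."""
        if node.children:
            patterns = [aggregate(child) for child in node.children]
            node.predicted_load = [sum(slot) for slot in zip(*patterns)]
        return node.predicted_load

    def lowest_node_prices(load, base_price=0.20):
        """Lowest node only: derive a pricing signal, here proportional to load."""
        peak = max(load) if max(load) > 0 else 1.0
        return [base_price * slot / peak for slot in load]

    houses = [Node(predicted_load=[1.0, 3.0, 2.0]), Node(predicted_load=[2.0, 2.0, 4.0])]
    lowest = Node(children=houses)
    prices = lowest_node_prices(aggregate(lowest))  # [0.10, 0.166..., 0.20]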

    Combining Subgoal Graphs with Reinforcement Learning to Build a Rational Pathfinder

    In this paper, we present a hierarchical path-planning framework called SG-RL (subgoal graphs-reinforcement learning) to plan rational paths for agents maneuvering in continuous and uncertain environments. By "rational", we mean paths that are (1) planned efficiently, eliminating first-move lags, and (2) collision-free and smooth while satisfying the agents' kinematic constraints. SG-RL works in a two-level manner, as sketched below. At the first level, SG-RL uses a geometric path-planning method, Simple Subgoal Graphs (SSG), to efficiently find optimal abstract paths, also called subgoal sequences. At the second level, SG-RL uses a reinforcement learning method, Least-Squares Policy Iteration (LSPI), to learn near-optimal motion-planning policies that generate kinematically feasible and collision-free trajectories between adjacent subgoals. The first advantage of the proposed method is that SSG mitigates the sparse-reward and local-minimum-trap limitations of RL agents, so LSPI can be used to generate paths in complex environments. The second advantage is that when the environment changes slightly (e.g., unexpected obstacles appear), SG-RL does not need to reconstruct subgoal graphs and replan subgoal sequences with SSG, since LSPI can handle such uncertainties by exploiting its generalization ability. Simulation experiments in representative scenarios demonstrate that, compared with existing methods, SG-RL works well on large-scale maps, with relatively low action-switching frequencies and shorter path lengths, and can deal with small changes in the environment. We further demonstrate that the design of reward functions and the choice of training environments are important factors for learning feasible policies.
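    The two-level loop can be pictured with the following Python sketch, where ssg_search stands in for the SSG search and lspi_policy for the learned LSPI policy; these names and signatures are assumed for illustration and are not the paper's code.

    # Illustrative sketch of the SG-RL two-level loop; assumed interfaces.
    def sg_rl_navigate(start, goal, ssg_search, lspi_policy, step, reached,
                       max_steps=10_000):
        """Level 1 yields an abstract subgoal sequence; level 2 steers between subgoals."""
        subgoals = ssg_search(start, goal)   # level 1: optimal abstract path (SSG)
        state = start
        for subgoal in subgoals:             # level 2: motion between adjacent subgoals
            for _ in range(max_steps):
                if reached(state, subgoal):
                    break
                action = lspi_policy(state, subgoal)  # near-optimal learned policy
                state = step(state, action)           # kinematically feasible transition
            else:
                raise RuntimeError("low-level policy failed to reach a subgoal")
        return state

    Note how the division of labour in the abstract shows up here: a slightly changed environment only perturbs the inner loop, which the generalizing policy absorbs, while the subgoal sequence from the outer level can be reused unchanged.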