
    Trajectory Reconstruction Techniques for Evaluation of ATC Systems

    This paper focuses on trajectory reconstruction techniques for evaluating ATC systems using recorded real data from opportunity traffic. We analyze different alternatives for this problem, from traditional interpolation approaches based on curve fitting to our proposed schemes based on modeling regular motion patterns with optimal smoothers. Extracting trajectory features such as motion type (or mode of flight), maneuver profile, and geometric parameters allows a more accurate reconstruction of the curve and a detailed evaluation of the data processors used in the ATC centre. The different alternatives are compared using performance results obtained with simulated and real data sets.
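    As an illustration of the smoother-based alternative (not the authors' actual implementation), the sketch below reconstructs a noisy 1D position track with a constant-velocity Kalman filter followed by a Rauch-Tung-Striebel backward pass; the noise parameters and the toy data are assumptions made purely for the example.

```python
# Minimal sketch: reconstruct a smoothed 1D position track from noisy plots
# with a constant-velocity Kalman filter plus an RTS backward smoother.
# All model parameters (dt, process/measurement noise) are illustrative.
import numpy as np

def rts_smooth(z, dt=1.0, q=0.1, r=4.0):
    F = np.array([[1.0, dt], [0.0, 1.0]])        # constant-velocity dynamics
    H = np.array([[1.0, 0.0]])                   # only position is observed
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])          # process noise covariance
    R = np.array([[r]])                          # measurement noise covariance

    n = len(z)
    x_f = np.zeros((n, 2)); P_f = np.zeros((n, 2, 2))   # filtered estimates
    x_p = np.zeros((n, 2)); P_p = np.zeros((n, 2, 2))   # one-step predictions
    x, P = np.array([z[0], 0.0]), np.eye(2) * 10.0

    # Forward Kalman filter pass
    for k in range(n):
        if k > 0:
            x, P = F @ x, F @ P @ F.T + Q
        x_p[k], P_p[k] = x, P
        y = z[k] - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + (K @ y).ravel()
        P = (np.eye(2) - K @ H) @ P
        x_f[k], P_f[k] = x, P

    # Backward Rauch-Tung-Striebel pass
    x_s = x_f.copy()
    for k in range(n - 2, -1, -1):
        C = P_f[k] @ F.T @ np.linalg.inv(P_p[k + 1])
        x_s[k] = x_f[k] + C @ (x_s[k + 1] - x_p[k + 1])
    return x_s[:, 0]   # smoothed positions

if __name__ == "__main__":
    t = np.arange(100)
    truth = 0.5 * t                                   # toy straight-line motion
    plots = truth + np.random.normal(0, 2.0, size=t.shape)
    print(rts_smooth(plots)[:5])
```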

    Developmental acquisition of entrainment skills in robot swinging using van der Pol oscillators

    In this study we investigated the effects of different morphological configurations on a robot swinging task using van der Pol oscillators. The task was examined using two separate degrees of freedom (DoF), both in the presence and absence of neural entrainment. Neural entrainment stabilises the system, reduces the time to steady state, and relaxes the requirement for strong coupling with the environment in order to achieve mechanical entrainment. We found that staged release of the distal DoF has no benefit over using both DoF from the onset of the experiment. On the contrary, it is less efficient, both in the time needed to reach a stable oscillatory regime and in the maximum amplitude it can achieve. The same neural architecture also achieves neuromechanical entrainment for a robotic walking task.
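    A minimal sketch of the kind of coupling involved (not the paper's controller): a van der Pol oscillator whose output drives a single pendulum-like DoF, with the joint angle fed back into the oscillator so that the neural and mechanical dynamics can entrain. The gains, feedback constant, and pendulum parameters below are arbitrary placeholders.

```python
# Illustrative van der Pol oscillator coupled to one pendulum-like DoF,
# integrated with simple Euler steps. All constants are placeholder values.
import numpy as np

def simulate(T=20.0, dt=0.001, mu=1.0, k_feedback=0.5, k_drive=2.0):
    x, xd = 0.1, 0.0            # oscillator state (output, derivative)
    th, thd = 0.0, 0.0          # joint state (angle, angular velocity)
    g, L, damping = 9.81, 0.5, 0.1
    history = []
    for _ in range(int(T / dt)):
        # van der Pol dynamics with sensory feedback from the joint angle:
        # x'' = mu * (1 - x^2) * x' - x + k_feedback * theta
        xdd = mu * (1.0 - x**2) * xd - x + k_feedback * th
        # Pendulum-like DoF driven by the oscillator output as a torque
        thdd = -(g / L) * np.sin(th) - damping * thd + k_drive * x
        x, xd = x + xd * dt, xd + xdd * dt
        th, thd = th + thd * dt, thd + thdd * dt
        history.append((x, th))
    return np.array(history)

if __name__ == "__main__":
    traj = simulate()
    print("final oscillator output and joint angle:", traj[-1])
```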

    Overcoming Exploration in Reinforcement Learning with Demonstrations

    Exploration in environments with sparse rewards has been a persistent problem in reinforcement learning (RL). Many tasks are natural to specify with a sparse reward, and manually shaping a reward function can result in suboptimal performance. However, finding a non-zero reward becomes exponentially more difficult as the task horizon or action dimensionality grows. This puts many real-world tasks out of practical reach of RL methods. In this work, we use demonstrations to overcome the exploration problem and successfully learn to perform long-horizon, multi-step robotics tasks with continuous control, such as stacking blocks with a robot arm. Our method, which builds on Deep Deterministic Policy Gradients and Hindsight Experience Replay, provides an order-of-magnitude speedup over RL on simulated robotics tasks. It is simple to implement and makes only the additional assumption that we can collect a small set of demonstrations. Furthermore, our method is able to solve tasks not solvable by either RL or behavior cloning alone, and often ends up outperforming the demonstrator policy.
    Comment: 8 pages, ICRA 2018
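    A hedged sketch of the actor update used by this family of methods, assuming PyTorch and pre-built `actor` and `critic` networks (all names here are placeholders, not the authors' code): the usual DDPG policy-gradient term is combined with a behaviour-cloning loss on demonstration pairs, gated by a Q-filter so cloning only applies where the critic rates the demonstrator's action at least as highly as the policy's own.

```python
# Sketch of a DDPG actor loss augmented with demonstration cloning and a
# Q-filter. `actor`, `critic`, and the tensors below are assumed placeholders.
import torch
import torch.nn as nn

def actor_loss(actor, critic, batch_states, demo_states, demo_actions,
               bc_weight=1.0):
    # Standard DDPG objective: maximise Q(s, pi(s)) over replay-buffer states
    pg_loss = -critic(batch_states, actor(batch_states)).mean()

    # Behaviour cloning on demonstration pairs, masked by the Q-filter:
    # keep only demo actions the critic rates at least as highly as the
    # policy's own action in the same state.
    pi_demo = actor(demo_states)
    with torch.no_grad():
        mask = (critic(demo_states, demo_actions) >=
                critic(demo_states, pi_demo)).float()
    bc_loss = (mask * (pi_demo - demo_actions).pow(2)
               .sum(dim=-1, keepdim=True)).mean()

    return pg_loss + bc_weight * bc_loss

if __name__ == "__main__":
    obs_dim, act_dim = 10, 4
    actor = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                          nn.Linear(64, act_dim), nn.Tanh())

    class Critic(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(obs_dim + act_dim, 64),
                                     nn.ReLU(), nn.Linear(64, 1))
        def forward(self, s, a):
            return self.net(torch.cat([s, a], dim=-1))

    critic = Critic()
    states = torch.randn(32, obs_dim)
    demo_s, demo_a = torch.randn(16, obs_dim), torch.rand(16, act_dim) * 2 - 1
    print(actor_loss(actor, critic, states, demo_s, demo_a))
```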