
    Learning Expressive Quadrupedal Locomotion Guided by Kinematic Trajectory Generator

    Biological quadrupedal systems exhibit a wide range of locomotion skills, whereas robotic quadrupedal systems only exhibit a limited one. Robots can be very robust at a single locomotion task, and state-of-the-art algorithms are typically designed for walking gaits or rely on individual policies trained for a single skill. This thesis studied the design of an expressive locomotion controller (different locomotion skills in one policy) for a quadrupedal robot. Different approaches based on Deep Reinforcement Learning were studied, given their recent successes in robotics and computer animation. A reference-free and a reference-based approach relying solely on reward shaping, i.e. specification of the motion through the reward, were implemented; both produced walking gaits in simulation. However, the motions produced by the reference-based approach had limited footstep height and balance issues, while the reference-free approach yielded higher footsteps and fewer base oscillations. Both approaches are hard to adapt when it comes to expressiveness, since the motion is specified solely through reward shaping, which is not intuitive. Finally, inspired by works in computer animation and robotics, an approach based on motion clips for motion specification and general motion tracking was implemented; it produced more natural motions in simulation, i.e. higher footsteps, bigger strides and more base stability, which are hard to generate through reward shaping alone.
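    The contrast between the two specification styles can be made concrete with a small sketch. The snippet below is an illustrative assumption, not the thesis implementation: it contrasts a hand-shaped, reference-free reward with a motion-clip tracking reward in the spirit of general motion tracking. All function names, weights, and state variables are hypothetical.

```python
# Minimal sketch (not the thesis implementation): a hand-shaped locomotion
# reward versus a motion-clip tracking reward. All names and weights below
# are illustrative assumptions, not quantities from the original work.
import numpy as np

def shaped_reward(base_velocity, target_velocity, base_roll_pitch, joint_torques,
                  w_vel=1.0, w_stab=0.5, w_energy=0.001):
    """Reference-free reward: every aspect of the motion must be encoded by hand."""
    r_vel = np.exp(-np.sum((base_velocity - target_velocity) ** 2))  # track a commanded velocity
    r_stab = np.exp(-np.sum(base_roll_pitch ** 2))                   # penalise base oscillations
    r_energy = -np.sum(joint_torques ** 2)                           # discourage wasteful torques
    return w_vel * r_vel + w_stab * r_stab + w_energy * r_energy

def clip_tracking_reward(joint_angles, ref_joint_angles, foot_heights, ref_foot_heights,
                         w_pose=0.7, w_feet=0.3):
    """Motion-clip reward: the skill is specified by a reference trajectory,
    so expressiveness comes from swapping clips rather than re-tuning terms."""
    r_pose = np.exp(-2.0 * np.sum((joint_angles - ref_joint_angles) ** 2))
    r_feet = np.exp(-40.0 * np.sum((foot_heights - ref_foot_heights) ** 2))
    return w_pose * r_pose + w_feet * r_feet

if __name__ == "__main__":
    # Toy call with placeholder states, only to show the signatures.
    print(shaped_reward(np.zeros(3), np.array([0.5, 0.0, 0.0]), np.zeros(2), np.zeros(12)))
    print(clip_tracking_reward(np.zeros(12), np.zeros(12), np.zeros(4), np.full(4, 0.05)))
```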

    Neural networks: from the perceptron to deep nets

    Artificial neural networks have been studied through the prism of statistical mechanics as disordered systems since the 1980s, starting from the simple models of Hopfield's associative memory and the single-neuron perceptron classifier. Assuming the data are generated by a teacher model, asymptotic generalisation predictions were originally derived using the replica method, and the online learning dynamics was described in the large-system limit. In this chapter, we review the key original ideas of this literature along with their heritage in the ongoing quest to understand the efficiency of modern deep learning algorithms. One goal of current and future research is to characterize the bias of learning algorithms toward well-generalising minima in complex overparametrized loss landscapes with many solutions that perfectly interpolate the training data. Works on perceptrons, two-layer committee machines and kernel-like learning machines shed light on these benefits of overparametrization. Another goal is to understand the advantage of depth, as models now commonly feature tens or hundreds of layers. While replica computations apparently fall short of describing learning in general deep neural networks, studies of simplified linear or untrained models, as well as the derivation of scaling laws, provide the first elements of an answer.
    Comment: Contribution to the book Spin Glass Theory and Far Beyond: Replica Symmetry Breaking after 40 Years; Chap. 2
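    The teacher-student perceptron set-up mentioned above can be illustrated with a short simulation. The sketch below is an illustration under stated assumptions rather than material from the chapter: a teacher vector labels Gaussian inputs, a student is trained online with the classical perceptron rule, and the generalisation error is computed from the teacher-student overlap as eps = arccos(R)/pi. The dimension, number of steps, and variable names are arbitrary choices.

```python
# Minimal sketch of the teacher-student perceptron (illustrative assumptions,
# not code from the chapter). A teacher vector w_teacher labels Gaussian
# inputs; a student trained online on those labels has generalisation error
# eps = arccos(R)/pi, where R is the normalised teacher-student overlap.
import numpy as np

rng = np.random.default_rng(0)
N = 500                                   # input dimension (large-N limit in the theory)
w_teacher = rng.standard_normal(N)
w_student = np.zeros(N)

def gen_error(w_s, w_t):
    """Generalisation error from the overlap R = w_s . w_t / (|w_s||w_t|)."""
    if np.linalg.norm(w_s) == 0:
        return 0.5                        # untrained student: chance level
    R = w_s @ w_t / (np.linalg.norm(w_s) * np.linalg.norm(w_t))
    return np.arccos(np.clip(R, -1.0, 1.0)) / np.pi

for step in range(20 * N):                # online learning: one fresh example per step
    x = rng.standard_normal(N)
    y = np.sign(w_teacher @ x)            # label provided by the teacher
    if np.sign(w_student @ x) != y:       # classical perceptron update on mistakes
        w_student += y * x
    if step % (5 * N) == 0:
        print(f"alpha = {step / N:4.1f}  eps = {gen_error(w_student, w_teacher):.3f}")
```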