
    Energetics of an Inertia Coupled and Simple Rimless Wheel

    It has been shown by others that it is theoretically possible for a walking robot to achieve a perfectly efficient gait. The simplest model capable of highly efficient walking motions is the Inertia Coupled Rimless (ICR) wheel. To examine the dynamics of the ICR wheel, we performed two related studies. To determine the lowest energy cost for the ICR wheel, we examined one mechanism of energy loss: inelastic deformation of the elastic elements. Quasi-static experimental tension tests determined that the minimal energy loss for our system was 8.4×10⁻⁴ joules per cycle; a more realistic high-frequency test showed that the energy loss increased by a factor of 9.16. The ICR wheel walks down a ramp that is assumed to be perfectly flat, but no real surface is perfectly flat, so rough terrain is introduced to the ramp for a more realistic study. To better understand the dynamics of the ICR wheel's motion, a simple rimless (SR) wheel is examined on a ramp with roughness; the roughness of the ground is randomly generated but bounded in magnitude. The minimum angle of inclination required for a rimless wheel to walk down both smooth and rough ramps is determined. For the 5-legged rimless wheel we examined, the minimum slope required on a rough surface is 12.4% higher than on a smooth surface; for 10 legs, it is 40.83% higher. This work has formed the foundation for the design of an energy-efficient walking robot and has given insight into its behavior over rough terrain.
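
    As a hedged sketch of the smooth-vs-rough minimum-slope comparison, the code below simulates the classic point-mass rimless wheel (inverted-pendulum swing phase, inelastic impact map ω⁺ = ω⁻ cos(2π/n)) started on its nominal limit cycle, with bounded random slope perturbations standing in for rough terrain. The model, parameters, and resulting numbers are assumptions for illustration, not the thesis's.

```python
import math
import random

def fixed_point_speed_sq(gamma, alpha):
    """Squared angular velocity at step start on the steady-state gait
    (point-mass wheel, g/L = 1, impact map omega+ = omega- * cos(2*alpha))."""
    eta2 = math.cos(2 * alpha) ** 2
    gain = 2.0 * (math.cos(gamma - alpha) - math.cos(gamma + alpha))  # gravity gain per step
    return eta2 * gain / (1.0 - eta2)

def can_walk(gamma, n_legs, steps=200, roughness=0.0, seed=0):
    """True if the wheel, started on its nominal limit cycle, survives
    `steps` impacts; `roughness` bounds a random per-step slope perturbation."""
    rng = random.Random(seed)
    alpha = math.pi / n_legs
    eta2 = math.cos(2 * alpha) ** 2
    omega2 = fixed_point_speed_sq(gamma, alpha)
    if omega2 <= 0.0:
        return False
    for _ in range(steps):
        g_eff = gamma + rng.uniform(-roughness, roughness)
        # Must vault over the apex when the stance leg starts behind vertical:
        if g_eff < alpha and omega2 < 2.0 * (1.0 - math.cos(g_eff - alpha)):
            return False
        omega2 += 2.0 * (math.cos(g_eff - alpha) - math.cos(g_eff + alpha))
        omega2 *= eta2                        # inelastic impact loss
    return True

def min_slope(n_legs, roughness=0.0, lo=1e-4, hi=0.6):
    """Bisect for the smallest ramp angle (rad) that sustains rolling."""
    for _ in range(40):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if can_walk(mid, n_legs, roughness=roughness) else (mid, hi)
    return hi

for n in (5, 10):
    print(n, "legs:", min_slope(n), "rad smooth vs",
          min_slope(n, roughness=0.02), "rad rough")
```

    The bisection assumes that survival is monotone in the slope, which holds for this simplified model over the range searched.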

    Locomotion of jointed figures over complex terrain

    Thesis (M.S.V.S.)--Massachusetts Institute of Technology, Dept. of Architecture, 1987. Microfiche copy available in Archives and Rotch. Bibliography: leaves 104-111. By Karl Sims.

    Interactive simulation of stylized human locomotion

    Animating natural human motion in dynamic environments is difficult because of complex geometric and physical interactions. Simulation provides an automatic solution to parts of this problem, but it needs control systems to produce lifelike motions. This paper describes the systematic computation of controllers that can reproduce a range of locomotion styles in interactive simulations. Given a reference motion that describes the desired style, a derived control system can reproduce that style in simulation and in new environments. Because it produces high-quality motions that are both geometrically and physically consistent with simulated surroundings, interactive animation systems could begin to use this approach along with more established kinematic methods. Funding: Singapore-MIT GAMBIT Game Lab; National Science Foundation (U.S.) (Fellowship 2007043041); Pixar (Firm).
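
    The paper derives its style-specific controllers systematically from the reference motion; the snippet below only illustrates the baseline idea of tracking a reference motion in simulation with joint-space PD feedback. All gains, names, and the toy dynamics are chosen for illustration, not taken from the paper.

```python
import numpy as np

def pd_tracking_torque(q, qdot, q_ref, qdot_ref, kp=300.0, kd=30.0):
    """Torque that pulls the simulated character toward the reference pose."""
    return kp * (q_ref - q) + kd * (qdot_ref - qdot)

# Toy 2-joint example: integrate a unit-inertia "character" toward a reference pose.
dt = 1.0 / 600.0
q, qdot = np.zeros(2), np.zeros(2)
q_ref, qdot_ref = np.array([0.3, -0.1]), np.zeros(2)
for _ in range(600):
    tau = pd_tracking_torque(q, qdot, q_ref, qdot_ref)
    qdot += tau * dt          # unit inertia, no gravity: qddot = tau
    q += qdot * dt
print(q)                      # approaches q_ref
```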

    Streamlined sim-to-real transfer for deep-reinforcement learning in robotics locomotion

    Legged robots possess superior mobility compared to other machines, yet designing controllers for them can be challenging. Classic control methods require engineers to distill their knowledge into controllers, which is time-consuming and limiting when approaching dynamic tasks in unknown environments. Conversely, learning-based methods that gather knowledge from data can potentially unlock the versatility of legged systems. In this thesis, we propose a novel approach called CPG-Actor, which incorporates feedback into a fully differentiable Central Pattern Generator (CPG) formulation using neural networks and Deep Reinforcement Learning (RL). This approach achieves approximately twenty times better training performance compared to previous methods and provides insights into the impact of training on the distribution of parameters in both the CPGs and the MLP feedback network. Adopting Deep RL to design controllers comes at the expense of gathering extensive data, typically done in simulation to reduce time. However, controllers trained on data collected in simulation often lose performance when deployed in the real world, a problem referred to as the sim-to-real gap. To address this, we propose a new method called Extended Random Force Injection (ERFI), which randomizes only two parameters to allow sim-to-real transfer of locomotion controllers. ERFI demonstrated high robustness when varying the mass of the base or attaching a manipulator arm to the robot during testing, and achieved performance competitive with standard randomization techniques. Furthermore, we propose a new method called Roll-Drop to enhance the robustness of Deep RL policies to observation noise. Roll-Drop introduces dropout during rollout, achieving an 80% success rate when tested with up to 25% noise injected into the observations. Finally, we adopted model-free controllers to enable omni-directional bipedal locomotion on point feet with a quadruped robot, without any hardware modification or external support. Despite the limitations posed by the quadruped’s hardware, the study considers this an ideal benchmark task for assessing the shortcomings of sim-to-real techniques and for unlocking future avenues for the legged robotics community. Overall, this thesis demonstrates the potential of learning-based methods to design dynamic and robust controllers for legged robots while limiting the effort needed for sim-to-real transfer.
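
    As a rough illustration of the CPG-Actor idea (not the thesis's exact formulation), the sketch below integrates one Hopf-style oscillator per leg with Euler steps and lets an MLP inject feedback into each oscillator's frequency and amplitude, so gradients can flow through both the MLP and the CPG parameters during RL training. All names, dimensions, and dynamics here are assumptions.

```python
import torch
import torch.nn as nn

class CPGActor(nn.Module):
    """Differentiable CPG with learned feedback: one oscillator per leg,
    an MLP mapping observations to per-leg frequency/amplitude corrections."""
    def __init__(self, n_legs=4, obs_dim=12, dt=0.02):
        super().__init__()
        self.dt = dt
        self.omega = nn.Parameter(torch.full((n_legs,), 2.0 * torch.pi))  # rad/s
        self.mu = nn.Parameter(torch.ones(n_legs))                        # amplitude^2 target
        self.feedback = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, 2 * n_legs))

    def forward(self, obs, r, phi):
        """One control step: returns joint targets and the new CPG state."""
        d_omega, d_mu = self.feedback(obs).chunk(2, dim=-1)
        r = r + self.dt * 5.0 * (self.mu + d_mu - r**2) * r   # amplitude dynamics
        phi = phi + self.dt * (self.omega + d_omega)          # phase dynamics
        return r * torch.cos(phi), r, phi

cpg = CPGActor()
r, phi = torch.ones(4) * 0.1, torch.zeros(4)
targets, r, phi = cpg(torch.zeros(12), r, phi)
print(targets)
```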

    Adaptive Tracking of a Single-Rigid-Body Character in Various Environments

    Since the introduction of DeepMimic [Peng et al. 2018], subsequent research has focused on expanding the repertoire of simulated motions across various scenarios. In this study, we propose an alternative approach to this goal: a deep reinforcement learning method based on the simulation of a single-rigid-body character. By using the centroidal dynamics model (CDM) to express the full-body character as a single rigid body (SRB) and training a policy to track a reference motion, we obtain a policy capable of adapting to various unobserved environmental changes and controller transitions without requiring any additional learning. Because of the reduced dimension of the state and action spaces, the learning process is sample-efficient. The final full-body motion is kinematically generated in a physically plausible way, based on the state of the simulated SRB character. The SRB simulation is formulated as a quadratic programming (QP) problem, and the policy outputs an action that allows the SRB character to follow the reference motion. We demonstrate that our policy, efficiently trained within 30 minutes on an ultraportable laptop, can cope with environments not experienced during learning, such as running on uneven terrain or pushing a box, and with transitions between learned policies, without any additional learning.
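
    As a hedged sketch of the kind of per-step QP such an SRB controller might solve (the paper's exact formulation is not reproduced here), the following chooses contact forces that best realize desired centroidal accelerations under unilateral and linearized-friction constraints. The mass, contact points, and weights are illustrative.

```python
import numpy as np
import cvxpy as cp

def skew(v):
    """Matrix [v]x such that skew(v) @ w == np.cross(v, w)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

m = 60.0                                        # SRB mass (kg)
g = np.array([0.0, 0.0, -9.81])
com = np.array([0.0, 0.0, 0.9])                 # center of mass
feet = [np.array([0.1, 0.1, 0.0]),              # contact points
        np.array([0.1, -0.1, 0.0])]
acc_des = np.array([0.0, 0.0, 0.2])             # desired COM acceleration
dL_des = np.zeros(3)                            # desired angular-momentum rate
mu = 0.7                                        # friction coefficient

forces = [cp.Variable(3) for _ in feet]
lin_err = sum(forces) + m * g - m * acc_des     # Newton residual
ang_err = sum(skew(p - com) @ f for p, f in zip(feet, forces)) - dL_des
cost = (cp.sum_squares(lin_err) + cp.sum_squares(ang_err)
        + 1e-3 * sum(cp.sum_squares(f) for f in forces))
constraints = []
for f in forces:
    constraints += [f[2] >= 0,                  # pushing contacts only
                    cp.abs(f[0]) <= mu * f[2],  # linearized friction pyramid
                    cp.abs(f[1]) <= mu * f[2]]
cp.Problem(cp.Minimize(cost), constraints).solve()
print([f.value.round(1) for f in forces])
```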

    Learning-based methods for planning and control of humanoid robots

    Humans and robots are increasingly likely to coexist. The anthropomorphic nature of humanoid robots facilitates physical human-robot interaction and makes social human-robot interaction more natural; moreover, it makes humanoids ideal candidates for many applications involving tasks and environments designed for humans. Whatever the application, a ubiquitous requirement for the humanoid is to possess proper locomotion skills. Despite long-lasting research, humanoid locomotion is still far from being a trivial task. A common approach to humanoid locomotion decomposes its complexity by means of a model-based hierarchical control architecture, employing simplified models of the humanoid in some of the architectural layers to cope with computational constraints. At the same time, the redundancy of the humanoid with respect to the locomotion task, as well as the closeness of that task to human locomotion, suggests a data-driven approach that learns it directly from experience. This thesis investigates the application of learning-based techniques to planning and control of humanoid locomotion. In particular, both deep reinforcement learning and deep supervised learning are applied to humanoid locomotion tasks of increasing complexity. First, we employ deep reinforcement learning to study the spontaneous emergence of balancing and push-recovery strategies for the humanoid, which are essential prerequisites for more complex locomotion tasks. Then, using motion capture data collected from human subjects, we employ deep supervised learning to shape the robot's walking trajectories towards improved human-likeness. The proposed approaches are validated on real and simulated humanoid robots, specifically on two versions of the iCub humanoid: iCub v2.7 and iCub v3.
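
    As a loose illustration of how balancing and push-recovery strategies can emerge in deep RL, a reward of the following shape (uprightness, plus keeping the horizontal center of mass over the support region, minus control effort) is a common choice; the terms and weights below are assumptions, not the thesis's reward.

```python
import numpy as np

def balance_reward(up_vec, com_xy, support_center_xy, joint_torques,
                   w_upright=1.0, w_com=0.5, w_effort=1e-3):
    """Illustrative per-step reward for balancing / push recovery."""
    upright = up_vec[2]                                  # 1 when the torso is vertical
    com_term = np.exp(-np.sum((com_xy - support_center_xy) ** 2) / 0.01)
    effort = np.sum(np.square(joint_torques))
    return w_upright * upright + w_com * com_term - w_effort * effort

print(balance_reward(np.array([0.0, 0.0, 1.0]), np.zeros(2), np.zeros(2), np.zeros(23)))
```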