
    Motion Planning and Control of Dynamic Humanoid Locomotion

    Inspired by humans, humanoid robots have the potential to become general-purpose platforms that live alongside people. Technological advances in many fields, such as actuation, sensing, control, and intelligence, are finally enabling humanoid robots to possess human-comparable capabilities. However, humanoid locomotion remains a challenging research field. The large number of degrees of freedom makes the system difficult to coordinate online, and the presence of various contact constraints together with the hybrid nature of locomotion tasks makes planning a harder problem to solve. A template-model anchoring approach has been adopted to bridge the gap between simple model behavior and the whole-body motion of the humanoid robot. Control policies are first developed for simple template models such as the Linear Inverted Pendulum Model (LIPM) or the Spring Loaded Inverted Pendulum (SLIP); the resulting controlled behaviors are then mapped to the whole-body motion of the humanoid robot through optimization-based task-space control strategies. The whole-body humanoid control framework has been verified in various contact situations, such as unknown uneven terrain, multi-contact scenarios, and a moving platform, demonstrating its generality and versatility. For walking, an existing Model Predictive Control approach based on the LIPM has been extended to let the robot walk without any reference foot placement anchoring, a discrete version of “walking without thinking”. As a result, the robot can achieve versatile locomotion modes such as automatic foot placement from a single reference velocity command, reactive stepping under large external disturbances, guided walking with small constant external pushing forces, robust walking on unknown uneven terrain, and reactive stepping in place when blocked by an external barrier. As an extension of this framework, and to increase the robot's push recovery capability, two new configurations have been proposed to enable cross-step motions. For more dynamic hopping and running motions, the SLIP model has been chosen as the template model. Departing from the traditional model-based analytical approach, a data-driven approach has been proposed to encode the dynamics of this model: a deep neural network is trained offline on a large amount of SLIP simulation data to learn its dynamics, and the trained network is applied online to generate reference foot placements for the humanoid robot. Simulations have been performed to evaluate the effectiveness of the proposed approach in generating bio-inspired and robust running motions. The method based on the 2D SLIP model can be generalized to the 3D SLIP model, and this extension is briefly discussed at the end.
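
    To make the template-model idea above concrete, the Python sketch below simulates a 1D Linear Inverted Pendulum Model and re-plans each foot placement from the instantaneous capture point, offset so that the end-of-step velocity converges to a commanded value. The constants (CoM height, step duration, commanded speed) are illustrative assumptions; this is not the thesis's actual controller.

        import numpy as np

        # Illustrative constants (not taken from the thesis).
        g, z_c = 9.81, 0.9                # gravity, assumed constant CoM height [m]
        omega = np.sqrt(g / z_c)          # LIPM natural frequency
        T_step, dt = 0.4, 0.005           # assumed step duration and integration step [s]
        v_ref = 0.3                       # commanded forward velocity [m/s]

        # Capture-point offset that makes the end-of-step velocity converge to v_ref
        # for this fixed step duration (from the closed-form LIPM solution).
        b = v_ref * (1.0 - np.exp(-omega * T_step)) / (omega * np.sinh(omega * T_step))

        def next_foot_placement(x, v):
            """Step to the instantaneous capture point, shifted back by the offset b."""
            return x + v / omega - b

        samples_per_step = int(T_step / dt)
        x, v = 0.0, 0.0
        for step in range(12):                                  # simulate 12 consecutive steps
            p = next_foot_placement(x, v)                       # re-plan at each touchdown
            for _ in range(samples_per_step):
                x, v = x + dt * v, v + dt * omega**2 * (x - p)  # Euler-integrated LIPM dynamics
        print(f"end-of-step velocity after 12 steps: {v:.2f} m/s (commanded {v_ref})")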

    Footstep and Motion Planning in Semi-unstructured Environments Using Randomized Possibility Graphs

    Traversing environments with arbitrary obstacles poses significant challenges for bipedal robots. In some cases, whole-body motions may be necessary to maneuver around an obstacle, but most existing footstep planners can only select from a discrete set of predetermined footstep actions; they are unable to exploit the continuum of whole-body motion that is truly available to the robot platform. Existing motion planners that can utilize whole-body motion tend to struggle with the complexity of large-scale problems. We introduce a planning method, called the "Randomized Possibility Graph", which uses high-level approximations of constraint manifolds to rapidly explore the "possibility" of actions, thereby allowing lower-level motion planners to be utilized more efficiently. We demonstrate simulations of the method working in a variety of semi-unstructured environments. In this context, "semi-unstructured" means the walkable terrain is flat and even, but there are arbitrary 3D obstacles throughout the environment which may need to be stepped over or maneuvered around using whole-body motions. Comment: Accepted by IEEE International Conference on Robotics and Automation 2017.
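
    The sketch below illustrates the structure of this "possibility" idea in Python: explore a graph of high-level actions and invoke an expensive whole-body query only on edges that a cheap approximation cannot decide. The functions cheap_possibility_check and whole_body_planner are hypothetical stand-ins, not the authors' implementation.

        from collections import deque

        POSSIBLE, INDETERMINATE, IMPOSSIBLE = "possible", "indeterminate", "impossible"

        def cheap_possibility_check(edge):
            # Hypothetical high-level approximation of the constraint manifold,
            # e.g. a bounding-volume clearance test instead of full kinematics.
            return INDETERMINATE if edge.get("near_obstacle") else POSSIBLE

        def whole_body_planner(edge):
            # Hypothetical lower-level planner; returns a motion or None on failure.
            return edge.get("whole_body_solution")

        def explore(start, goal, edges):
            """Breadth-first search that defers expensive planning to indeterminate edges."""
            frontier, visited = deque([start]), {start}
            while frontier:
                node = frontier.popleft()
                if node == goal:
                    return True
                for nxt, edge in edges.get(node, []):
                    verdict = cheap_possibility_check(edge)
                    if verdict == IMPOSSIBLE:
                        continue
                    if verdict == INDETERMINATE and whole_body_planner(edge) is None:
                        continue                          # expensive check also failed
                    if nxt not in visited:
                        visited.add(nxt)
                        frontier.append(nxt)
            return False

        # Toy usage: two routes to the goal, one requiring a whole-body step-over.
        edges = {
            "start": [("A", {"near_obstacle": False}),
                      ("B", {"near_obstacle": True, "whole_body_solution": "step-over"})],
            "A": [("goal", {"near_obstacle": False})],
            "B": [("goal", {"near_obstacle": False})],
        }
        print(explore("start", "goal", edges))            # True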

    Real-time biped character stepping

    PhD Thesis. A rudimentary biped activity that is essential in interactive virtual worlds, such as video games and training simulations, is stepping. For example, stepping is fundamental to everyday terrestrial activities, including walking and balance recovery. An effective 3D stepping control algorithm that is computationally fast and easy to implement is therefore extremely valuable to character animation research. This thesis focuses on generating controllable stepping motions on the fly, in real time and without key-framed data, that are responsive and robust (e.g., the character can remain upright and balanced under a variety of conditions, such as pushes and dynamically changing terrain). In our approach, we control the character's direction and speed by varying the step position and duration. Our lightweight stepping model is used to create coordinated full-body motions, which produce directable steps that guide the character towards specific goals (e.g., following a particular path while placing feet at viable locations). We also create protective steps in response to random disturbances (e.g., pushes), whereby the system automatically calculates where and when to place the foot to remedy the disruption. In conclusion, the inverted pendulum has a number of limitations that we address and resolve to produce an improved lightweight technique that provides better control and stability using approximate feature enhancements, for instance ankle torque and an elongated body.
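
    As a rough illustration of inverted-pendulum-based protective stepping (not the thesis's exact model), the sketch below computes where to place a recovery step after a push using the 2D linear inverted pendulum and its instantaneous capture point; the push velocity, CoM height and swing duration are illustrative assumptions.

        import numpy as np

        g, z_c = 9.81, 1.0                 # gravity, assumed CoM height [m]
        omega = np.sqrt(g / z_c)

        def protective_step(com, com_vel, stance_foot, swing_time):
            """Foot position that brings the pendulum to rest if reached after swing_time."""
            xi0 = com + com_vel / omega                           # capture point now
            # Capture-point dynamics about the fixed stance foot during the swing:
            return stance_foot + (xi0 - stance_foot) * np.exp(omega * swing_time)

        com = np.array([0.0, 0.0])
        com_vel = np.array([0.6, -0.2])    # CoM velocity right after a push [m/s]
        stance = np.array([0.0, 0.1])      # current stance foot position
        print(protective_step(com, com_vel, stance, swing_time=0.35))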

    Footstep parameterized motion blending using barycentric coordinates

    This paper presents a real-time animation system for fully embodied virtual humans that satisfies accurate foot placement constraints for different human walking and running styles. Our method offers a fine balance between motion fidelity and character control, and can efficiently animate over sixty agents in real time (25 FPS) and over a hundred characters at 13 FPS. Given a point cloud of reachable support-foot configurations extracted from the set of available animation clips, we compute its Delaunay triangulation. At runtime, the triangulation is queried to obtain the simplex containing the next footstep, which is used to compute the barycentric blending weights of the animation clips. Our method synthesizes animations that accurately follow footsteps, and a simple IK solver adjusts small offsets and foot orientation and handles uneven terrain. To incorporate root velocity fidelity, the method is further extended to include the parametric space of root movement and combine it with footstep-based interpolation. The presented method is evaluated on a variety of test cases, and error measurements are reported to offer a quantitative analysis of the results achieved.
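
    The core blending step can be sketched as follows, assuming SciPy and a toy 2D footstep parameter space; the sample points and clip names are placeholders rather than data from the paper.

        import numpy as np
        from scipy.spatial import Delaunay

        # Each row: a support-foot configuration sample (e.g. step offset x, y) from a clip.
        samples = np.array([[0.0, 0.2], [0.4, 0.2], [0.2, 0.5], [0.5, 0.6], [0.0, 0.6]])
        clips = ["step_short", "step_long", "step_wide", "turn_left", "turn_right"]
        tri = Delaunay(samples)

        def blend_weights(query):
            """Barycentric weights of the simplex containing `query`, or None if outside."""
            s = tri.find_simplex(np.asarray(query))
            if s < 0:
                return None
            T_inv = tri.transform[s, :2]              # inverse affine map of the simplex
            r = tri.transform[s, 2]                   # reference vertex of the simplex
            bary = T_inv @ (np.asarray(query) - r)
            w = np.append(bary, 1.0 - bary.sum())     # weights for the 3 simplex vertices
            return {clips[v]: w_i for v, w_i in zip(tri.simplices[s], w)}

        print(blend_weights([0.25, 0.4]))             # clip name -> blend weight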

    VIRTUAL ROBOT LOCOMOTION ON VARIABLE TERRAIN WITH ADVERSARIAL REINFORCEMENT LEARNING

    Reinforcement Learning (RL) is a machine learning technique in which an agent learns to perform a complex task through a repeated process of trial and error that maximizes a well-defined reward function. This form of learning has found applications in robot locomotion, where it has been used to teach robots to traverse complex terrain. While RL algorithms may work well for training robot locomotion, they tend not to generalize well when the agent is placed in an environment it has never encountered before. One possible solution from the literature is to train a destabilizing adversary alongside the locomotive learning agent. The adversary aims to destabilize the agent by applying external forces to it, which may help the locomotive agent learn to deal with unexpected scenarios. In this project, we train a robust, simulated quadruped robot to traverse variable terrain. We compare and analyze Proximal Policy Optimization (PPO) with and without the use of an adversarial agent, and determine which use of PPO produces the best results.
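
    The adversarial set-up described above can be organized as in the structural sketch below. The QuadrupedEnv-style environment and the PPOAgent objects with act/ppo_update methods are hypothetical placeholders for a simulator and a PPO implementation; only the alternation pattern is the point here.

        def train_adversarial(env, walker, adversary, iterations=1000, horizon=2048):
            """Alternate PPO updates: the walker maximizes reward, the adversary minimizes it."""
            for _ in range(iterations):
                obs, rollout = env.reset(), []
                for _ in range(horizon):
                    action = walker.act(obs)               # joint targets / torques
                    push = adversary.act(obs)              # external force on the trunk
                    obs, reward, done, _ = env.step(action, disturbance=push)
                    rollout.append((obs, action, push, reward, done))
                    if done:
                        obs = env.reset()
                walker.ppo_update(rollout, reward_sign=+1.0)     # maximize locomotion reward
                adversary.ppo_update(rollout, reward_sign=-1.0)  # minimize it (destabilize)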

    Motion Planning and Control for the Locomotion of Humanoid Robot

    This thesis aims to contribute to the motion planning and control problem for humanoid robot locomotion. For motion planning, various methods were proposed at different levels of model dependence. First, a model-free approach was proposed that uses linear regression to estimate the relationship between foot placement and walking velocity. Its data-driven nature makes it robust to modeling errors and external disturbances, and as a generic control philosophy it can be applied to various robots with different gaits. To reduce the risk involved in collecting experimental data for the model-free method, the classic planning method of model predictive control, based on the simplified linear inverted pendulum model, was explored to optimize the CoM trajectory with predefined foot placements, or to optimize both together subject to the ZMP constraint. Together with a carefully designed re-planning algorithm and a sparse discretization of the trajectories, it is fast enough to run in real time and robust enough to reject external disturbances. Thereafter, nonlinear models were used for motion planning by performing forward simulation iteratively following the multiple shooting method. A walking pattern is predefined to fix most of the robot's degrees of freedom, leaving only one decision variable, the foot placement, in each motion plane; the problem can therefore be solved in milliseconds, which is sufficient for real-time operation. In order to track the planned trajectories and prevent the robot from falling over, different control strategies were proposed according to the type of joint actuation: a CoM stabilizer was designed for robots with position-controlled joints, while quasi-static Cartesian impedance control and optimization-based full-body torque control were implemented for robots with torque-controlled joints. Various scenarios were set up to demonstrate the feasibility and robustness of the proposed approaches, such as walking on uneven terrain, walking with narrow feet or straight legs, and push recovery.
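
    A minimal sketch of the model-free, regression-based idea, using synthetic data in place of the thesis's experiment logs; the assumed linear law and its coefficients are illustrative only.

        import numpy as np

        rng = np.random.default_rng(0)
        vel = rng.uniform(-0.5, 0.5, size=(200, 1))     # logged CoM velocities [m/s]
        # Stand-in "measurements": pretend the true relation is offset = 0.28*v + 0.02 plus noise.
        foot_offset = 0.28 * vel + 0.02 + 0.01 * rng.standard_normal((200, 1))

        X = np.hstack([vel, np.ones_like(vel)])         # regressors: [v, 1]
        coef, *_ = np.linalg.lstsq(X, foot_offset, rcond=None)
        k, b = coef.ravel()
        print(f"estimated foot placement law: offset ≈ {k:.3f} * v + {b:.3f}")

        def plan_foot_offset(v):
            """Foot placement offset predicted for a measured CoM velocity v."""
            return k * v + b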

    CPG-RL: Learning Central Pattern Generators for Quadruped Locomotion

    In this letter, we present a method for integrating central pattern generators (CPGs), i.e. systems of coupled oscillators, into the deep reinforcement learning (DRL) framework to produce robust and omnidirectional quadruped locomotion. The agent learns to directly modulate the intrinsic oscillator setpoints (amplitude and frequency) and to coordinate rhythmic behavior among the different oscillators. This approach also allows the use of DRL to explore questions related to neuroscience, namely the role of descending pathways, interoscillator couplings, and sensory feedback in gait generation. We train our policies in simulation and perform a sim-to-real transfer to the Unitree A1 quadruped, where we observe robust behavior under disturbances unseen during training, most notably a dynamically added 13.75 kg load representing 115% of the nominal quadruped mass. We test several different observation spaces based on proprioceptive sensing and show that our framework is deployable with no domain randomization and very little feedback: along with the oscillator states, it is possible to provide only contact booleans in the observation space. Video results can be found at https://youtu.be/xqXHLzLsEV4. Comment: Accepted for IEEE Robotics and Automation Letters, September 2022.
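
    A sketch of the kind of oscillator network involved, following a common amplitude-controlled phase-oscillator formulation rather than the paper's exact equations or gains; in CPG-RL the learned policy would supply the per-leg amplitude and frequency setpoints that are passed to cpg_step below.

        import numpy as np

        N_LEGS, dt, a = 4, 0.01, 50.0                    # legs, integration step [s], convergence gain
        r = np.zeros(N_LEGS); rd = np.zeros(N_LEGS)      # oscillator amplitudes and their rates
        theta = np.random.uniform(0, 2 * np.pi, N_LEGS)  # oscillator phases

        def cpg_step(mu, omega, coupling=None):
            """Integrate one step; mu and omega are the policy's per-leg setpoints."""
            global r, rd, theta
            rdd = a * (a / 4.0 * (mu - r) - rd)          # second-order amplitude dynamics
            rd += dt * rdd
            r += dt * rd
            dtheta = omega.copy()
            if coupling is not None:                     # optional interoscillator coupling
                w, phi = coupling
                for i in range(N_LEGS):
                    dtheta[i] += np.sum(r * w[i] * np.sin(theta - theta[i] - phi[i]))
            theta = (theta + dt * dtheta) % (2 * np.pi)
            return r * np.cos(theta)                     # e.g. mapped to foot x-offsets

        # Example: command every leg to amplitude 1.0 and a 2.5 Hz rhythm for 3 s.
        for _ in range(300):
            foot_x = cpg_step(mu=np.ones(N_LEGS), omega=2 * np.pi * 2.5 * np.ones(N_LEGS))
        print(np.round(foot_x, 3))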

    How do treadmill speed and terrain visibility influence neuromuscular control of guinea fowl locomotion?

    Get PDF
    Locomotor control mechanisms must flexibly adapt to both anticipated and unexpected terrain changes to maintain movement and avoid a fall. Recent studies revealed that ground birds alter movement in advance of overground obstacles, but not treadmill obstacles, suggesting context-dependent shifts in the use of anticipatory control. We hypothesized that differences between overground and treadmill obstacle negotiation relate to differences in visual sensory information, which influence the ability to execute anticipatory manoeuvres. We explored two possible explanations: (1) previous treadmill obstacles may have been visually imperceptible, as they were low contrast to the tread, and (2) treadmill obstacles are visible for a shorter time compared with runway obstacles, limiting the time available for visuomotor adjustments. To investigate these factors, we measured electromyographic activity in eight hindlimb muscles of the guinea fowl (Numida meleagris, N=6) during treadmill locomotion at two speeds (0.7 and 1.3 m s^-1) and three terrain conditions at each speed: (i) level terrain with no obstacles, (ii) repeated 5 cm low-contrast obstacles, and (iii) repeated 5 cm high-contrast obstacles (90% contrast, black/white). We hypothesized that anticipatory changes in muscle activity would be larger for (1) high-contrast obstacles and (2) the slower treadmill speed, when the obstacle viewing time is longer. We found that treadmill speed significantly influenced the obstacle negotiation strategy, but obstacle contrast did not. At the slower speed, we observed earlier and larger anticipatory increases in muscle activity and shifts in kinematic timing. We discuss possible visuomotor explanations for the observed context-dependent use of anticipatory strategies.