
    Motion Planning and Control of Dynamic Humanoid Locomotion

    Inspired by humans, humanoid robots have the potential to become general-purpose platforms that live alongside people. Technological advances in many fields, such as actuation, sensing, control, and intelligence, are finally enabling humanoid robots to approach human-comparable capabilities. However, humanoid locomotion remains a challenging research field. The large number of degrees of freedom makes the system difficult to coordinate online, and the presence of various contact constraints, together with the hybrid nature of locomotion tasks, makes planning a hard problem to solve. A template-model anchoring approach has been adopted to bridge the gap between simple-model behavior and the whole-body motion of the humanoid robot. Control policies are first developed for simple template models such as the Linear Inverted Pendulum Model (LIPM) or the Spring-Loaded Inverted Pendulum (SLIP); the resulting controlled behaviors are then mapped to the whole-body motion of the humanoid robot through optimization-based task-space control strategies. The whole-body humanoid control framework has been verified in various contact situations, such as unknown uneven terrain, multi-contact scenarios, and moving platforms, demonstrating its generality and versatility. For walking, the existing Model Predictive Control approach based on the LIPM has been extended so that the robot can walk without any reference foot placements, a discrete version of "walking without thinking." As a result, the robot can achieve versatile locomotion modes: automatic foot placement from a single reference velocity command, reactive stepping under large external disturbances, guided walking with small constant external pushing forces, robust walking on unknown uneven terrain, and reactive stepping in place when blocked by an external barrier.
As an extension of this framework, and to increase the push-recovery capability of the humanoid robot, two new configurations are proposed that enable the robot to perform cross-step motions. For more dynamic hopping and running motions, the SLIP model is chosen as the template model. In contrast to the traditional model-based analytical approach, a data-driven approach is proposed to encode the dynamics of this model: a deep neural network is trained offline on a large amount of SLIP simulation data to learn its dynamics, and the trained network is applied online to generate reference foot placements for the humanoid robot. Simulations evaluate the effectiveness of the proposed approach in generating bio-inspired and robust running motions. The method, developed for the 2D SLIP model, can be generalized to the 3D SLIP model; this extension is briefly discussed at the end
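The idea of walking without reference foot placements can be illustrated with a minimal LIPM sketch. The snippet below uses the closed-form LIPM step-to-step solution and a one-step "deadbeat" foot-placement rule that drives the end-of-step CoM velocity to a commanded value; the thesis's controller is an MPC over several steps, so this is only a simplified stand-in, and the numeric constants (CoM height, step time) are assumed values, not taken from the work.

```python
import numpy as np

G, Z0 = 9.81, 0.8          # gravity, constant CoM height (assumed values)
OMEGA = np.sqrt(G / Z0)    # LIPM natural frequency
T_STEP = 0.5               # assumed step duration [s]

def lipm_step(x, xd, p, T=T_STEP, w=OMEGA):
    """Closed-form LIPM state after time T with the foot at p: xdd = w^2 (x - p)."""
    c, s = np.cosh(w * T), np.sinh(w * T)
    x_new = p + (x - p) * c + (xd / w) * s
    xd_new = (x - p) * w * s + xd * c
    return x_new, xd_new

def foot_placement(x, xd, v_ref, T=T_STEP, w=OMEGA):
    """Choose the foot location so the end-of-step CoM velocity equals v_ref
    (one-step deadbeat rule; the thesis instead solves an MPC over a horizon)."""
    c, s = np.cosh(w * T), np.sinh(w * T)
    # Solve v_ref = (x - p) w s + xd c for p.
    return x - (v_ref - xd * c) / (w * s)

# Walk from rest toward a commanded velocity; no reference footholds needed.
x, xd = 0.0, 0.0
for _ in range(6):
    p = foot_placement(x, xd, v_ref=0.4)
    x, xd = lipm_step(x, xd, p)
print(round(xd, 3))   # prints 0.4: the commanded velocity is reached
```

Because each placement is computed from the current state, the same rule also yields reactive stepping: a pushed (perturbed) state simply produces a different next foothold.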

    Representation and control of coordinated-motion tasks for human-robot systems

    It is challenging for robots to perform various tasks in a human environment, because many human-centered tasks require coordination of both hands and may involve cooperation with a human partner. Although human-centered tasks require different types of coordinated movement, most existing methodologies have focused only on specific types of coordination. This thesis addresses the description and control of coordinated-motion tasks for human-robot systems, i.e., humanoid robots as well as multi-robot and human-robot systems. First, for bimanual coordinated-motion tasks in dual-manipulator systems, we propose the Extended-Cooperative-Task-Space (ECTS) representation, which extends the existing Cooperative-Task-Space (CTS) representation based on kinematic models of human bimanual movement from biomechanics. The ECTS representation can express the whole spectrum of dual-arm motion/force coordination using two sets of ECTS motion/force variables in a unified manner. The type of coordination is selected by two meaningful coefficients, and during coordinated-motion tasks each set of variables directly describes a different aspect of the coordinated motion and force behavior. Thus, the operator can specify coordinated motion/force tasks more intuitively in high-level descriptions, and the specified tasks can be reused in other situations with greater flexibility. Moreover, we present consistent procedures for applying the ECTS representation to task specification in the upper-body and lower-body subsystems of humanoid robots, for manipulation and locomotion tasks respectively. We also propose and discuss performance indices derived from the ECTS representation, which can be used to evaluate and optimize the performance of any type of dual-arm manipulation task.
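The flavor of a cooperative-task-space representation can be sketched in a few lines. The code below maps two end-effector coordinates to an "absolute" and a "relative" coordination variable, with a blending coefficient `alpha` standing in for the thesis's coordination coefficients; this is a simplified positional sketch under assumed definitions (the actual ECTS formulation covers full motion/force variables and two coefficients), so treat the exact formulas as illustrative.

```python
import numpy as np

def ects_vars(x1, x2, alpha=0.5):
    """Map two end-effector coordinates to (absolute, relative) task variables.
    alpha weights the arms in the absolute variable; alpha=0.5 gives the
    symmetric CTS-like case (midpoint + difference). Hypothetical simplified
    form, not the thesis's exact definition."""
    x_abs = alpha * x1 + (1.0 - alpha) * x2   # "absolute" coordination variable
    x_rel = x2 - x1                           # "relative" coordination variable
    return x_abs, x_rel

def arm_coords(x_abs, x_rel, alpha=0.5):
    """Inverse map: recover the individual arm coordinates."""
    x1 = x_abs - (1.0 - alpha) * x_rel
    x2 = x_abs + alpha * x_rel
    return x1, x2

# A coordinated task: translate the midpoint while keeping a fixed grasp offset.
x1 = np.array([0.3, 0.2, 1.0])
x2 = np.array([0.3, -0.2, 1.0])
x_abs, x_rel = ects_vars(x1, x2)
x_abs = x_abs + np.array([0.1, 0.0, 0.0])     # command only the absolute part
new_x1, new_x2 = arm_coords(x_abs, x_rel)
print(new_x2 - new_x1)                        # grasp offset unchanged
```

The point of such a representation is visible here: the operator commands one intuitive variable (move the object) while the other (hold the grasp) is preserved automatically, which is what makes high-level task descriptions reusable.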
We show that using the ECTS representation to specify both dual-arm manipulation and biped locomotion tasks greatly simplifies the motion-planning process, allowing the operator to focus on high-level task descriptions. Both upper-body and lower-body task specifications are demonstrated through whole-body task examples on a Hubo II+ robot performing dual-arm manipulation as well as biped locomotion in a simulation environment. We also present experimental results on a dual-arm robot (Baxter) teleoperating various types of coordinated-motion tasks using a single 6D mouse interface. The specified upper- and lower-body tasks can be viewed as coordinated motions with constraints. To express the various constraints imposed across the whole body, we discuss the modeling of the whole-body structure and the associated computations for robotic systems with multiple kinematic chains. We then present a whole-body controller formulated as a quadratic program, which takes different types of constraints into account in a prioritized manner. We validate the whole-body controller in simulation on a Hubo II+ robot performing the specified whole-body task examples with a number of motion and force constraints as well as actuation limits. Lastly, we discuss an extension of the ECTS representation, the Hierarchical Extended-Cooperative-Task-Space (H-ECTS) framework, which uses tree-structured graphical representations for coordinated-motion tasks of multi-robot and human-robot systems. The H-ECTS framework is validated experimentally on two Baxter robots cooperating with each other as well as with an additional human partner
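A quadratic-program whole-body controller of the kind described above minimizes weighted task errors over joint accelerations subject to constraints. The sketch below keeps only the unconstrained weighted-least-squares core, with soft priorities expressed as weights; the thesis's controller additionally handles strict priorities, contact/force constraints, and actuation limits, so this is a toy illustration with made-up Jacobians, not the actual formulation.

```python
import numpy as np

def weighted_task_qp(J_tasks, xdd_des, weights, reg=1e-6):
    """Solve min sum_i w_i ||J_i qdd - xdd_i||^2 for joint accelerations qdd.
    Soft-priority sketch: stacking weighted normal equations. A full
    whole-body QP would add inequality constraints (torque, contact)."""
    n = J_tasks[0].shape[1]
    H = reg * np.eye(n)          # small regularization keeps H invertible
    g = np.zeros(n)
    for J, x, w in zip(J_tasks, xdd_des, weights):
        H += w * J.T @ J
        g += w * J.T @ x
    return np.linalg.solve(H, g)

# Toy 3-DoF system: a high-weight motion task plus a low-weight posture task.
J1 = np.array([[1.0, 1.0, 0.0]])          # task 1: sum of first two joint accs
J2 = np.eye(3)                            # task 2: keep all accelerations small
qdd = weighted_task_qp([J1, J2], [np.array([1.0]), np.zeros(3)],
                       weights=[100.0, 1.0])
print(np.round(J1 @ qdd, 2))              # high-weight task is ~achieved
```

With weight ratios like these, the high-priority task error is driven nearly to zero while the posture task resolves the remaining redundancy, which is the behavior a prioritized whole-body controller is built to produce.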

    Towards Robust Bipedal Locomotion: From Simple Models to Full-Body Compliance

    Thanks to better actuator technologies and control algorithms, humanoid robots can now perform a wide range of locomotion activities outside laboratory environments. These robots face various control challenges: high dimensionality, contact switches during locomotion, and a floating-base structure that leaves them constantly at risk of falling. A rich set of sensory inputs and high-bandwidth actuation are often needed to ensure fast and effective reactions to unforeseen conditions, e.g., terrain variations, external pushes, slippage, and unknown payloads. State-of-the-art technologies today seem to provide such valuable hardware components; regarding software, however, there is plenty of room for improvement. Locomotion planning and control are often treated separately in conventional humanoid control algorithms, and the control challenges mentioned above are probably the main reason for this separation. Here, planning refers to the process of finding consistent open-loop trajectories, which may take arbitrarily long offline computation; control, on the other hand, must run fast online to ensure stability. In this thesis, we aim to link the planning and control problems again and to enable online trajectory modification in a meaningful way. First, we propose a new way of describing robot geometries, like molecules, which breaks the complexity of conventional models, and we use this technique to derive a planning algorithm fast enough to be used online for multi-contact motion planning. Similarly, we derive 3LP, a simplified linear three-mass model of bipedal walking, which offers computations orders of magnitude faster than full mechanical models. Next, we focus on walking and use the 3LP model to formulate online control algorithms based on a foot-stepping strategy.
The method is based on model predictive control; however, we also propose a faster time-projection controller that achieves comparable performance without numerical optimization. We also deploy an efficient implementation of inverse dynamics, together with advanced sensor fusion and actuator control algorithms, to ensure precise and compliant tracking of the simplified 3LP trajectories. Extensive simulations and hardware experiments on the COMAN robot demonstrate the effectiveness and strengths of our method. This thesis goes beyond humanoid walking applications: we further use the developed modeling tools to analyze and understand principles of human locomotion. The 3LP model can describe, to some extent, the exchange of energy between human limbs during walking. We use this property to propose a metabolic-cost model of human walking that successfully describes trends across various conditions. The intrinsic ability of the 3LP model to generate walking gaits in all these conditions makes it a handy tool for walking control and gait analysis, despite being a simplified model. Finally, to close the reality gap, we propose a kinematic conversion method that takes 3LP trajectories as input and generates more human-like postures. Using this method, the 3LP model, and the time-projecting controller, we introduce a graphical user interface to simulate periodic and transient human-like walking conditions. We hope to use this combination in the future to produce faster and more human-like walking gaits, possibly with more capable humanoid robots
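The appeal of replacing an online MPC with a precomputed linear feedback can be shown on a small linear model. Below, a standard discrete LQR gain is computed offline for a step-to-step LIPM (standing in for the linear 3LP dynamics), and the online controller is then a single matrix multiply per step; this mirrors the spirit of the time-projection controller, replacing numerical optimization with a cheap closed-form update, but it is a generic LQR illustration with assumed constants, not the thesis's actual controller.

```python
import numpy as np

# Discrete step-to-step dynamics x_{k+1} = A x_k + B u_k, where u is the
# foot-placement offset. LIPM is used here as a stand-in for 3LP.
w, T = 3.5, 0.5                       # assumed LIPM frequency and step time
c, s = np.cosh(w * T), np.sinh(w * T)
A = np.array([[c, s / w], [w * s, c]])
B = np.array([[1 - c], [-w * s]])

def lqr_gain(A, B, Q, R, iters=200):
    """Offline: iterate the discrete Riccati recursion to a steady-state gain."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

K = lqr_gain(A, B, Q=np.eye(2), R=np.array([[0.1]]))

# Online: one matrix multiply per step instead of a full MPC solve.
x = np.array([[0.05], [0.3]])         # deviation from the nominal gait
for _ in range(8):
    x = (A - B @ K) @ x               # step with feedback u = -K x
print(float(np.linalg.norm(x)) < 1e-2)  # deviation dies out
```

Since the gain is fixed, the online cost is constant and tiny, which is exactly the property that makes such controllers attractive when optimization inside the control loop is too slow.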