417 research outputs found

    Motion Planning and Control of Dynamic Humanoid Locomotion

    Get PDF
    Inspired by humans, humanoid robots have the potential to become general-purpose platforms that live alongside people. Technological advances in many fields, such as actuation, sensing, control and intelligence, are finally enabling humanoid robots to possess capabilities comparable to those of humans. However, humanoid locomotion remains a challenging research field. The large number of degrees of freedom makes the system difficult to coordinate online, and the presence of various contact constraints, together with the hybrid nature of locomotion tasks, makes planning a harder problem to solve. A template-model anchoring approach is adopted to bridge the gap between simple model behavior and the whole-body motion of the humanoid robot. Control policies are first developed for simple template models such as the Linear Inverted Pendulum Model (LIPM) or the Spring Loaded Inverted Pendulum (SLIP); the resulting controlled behaviors are then mapped onto the whole-body motion of the humanoid robot through optimization-based task-space control strategies. The whole-body humanoid control framework has been verified in various contact situations, such as unknown uneven terrain, multi-contact scenarios and a moving platform, demonstrating its generality and versatility. For walking, an existing Model Predictive Control approach based on the LIPM has been extended to enable the robot to walk without any reference foot placement anchoring, in effect a discrete version of “walking without thinking”. As a result, the robot achieves versatile locomotion modes such as automatic foot placement from a single reference velocity command, reactive stepping under large external disturbances, guided walking under small constant external pushing forces, robust walking on unknown uneven terrain, and reactive stepping in place when blocked by an external barrier. As an extension of this framework, and to increase the robot's push-recovery capability, two new configurations have been proposed that enable the robot to perform cross-step motions. For more dynamic hopping and running motions, the SLIP model is chosen as the template model. In contrast to the traditional model-based analytical approach, a data-driven approach is proposed to encode the dynamics of this model: a deep neural network is trained offline on a large amount of SLIP simulation data to learn its dynamics, and the trained network is applied online to generate reference foot placements for the humanoid robot. Simulations evaluate the effectiveness of the proposed approach in generating bio-inspired and robust running motions. The method, developed for the 2D SLIP model, can be generalized to the 3D SLIP model; this extension is briefly discussed at the end
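
    As a rough illustration of the kind of reference-free foot placement the LIPM admits, the sketch below (not taken from the thesis; all names and parameter values are assumptions) computes the next foothold from the divergent component of motion and a single commanded velocity, which is the flavor of stepping that the MPC formulation above generalizes.

        import numpy as np

        # Illustrative sketch: divergent-component-of-motion (DCM) foot placement
        # for the Linear Inverted Pendulum Model. Values are assumed, not the thesis'.
        g = 9.81          # gravity [m/s^2]
        z0 = 0.8          # assumed constant CoM height [m]
        omega = np.sqrt(g / z0)

        def next_foot_placement(x, xdot, p_stance, v_des, T_step):
            """Return the next foothold that drives the 1-D LIPM toward v_des."""
            dcm = x + xdot / omega                                 # instantaneous capture point
            dcm_td = p_stance + (dcm - p_stance) * np.exp(omega * T_step)  # DCM at touchdown
            offset = v_des * T_step / (np.exp(omega * T_step) - 1.0)       # steady-state DCM offset
            return dcm_td - offset                                 # place foot behind predicted DCM

        # Example: walking at 0.5 m/s with 0.4 s steps, starting nearly at rest.
        print(next_foot_placement(x=0.0, xdot=0.1, p_stance=0.0, v_des=0.5, T_step=0.4))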

    Genetically evolved dynamic control for quadruped walking

    Get PDF
    The aim of this dissertation is to show that dynamic control of quadruped locomotion is achievable through the use of genetically evolved central pattern generators. This strategy is tested both in simulation and on a walking robot. The design of the walker has been chosen to be statically unstable, so that during motion fewer than three supporting feet may be in contact with the ground. The adopted control strategy is capable of propelling the artificial walker at a forward speed of ~1.5 km/h on rugged terrain and provides for stability of motion. The learning of walking, based on simulated genetic evolution, is carried out in simulation to speed up the process and reduce the amount of damage to the hardware of the walking robot. To this end, a fast general-purpose dynamic simulator has been developed that efficiently computes the forward dynamics of tree-like robotic mechanisms. An optimization process to select stable walking patterns is implemented through a purpose-designed genetic algorithm with stochastic mutation and crossover operators. The algorithm has been tailored to address the high cost of evaluating the optimization function, as well as the characteristics of the parameter space chosen to represent controllers. Experiments carried out under different conditions give clear indications of the potential of the adopted approach. A proof of concept is achieved: stable dynamic walking can be obtained through a search process that identifies attractors in the dynamics of the motor-control system of an artificial walker
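
    A minimal sketch of the kind of genetic search described above, under assumed representations: real-valued controller parameter vectors evolved with crossover and stochastic mutation against an expensive, simulation-based fitness. The placeholder evaluate() below stands in for a full forward-dynamics rollout of the walker and is not the dissertation's fitness function.

        import random

        # Assumed sizes for illustration only.
        N_PARAMS, POP_SIZE, GENERATIONS = 12, 30, 50

        def evaluate(genome):
            # Placeholder fitness: in the dissertation this would run the dynamic
            # simulator and score walking distance/stability of the evolved CPG.
            return -sum((g - 0.5) ** 2 for g in genome)

        def crossover(a, b):
            cut = random.randint(1, N_PARAMS - 1)      # single-point crossover
            return a[:cut] + b[cut:]

        def mutate(genome, rate=0.1, sigma=0.2):
            return [g + random.gauss(0, sigma) if random.random() < rate else g
                    for g in genome]

        population = [[random.random() for _ in range(N_PARAMS)] for _ in range(POP_SIZE)]
        for _ in range(GENERATIONS):
            scored = sorted(population, key=evaluate, reverse=True)
            parents = scored[:POP_SIZE // 2]           # truncation selection keeps the best half
            children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                        for _ in range(POP_SIZE - len(parents))]
            population = parents + children

        print(max(evaluate(g) for g in population))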

    The implications of embodiment for behavior and cognition: animal and robotic case studies

    Full text link
    In this paper, we will argue that if we want to understand the function of the brain (or of the control in the case of robots), we must understand how the brain is embedded into the physical system, and how the organism interacts with the real world. While embodiment has often been used in its trivial meaning, i.e. 'intelligence requires a body', the concept has deeper and more important implications, concerned with the relation between physical and information (neural, control) processes. A number of case studies are presented to illustrate the concept. These involve animals and robots and are concentrated around locomotion, grasping, and visual perception. A theoretical scheme that can be used to embed the diverse case studies will be presented. Finally, we will establish a link between the low-level sensory-motor processes and cognition. We will present an embodied view on categorization, and propose the concepts of 'body schema' and 'forward models' as a natural extension of the embodied approach toward first representations. Comment: book chapter in W. Tschacher & C. Bergomi, eds., 'The Implications of Embodiment: Cognition and Communication', Exeter: Imprint Academic, pp. 31-5

    Adaptive, fast walking in a biped robot under neuronal control and learning

    Get PDF
    Human walking is a dynamic, partly self-stabilizing process relying on the interaction of the biomechanical design with its neuronal control. The coordination of this process is a very difficult problem, and it has been suggested that it involves a hierarchy of levels, where the lower ones, e.g., interactions between muscles and the spinal cord, are largely autonomous, and where higher-level control (e.g., cortical) arises only pointwise, as needed. This requires an architecture of several nested sensori-motor loops in which the walking process provides feedback signals to the walker's sensory systems, which can be used to coordinate its movements. To complicate the situation, at a maximal walking speed of more than four leg lengths per second, the cycle period available to coordinate all these loops is rather short. In this study we present a planar biped robot that uses the design principle of nested loops to combine the self-stabilizing properties of its biomechanical design with several levels of neuronal control. Specifically, we show how to adapt control by including online learning mechanisms based on simulated synaptic plasticity. This robot can walk at high speed (> 3.0 leg lengths/s), self-adapting to minor disturbances and reacting robustly to abruptly induced gait changes. At the same time, it can learn to walk on different terrains, requiring only a few learning experiences. This study shows that the tight coupling of physical with neuronal control, guided by sensory feedback from the walking pattern itself and combined with synaptic learning, may be a way forward to better understand and solve coordination problems in other complex motor tasks
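
    One common way to realize online learning through simulated synaptic plasticity is a differential Hebbian rule that correlates an early, predictive sensor signal with the change of a later reflex signal. The sketch below is such a rule under assumed names and gains, offered only as an illustration of the mechanism, not as the robot's actual learning circuit.

        # Differential Hebbian weight update: grow the synaptic weight w when the
        # predictive signal precedes a change in the reflex signal (illustrative only).
        def differential_hebbian_step(w, x_pred, reflex, reflex_prev, lr=0.01, dt=0.01):
            d_reflex = (reflex - reflex_prev) / dt   # temporal derivative of the reflex pathway
            return w + lr * x_pred * d_reflex        # correlate prediction with reflex change

        # Motor command combining the learned (anticipatory) and reflexive pathways.
        def motor_output(w, x_pred, reflex):
            return w * x_pred + reflex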

    In silico case studies of compliant robots: AMARSI deliverable 3.3

    Get PDF
    In Deliverable 3.2 we presented how the morphological computation approach can significantly simplify the control strategy in several scenarios, e.g. quadruped locomotion, bipedal locomotion and reaching. In particular, the Kitty experimental platform is an example of the use of morphological computation to achieve quadruped locomotion. In this deliverable we continue the simulation studies on the application of different morphological computation strategies to control a robotic system

    Learning Terrain Dynamics: A Gaussian Process Modeling and Optimal Control Adaptation Framework Applied to Robotic Jumping

    Get PDF
    The complex dynamics characterizing deformable terrain pose significant impediments to the real-world viability of locomotive robotics, particularly for legged machines. We explore vertical robotic jumping as a model task for legged locomotion on presumed-uncharacterized, nonrigid terrain. By integrating Gaussian process (GP)-based regression and evaluation to estimate ground reaction forces as a function of the state, a 1-D jumper acquires the capability to learn the forcing profiles exerted by its environment in tandem with achieving its control objective. The GP-based dynamical model initially assumes a baseline rigid, noncompliant surface. As part of an iterative procedure, the optimizer employing this model generates an optimal control strategy to achieve a target jump height. Experiential data recovered from execution on the true surface are then used to train the GP, in turn providing the optimizer with a more richly informed dynamical model of the environment. The iterative control-learning procedure was rigorously evaluated in experiments over different surface types, in which a robotic hopper was challenged to jump to several different target heights. Each task was achieved within ten attempts, over which the terrain's dynamics were learned. With each iteration, GP predictions of ground forcing became incrementally refined, rapidly matching experimental force measurements. The few-iteration convergence demonstrates a fundamental capacity to both estimate and adapt to unknown terrain dynamics on application-realistic time scales, all with control tools amenable to robotic legged locomotion
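
    The GP ingredient of this framework can be pictured as follows: regress measured ground reaction force on a low-dimensional state, then query the model (with its uncertainty) inside the jump-height optimizer and retrain it after every attempt. The sketch below uses scikit-learn with entirely hypothetical data, kernel settings and variable names; it is not the authors' implementation.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        # Hypothetical logged samples from one hop: state = [foot depth (m), velocity (m/s)]
        X = np.array([[0.00, -0.9], [0.01, -0.6], [0.02, -0.2], [0.02, 0.1], [0.01, 0.5]])
        y = np.array([0.0, 35.0, 80.0, 70.0, 20.0])   # measured vertical ground reaction force [N]

        kernel = RBF(length_scale=[0.01, 0.5]) + WhiteKernel(noise_level=1.0)
        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

        # The optimizer would call this in place of a rigid-contact force law and
        # refit the GP with new force measurements after each jump attempt.
        mean, std = gp.predict(np.array([[0.015, -0.4]]), return_std=True)
        print(mean, std)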

    Development of track-driven agriculture robot with terrain classification functionality / Khairul Azmi Mahadhir

    Get PDF
    Over the past years, many robots have been devised to facilitate agricultural activities (which are labor-intensive in nature) so that tasks such as crop care or selective harvesting can be carried out with minimal human supervision. It is commonly observed that rapid changes in terrain conditions can jeopardize the performance and efficiency of a robot performing agricultural work. For instance, a terrain covered with gravel transmits strong vibration to a robot traversing its surface. In this work, an agricultural robot is equipped with a machine learning algorithm based on the Support Vector Machine (SVM). The aim is to evaluate the effectiveness of the SVM in recognizing different terrain conditions in an agricultural field. A test bed comprising a track-driven robot and three types of terrain, i.e. sand, gravel and vegetation, has been developed. A small, low-power MEMS accelerometer is integrated into the robot to measure vertical acceleration. In the experiments, the vibration signals resulting from the interaction between the robot and the different types of terrain were collected, and an extensive experimental study was conducted to evaluate the effectiveness of the SVM. The terrain classification accuracy of two machine learning techniques is analyzed and compared. The results show that a robot equipped with an SVM can recognize different terrain conditions effectively. Such a capability enables the robot to traverse changing terrain conditions without becoming trapped in the field. Hence, this research contributes to the development of a self-adaptive agricultural robot that copes with different terrain conditions with minimal human supervision
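
    The classification step described above can be pictured as follows: windowed vertical-acceleration signals are reduced to a few vibration features and fed to a support vector machine. The sketch below uses scikit-learn with synthetic windows and assumed feature choices, not the robot's recorded measurements.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import train_test_split

        def vibration_features(window):
            """RMS, variance and peak-to-peak amplitude of one acceleration window."""
            return [np.sqrt(np.mean(window ** 2)), np.var(window), np.ptp(window)]

        rng = np.random.default_rng(0)
        labels = {0: "sand", 1: "gravel", 2: "vegetation"}
        X, y = [], []
        for cls, scale in [(0, 0.2), (1, 1.0), (2, 0.5)]:      # rougher terrain -> stronger vibration
            for _ in range(100):
                window = rng.normal(0.0, scale, size=256)       # synthetic vertical-acceleration window
                X.append(vibration_features(window))
                y.append(cls)

        X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
        clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_train, y_train)
        print("held-out accuracy:", clf.score(X_test, y_test))

        # Classify a fresh (synthetic) gravel-like window.
        print(labels[int(clf.predict([vibration_features(rng.normal(0.0, 1.0, size=256))])[0])])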