221 research outputs found

    Mechanism and Control of Anthropomorphic Biped Robots


    Study on a bipedal walking robot that adapts to real-world obstacles and changing terrains

    Degree system: new; report number: 甲3056; degree type: Doctor of Engineering; date conferred: 2010/3/15; Waseda University degree number: 新531

    Push recovery with stepping strategy based on time-projection control

    In this paper, we present a simple control framework for on-line push recovery with dynamic stepping properties. Because our robot's legs are relatively heavy, we must take swing dynamics into account, and we therefore use a linear model called 3LP, composed of three pendulums, to simulate swing and torso dynamics. Based on the 3LP equations, we formulate discrete LQR controllers and use a particular time-projection method to continuously adjust the next footstep location on-line during the motion. This adjustment, computed from both pelvis and swing-foot tracking errors, naturally takes the swing dynamics into account. The suggested adjustments are added to the Cartesian 3LP gaits and converted to joint-space trajectories through inverse kinematics. Fixed and adaptive foot-lift strategies also ensure enough ground clearance in perturbed walking conditions. The proposed structure is robust, yet uses very simple state estimation and basic position tracking. We rely on the physical series elastic actuators to absorb impacts, while introducing simple laws to compensate their tracking bias. Extensive experiments demonstrate the functionality of the different control blocks and prove the effectiveness of time-projection in extreme push-recovery scenarios. We also show self-produced, emergent walking gaits when the robot is subject to continuous dragging forces. These gaits owe their dynamic robustness to the relatively soft ankle springs and to the absence of any Zero Moment Point (ZMP) control in the proposed architecture. Comment: 20 pages, journal paper.
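The footstep-adjustment idea above can be sketched in miniature: a discrete LQR gain maps the post-push state error to a footstep offset. The two-state, pendulum-like step-to-step model below (matrices `A`, `B`, `Q`, `R`) is an illustrative placeholder, not the actual 3LP dynamics.

```python
import numpy as np

# Toy linearized step-to-step dynamics of a pendulum-like walker:
# x = [CoM position error, CoM velocity error], u = footstep offset.
A = np.array([[1.0, 0.1],
              [0.5, 1.0]])
B = np.array([[0.0],
              [0.4]])
Q = np.diag([10.0, 1.0])   # state-tracking weight
R = np.array([[0.1]])      # footstep-adjustment effort weight

def dlqr(A, B, Q, R, iters=500):
    """Discrete LQR gain via fixed-point iteration of the Riccati equation."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

K = dlqr(A, B, Q, R)

# One recovery step: a push leaves a velocity error; the controller
# shifts the next footstep to drive the error back toward zero.
x = np.array([0.0, 0.3])   # post-push state error
u = -K @ x                 # footstep-offset command for the next step
x_next = A @ x + B @ u     # error propagated through one step
```

The closed-loop matrix `A - B @ K` having all eigenvalues inside the unit circle is what makes the error decay step after step.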

    Adaptive, fast walking in a biped robot under neuronal control and learning

    Human walking is a dynamic, partly self-stabilizing process relying on the interaction of the biomechanical design with its neuronal control. Coordinating this process is a very difficult problem, and it has been suggested that it involves a hierarchy of levels, where the lower ones, e.g., interactions between muscles and the spinal cord, are largely autonomous, and higher-level (e.g., cortical) control arises only pointwise, as needed. This requires an architecture of several nested sensorimotor loops, in which the walking process provides feedback signals to the walker's sensory systems that can be used to coordinate its movements. To complicate the situation, at a maximal walking speed of more than four leg-lengths per second, the cycle period available to coordinate all these loops is rather short. In this study we present a planar biped robot which uses the design principle of nested loops to combine the self-stabilizing properties of its biomechanical design with several levels of neuronal control. Specifically, we show how to adapt control by including online learning mechanisms based on simulated synaptic plasticity. The robot can walk at high speed (>3.0 leg-lengths/s), self-adapting to minor disturbances and reacting robustly to abruptly induced gait changes. At the same time, it can learn to walk on different terrains, requiring only a few learning experiences. This study shows that the tight coupling of physical with neuronal control, guided by sensory feedback from the walking pattern itself and combined with synaptic learning, may be a way forward to better understand and solve coordination problems in other complex motor tasks.
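The kind of online synaptic plasticity mentioned above can be illustrated with a minimal, hypothetical rule: a reflex synapse whose weight grows with the correlation between a presynaptic sensor signal and a postsynaptic motor signal, balanced by passive decay. All names and constants here are illustrative, not taken from the paper.

```python
def hebbian_update(w, pre, post, eta=0.01, decay=0.001):
    """One plasticity step: correlation-driven growth plus passive decay."""
    return w + eta * pre * post - decay * w

# Repeated co-activation strengthens the reflex synapse toward a fixed
# point at eta * pre * post / decay, where growth and decay balance.
w = 0.5
for _ in range(100):
    pre = 1.0    # e.g. ground-contact sensor active
    post = 0.8   # e.g. extensor motor-neuron output
    w = hebbian_update(w, pre, post)
```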

    Learning hybrid locomotion skills—Learn to exploit residual actions and modulate model-based gait control

    This work develops a hybrid framework that combines machine learning and control approaches so that legged robots can balance against external perturbations. The framework embeds, as its kernel, a model-based, fully parametric, closed-loop analytical controller serving as the gait pattern generator. On top of that, a neural network with symmetric partial data augmentation learns to automatically adjust the parameters of the gait kernel and to generate compensatory actions for all joints, significantly augmenting stability under unexpected perturbations. Seven neural-network policies with different configurations were optimized to validate the effectiveness of the combined use of kernel-parameter modulation and residual-action compensation for the arms and legs. The results confirm that modulating the kernel parameters alongside the residual actions significantly improves stability. The performance of the proposed framework was further evaluated across a set of challenging simulated scenarios and showed considerable improvements over the baseline in recovering from large external forces (up to 118%). Its robustness to measurement noise and model inaccuracies was also assessed in simulation. Finally, the trained policies were validated on a set of unseen scenarios and generalized to dynamic walking.
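The kernel-plus-residual scheme can be sketched as follows. The sinusoidal kernel and the hand-written "policy" are toy stand-ins for the paper's gait generator and trained network; only the structure (learned parameter deltas plus a residual joint action added to a nominal command) reflects the described approach.

```python
import numpy as np

def gait_kernel(phase, amplitude, frequency):
    """Nominal periodic joint target for one joint (toy gait kernel)."""
    return amplitude * np.sin(2 * np.pi * frequency * phase)

def policy(state):
    """Stand-in for the trained network: returns (kernel-parameter
    deltas, residual joint action)."""
    d_amp, d_freq = 0.05 * state[0], 0.0
    residual = -0.1 * state[1]          # compensatory joint action
    return (d_amp, d_freq), residual

state = np.array([0.2, 0.1])            # e.g. torso lean, lean rate
(d_amp, d_freq), residual = policy(state)

# Final command: modulated kernel output plus the residual action.
q_cmd = gait_kernel(phase=0.25, amplitude=0.4 + d_amp,
                    frequency=1.0 + d_freq) + residual
```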

    Walking Pattern and Compensatory Body Motion of Biped Humanoid Robot

    This paper presents a walking pattern generation method for biped walking. Walking is divided into three phases: a double-support phase, a swing phase and a contact phase. In the swing phase, the leg motion pattern is produced using a sixth-order polynomial, while in the contact and double-support phases it is generated using a quintic polynomial. When a biped humanoid robot walks dynamically on the ground, the motion of the lower limbs produces moments, so a moment compensation method is also discussed. Based on the motion of the lower limbs and the ZMP (Zero Moment Point), the motion of the trunk and waist is calculated to cancel these moments. The effectiveness of the moment compensation method is verified through simulation.
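A quintic polynomial of the kind described has six coefficients, exactly enough to impose position, velocity and acceleration at both ends of a phase (a sixth-order polynomial adds one more free condition, e.g. a mid-swing foot-clearance constraint). A minimal sketch with illustrative boundary values:

```python
import numpy as np

def quintic(q0, v0, a0, qf, vf, af, T):
    """Coefficients c[0..5] of q(t) = sum c[i] t**i on [0, T] that meet
    position, velocity and acceleration constraints at both endpoints."""
    M = np.array([
        [1, 0,    0,      0,       0,        0],        # q(0)
        [0, 1,    0,      0,       0,        0],        # q'(0)
        [0, 0,    2,      0,       0,        0],        # q''(0)
        [1, T,    T**2,   T**3,    T**4,     T**5],     # q(T)
        [0, 1,    2*T,    3*T**2,  4*T**3,   5*T**4],   # q'(T)
        [0, 0,    2,      6*T,     12*T**2,  20*T**3],  # q''(T)
    ], dtype=float)
    return np.linalg.solve(M, np.array([q0, v0, a0, qf, vf, af], float))

# Example segment: move a joint from 0 to 0.3 rad in 0.5 s, starting and
# ending at rest with zero acceleration (rest-to-rest motion).
c = quintic(0.0, 0.0, 0.0, 0.3, 0.0, 0.0, T=0.5)
q = lambda t: sum(ci * t**i for i, ci in enumerate(c))
```

A rest-to-rest quintic is symmetric about the midpoint, so `q(T/2)` lands exactly halfway between the endpoints.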

    Fast Damage Recovery in Robotics with the T-Resilience Algorithm

    Full text link
    Damage recovery is critical for autonomous robots that need to operate for a long time without assistance. Most current methods are complex and costly because they require anticipating each potential damage in order to have a contingency plan ready. As an alternative, we introduce T-Resilience, a new algorithm that allows robots to quickly and autonomously discover compensatory behaviors in unanticipated situations. The algorithm equips the robot with a self-model and discovers new behaviors by learning to avoid those that perform differently in the self-model and in reality. It thus does not identify the damaged parts, but implicitly searches for efficient behaviors that do not use them. We evaluate T-Resilience on a hexapod robot that must adapt to leg removal, broken legs and motor failures, and compare it to stochastic local search, policy gradient and the self-modeling algorithm proposed by Bongard et al. The robot's behavior is assessed on-board using an RGB-D sensor and a SLAM algorithm. Using only 25 tests on the robot and an overall running time of 20 minutes, T-Resilience consistently leads to substantially better results than the other approaches.
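The transferability idea at the heart of this approach can be caricatured in a few lines: rank behaviors by their performance in the self-model, but penalize those whose real-world tests disagree with the model's prediction, so behaviors relying on damaged parts (which the intact self-model mispredicts) are implicitly avoided. The scores, the penalty weight, and the per-behavior discrepancy estimate below are illustrative stand-ins, not the actual algorithm.

```python
def select_behavior(behaviors, simulate, tested):
    """Pick the behavior with the best self-model performance, discounted
    by the observed model-vs-reality discrepancy on tested behaviors."""
    best, best_score = None, float("-inf")
    for b in behaviors:
        predicted = simulate(b)
        # Discrepancy estimate: simplified here to b's own real-world
        # test if one exists (the real algorithm interpolates from
        # neighbors in behavior space).
        discrepancy = abs(predicted - tested[b]) if b in tested else 0.0
        score = predicted - 10.0 * discrepancy   # penalize mispredictions
        if score > best_score:
            best, best_score = b, score
    return best
```

For example, if behavior "a" looks best in the model but performed poorly on the damaged robot, the penalty steers selection toward the next-best transferable behavior.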

    Mechanical engineering challenges in humanoid robotics

    Thesis (S.B.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2011. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 36-39). Humanoid robots are artificial constructs designed to emulate the human body in form and function. They are a unique class of robots whose anthropomorphic nature renders them particularly well suited to interacting with humans in a world designed for humans. The present work examines a subset of the plethora of engineering challenges that face modern developers of humanoid robots, focusing on challenges that fall within the domain of mechanical engineering. The challenge of emulating human bipedal locomotion on a robotic platform is reviewed in the context of the evolutionary origins of human bipedalism and the biomechanics of walking and running. Bipedal robots under precise joint-angle control and passive-dynamic walkers, the two most prominent classes of modern bipedal robots, are found to have their own strengths and shortcomings; an integration of the strengths of both classes is likely to characterize the next generation of humanoid robots. The challenge of replicating human arm and hand dexterity with a robotic system is reviewed in the context of the evolutionary origins and kinematic structure of the human forelimbs. Form-focused design and function-focused design, two distinct approaches to the design of modern robotic arms and hands, are likewise found to have their own strengths and shortcomings, and an integration of the strengths of both approaches is likely to characterize the next generation of humanoid robots. By Peter Guang Yi Lu. S.B.

    Fast biped walking with a neuronal controller and physical computation

    Biped walking remains a difficult problem, and robot models can greatly facilitate our understanding of the underlying biomechanical principles as well as their neuronal control. The goal of this study is to demonstrate that stable biped walking can be achieved by combining the physical properties of the walking robot with a small, reflex-based neuronal network governed mainly by local sensor signals. The study shows that human-like gaits emerge without specific position or trajectory control and that the walker is able to compensate small disturbances through its own dynamical properties. The reflexive controller used here has the following characteristics, which distinguish it from earlier approaches: (1) Control is mainly local. Only two signals (AEA = Anterior Extreme Angle and GC = Ground Contact) operate at the inter-joint level; all other signals operate at single joints. (2) Neither position control nor trajectory-tracking control is used. Instead, the approximate nature of the local reflexes on each joint allows the robot mechanics itself (e.g., its passive dynamics) to contribute substantially to the overall gait trajectory computation. (3) The motor control scheme used in the local reflexes of our robot is more straightforward and more biologically plausible than that of other robots, because the outputs of the motor neurons in our reflexive controller directly drive the joint motors rather than serving as references for position or velocity control. As a consequence, the neural controller and the robot mechanics are closely coupled as a neuro-mechanical system, and this study emphasises that dynamically stable biped walking gaits emerge from the coupling between neural computation and physical computation. This is demonstrated by walking experiments on two real robots, as well as by a Poincaré map analysis applied to a model of the robot in order to assess its stability. In addition, this neuronal control structure allows a policy-gradient reinforcement learning algorithm to tune the parameters of the neurons in real time, during walking. In this way the robot reaches a record-breaking walking speed of 3.5 leg-lengths per second after only a few minutes of online learning, comparable to the fastest relative speed of human walking.
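A Poincaré-map stability check of the kind mentioned can be sketched numerically: linearize the step-to-step return map around its fixed point by finite differences and test whether all eigenvalues of the Jacobian lie inside the unit circle. The return map below is a toy contraction standing in for the robot model.

```python
import numpy as np

def poincare_jacobian(P, x_star, eps=1e-6):
    """Central-difference Jacobian of the return map P at fixed point x_star."""
    n = len(x_star)
    J = np.zeros((n, n))
    for i in range(n):
        dx = np.zeros(n)
        dx[i] = eps
        J[:, i] = (P(x_star + dx) - P(x_star - dx)) / (2 * eps)
    return J

# Toy return map with a fixed point at the origin (illustrative only).
P = lambda x: np.array([0.6 * x[0] + 0.1 * x[1],
                        -0.2 * x[0] + 0.5 * x[1]])
J = poincare_jacobian(P, np.zeros(2))

# The gait is locally stable iff the spectral radius of J is below 1.
stable = bool(np.all(np.abs(np.linalg.eigvals(J)) < 1.0))
```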

    A comprehensive gaze stabilization controller based on cerebellar internal models

    Gaze stabilization is essential for clear vision; it is the combined effect of two reflexes relying on vestibular inputs: the vestibulocollic reflex (VCR), which stabilizes the head in space, and the vestibulo-ocular reflex (VOR), which stabilizes the visual axis to minimize retinal image motion. The VOR works in conjunction with the opto-kinetic reflex (OKR), a visual feedback mechanism that allows the eye to move at the same speed as the observed scene. Together they keep the image stationary on the retina. In this work, we implement on a humanoid robot a model of gaze stabilization based on the coordination of the VCR, VOR and OKR. The model, inspired by neuroscientific cerebellar theories, is provided with learning and adaptation capabilities based on internal models. We present results for the gaze stabilization model on three sets of experiments conducted on the SABIAN robot and in the iCub simulator, validating the robustness of the proposed control method. The first set of experiments focused on the controller's response to a set of disturbance frequencies in the vertical plane. The second shows the performance of the system under three-dimensional disturbances. The last set was carried out to test the capability of the proposed model to stabilize the gaze in locomotion tasks. The results confirm that the proposed model is beneficial in all cases, reducing the retinal slip (the velocity of the image on the retina) and keeping the orientation of the head stable.
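How the VOR and OKR combine can be sketched with two gains: a vestibular feedforward term that counter-rotates the eye against head motion, plus visual feedback on the residual retinal slip. The gains, including the deliberately imperfect VOR gain of 0.9 whose error the OKR mops up, are illustrative assumptions, not the paper's cerebellar model.

```python
def eye_velocity_cmd(head_vel, retinal_slip, vor_gain=0.9, okr_gain=0.5):
    """Combined reflex command for eye-in-head angular velocity:
    vestibular feedforward (VOR) plus visual feedback (OKR)."""
    return -vor_gain * head_vel - okr_gain * retinal_slip

# Constant head rotation while viewing a stationary scene: the retinal
# slip is the residual gaze velocity, and the OKR feedback drives it
# below what the imperfect VOR achieves alone.
head_vel, slip = 1.0, 0.0
for _ in range(20):
    eye_vel = eye_velocity_cmd(head_vel, slip)
    slip = head_vel + eye_vel   # gaze velocity relative to the scene
```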