
    Neuro-mechanical entrainment in a bipedal robotic walking platform

    In this study, we investigated the use of van der Pol oscillators in a 4-DOF embodied bipedal robotic platform for planar walking. The oscillator controlled the hip and knee joints of the robot and was capable of generating waveforms with the correct frequency and phase so as to entrain with the mechanical system. Lowering its oscillation frequency resulted in an increase in the walking pace, indicating exploitation of the robot's global natural dynamics. This is verified by its operation in the absence of entrainment, where faster limb motion results in a slower overall walking pace.
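    The central-pattern-generator idea in this abstract rests on the van der Pol oscillator's self-sustaining limit cycle. A minimal sketch (not the paper's controller; integration scheme, parameters, and step size are illustrative assumptions) of the unforced oscillator x'' = mu*(1 - x^2)*x' - x:

    ```python
    # Minimal sketch: a van der Pol oscillator integrated with semi-implicit
    # Euler, illustrating the self-sustaining limit cycle that a CPG-style
    # joint controller can entrain to. Parameters are illustrative only.
    def van_der_pol(mu=1.0, x0=0.1, v0=0.0, dt=1e-3, steps=50_000):
        x, v = x0, v0
        xs = []
        for _ in range(steps):
            a = mu * (1.0 - x * x) * v - x  # van der Pol acceleration
            v += a * dt                     # update velocity first
            x += v * dt                     # then position (semi-implicit)
            xs.append(x)
        return xs

    xs = van_der_pol()
    # Starting from a small perturbation, the trajectory converges to a
    # limit cycle of amplitude roughly 2, independent of initial conditions.
    ```

    The attractor property is what matters for walking: a mechanical load that perturbs the oscillator's phase gets pulled back onto the cycle, which is the entrainment mechanism the abstract describes.
    
    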


    A Practical Fuzzy Controller with Q-learning Approach for the Path Tracking of a Walking-aid Robot

    This study tackles the path tracking problem of a prototype walking-aid (WAid) robot that features human-robot interactive navigation. A practical fuzzy controller with reinforcement learning capability is proposed for path tracking control. The inputs to the designed fuzzy controller are the error distance and the error angle between the current and the desired position and orientation, respectively. The controller outputs are the voltages applied to the left- and right-wheel motors. A heuristic fuzzy controller with Sugeno-type rules is then designed using a model-free approach, and the consequent part of each fuzzy control rule is tuned with the aid of a Q-learning approach. The design of the controller is presented in detail, and its effectiveness is demonstrated by hardware implementation and experimental results in a human-robot interaction environment. The results also show that the proposed path tracking control methods can be easily applied to various wheeled mobile robots. (International conference, 14–17 September 2014, Nagoya, Japan.)
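    The Sugeno-type inference described above can be sketched compactly. This is a hypothetical illustration, not the paper's controller: the membership shapes, fuzzy-set labels, and crisp consequent values are invented for the example (in the paper the consequents would be learned via Q-learning rather than fixed).

    ```python
    # Hypothetical sketch of zero-order Sugeno fuzzy inference for path
    # tracking. Inputs: error distance e_d and error angle e_a; output: a
    # motor voltage as the firing-strength-weighted average of crisp
    # consequents. All shapes and constants are illustrative assumptions.

    def tri(x, a, b, c):
        """Triangular membership function rising from a, peaking at b, falling to c."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def sugeno_voltage(e_d, e_a):
        # Two fuzzy sets per input: Small/Large distance, Negative/Positive angle.
        dist = {"S": tri(e_d, -1.0, 0.0, 1.0), "L": tri(e_d, 0.0, 1.0, 2.0)}
        ang = {"N": tri(e_a, -2.0, -1.0, 0.0), "P": tri(e_a, 0.0, 1.0, 2.0)}
        # Crisp consequent voltages; with Q-learning these entries would be
        # updated from reward feedback instead of being fixed.
        rules = {("S", "N"): -1.0, ("S", "P"): 1.0,
                 ("L", "N"): -3.0, ("L", "P"): 3.0}
        num = den = 0.0
        for (d, a), u in rules.items():
            w = dist[d] * ang[a]  # product t-norm firing strength
            num += w * u
            den += w
        return num / den if den > 0 else 0.0
    ```

    For example, `sugeno_voltage(0.5, 0.5)` fires the (S, P) and (L, P) rules equally and returns the average of their consequents, 2.0 V. Q-learning would adjust the `rules` table online, which is what makes the approach model-free.
    
    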

    VIRTUAL ROBOT LOCOMOTION ON VARIABLE TERRAIN WITH ADVERSARIAL REINFORCEMENT LEARNING

    Reinforcement Learning (RL) is a machine learning technique in which an agent learns to perform a complex action through a repeated process of trial and error that maximizes a well-defined reward function. This form of learning has found applications in robot locomotion, where it has been used to teach robots to traverse complex terrain. While RL algorithms may work well in training robot locomotion, they tend not to generalize well when the agent is placed in an environment it has never encountered before. One solution from the literature is to train a destabilizing adversary alongside the locomotive learning agent. The adversary applies external forces to the agent, which may help it learn to cope with unexpected scenarios. For this project, we train a robust, simulated quadruped robot to traverse variable terrain. We compare and analyze Proximal Policy Optimization (PPO) with and without an adversarial agent, and determine which variant produces the best results.
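    The adversarial-forces idea can be shown without a full PPO pipeline. Below is an illustrative sketch, not the project's code: a toy 1-D point-mass environment wrapped so that an adversary (here simply a bounded random force; a trained adversary would choose the force to minimize the protagonist's reward) perturbs every action.

    ```python
    # Illustrative sketch of adversarial perturbation during training: a toy
    # point-mass environment whose actions are corrupted by a bounded
    # external force. Names and dynamics are invented for the example.
    import random

    class PointMassEnv:
        """1-D point mass the protagonist tries to push toward the origin."""
        def __init__(self):
            self.x, self.v = 1.0, 0.0
        def step(self, force, dt=0.05):
            self.v += force * dt
            self.x += self.v * dt
            return self.x, -abs(self.x)  # (state, reward closer to 0 is better)

    class AdversarialWrapper:
        """Adds a bounded adversary force to every protagonist action."""
        def __init__(self, env, max_adv_force=0.5):
            self.env, self.max_adv = env, max_adv_force
        def step(self, force):
            adv = random.uniform(-self.max_adv, self.max_adv)
            return self.env.step(force + adv)

    env = AdversarialWrapper(PointMassEnv())
    state, reward = env.step(-1.0)  # protagonist pushes toward the origin
    ```

    A policy trained against such perturbations must succeed under a range of disturbances rather than one fixed dynamics, which is the robustness argument the abstract makes for pairing PPO with an adversary.
    
    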