Multiple chaotic central pattern generators with learning for legged locomotion and malfunction compensation
An originally chaotic system can be controlled into various periodic dynamics. When it is implemented into a legged robot's locomotion control as a central pattern generator (CPG), sophisticated gait patterns arise so that the robot can perform various walking behaviors. However, such a single chaotic CPG controller has difficulties dealing with leg malfunction. Specifically, in the scenarios presented here, its movement permanently deviates from the desired trajectory. To address this problem, we extend the single chaotic CPG to multiple CPGs with learning. The learning mechanism is based on a simulated annealing algorithm. In a normal situation, the CPGs synchronize and their dynamics are identical. With leg malfunction or disability, the CPGs lose synchronization, leading to independent dynamics. In this case, the learning mechanism is applied to automatically adjust the remaining legs' oscillation frequencies so that the robot adapts its locomotion to deal with the malfunction. As a consequence, the trajectory produced by the multiple chaotic CPGs resembles the original trajectory far better than the one produced by only a single CPG. The performance of the system is evaluated first in a physical simulation of a quadruped as well as a hexapod robot and finally in a real six-legged walking machine called AMOSII. The experimental results presented here reveal that using multiple CPGs with learning is an effective approach for adaptive locomotion generation where, for instance, different body parts have to perform independent movements for malfunction compensation.
Comment: 48 pages, 16 figures, Information Sciences 201
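
The abstract describes the learning mechanism only at a high level. The following is a minimal sketch, not the authors' implementation, of how a simulated-annealing rule could adjust the remaining legs' oscillation frequencies after a malfunction; the cost function trajectory_deviation, its hypothetical target frequency, and the cooling parameters are illustrative assumptions.

# Minimal sketch: simulated-annealing adaptation of per-leg CPG frequencies.
# The cost function is a stand-in for the measured deviation between the
# robot's walked trajectory and the desired one (an assumption, not the paper's code).
import math
import random

def trajectory_deviation(frequencies):
    """Placeholder cost: deviation of the resulting walking trajectory from
    the desired path for a given set of leg frequencies."""
    target = 1.0  # hypothetical frequency that best compensates the malfunction
    return sum((f - target) ** 2 for f in frequencies)

def anneal_frequencies(frequencies, t_start=1.0, t_end=1e-3, cooling=0.95,
                       step=0.05, iters_per_temp=20):
    """Adjust the per-leg CPG oscillation frequencies by simulated annealing."""
    current = list(frequencies)
    current_cost = trajectory_deviation(current)
    best, best_cost = list(current), current_cost
    temperature = t_start
    while temperature > t_end:
        for _ in range(iters_per_temp):
            candidate = [f + random.uniform(-step, step) for f in current]
            cost = trajectory_deviation(candidate)
            # Accept improvements always; accept worse candidates with Boltzmann probability.
            if cost < current_cost or random.random() < math.exp((current_cost - cost) / temperature):
                current, current_cost = candidate, cost
                if cost < best_cost:
                    best, best_cost = list(candidate), cost
        temperature *= cooling
    return best

# Example: five remaining legs of a hexapod, all starting at a nominal frequency.
print(anneal_frequencies([0.8] * 5))
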
Synaptic plasticity in a recurrent neural network for versatile and adaptive behaviors of a walking robot
Walking animals, such as insects, can effectively perform complex behaviors with little neural computing. They can walk around their environment, escape from corners and deadlocks, and avoid or climb over obstacles. While performing all these behaviors, they can also adapt their movements to deal with unknown situations. As a consequence, they successfully navigate through their complex environment. These versatile and adaptive abilities result from the integration of several ingredients embedded in their sensorimotor loop. Biological studies reveal that these ingredients include neural dynamics, plasticity, sensory feedback, and biomechanics. Generating such versatile and adaptive behaviors for a walking robot is a challenging task. In this study, we present a bio-inspired approach to solve this task. Specifically, the approach combines neural mechanisms with plasticity, sensory feedback, and biomechanics. The neural mechanisms consist of adaptive neural sensory processing and modular neural locomotion control. The sensory processing is based on a small recurrent network consisting of two fully connected neurons. Online correlation-based learning with synaptic scaling is applied to adequately change the connections of the network. By doing so, we can effectively exploit neural dynamics (i.e., hysteresis effects and single attractors) in the network to generate different turning angles with short-term memory for a biomechanical walking robot. The turning information is transmitted as descending steering signals to the locomotion control, which translates the signals into motor actions. As a result, the robot can walk around and adapt its turning angle to avoid obstacles in different situations as well as to escape from sharp corners or deadlocks. Using backbone joint control embedded in the locomotion control allows the robot to climb over small obstacles. Consequently, it can successfully explore and navigate in complex environments.
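
As a rough illustration of the sensory-processing idea described above, here is a minimal sketch of a two-neuron fully connected recurrent network with online correlation-based (Hebbian) learning and a homeostatic synaptic-scaling term. The input signals, learning rates, and scaling rule are assumptions for illustration, not the paper's implementation.

# Minimal sketch: two fully connected neurons with correlation-based learning
# and synaptic scaling. Inputs stand in for preprocessed obstacle sensor signals.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class TwoNeuronNetwork:
    def __init__(self, learning_rate=0.01, scaling=0.001, target=1.0):
        # Recurrent weights: w[i][j] is the connection from neuron j to neuron i.
        self.w = [[0.5, -0.5], [-0.5, 0.5]]
        self.out = [0.0, 0.0]
        self.lr = learning_rate
        self.scaling = scaling
        self.target = target  # assumed target activity level for synaptic scaling

    def step(self, inputs):
        # Update neuron outputs from external inputs and recurrent activity.
        new_out = []
        for i in range(2):
            act = inputs[i] + sum(self.w[i][j] * self.out[j] for j in range(2))
            new_out.append(sigmoid(act))
        # Correlation-based (Hebbian) update plus synaptic scaling: weights grow
        # with correlated pre/postsynaptic activity and are pulled back toward
        # the target activity level to keep the dynamics bounded.
        for i in range(2):
            for j in range(2):
                hebb = self.lr * new_out[i] * self.out[j]
                scale = self.scaling * (self.target - new_out[i]) * self.w[i][j]
                self.w[i][j] += hebb + scale
        self.out = new_out
        return self.out

# Example: drive the network with a hypothetical "obstacle on the left" signal.
net = TwoNeuronNetwork()
for t in range(100):
    outputs = net.step([1.0, 0.0])
print(outputs, net.w)
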
Robots as Powerful Allies for the Study of Embodied Cognition from the Bottom Up
A large body of compelling evidence demonstrates that embodiment (the agent's physical setup, including its shape, materials, sensors, and actuators) is constitutive of any form of cognition and that, as a consequence, models of cognition need to be embodied. In contrast to the methods of the empirical sciences for studying cognition, robots can be freely manipulated, and virtually all key variables of their embodiment and control programs can be systematically varied. As such, they provide an extremely powerful tool of investigation. We present a robotic bottom-up or developmental approach, focusing on three stages: (a) low-level behaviors such as walking and reflexes, (b) learning regularities in sensorimotor spaces, and (c) human-like cognition. We also show that robot-based research is not only a productive path toward deepening our understanding of cognition, but that robots can strongly benefit from human-like cognition in order to become more autonomous, robust, resilient, and safe.
Bridging Vision and Dynamic Legged Locomotion
Legged robots have demonstrated remarkable advances in robustness and versatility over the past decades. The questions that need to be addressed in this field increasingly focus on reasoning about the environment and on autonomy rather than on locomotion alone. To answer some of these questions, visual information is essential. If a robot has information about the terrain, it can plan and take preventive actions against potential risks. However, building a model of the terrain is often computationally costly, mainly because of the dense nature of visual data. On top of the mapping problem, robots need feasible body trajectories and contact sequences to traverse the terrain safely, which may also require heavy computations. This computational cost has limited the use of visual feedback to contexts that guarantee (quasi-)static stability, or to planning schemes where contact sequences and body trajectories are computed before starting to execute motions. In this thesis we propose a set of algorithms that reduce the gap between visual processing and dynamic locomotion. We use machine learning to speed up visual data processing and model predictive control to achieve locomotion robustness. In particular, we devise a novel foothold adaptation strategy that uses a map of the terrain built from on-board vision sensors. This map is sent to a foothold classifier based on a convolutional neural network that allows the robot to adjust the landing position of its feet in a fast and continuous fashion. We then use the convolutional neural network-based classifier to provide safe future contact sequences to a model predictive controller that optimizes target ground reaction forces in order to track a desired center of mass trajectory. We perform simulations and experiments on the hydraulic quadruped robots HyQ and HyQReal. For all experiments, the contact sequences, the foothold adaptations, the control inputs, and the map are computed and processed entirely on-board. The various tests show that the robot is able to leverage the visual terrain information to handle complex scenarios in a safe, robust, and reliable manner.
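
To make the foothold-classification step concrete, the following is a minimal sketch of a convolutional network that takes a small single-channel heightmap patch around the nominal foothold and predicts the safest of a fixed grid of candidate foot-position offsets. The patch size, layer layout, and number of candidates are assumptions for illustration, not the architecture deployed on HyQ or HyQReal.

# Minimal sketch: CNN foothold classifier over a local heightmap patch.
import torch
import torch.nn as nn

class FootholdClassifier(nn.Module):
    def __init__(self, num_candidates=9):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),   # single-channel heightmap patch
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # 16x16 input -> two 2x2 poolings -> 4x4 feature maps with 16 channels.
        self.classifier = nn.Linear(16 * 4 * 4, num_candidates)

    def forward(self, patch):
        x = self.features(patch)
        return self.classifier(x.flatten(start_dim=1))

# Example: classify one 16x16 patch and pick the best candidate offset
# (index into an assumed 3x3 grid of foot-position adjustments).
model = FootholdClassifier()
patch = torch.randn(1, 1, 16, 16)
logits = model(patch)
best_offset_index = logits.argmax(dim=1)
print(best_offset_index)
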
In silico case studies of compliant robots: AMARSI deliverable 3.3
In deliverable 3.2 we presented how the morphological computing approach can significantly facilitate the control strategy in several scenarios, e.g. quadruped locomotion, bipedal locomotion, and reaching. In particular, the Kitty experimental platform is an example of the use of morphological computation to enable quadruped locomotion. In this deliverable we continue with simulation studies on the application of the different morphological computation strategies to control a robotic system.