Kick control: using the attracting states arising within the sensorimotor loop of self-organized robots as motor primitives
Self-organized robots may develop attracting states within the sensorimotor
loop, that is, within the combined phase space of neural activity, body, and
environmental variables. Fixpoints, limit cycles, and chaotic attractors
correspond in this setting to a non-moving robot, to directed locomotion, and
to irregular locomotion, respectively. Short higher-order control commands may
hence be used to kick the system robustly from one self-organized attractor
into the basin of attraction of a different attractor, a concept termed here
kick control. The individual sensorimotor states serve in this context as
highly compliant motor primitives.
We study different implementations of kick control for simulated and
real-world wheeled robots, for which the dynamics of the individual wheels are
generated independently by local feedback loops. The feedback loops are
mediated by rate-encoding neurons that receive exclusively proprioceptive
inputs, namely projections of the current rotational angle of the wheel.
Changes in neural activity are then translated into rotational motion by a
simulated transmission rod akin to the rods used on steam locomotives.
We find that the self-organized attractor landscape may be morphed both by
higher-level control signals, in the spirit of kick control, and by
interaction with the environment. Bumping into a wall destroys the limit cycle
corresponding to forward motion, with the consequence that the dynamical
variables are then attracted in phase space by the limit cycle corresponding
to backward motion. The robot, which has no distance or contact sensors, hence
reverses direction autonomously.
Comment: 17 pages, 9 figures
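The switching mechanism described above can be illustrated with a minimal sketch. The model below is not the paper's neuron-and-transmission-rod dynamics; it is a generic subcritical-Hopf normal form, chosen because it has exactly the coexistence the abstract describes: a stable fixpoint (robot at rest) next to a stable limit cycle (wheel rotating), separated by an unstable cycle, so that a brief kick moves the state from one basin of attraction to the other. The parameter value is illustrative.

```python
import math

# Minimal sketch of kick control (illustrative, not the paper's model):
# radial dynamics of a subcritical Hopf normal form,
#   dr/dt = r * (mu + r**2 - r**4),
# where for -1/4 < mu < 0 the stable fixpoint r = 0 ("robot at rest")
# coexists with a stable limit cycle ("wheel rotating").

MU = -0.1  # bifurcation parameter (illustrative)

def settle(r, t=40.0, dt=0.01):
    """Integrate the radial dynamics with Euler steps; return the final r."""
    for _ in range(int(t / dt)):
        r += dt * r * (MU + r**2 - r**4)
    return r

# A small perturbation decays back to the resting fixpoint r = 0.
assert settle(0.05) < 0.01

# Kick control: a short command displaces r past the unstable inner cycle
# (r ~ 0.34 for mu = -0.1); the state then converges to the stable limit
# cycle at r* with r*^2 = (1 + sqrt(1 + 4*mu)) / 2.
r_star = math.sqrt((1 + math.sqrt(1 + 4 * MU)) / 2)
assert abs(settle(0.5) - r_star) < 0.02

# A kick back below the basin boundary returns the robot to rest.
assert settle(0.2) < 0.1
```

The point of the sketch is that the "kick" only needs to cross the basin boundary; the self-organized dynamics then does the rest, which is why the resulting motor primitives remain compliant.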
Synaptic plasticity in a recurrent neural network for versatile and adaptive behaviors of a walking robot
Walking animals, such as insects, can perform complex behaviors effectively with little neural computing. They can walk around their environment, escape from corners and deadlocks, and avoid or climb over obstacles. While performing all these behaviors, they can also adapt their movements to deal with unknown situations. As a consequence, they successfully navigate through their complex environment. These versatile and adaptive abilities are the result of an integration of several ingredients embedded in their sensorimotor loop. Biological studies reveal that the ingredients include neural dynamics, plasticity, sensory feedback, and biomechanics. Generating such versatile and adaptive behaviors for a walking robot is a challenging task. In this study, we present a bio-inspired approach to this task. Specifically, the approach combines neural mechanisms with plasticity, sensory feedback, and biomechanics. The neural mechanisms consist of adaptive neural sensory processing and modular neural locomotion control. The sensory processing is based on a small recurrent network consisting of two fully connected neurons. Online correlation-based learning with synaptic scaling is applied to adequately change the connections of the network. By doing so, we can effectively exploit neural dynamics (i.e., hysteresis effects and single attractors) in the network to generate different turning angles with short-term memory for a biomechanical walking robot. The turning information is transmitted as descending steering signals to the locomotion control, which translates the signals into motor actions. As a result, the robot can walk around and adapt its turning angle to avoid obstacles in different situations, as well as escape from sharp corners or deadlocks. Using backbone joint control embedded in the locomotion control allows the robot to climb over small obstacles. Consequently, it can successfully explore and navigate in complex environments.
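The hysteresis effect that the turning control exploits can be sketched with an even smaller system than the paper's two-neuron network: a single rate neuron with a strong excitatory self-connection, whose settled output depends on the direction from which the input was swept, i.e. it keeps a short-term memory of the previous steering signal. The weight and input range below are illustrative, not taken from the paper.

```python
import math

# Illustrative sketch of hysteresis in a recurrent rate neuron:
#   o <- tanh(w * o + u),  with self-connection w > 1,
# is bistable for small inputs u, so its output remembers which branch
# it was last driven to. (The paper uses a two-neuron recurrent network;
# this reduced one-neuron loop shows the same qualitative effect.)

W = 2.0  # self-connection weight (> 1 gives bistability; illustrative)

def response(o, u, steps=200):
    """Iterate the neuron to its settled output for a fixed input u."""
    for _ in range(steps):
        o = math.tanh(W * o + u)
    return o

# Sweep the input up and then back down, recording the output at u = 0.
o = -1.0
for k in range(-100, 101):          # u from -1.0 up to +1.0
    o = response(o, k / 100.0)
    if k == 0:
        up_at_zero = o
for k in range(100, -101, -1):      # u from +1.0 back down to -1.0
    o = response(o, k / 100.0)
    if k == 0:
        down_at_zero = o

# Same input, different outputs: the recurrent loop keeps a short-term
# memory of the sweep direction (branches switch only near |u| ~ 0.53).
assert up_at_zero < -0.9 and down_at_zero > 0.9
```

In the robot, this history dependence is what lets a transient sensory event (e.g. an obstacle contact) select a turning angle that persists after the event has ended.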
Mean field modelling of human EEG: application to epilepsy
Aggregated electrical activity from brain regions, recorded via the electroencephalogram (EEG),
reveals that the brain is never at rest, producing a spectrum of ongoing oscillations that
change as a result of different behavioural states and neurological conditions. In particular,
this thesis focusses on pathological oscillations associated with absence seizures that typically
affect children aged 2–16 years. Studies investigating the cellular and network mechanisms
of absence seizures have implicated abnormalities in cortical and thalamic activity in their
generation, which has provided much insight into the potential cause of this disease. A
number of competing hypotheses have been suggested; however, the precise cause has yet to
be determined. This work attempts to provide an explanation of these abnormal
rhythms by considering a physiologically based, macroscopic continuum mean-field model of
the brain's electrical activity. The methodology taken in this thesis is to assume that many
of the physiological details of the involved brain structures can be aggregated into continuum
state variables and parameters. This approach has the advantage of indirectly encapsulating
into state variables and parameters many known physiological mechanisms underlying the
genesis of epilepsy, which permits a reduction of the complexity of the problem. That is, a
macroscopic description of the brain structures involved in epilepsy is adopted, and by
scanning the parameters of the model, state changes in the system can be identified. Thus,
this work demonstrates how changes in brain state as determined in
EEG can be understood via dynamical state changes in the model providing an explanation
of absence seizures. Furthermore, key observations from both the model and EEG data
motivate a number of model reductions. These reductions provide approximate solutions of
seizure oscillations and a better understanding of the periodic oscillations arising from the
involved brain regions. Local analysis of the oscillations is performed by employing
dynamical systems theory, which provides necessary and sufficient conditions for their
appearance. Finally, local and global stability are proved for the reduced model within a
reduced region of the parameter space. The results obtained in this thesis can be extended,
and suggestions are provided for future progress in this area.
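The parameter-scanning idea, locating the dynamical state change that separates normal from seizure-like rhythms, can be sketched with a generic excitatory–inhibitory firing-rate model. This is not the thesis's corticothalamic mean-field model, and all couplings below are illustrative; the point is only that sweeping one parameter carries the system through a Hopf bifurcation, from a stable steady state to a sustained oscillation.

```python
import math

# Generic E-I rate model (illustrative; not the thesis's mean-field model):
#   dE/dt = -E + tanh(wEE * E - wEI * I)
#   dI/dt = -I + tanh(wIE * E)
# The steady state (0, 0) loses stability in a Hopf bifurcation as the
# excitatory self-coupling wEE increases past wEE = 2 (where the Jacobian
# trace changes sign), switching the simulated trace from a flat steady
# state to an ongoing oscillation.

W_EI, W_IE = 4.0, 2.0  # fixed couplings (illustrative)

def late_amplitude(w_ee, t_end=150.0, dt=0.01):
    """Integrate from a small perturbation; return the peak-to-peak
    amplitude of E over the final third of the run."""
    E, I = 0.1, 0.0
    lo, hi = float("inf"), float("-inf")
    for step in range(int(t_end / dt)):
        dE = -E + math.tanh(w_ee * E - W_EI * I)
        dI = -I + math.tanh(W_IE * E)
        E += dt * dE
        I += dt * dI
        if step * dt > 2 * t_end / 3:  # skip the transient
            lo, hi = min(lo, E), max(hi, E)
    return hi - lo

# Below the bifurcation the steady state is stable: activity dies out.
assert late_amplitude(1.8) < 0.01

# Above it a limit cycle, i.e. a sustained rhythm, appears.
assert late_amplitude(2.2) > 0.1
```

Scanning `w_ee` on a fine grid and recording `late_amplitude` is the toy analogue of the thesis's parameter scans: the value at which the amplitude lifts off zero marks the dynamical state change in the model.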
Advances in Reinforcement Learning
Reinforcement Learning (RL) is a very dynamic area in terms of theory and application. This book brings together many different aspects of current research on several fields associated with RL, a body of work which has been growing rapidly, producing a wide variety of learning algorithms for different applications. Comprising 24 chapters, it covers a very broad variety of topics in RL and their application in autonomous systems. A set of chapters provides a general overview of RL, while the other chapters focus mostly on applications of RL paradigms: Game Theory, Multi-Agent Theory, Robotics, Networking Technologies, Vehicular Navigation, Medicine, and Industrial Logistics.