Kick control: using the attracting states arising within the sensorimotor loop of self-organized robots as motor primitives
Self-organized robots may develop attracting states within the sensorimotor
loop, that is within the phase space of neural activity, body, and
environmental variables. Fixpoints, limit cycles, and chaotic attractors
correspond in this setting to a non-moving robot, to directed locomotion, and
to irregular locomotion, respectively. Short higher-order control commands may
hence be used to kick the system robustly from one self-organized attractor
into the basin of attraction of a different attractor, a concept termed here
kick control. The
individual sensorimotor states serve in this context as highly compliant motor
primitives.
We study different implementations of kick control for the case of simulated
and real-world wheeled robots, for which the dynamics of the individual wheels
are generated independently by local feedback loops. The feedback loops are
mediated by rate-encoding neurons that receive exclusively proprioceptive
inputs, in the form of projections of the actual rotational angle of the wheel.
Changes in the neural activity are then converted into rotational motion by a
simulated transmission rod, akin to the transmission rods used for steam
locomotives.
We find that the self-organized attractor landscape may be morphed both by
higher-level control signals, in the spirit of kick control, and by interacting
with the environment. Bumping against a wall destroys the limit cycle
corresponding to forward motion, with the consequence that the dynamical
variables are then attracted in phase space by the limit cycle corresponding to
backward motion. The robot, which is not equipped with any distance or contact
sensors, hence reverses direction autonomously.
Comment: 17 pages, 9 figures
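The kick-control idea can be sketched with a minimal toy model: a bistable rate unit whose two stable fixpoints stand in for two self-organized attractors, with a brief higher-level "kick" pushing the state across the basin boundary. The gain, kick amplitude, and kick duration below are illustrative assumptions, not the paper's robot controller.

```python
import math

def simulate(kick_at=None, steps=300, gain=3.0, dt=0.1):
    """Bistable rate unit x' = -x + tanh(gain * x) + kick(t).

    The two stable fixpoints x* ~ +/-1 stand in for two
    self-organized attractors; a brief negative kick moves the
    state from the positive basin into the negative one.
    """
    x = 0.8  # start inside the positive basin of attraction
    for t in range(steps):
        # a short higher-order command: five steps of a strong kick
        kick = -5.0 if kick_at is not None and kick_at <= t < kick_at + 5 else 0.0
        x += dt * (-x + math.tanh(gain * x) + kick)
    return x
```

Without a kick the state relaxes to the positive fixpoint; with a single short kick it settles near the negative one, illustrating how a brief command selects among persistent attracting states.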
Attractor Metadynamics in Adapting Neural Networks
Slow adaptation processes, like synaptic and intrinsic plasticity, abound in
the brain and shape the landscape for the neural dynamics occurring on
substantially faster timescales. At any given time the network is characterized
by a set of internal parameters, which are adapting continuously, albeit
slowly. This set of parameters defines the number and the location of the
respective adiabatic attractors. The slow evolution of network parameters hence
induces an evolving attractor landscape, a process which we term attractor
metadynamics. We study the nature of the metadynamics of the attractor
landscape for several continuous-time autonomous model networks. We find both
first- and second-order changes in the location of adiabatic attractors and
argue that the study of the continuously evolving attractor landscape
constitutes a powerful tool for understanding the overall development of the
neural dynamics.
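The timescale separation behind attractor metadynamics can be sketched with one fast variable tracking the fixpoint of a slowly drifting landscape. The equations and rates below are illustrative choices, not one of the paper's model networks.

```python
import math

def metadynamics(steps=20000, dt=0.01, eps=0.01):
    """Fast dynamics x' = -x + tanh(x + b) under a slowly adapting
    bias b: the adiabatic fixpoint x*(b) moves as b drifts, and x
    follows it (toy separation of fast and slow timescales)."""
    x, b = 0.0, -2.0
    trace = []
    for _ in range(steps):
        x += dt * (-x + math.tanh(x + b))  # fast neural dynamics
        b += dt * eps * (1.0 - b)          # slow parameter adaptation
        trace.append(x)
    return trace
```

Early on the state settles near the negative adiabatic fixpoint; as b drifts upward the fixpoint location migrates and x tracks it into the positive range, a minimal instance of an evolving attractor landscape.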
Generating functionals for computational intelligence: the Fisher information as an objective function for self-limiting Hebbian learning rules
Generating functionals may guide the evolution of a dynamical system and
constitute a possible route for handling the complexity of neural networks as
relevant for computational intelligence. We propose and explore a new objective
function, which allows one to obtain plasticity rules for the afferent synaptic
weights. The adaptation rules are Hebbian, self-limiting, and result from the
minimization of the Fisher information with respect to the synaptic flux. We
perform a series of simulations examining the behavior of the new learning
rules in various circumstances. The vector of synaptic weights aligns with the
principal direction of input activities, whenever one is present. A linear
discrimination is performed when there are two or more principal directions;
directions having bimodal firing-rate distributions, being characterized by a
negative excess kurtosis, are preferred. We find robust performance; full
homeostatic adaptation of the synaptic weights results as a by-product of the
synaptic flux minimization. This self-limiting behavior allows for stable
online learning for arbitrary durations. The neuron acquires new information
when the statistics of the input activities are changed at a certain point of
the simulation, showing, however, a distinct resilience against unlearning
previously acquired knowledge. Learning is fast when starting with randomly
drawn synaptic weights and substantially slower when the synaptic weights are
already fully adapted.
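For intuition, here is a generic self-limiting Hebbian rule, Oja's rule, which likewise aligns the weight vector with the principal direction of the inputs while keeping the weights bounded. This is a well-known stand-in used purely for illustration; it is not the Fisher-information objective or the plasticity rules derived in the abstract.

```python
import random

def oja_learn(steps=5000, eta=0.01, seed=0):
    """Oja's self-limiting Hebbian rule, dw_i = eta * y * (x_i - y * w_i),
    on 2-d inputs whose principal direction lies along (1, 1)."""
    rng = random.Random(seed)
    w = [rng.uniform(-0.1, 0.1), rng.uniform(-0.1, 0.1)]
    for _ in range(steps):
        s = rng.gauss(0.0, 1.0)               # strong common component
        x = [s + 0.1 * rng.gauss(0.0, 1.0),   # inputs: principal
             s + 0.1 * rng.gauss(0.0, 1.0)]   # direction ~ (1, 1)
        y = w[0] * x[0] + w[1] * x[1]         # linear rate neuron
        for i in range(2):
            # Hebbian growth term y*x_i, self-limiting decay term -y^2*w_i
            w[i] += eta * y * (x[i] - y * w[i])
    return w
```

The weight vector converges to approximately (±1/√2, ±1/√2): aligned with the principal direction of the input activities and normalized without any explicit clipping, the same qualitative behavior the abstract reports for the flux-minimizing rules.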
Acetylcholine neuromodulation in normal and abnormal learning and memory: vigilance control in waking, sleep, autism, amnesia, and Alzheimer's disease
This article provides a unified mechanistic neural explanation of how learning, recognition, and cognition break down during Alzheimer's disease, medial temporal amnesia, and autism. It also clarifies why there are often sleep disturbances during these disorders. A key mechanism is how acetylcholine modulates vigilance control in cortical layer
Born to learn: The inspiration, progress, and future of evolved plastic artificial neural networks
Biological plastic neural networks are systems of extraordinary computational
capabilities shaped by evolution, development, and lifetime learning. The
interplay of these elements leads to the emergence of adaptive behavior and
intelligence. Inspired by such intricate natural phenomena, Evolved Plastic
Artificial Neural Networks (EPANNs) use simulated evolution in silico to breed
plastic neural networks with a large variety of dynamics, architectures, and
plasticity rules: these artificial systems are composed of inputs, outputs, and
plastic components that change in response to experiences in an environment.
These systems may autonomously discover novel adaptive algorithms, and lead to
hypotheses on the emergence of biological adaptation. EPANNs have seen
considerable progress over the last two decades. Current scientific and
technological advances in artificial neural networks are now setting the
conditions for radically new approaches and results. In particular, the
limitations of hand-designed networks could be overcome by more flexible and
innovative solutions. This paper brings together a variety of inspiring ideas
that define the field of EPANNs. The main methods and results are reviewed.
Finally, new opportunities and developments are presented.
Multiple chaotic central pattern generators with learning for legged locomotion and malfunction compensation
An originally chaotic system can be controlled into various periodic
dynamics. When it is implemented into a legged robot's locomotion control as a
central pattern generator (CPG), sophisticated gait patterns arise so that the
robot can perform various walking behaviors. However, such a single chaotic CPG
controller has difficulties dealing with leg malfunction. Specifically, in the
scenarios presented here, its movement permanently deviates from the desired
trajectory. To address this problem, we extend the single chaotic CPG to
multiple CPGs with learning. The learning mechanism is based on a simulated
annealing algorithm. In a normal situation, the CPGs synchronize and their
dynamics are identical. With leg malfunction or disability, the CPGs lose
synchronization leading to independent dynamics. In this case, the learning
mechanism is applied to automatically adjust the remaining legs' oscillation
frequencies so that the robot adapts its locomotion to deal with the
malfunction. As a consequence, the trajectory produced by the multiple chaotic
CPGs resembles the original trajectory far better than the one produced by only
a single CPG. The performance of the system is evaluated first in a physical
simulation of a quadruped as well as a hexapod robot and finally in a real
six-legged walking machine called AMOSII. The experimental results presented
here reveal that using multiple CPGs with learning is an effective approach for
adaptive locomotion generation where, for instance, different body parts have
to perform independent movements for malfunction compensation.
Comment: 48 pages, 16 figures, Information Sciences 201
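The annealing step can be sketched as follows: a single oscillation frequency is tuned to minimize a hypothetical trajectory-deviation cost. The cost function, cooling schedule, and parameters are illustrative assumptions, not those used for AMOSII.

```python
import math
import random

def anneal_frequency(target=1.3, seed=1):
    """Simulated annealing on one CPG frequency (toy stand-in for
    retuning the remaining legs' oscillations after a malfunction)."""
    rng = random.Random(seed)

    def cost(f):
        # hypothetical measure of deviation from the desired trajectory
        return abs(f - target)

    f = 1.0                      # frequency before the malfunction
    best, best_cost = f, cost(f)
    temperature = 1.0
    while temperature > 1e-3:
        candidate = f + rng.gauss(0.0, 0.1)  # propose a nearby frequency
        delta = cost(candidate) - cost(f)
        # accept improvements always, worse moves with Boltzmann probability
        if delta < 0 or rng.random() < math.exp(-delta / temperature):
            f = candidate
        if cost(f) < best_cost:
            best, best_cost = f, cost(f)
        temperature *= 0.995                 # geometric cooling
    return best
```

As the temperature drops, the search narrows onto the frequency that minimizes the deviation, which is the role the learning mechanism plays for the desynchronized CPGs.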