Born to learn: The inspiration, progress, and future of evolved plastic artificial neural networks
Biological plastic neural networks are systems of extraordinary computational
capabilities shaped by evolution, development, and lifetime learning. The
interplay of these elements leads to the emergence of adaptive behavior and
intelligence. Inspired by such intricate natural phenomena, Evolved Plastic
Artificial Neural Networks (EPANNs) use simulated evolution to breed
plastic neural networks with a large variety of dynamics, architectures, and
plasticity rules: these artificial systems are composed of inputs, outputs, and
plastic components that change in response to experiences in an environment.
These systems may autonomously discover novel adaptive algorithms, and lead to
hypotheses on the emergence of biological adaptation. EPANNs have seen
considerable progress over the last two decades. Current scientific and
technological advances in artificial neural networks are now setting the
conditions for radically new approaches and results. In particular, the
limitations of hand-designed networks could be overcome by more flexible and
innovative solutions. This paper brings together a variety of inspiring ideas
that define the field of EPANNs. The main methods and results are reviewed.
Finally, new opportunities and developments are presented.
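The core EPANN idea, evolving the plasticity rule itself rather than the weights, can be made concrete with a toy sketch. The snippet below evolves the four coefficients of a generalized Hebbian rule, Δw = η(A·pre·post + B·pre + C·post + D), so that a single plastic weight learns the identity mapping y = x during its "lifetime". The task, the truncation-selection loop, and all hyperparameters are illustrative assumptions, not the method of any specific paper reviewed here.

```python
import random

random.seed(0)

def lifetime_fitness(rule, steps=50, eta=0.1):
    """Run one 'lifetime': a single plastic weight w learns the mapping
    y = x under a generalized Hebbian rule with evolved coefficients
    (A, B, C, D). Fitness is the negative squared error after learning."""
    A, B, C, D = rule
    w = 0.0
    for t in range(steps):
        x = float(t % 2)            # deterministic input stream 0, 1, 0, 1, ...
        post = w * x                # network output (one plastic synapse)
        w += eta * (A * x * post + B * x + C * post + D)
        w = max(-5.0, min(5.0, w))  # keep the weight bounded
    return -sum((x - w * x) ** 2 for x in (0.0, 1.0))

def evolve(generations=40, pop_size=20, sigma=0.2):
    """Truncation selection over rule coefficients with Gaussian mutation."""
    pop = [[random.gauss(0, 1) for _ in range(4)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lifetime_fitness, reverse=True)
        parents = pop[: pop_size // 2]           # elitist: parents survive
        pop = parents + [[g + random.gauss(0, sigma)
                          for g in random.choice(parents)]
                         for _ in range(pop_size - len(parents))]
    return max(pop, key=lifetime_fitness)

best_rule = evolve()
```

Because the fitness evaluation is deterministic and the parents survive each generation, the best fitness can only improve; the evolved rule should comfortably beat a non-plastic weight (fitness -1).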
Short-Term Memory Through Persistent Activity: Evolution of Self-Stopping and Self-Sustaining Activity in Spiking Neural Networks
Memories in the brain are divided into two categories: short-term and
long-term memories. Long-term memories remain for a lifetime, while short-term
ones exist from a few milliseconds to a few minutes. Within short-term memory
studies, there is debate about what neural structure could implement it.
Indeed, mechanisms responsible for long-term memories appear inadequate for the
task. Instead, it has been proposed that short-term memories could be sustained
by the persistent activity of a group of neurons. In this work, we explore what
topology could sustain short-term memories, not by designing a model from
specific hypotheses, but through Darwinian evolution in order to obtain new
insights into its implementation. We evolved 10 networks capable of retaining
information for a fixed duration between 2 and 11s. Our main finding has been
that evolution naturally created two functional modules in the network: one,
composed primarily of excitatory neurons, sustains the information, while the
other, composed mainly of inhibitory neurons, is responsible for forgetting.
This demonstrates how the balance between inhibition and excitation plays an
important role in cognition.
Comment: 28 pages
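A caricature of this excitatory/inhibitory division of labour can be written as a two-population rate model: an excitatory pool with strong recurrence holds a transient cue, while a slowly integrating inhibitory pool accumulates and eventually shuts the activity off, setting the memory duration. All weights and time constants below are illustrative guesses, not parameters taken from the evolved spiking networks.

```python
def clamp(x):
    """Saturating activation, keeping population rates in [0, 1]."""
    return max(0.0, min(1.0, x))

def simulate(tau_i=4.0, t_max=8.0, dt=0.01):
    """Euler-integrate a two-module rate model: a fast excitatory pool E
    (self-sustaining, w_ee > 1) and a slow inhibitory pool I that
    accumulates until it quenches E, terminating the memory."""
    w_ee, w_ei, w_ie = 1.2, 0.5, 1.0   # illustrative weights, not fitted
    tau_e = 0.05                        # fast excitatory time constant (s)
    E = I = 0.0
    trace = []
    for k in range(int(t_max / dt)):
        t = k * dt
        stim = 1.0 if t < 0.2 else 0.0  # brief cue to be remembered
        E += dt / tau_e * (-E + clamp(w_ee * E - w_ie * I + stim))
        I += dt / tau_i * (-I + w_ei * E)
        trace.append((t, E))
    return trace

def memory_duration(trace, thresh=0.1):
    """Time at which the persistent activity self-terminates."""
    for t, E in trace:
        if t > 0.5 and E < thresh:
            return t
    return None
```

In this sketch the excitatory pool latches to a high rate after the 0.2 s cue and collapses once the inhibitory pool crosses the point where w_ie·I outweighs the recurrent drive; the memory duration is set mainly by tau_i.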
Genetic algorithmic parameter optimisation of a recurrent spiking neural network model
Neural networks are complex algorithms that loosely model the behaviour of
the human brain. They play a significant role in computational neuroscience and
artificial intelligence. The next generation of neural network models is based
on the spike timing activity of neurons: spiking neural networks (SNNs).
However, model parameters in SNNs are difficult to search and optimise.
Previous studies using genetic algorithm (GA) optimisation of SNNs were focused
mainly on simple, feedforward, or oscillatory networks, but not much work has
been done on optimising cortex-like recurrent SNNs. In this work, we
investigated the use of GAs to search for optimal parameters in recurrent SNNs
to reach targeted neuronal population firing rates, e.g. as in experimental
observations. We considered a cortical-column-based SNN comprising 1000
Izhikevich spiking neurons, chosen for computational efficiency and biological
realism. The model parameters explored were the neuronal bias input currents.
First, for this particular SNN, we found the optimal parameter values for
targeted population-averaged firing activities, with the algorithm converging
within ~100 generations. We then showed that the optimal GA population size was
within ~16-20, while the crossover rate that returned the best fitness value
was ~0.95. Overall, we have successfully demonstrated the feasibility of
implementing a GA to optimise model parameters in a recurrent cortical SNN.
Comment: 6 pages, 6 figures
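A deliberately scaled-down sketch of this kind of search: a genetic algorithm with uniform crossover (rate 0.95, echoing the abstract) and Gaussian mutation tunes per-neuron bias currents so that a handful of regular-spiking Izhikevich neurons hit a target mean firing rate. The five-neuron genome, the 10 Hz target, and all other hyperparameters are illustrative choices, not the paper's 1000-neuron cortical-column setup.

```python
import random
from functools import lru_cache

random.seed(1)

@lru_cache(maxsize=None)
def firing_rate(i_bias, t_ms=1000.0, dt=0.25):
    """Firing rate (Hz) of a regular-spiking Izhikevich neuron
    (a=0.02, b=0.2, c=-65, d=8) under a constant bias current."""
    a, b, c, d = 0.02, 0.2, -65.0, 8.0
    v, u = c, b * c
    spikes = 0
    for _ in range(int(t_ms / dt)):
        v += dt * (0.04 * v * v + 5 * v + 140 - u + i_bias)
        u += dt * a * (b * v - u)
        if v >= 30.0:               # spike: reset membrane, bump recovery
            v = c
            u += d
            spikes += 1
    return spikes * 1000.0 / t_ms

TARGET_HZ = 10.0

def fitness(genome):
    """Negative squared error between mean population rate and target."""
    mean = sum(firing_rate(round(g, 2)) for g in genome) / len(genome)
    return -(mean - TARGET_HZ) ** 2

def ga(pop_size=12, genes=5, gens=15, cx_rate=0.95, mut_sigma=1.0):
    pop = [[random.uniform(0.0, 25.0) for _ in range(genes)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 3]            # elitist survivors
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = random.sample(elite, 2)
            if random.random() < cx_rate:       # uniform crossover
                child = [random.choice(pair) for pair in zip(p1, p2)]
            else:
                child = p1[:]
            child = [min(30.0, max(0.0, g + random.gauss(0, mut_sigma)))
                     for g in child]            # mutate and clamp currents
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = ga()
```

The `lru_cache` avoids re-simulating currents shared by elite genomes across generations, which is where most of the runtime would otherwise go.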
Scaling of a large-scale simulation of synchronous slow-wave and asynchronous awake-like activity of a cortical model with long-range interconnections
Cortical synapse organization supports a range of dynamic states on multiple
spatial and temporal scales, from synchronous slow wave activity (SWA),
characteristic of deep sleep or anesthesia, to fluctuating, asynchronous
activity during wakefulness (AW). Such dynamic diversity poses a challenge for
producing efficient large-scale simulations that embody realistic metaphors of
short- and long-range synaptic connectivity. In fact, during SWA and AW
different spatial extents of the cortical tissue are active in a given timespan
and at different firing rates, which implies a wide variety of loads of local
computation and communication. A balanced evaluation of simulation performance
and robustness should therefore include tests of a variety of cortical dynamic
states. Here, we demonstrate performance scaling of our proprietary Distributed
and Plastic Spiking Neural Networks (DPSNN) simulation engine in both SWA and
AW for bidimensional grids of neural populations, which reflects the modular
organization of the cortex. We explored networks up to 192x192 modules, each
composed of 1250 integrate-and-fire neurons with spike-frequency adaptation,
and exponentially decaying inter-modular synaptic connectivity with varying
spatial decay constant. For the largest networks the total number of synapses
was over 70 billion. The execution platform included up to 64 dual-socket
nodes, each socket mounting 8 Intel Xeon Haswell processor cores at a 2.40 GHz
clock rate. Network initialization time, memory usage, and execution time
showed good scaling performance from 1 to 1024 processes, implemented using
the standard Message Passing Interface (MPI) protocol. We achieved simulation
speeds of between 2.3x10^9 and 4.1x10^9 synaptic events per second for both
cortical states in the explored range of inter-modular interconnections.
Comment: 22 pages, 9 figures, 4 tables
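The headline numbers can be cross-checked with a few lines of arithmetic: network size, the implied fan-out per neuron, and, under an assumed mean firing rate (the abstract does not state one), the wall-clock cost of one simulated second at the reported peak throughput.

```python
# Figures quoted in the abstract
modules = 192 * 192                 # largest explored grid of populations
neurons_per_module = 1250
n_neurons = modules * neurons_per_module   # 46,080,000 neurons
n_synapses = 70e9                          # "over 70 billion" (lower bound)
peak_events_per_s = 4.1e9                  # reported peak throughput

fan_out = n_synapses / n_neurons           # ~1519 synapses per neuron

# Assumption (not from the abstract): a 3 Hz mean firing rate
assumed_rate_hz = 3.0
events_per_sim_second = n_synapses * assumed_rate_hz
wall_s_per_sim_s = events_per_sim_second / peak_events_per_s   # ~51 s
```

So at the lower-bound synapse count and an assumed 3 Hz mean rate, each simulated second would cost on the order of a minute of wall-clock time at peak throughput; the true figure depends on the actual firing rates in SWA versus AW.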
A laminar organization for selective cortico-cortical communication
The neocortex is central to mammalian cognitive ability, playing critical roles in sensory perception, motor skills and executive function. This thin, layered structure comprises distinct, functionally specialized areas that communicate with each other through the axons of pyramidal neurons. For the hundreds of such cortico-cortical pathways to underlie diverse functions, their cellular and synaptic architectures must differ so that they result in distinct computations at the target projection neurons. In what ways do these pathways differ? By originating and terminating in different laminae, and by selectively targeting specific populations of excitatory and inhibitory neurons, these “interareal” pathways can differentially control the timing and strength of synaptic inputs onto individual neurons, resulting in layer-specific computations. Due to the rapid development of transgenic techniques, the mouse has emerged as a powerful mammalian model for understanding the rules by which cortical circuits organize and function. Here we review our understanding of how cortical lamination constrains long-range communication in the mammalian brain, with an emphasis on the mouse visual cortical network. We discuss the laminar architecture underlying interareal communication, the role of neocortical layers in organizing the balance of excitatory and inhibitory actions, and highlight the structure and function of layer 1 in mouse visual cortex.
Can biological quantum networks solve NP-hard problems?
There is a widespread view that the human brain is so complex that it cannot
be efficiently simulated by universal Turing machines. During the last decades
the question has therefore been raised whether we need to consider quantum
effects to explain the imagined cognitive power of a conscious mind.
This paper presents a personal view of several fields of philosophy and
computational neurobiology in an attempt to suggest a realistic picture of how
the brain might work as a basis for perception, consciousness and cognition.
The purpose is to be able to identify and evaluate instances where quantum
effects might play a significant role in cognitive processes.
Not surprisingly, the conclusion is that quantum-enhanced cognition and
intelligence are very unlikely to be found in biological brains. Quantum
effects may certainly influence the functionality of various components and
signalling pathways at the molecular level in the brain network, such as ion
channels, synapses, sensors, and enzymes. This might evidently influence the
functionality of some nodes and perhaps even the overall intelligence of the
brain network, but hardly give it any dramatically enhanced functionality. So,
the conclusion is that biological quantum networks can only approximately solve
small instances of NP-hard problems.
On the other hand, artificial intelligence and machine learning implemented
in complex dynamical systems based on genuine quantum networks can certainly be
expected to show enhanced performance and quantum advantage compared with
classical networks. Nevertheless, even quantum networks can only be expected to
efficiently solve NP-hard problems approximately. In the end it is a question
of precision - Nature is approximate.
Comment: 38 pages