7 research outputs found
Evolving spiking neural networks for temporal pattern recognition in the presence of noise
Nervous systems of biological organisms use temporal patterns of spikes to encode sensory input, but the mechanisms that underlie the recognition of such patterns are unclear. In the present work, we explore how networks of spiking neurons can be evolved to recognize temporal input patterns without being able to adjust signal conduction delays. We evolve the networks with GReaNs, an artificial life platform that encodes the topology of the network (and the weights of connections) in a fashion inspired by the encoding of gene regulatory networks in biological genomes. The number of computational nodes or connections is not limited in GReaNs, but here we limit the size of the networks to analyze the functioning of the networks and the effect of network size on the evolvability of robustness to noise. Our results show that even very small networks of spiking neurons can perform temporal pattern recognition in the presence of input noise.
Evolving small spiking neural networks to work as state machines for temporal pattern recognition
Evolution of Spiking Neural Networks for Temporal Pattern Recognition and Animat Control
I extended an artificial life platform called GReaNs (the name stands for Gene Regulatory evolving artificial Networks) to explore the evolutionary abilities of a biologically inspired Spiking Neural Network (SNN) model. The encoding of SNNs in GReaNs was inspired by the encoding of gene regulatory networks.
As proof-of-principle, I used GReaNs to evolve SNNs to obtain a network with an output neuron which generates a predefined spike train in response to a specific input. Temporal pattern recognition was one of the main tasks during my studies. It is widely believed that nervous systems of biological organisms use temporal patterns of inputs to encode information, but the learning mechanism that underlies temporal pattern recognition is not yet clear. I studied the ability to evolve spiking networks with different numbers of interneurons, in the absence and the presence of noise, to recognize predefined temporal patterns of inputs. Results showed that, even in the presence of noise, it was possible to evolve successful networks; however, the networks with only one interneuron were not robust to noise.
The foraging behaviour of many small animals depends mainly on their olfactory system. I explored whether it was possible to evolve SNNs able to control an agent to find food particles on 2-dimensional maps. Using firing rate encoding to encode the sensory information in the olfactory input neurons, I managed to obtain SNNs able to control an agent that could detect the position of the food particles and move toward them.
Furthermore, I made unsuccessful attempts to use GReaNs to evolve an SNN able to control an agent that collects sound sources of one type out of several sound types, where each sound type is represented as a pattern of different frequencies. In order to use the computational power of neuromorphic hardware, I integrated GReaNs with the SpiNNaker hardware system. Only the simulation part was carried out on SpiNNaker; the remaining steps of the genetic algorithm were done with GReaNs.
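The division of labour described above (network simulation on hardware, the rest of the genetic algorithm on the host) can be sketched as a plain evolutionary loop. This is a minimal illustration, not GReaNs' actual API: the genome representation, the fitness function, and all parameter values here are assumptions chosen for clarity.

```python
import random

# Illustrative evolutionary loop: selection and mutation run on the host,
# while evaluate() stands in for the network simulation that could be
# delegated to hardware such as SpiNNaker. All names are hypothetical.

def evaluate(genome):
    # Placeholder fitness: rewards genomes whose values sum close to zero.
    # In GReaNs this would be the simulated behaviour of the decoded SNN.
    return -abs(sum(genome))

def mutate(genome, rate=0.1, scale=0.5):
    # Gaussian perturbation of a fraction of the genes.
    return [g + random.gauss(0, scale) if random.random() < rate else g
            for g in genome]

def evolve(pop_size=20, genome_len=8, generations=50):
    population = [[random.uniform(-1, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=evaluate, reverse=True)
        parents = population[:pop_size // 2]       # truncation selection
        population = parents + [mutate(random.choice(parents))
                                for _ in range(pop_size - len(parents))]
    return max(population, key=evaluate)

best = evolve()
```

In the integrated setup, only the call corresponding to `evaluate` would run on the neuromorphic hardware; selection and mutation remain ordinary host-side code.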
The Evolution, Analysis, and Design of Minimal Spiking Neural Networks for Temporal Pattern Recognition
All sensory stimuli are temporal in structure. How a pattern of action potentials
encodes the information received from the sensory stimuli is an important research
question in neuroscience. Although it is clear that information is carried by the
number or the timing of spikes, the information processing in the nervous system is
poorly understood. The desire to understand information processing in the animal
brain led to the development of spiking neural networks (SNNs). Understanding
information processing in spiking neural networks may give us an insight into the
information processing in the animal brain. One way to understand the mechanisms
which enable SNNs to perform a computational task is to associate the structural
connectivity of the network with the corresponding functional behaviour. This work
demonstrates the structure-function mapping of spiking networks evolved (or handcrafted)
for recognising temporal patterns. The SNNs are composed of simple yet biologically
meaningful adaptive exponential integrate-and-fire (AdEx) neurons. The
computational task can be described as identifying a subsequence of three signals
(say ABC) in a random input stream of signals ("ABBBCCBABABCBBCAC").
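The AdEx neuron mentioned above can be summarised by two coupled differential equations: an exponential spike-initiation term on the membrane potential and a slower adaptation current. The following forward-Euler step is a sketch with common textbook parameter values, not the exact settings used in this work.

```python
import math

# Minimal forward-Euler step of the adaptive exponential integrate-and-fire
# (AdEx) neuron. Parameter values are common defaults from the literature
# (assumed here for illustration), in mV, nS, pF, pA and ms.

def adex_step(V, w, I, dt=0.1):
    C, g_L, E_L = 281.0, 30.0, -70.6
    V_T, Delta_T = -50.4, 2.0
    a, tau_w, b, V_reset = 4.0, 144.0, 80.5, -70.6

    # Membrane equation with exponential spike-initiation term.
    dV = (-g_L * (V - E_L)
          + g_L * Delta_T * math.exp((V - V_T) / Delta_T)
          - w + I) / C
    # Adaptation current.
    dw = (a * (V - E_L) - w) / tau_w
    V, w = V + dt * dV, w + dt * dw

    spiked = V >= 0.0               # spike detection threshold
    if spiked:
        V, w = V_reset, w + b       # reset and spike-triggered adaptation
    return V, w, spiked
```

Driving the neuron with a constant suprathreshold current produces regular spiking with gradually increasing inter-spike intervals, the adaptation behaviour that gives the model its name.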
The topology and connection weights of the networks are optimised using a genetic
algorithm such that the network output spikes only for the correct input pattern
and remains silent for all others. The fitness function rewards the network output
for spiking after receiving the correct pattern and penalises spikes elsewhere.
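The reward/penalty idea of the fitness function can be made concrete as follows. This is a sketch under assumptions: the reward-window length and the unit reward and penalty weights are illustrative, not the values used in the thesis.

```python
# Sketch of a fitness function that rewards output spikes in a short
# window after each completed occurrence of the target pattern and
# penalises spikes anywhere else. Window length and weights are assumed.

def reward_windows(stream, pattern="ABC", window=3):
    """Time steps at which an output spike should be rewarded."""
    ok = set()
    for i in range(len(stream) - len(pattern) + 1):
        if stream[i:i + len(pattern)] == pattern:
            end = i + len(pattern)
            ok.update(range(end, end + window))
    return ok

def fitness(output_spike_times, stream, pattern="ABC"):
    ok = reward_windows(stream, pattern)
    score = 0
    for t in output_spike_times:
        score += 1 if t in ok else -1   # reward correct, penalise spurious
    return score
```

For the example stream "ABBBCCBABABCBBCAC", the only occurrence of ABC ends at position 12, so spikes at steps 12-14 are rewarded and all others are penalised.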
To analyse the effect of noise, two types of noise are introduced during evolution: (i)
random fluctuations of the membrane potential of neurons in the network at every
network step, (ii) random variations of the duration of the silent interval between
input signals. It has been observed that evolution in the presence of noise produced
networks that were robust to perturbation of neuronal parameters. Moreover, the
networks also developed a form of memory, enabling them to maintain network
states in the absence of input activity. It has been demonstrated that the network
states of an evolved network have a one-to-one correspondence with the states of
a finite-state transducer (FST), a model of computation for time-structured data.
The analysis of networks indicated that the task of recognition is accomplished by
transitions between network states.
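The FST correspondence can be illustrated directly: a transducer whose states count how much of the target pattern has been seen, and which emits an "output spike" (1) exactly when the pattern completes before returning to the start state. The implementation below is an illustrative reconstruction of that computation, not code from the thesis.

```python
# Finite-state transducer for recognising a contiguous pattern (e.g. "ABC")
# in a stream of signals. State k means "the first k symbols of the pattern
# have just been seen"; emitting 1 corresponds to an output spike.

def fst_recognise(stream, pattern="ABC"):
    state, outputs = 0, []
    for sym in stream:
        if sym == pattern[state]:
            state += 1
        else:
            # On mismatch, fall back to state 1 if the symbol restarts
            # the pattern, otherwise to the start state.
            state = 1 if sym == pattern[0] else 0
        if state == len(pattern):
            outputs.append(1)
            state = 0            # accept, then return to the start state
        else:
            outputs.append(0)
    return outputs
```

On the example stream "ABBBCCBABABCBBCAC" the transducer emits a single 1, at the position where the subsequence ABC completes.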
Evolution may overproduce synaptic connections; pruning these superfluous connections
revealed pronounced structural similarities among individuals obtained from different
independent runs. Moreover, the analysis of the pruned networks highlighted that
memory is a property of self-excitation in the network. Neurons with self-excitatory
loops (also called autapses) could sustain spiking activity indefinitely in the absence
of input activity. To recognise a pattern of length n, a network requires n+1 network
states, where n states are maintained actively with autapses and the penultimate
state is maintained passively by no activity in the network. Simultaneously, the role
of other connections in the network is identified.
Of particular interest, three interneurons in the network are found to have a specialized
role: (i) the lock neuron is always active, preventing the output from spiking
unless it is released by the penultimate signal in the correct pattern, exposing the
output neuron to spike for the correct last signal, (ii) the switch neuron is responsible
for switching the network between the inter-signal states and the start state, and (iii)
the accept neuron produces spikes in the output neuron when the network receives
the last correct input. It also sends a signal to the switch neuron, transforming the
network back into the start state.
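The autapse mechanism described above can be demonstrated with a toy leaky neuron: a self-excitatory loop lets a single input kick sustain spiking indefinitely, while the same neuron without the loop falls silent. All parameter values here are toy numbers chosen for clarity, not the evolved networks' weights.

```python
# Toy demonstration of autapse-based memory: a leaky threshold neuron with
# a self-excitatory connection sustains activity after one input kick.

def run(autapse_weight, steps=100):
    v, threshold, leak = 0.0, 1.0, 0.9
    spiked_last, spikes = False, []
    for t in range(steps):
        inp = 1.5 if t == 0 else 0.0          # single input kick at t = 0
        v = leak * v + inp + (autapse_weight if spiked_last else 0.0)
        spiked_last = v >= threshold
        if spiked_last:
            v = 0.0                            # reset after a spike
        spikes.append(spiked_last)
    return spikes

with_loop = run(autapse_weight=1.2)    # spiking is sustained indefinitely
without   = run(autapse_weight=0.0)    # activity dies after the kick
```

The self-loop re-injects enough current after each spike to cross threshold again, which is exactly the actively maintained network state attributed to autapses above.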
Understanding how information is processed in the evolved networks led to handcrafting
network topologies for recognising more extended patterns. The proposed
rules can extend network topologies to recognise temporal patterns up to length six.
To validate the handcrafted topology, a genetic algorithm is used to optimise its connection
weights. It has been observed that the maximum number of active neurons
representing a state in the network increases with the pattern length. Therefore, the
suggested rules can handcraft network topologies only up to length six. Handcrafting
network topologies that represent a network state with a fixed number of active
neurons requires further investigation.
Using MapReduce Streaming for Distributed Life Simulation on the Cloud
Distributed software simulations are indispensable in the study of large-scale life models but often require the use of technically complex lower-level distributed computing frameworks, such as MPI. We propose to overcome the complexity challenge by applying the emerging MapReduce (MR) model to distributed life simulations and by running such simulations on the cloud. Technically, we design optimized MR streaming algorithms for discrete and continuous versions of Conway's life according to a general MR streaming pattern. We chose life because it is simple enough as a testbed for MR's applicability to a-life simulations and general enough to make our results applicable to various lattice-based a-life models. We implement and empirically evaluate our algorithms' performance on Amazon's Elastic MR cloud. Our experiments demonstrate that a single MR optimization technique called strip partitioning can reduce the execution time of continuous life simulations by 64%. To the best of our knowledge, we are the first to propose and evaluate MR streaming algorithms for lattice-based simulations. Our algorithms can serve as prototypes in the development of novel MR simulation algorithms for large-scale lattice-based a-life models.
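The MapReduce formulation of Conway's life can be sketched as a mapper that emits, for each live cell, a contribution to every neighbour plus a marker for the cell itself, and a reducer that applies the birth/survival rules per cell. In MR streaming these functions would exchange key/value lines over stdin/stdout; the sketch below keeps them as plain Python so one generation can be run locally. Strip partitioning and the continuous variant from the paper are omitted, and the function names are illustrative.

```python
from collections import defaultdict

# One Game-of-Life generation in map/reduce style. The mapper emits
# ((x, y), 1) for each neighbour of a live cell and ((x, y), "alive") for
# the cell itself; the reducer tallies contributions and applies the rules.

def life_map(live_cells):
    for (x, y) in live_cells:
        yield (x, y), "alive"
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if (dx, dy) != (0, 0):
                    yield (x + dx, y + dy), 1

def life_reduce(pairs):
    neighbours, alive = defaultdict(int), set()
    for cell, value in pairs:
        if value == "alive":
            alive.add(cell)
        else:
            neighbours[cell] += value
    next_gen = set()
    for cell, n in neighbours.items():
        if n == 3 or (n == 2 and cell in alive):
            next_gen.add(cell)       # birth on 3, survival on 2 or 3
    return next_gen

blinker = {(0, 0), (1, 0), (2, 0)}           # horizontal blinker
next_blinker = life_reduce(life_map(blinker))  # flips to a vertical bar
```

Because the mapper's output is keyed by cell coordinates, the shuffle stage naturally groups all contributions for a cell at one reducer, which is what makes the lattice update embarrassingly parallel in the MR model.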