Rapid, parallel path planning by propagating wavefronts of spiking neural activity
Efficient path planning and navigation are critical for animals, robotics,
logistics and transportation. We study a model in which spatial navigation
problems can rapidly be solved in the brain by parallel mental exploration of
alternative routes using propagating waves of neural activity. A wave of
spiking activity propagates through a hippocampus-like network, altering the
synaptic connectivity. The resulting vector field of synaptic change then
guides a simulated animal to the selected target locations. We
demonstrate that the navigation problem can be solved using realistic, local
synaptic plasticity rules during a single passage of a wavefront. Our model can
find optimal solutions for competing possible targets or learn and navigate in
multiple environments. The model provides a hypothesis about the possible
computational mechanisms for optimal path planning in the brain; at the same
time, it is useful for neuromorphic implementations, where the parallelism of
the information processing proposed here can be fully harnessed in hardware.
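The wave-based planning idea maps onto a classical wavefront algorithm: arrival times propagate outward from the target through free space, and descending that field yields a shortest path. A minimal non-spiking sketch of this analogue on a grid (the grid world and function names are illustrative, not the paper's neural implementation):

```python
from collections import deque

def wavefront_distances(grid, target):
    """Propagate a wavefront (BFS) from the target cell through free space.

    grid: 2D list, 0 = free, 1 = obstacle. Returns dict cell -> arrival step.
    """
    rows, cols = len(grid), len(grid[0])
    dist = {target: 0}
    frontier = deque([target])
    while frontier:
        r, c = frontier.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in dist):
                dist[(nr, nc)] = dist[(r, c)] + 1
                frontier.append((nr, nc))
    return dist

def follow_gradient(dist, start):
    """Descend the arrival-time field from start back to the target."""
    path = [start]
    while dist[path[-1]] > 0:
        r, c = path[-1]
        path.append(min(((r + dr, c + dc)
                         for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                         if (r + dr, c + dc) in dist),
                        key=dist.get))
    return path
```

On a grid with uniform step costs this recovers a shortest path in a single wavefront passage; the paper's contribution is performing the same computation with spiking waves and local synaptic plasticity.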
NatCSNN: A Convolutional Spiking Neural Network for recognition of objects extracted from natural images
Biological image processing is performed by complex neural networks composed
of thousands of neurons interconnected via thousands of synapses, some of which
are excitatory and others inhibitory. Spiking neural models are distinguished
from classical neurons by being biologically plausible and exhibiting the same
dynamics as those observed in biological neurons. This paper proposes the
Natural Convolutional Spiking Neural Network (NatCSNN), a 3-layer bio-inspired
CSNN for classifying objects extracted
from natural images. A two-stage training algorithm is proposed using
unsupervised Spike Timing Dependent Plasticity (STDP) learning (phase 1) and
ReSuMe supervised learning (phase 2). The NatCSNN was trained and tested on the
CIFAR-10 dataset and achieved an average testing accuracy of 84.7%, which is an
improvement over the 2-layer neural networks previously applied to this
dataset.
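Phase 1 of the training relies on unsupervised STDP. A minimal sketch of the standard pair-based rule (the amplitudes, time constants, and clipping bounds are illustrative defaults, not the values used by NatCSNN):

```python
import math

def stdp_update(w, t_pre, t_post, a_plus=0.05, a_minus=0.055,
                tau_plus=20.0, tau_minus=20.0, w_min=0.0, w_max=1.0):
    """Pair-based STDP: potentiate when the presynaptic spike precedes the
    postsynaptic one, depress when it follows. Times in ms; the weight is
    clipped to [w_min, w_max]."""
    dt = t_post - t_pre
    if dt > 0:        # pre before post -> potentiation
        w += a_plus * math.exp(-dt / tau_plus)
    elif dt < 0:      # post before pre -> depression
        w -= a_minus * math.exp(dt / tau_minus)
    return min(w_max, max(w_min, w))
```

The exponential windows make the update local in time: only spike pairs a few time constants apart change the weight appreciably.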
Replay as wavefronts and theta sequences as bump oscillations in a grid cell attractor network
Grid cells fire in sequences that represent rapid trajectories in space. During locomotion, theta sequences encode sweeps in position starting slightly behind the animal and ending ahead of it. During quiescence and slow wave sleep, bouts of synchronized activity represent long trajectories called replays, which are well established in place cells and have recently been reported in grid cells. Theta sequences and replay are hypothesized to facilitate many cognitive functions, but their underlying mechanisms are unknown. One mechanism proposed for grid cell formation is the continuous attractor network. We demonstrate that this established architecture naturally produces theta sequences and replay as distinct consequences of modulating external input. Driving inhibitory interneurons at the theta frequency causes attractor bumps to oscillate in speed and size, which gives rise to theta sequences and phase precession, respectively. Decreasing input drive to all neurons produces traveling wavefronts of activity that are decoded as replays.
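The continuous attractor mechanism can be illustrated with a small rate-based ring network: cosine-tuned recurrent weights sustain a localized activity bump wherever it is seeded. This sketch adds divisive normalization to keep the simulation stable, which is an assumption of the illustration rather than a feature of the grid cell model:

```python
import math

def ring_attractor(n=60, steps=300, center=15, dt=0.1):
    """Threshold-linear ring attractor: after settling, activity remains a
    bump at the initially stimulated location `center`."""
    j0, j1 = -2.0, 6.0   # uniform inhibition, cosine-tuned excitation
    w = [[(j0 + j1 * math.cos(2 * math.pi * (i - j) / n)) / n
          for j in range(n)] for i in range(n)]
    # Seed a small bump of activity around `center`.
    r = [1.0 if min(abs(i - center), n - abs(i - center)) <= 2 else 0.0
         for i in range(n)]
    for _ in range(steps):
        inp = [max(0.0, sum(w[i][j] * r[j] for j in range(n)) + 1.0)
               for i in range(n)]
        r = [ri + dt * (-ri + xi) for ri, xi in zip(r, inp)]
        total = sum(r)
        r = [ri * n / (2.0 * total) for ri in r]   # hold mean rate at 0.5
    return r
```

In the paper's setting, modulating the uniform drive term (here the constant `+ 1.0`) is what switches the network between bump dynamics and traveling wavefronts.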
Computational modeling with spiking neural networks
This chapter reviews recent developments in the area of spiking neural networks (SNN) and summarizes the main contributions to this research field. We give background information about the functioning of biological neurons, discuss the most important mathematical neural models along with neural encoding techniques, learning algorithms, and applications of spiking neurons. As a specific application, the functioning of the evolving spiking neural network (eSNN) classification method is presented in detail, and the principles of numerous eSNN-based applications are highlighted and discussed.
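A standard ingredient of eSNN-style models is converting real-valued inputs into spike timing with overlapping Gaussian receptive fields (population rank-order encoding): neurons whose fields best match the input fire earliest. A sketch with illustrative parameters:

```python
import math

def encode_value(x, n_fields=6, x_min=0.0, x_max=1.0, beta=1.5):
    """Population encoding with overlapping Gaussian receptive fields.

    Returns per-neuron firing intensities in [0, 1]; under rank-order
    coding, a higher intensity maps to an earlier spike time.
    """
    width = (x_max - x_min) / (beta * (n_fields - 2))
    centers = [x_min + (2 * i - 3) / 2.0 * (x_max - x_min) / (n_fields - 2)
               for i in range(1, n_fields + 1)]
    return [math.exp(-((x - c) ** 2) / (2 * width ** 2)) for c in centers]
```

Spreading each scalar over several coarsely tuned neurons is what lets a one-pass learner like eSNN separate inputs by which neurons fire first.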
A review of learning in biologically plausible spiking neural networks
Artificial neural networks have been used as a powerful processing tool in various areas such as pattern recognition, control, robotics, and bioinformatics. Their wide applicability has encouraged researchers to improve artificial neural networks by investigating the biological brain. Neurological research has progressed significantly in recent years and continues to reveal new characteristics of biological neurons. New technologies can now capture temporal changes in the internal activity of the brain in more detail and help clarify the relationship between brain activity and the perception of a given stimulus. This new knowledge has led to a new type of artificial neural network, the Spiking Neural Network (SNN), that draws more faithfully on biological properties to provide higher processing abilities. A review of recent developments in the learning of spiking neurons is presented in this paper. First, the biological background of SNN learning algorithms is reviewed. The important elements of a learning algorithm, such as the neuron model, synaptic plasticity, information encoding, and SNN topologies, are then presented. Next, a critical review of the state-of-the-art learning algorithms for SNNs using single and multiple spikes is presented. Additionally, deep spiking neural networks are reviewed, and challenges and opportunities in the SNN field are discussed.
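The neuron model underlying most of the reviewed learning algorithms is the leaky integrate-and-fire (LIF) unit: the membrane potential leaks toward rest, integrates input current, and emits a spike on crossing threshold. A minimal discrete-time sketch (constants are illustrative):

```python
def lif_spike_times(current, steps=100, dt=1.0, tau=10.0,
                    v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron driven by a constant input current.

    Returns the time steps at which the membrane potential crosses
    threshold; the potential is reset after each spike.
    """
    v = v_rest
    spikes = []
    for t in range(steps):
        v += dt / tau * (-(v - v_rest) + current)  # leak + integration
        if v >= v_thresh:
            spikes.append(t)
            v = v_reset
    return spikes
```

With a constant drive above threshold the neuron fires periodically; below threshold the potential saturates short of threshold and the neuron stays silent, which is the behavior temporal codes exploit.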
ANN multiscale model of anti-HIV Drugs activity vs AIDS prevalence in the US at county level based on information indices of molecular graphs and social networks
This work is aimed at describing the workflow for a methodology that combines chemoinformatics and pharmacoepidemiology methods and at reporting the first predictive model developed with this methodology. The new model is able to predict complex networks of AIDS prevalence in US counties, taking into consideration the social determinants and activity/structure of anti-HIV drugs in preclinical assays. We trained different Artificial Neural Networks (ANNs) using as input information indices of social networks and molecular graphs. We used a Shannon information index based on the Gini coefficient to quantify the effect of income inequality in the social network. We obtained the data on AIDS prevalence and the Gini coefficient from the AIDSVu database of Emory University. We also used the Balaban information indices to quantify changes in the chemical structure of anti-HIV drugs. We obtained the data on anti-HIV drug activity and structure (SMILES codes) from the ChEMBL database. Finally, we used Box-Jenkins moving average operators to quantify information about the deviations of drugs with respect to data subsets of reference (targets, organisms, experimental parameters, protocols). The best model found was a Linear Neural Network (LNN) with values of Accuracy, Specificity, and Sensitivity above 0.76 and AUROC > 0.80 in training and external validation series. This model generates a complex network of AIDS prevalence in the US at county level with respect to the activity of anti-HIV drugs in preclinical assays. To train/validate the model and predict the complex network, we needed to analyze 43,249 data points including values of AIDS prevalence in 2,310 counties in the US vs ChEMBL results for 21,582 unique drugs, 9 viral or human protein targets, 4,856 protocols, and 10 possible experimental measures.
Ministerio de Educación, Cultura y Deportes; AGL2011-30563-C03-0
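The income-inequality input to the model is the Gini coefficient. A standard way to compute it from raw values is the sorted cumulative-sum formula (a generic implementation, not the AIDSVu pipeline):

```python
def gini(values):
    """Gini coefficient of non-negative values: 0 means perfect equality,
    approaching 1 as a single element holds everything."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0
    # G = (2 * sum_i i*x_(i)) / (n * sum x) - (n + 1) / n, ranks i = 1..n
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2.0 * cum) / (n * total) - (n + 1.0) / n
```

The scalar it produces per county is then combined with the molecular-graph indices as one more ANN input feature.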
Analysis of the ReSuMe learning process for spiking neural networks
In this paper, we analyze the learning process of the ReSuMe method for spiking neural networks (Ponulak, 2005; Ponulak, 2006b). We investigate how the particular parameters of the learning algorithm affect the process of learning, and we consider how to speed up adaptation while maintaining the stability of the optimal solution. This is an important issue in many real-life tasks where neural networks are applied and fast learning convergence is highly desirable.
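The core of ReSuMe — potentiation driven by the desired output spike train and depression driven by the actual one, each weighted by a non-Hebbian term plus a trace of earlier presynaptic spikes — can be sketched in a simplified batch form (constants are illustrative, not Ponulak's exact formulation):

```python
import math

def resume_delta_w(pre_spikes, actual_out, desired_out,
                   a=0.01, amp=0.1, tau=10.0):
    """Simplified ReSuMe weight change for one synapse over one trial.

    Desired output spikes add (a + amp * trace), actual output spikes
    subtract the same quantity, so the update vanishes when the actual
    train matches the desired one. Times in ms.
    """
    def trace(t):
        # Exponential trace of presynaptic spikes up to time t.
        return sum(math.exp(-(t - tp) / tau) for tp in pre_spikes if tp <= t)
    dw = 0.0
    for t in desired_out:
        dw += a + amp * trace(t)
    for t in actual_out:
        dw -= a + amp * trace(t)
    return dw
```

The amplitude `amp` and time constant `tau` are exactly the kind of parameters whose effect on convergence speed and stability the paper analyzes.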