Deep Spiking Neural Networks with High Representation Similarity Model Visual Pathways of Macaque and Mouse
Deep artificial neural networks (ANNs) play a major role in modeling the
visual pathways of primates and rodents. However, they greatly simplify the
computational properties of neurons compared to their biological counterparts.
In contrast, Spiking Neural Networks (SNNs) are more biologically plausible models
since spiking neurons encode information with time sequences of spikes, just
like biological neurons do. However, there is a lack of studies on visual
pathways with deep SNN models. In this study, we model the visual cortex with
deep SNNs for the first time, and also with a wide range of state-of-the-art
deep CNNs and ViTs for comparison. Using three similarity metrics, we conduct
neural representation similarity experiments on three neural datasets collected
from two species under three types of stimuli. Based on extensive similarity
analyses, we further investigate the functional hierarchy and mechanisms across
species. Almost all similarity scores of SNNs are higher than their
counterparts of CNNs with an average of 6.6%. Depths of the layers with the
highest similarity scores exhibit little differences across mouse cortical
regions, but vary significantly across macaque regions, suggesting that the
visual processing structure of mice is more regionally homogeneous than that of
macaques. Moreover, the multi-branch structures observed in some top mouse
brain-like neural networks provide computational evidence of parallel
processing streams in mice, and the differing performance in fitting macaque
neural representations under different stimuli reflects the functional
specialization of information processing in macaques. Taken together, our study
demonstrates that SNNs could serve as promising candidates to better model and
explain the functional hierarchy and mechanisms of the visual system.

Comment: Accepted by Proceedings of the 37th AAAI Conference on Artificial
Intelligence (AAAI-23).
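The abstract above relies on similarity metrics between network activations and neural recordings. One widely used metric for such comparisons is linear Centered Kernel Alignment (CKA); the sketch below is an illustrative implementation of that metric only, and is not claimed to be one of the three metrics the paper actually uses:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two response matrices.

    X: (n_stimuli, n_model_units), Y: (n_stimuli, n_neurons).
    Returns a similarity score in [0, 1]; 1 means the (centered)
    representations are identical up to an orthogonal transform and scaling.
    """
    # Center each unit's responses across stimuli
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # HSIC-based formulation for linear kernels
    num = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    den = np.linalg.norm(X.T @ X, ord="fro") * np.linalg.norm(Y.T @ Y, ord="fro")
    return num / den
```

Because CKA is invariant to orthogonal rotations and isotropic scaling of either representation, it can compare layers and brain regions with different numbers of units.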
Deep recurrent spiking neural networks capture both static and dynamic representations of the visual cortex under movie stimuli
In the real world, visual stimuli received by the biological visual system
are predominantly dynamic rather than static. A better understanding of how the
visual cortex represents movie stimuli could provide deeper insight into the
information processing mechanisms of the visual system. Although some progress
has been made in modeling neural responses to natural movies with deep neural
networks, the visual representations of static and dynamic information under
such time-series visual stimuli remain to be further explored. In this work,
given the abundant recurrent connections in the mouse visual system, we
design a recurrent module based on the hierarchy of the mouse cortex and
incorporate it into Deep Spiking Neural Networks, which have been demonstrated to be a more
compelling computational model for the visual cortex. Using Time-Series
Representational Similarity Analysis, we measure the representational
similarity between networks and mouse cortical regions under natural movie
stimuli. Subsequently, we conduct a comparison of the representational
similarity across recurrent/feedforward networks and image/video training
tasks. Trained on the video action recognition task, the recurrent SNN achieves
the highest representational similarity, significantly outperforming the
feedforward SNN trained on the same task by 15% and the recurrent SNN trained
on the image classification task by 8%. We investigate how static and dynamic
representations of SNNs influence the similarity, as a way to explain the
importance of these two forms of representations in biological neural coding.
Taken together, our work is the first to apply deep recurrent SNNs to model the
mouse visual cortex under movie stimuli, and we establish that these networks
can capture both static and dynamic representations and contribute to
understanding the movie information processing mechanisms of the visual cortex.
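The Time-Series Representational Similarity Analysis mentioned above can be sketched as: build a representational dissimilarity matrix (RDM) per timepoint for both the network and the cortex, correlate the two RDMs, and aggregate over time. The function names, distance choices, and the plain averaging below are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(responses):
    """Condensed representational dissimilarity matrix.

    responses: (n_stimuli, n_units). Uses 1 - Pearson correlation
    between response patterns as the dissimilarity."""
    return pdist(responses, metric="correlation")

def timeseries_rsa(model_resp, neural_resp):
    """Mean Spearman correlation between model and neural RDMs over time.

    model_resp, neural_resp: (n_timepoints, n_stimuli, n_units);
    unit counts may differ between the two arrays."""
    scores = []
    for m, n in zip(model_resp, neural_resp):
        rho, _ = spearmanr(rdm(m), rdm(n))
        scores.append(rho)
    return float(np.mean(scores))
```

Spearman correlation on the RDM entries is a common choice in RSA because it compares only the rank ordering of dissimilarities, not their absolute scale.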
Perceptron theory can predict the accuracy of neural networks
Multilayer neural networks set the current state of
the art for many technical classification problems. However, these
networks remain essentially black boxes when it comes to analyzing
them and predicting their performance. Here, we develop a
statistical theory for the one-layer perceptron and show that
it can predict performances of a surprisingly large variety of
neural networks with different architectures. A general theory
of classification with perceptrons is developed by generalizing
an existing theory for analyzing reservoir computing models
and connectionist models for symbolic reasoning known as
vector symbolic architectures. Our statistical theory offers three
formulas leveraging the signal statistics with increasing detail.
The formulas are analytically intractable, but can be evaluated
numerically. The description level that captures maximum details
requires stochastic sampling methods. Depending on the network
model, the simpler formulas already yield high prediction accuracy.
The quality of the theory predictions is assessed in three
experimental settings, a memorization task for echo state networks
(ESNs) from reservoir computing literature, a collection of
classification datasets for shallow randomly connected networks,
and the ImageNet dataset for deep convolutional neural networks.
We find that the second description level of the perceptron theory
can predict the performance of types of ESNs that could not
be described previously. Furthermore, the theory can predict the performance of
deep multilayer neural networks when applied to their output
layer. While other methods for predicting neural network
performance commonly require training an estimator model,
the proposed theory requires only the first two moments of
the distribution of the postsynaptic sums in the output neurons.
Moreover, the perceptron theory compares favorably to other
methods that do not rely on training an estimator model.
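The idea of predicting accuracy from the first two moments of the postsynaptic sums can be illustrated with a toy Gaussian estimate: if the correct-class output and each wrong-class output are approximately normal, the probability of a correct decision can be approximated from their means and variances. The independence assumption across wrong classes below is a simplification for illustration and goes beyond what the abstract states about the paper's formulas:

```python
from math import erf, sqrt

def norm_cdf(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def predicted_accuracy(mu_correct, var_correct, mu_wrong, var_wrong):
    """Two-moment accuracy estimate for inputs of one class.

    mu_correct, var_correct: mean and variance of the correct output
    neuron's postsynaptic sum; mu_wrong, var_wrong: per-neuron means
    and variances for the other classes. Treats the pairwise events
    'correct output exceeds wrong output' as independent, which is a
    simplifying assumption of this sketch."""
    p = 1.0
    for m, v in zip(mu_wrong, var_wrong):
        # P(s_correct > s_wrong) for two independent Gaussians
        p *= norm_cdf((mu_correct - m) / sqrt(var_correct + v))
    return p
```

For example, when the correct and wrong outputs share the same mean and variance, the estimate degrades to chance for a two-class problem, while a large mean separation drives it toward 1.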
Spiking Neural Networks for Computational Intelligence: An Overview
Deep neural networks with rate-based neurons have exhibited tremendous progress in the last decade. However, the same level of progress has not been observed in research on spiking neural networks (SNNs), despite their capability to handle temporal data, their energy efficiency, and their low latency. This could be because the benchmarking techniques for SNNs are based on the methods used for evaluating deep neural networks, which do not provide a clear evaluation of the capabilities of SNNs. In particular, benchmarking SNN approaches with regard to energy efficiency and latency requires realization in suitable hardware, which imposes additional temporal and resource constraints upon ongoing projects. This review aims to provide an overview of the current real-world applications of SNNs and identifies steps to accelerate future research involving SNNs.