Efficient Deep Feature Learning and Extraction via StochasticNets
Deep neural networks are a powerful tool for feature learning and extraction
given their ability to model high-level abstractions in highly complex data.
One area worth exploring is efficient neural connectivity formation for
faster feature learning and extraction. Motivated by findings of stochastic
synaptic connectivity
formation in the brain as well as the brain's uncanny ability to efficiently
represent information, we propose the efficient learning and extraction of
features via StochasticNets, where sparsely-connected deep neural networks can
be formed via stochastic connectivity between neurons. To evaluate the
feasibility of such a deep neural network architecture for feature learning and
extraction, we train deep convolutional StochasticNets to learn abstract
features using the CIFAR-10 dataset, and extract the learned features from
images to perform classification on the SVHN and STL-10 datasets. Experimental
results show that features learned using deep convolutional StochasticNets,
with fewer neural connections than conventional deep convolutional neural
networks, can achieve classification accuracy better than or comparable to
that of conventional deep neural networks: a relative test error decrease of
~4.5% for classification on the STL-10 dataset and ~1% for classification on
the SVHN
dataset. Furthermore, it was shown that the deep features extracted using deep
convolutional StochasticNets can provide comparable classification accuracy
even when only 10% of the training data is used for feature learning. Finally,
it was also shown that significant gains in feature extraction speed can be
achieved in embedded applications using StochasticNets. As such,
StochasticNets allow for faster feature learning and extraction while
providing better or comparable accuracy.Comment: 10 pages. arXiv admin note: substantial text overlap with
arXiv:1508.0546
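The abstract describes forming sparsely-connected deep networks by sampling
each neural connection stochastically. As a rough illustration of that idea
(a minimal sketch, not the authors' implementation, which targets
convolutional layers), the Python snippet below forms a fully-connected layer
whose synapses are kept independently with a Bernoulli probability; the
function names and all parameter values are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def stochastic_layer(n_in, n_out, p_connect=0.5):
        # Keep each potential synapse independently with probability
        # p_connect (Bernoulli sampling), yielding a fixed sparse wiring.
        mask = rng.random((n_in, n_out)) < p_connect
        weights = rng.normal(0.0, np.sqrt(2.0 / n_in), (n_in, n_out))
        return weights * mask, mask

    def forward(x, weights):
        # ReLU over the sparse linear map; dropped synapses contribute nothing.
        return np.maximum(0.0, x @ weights)

    W, mask = stochastic_layer(256, 128, p_connect=0.5)
    x = rng.random((4, 256))
    print(f"kept {mask.mean():.0%} of connections; output {forward(x, W).shape}")

During training the mask would stay fixed (gradients of pruned weights
zeroed), so the reduced connection count, and hence the reduced feature
extraction cost, persists throughout learning.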
Diversity improves performance in excitable networks
As few real systems comprise indistinguishable units, diversity is a hallmark
of nature. Diversity among interacting units shapes properties of collective
behavior such as synchronization and information transmission. However, the
benefits of diversity on information processing at the edge of a phase
transition, ordinarily assumed to emerge from identical elements, remain
largely unexplored. Analyzing a general model of excitable systems with
heterogeneous excitability, we find that diversity can greatly enhance optimal
performance (by two orders of magnitude) when distinguishing incoming inputs.
Heterogeneous systems possess a subset of specialized elements whose capability
greatly exceeds that of the nonspecialized elements. Nonetheless, the behavior
of the whole network can outperform all subgroups. We also find that diversity
can yield multiple percolation transitions, with performance optimized at
tricriticality. Our results remain robust in specific, more realistic neuronal
systems comprising a combination of excitatory and inhibitory units, and
indicate that diversity-induced amplification can be harnessed by neuronal
systems for evaluating stimulus intensities.Comment: 17 pages, 7 figures
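The model class here is a network of excitable units whose excitability
differs from unit to unit. As a toy sketch in the spirit of
Kinouchi-Copelli-style excitable networks (an assumption, since the abstract
does not specify the paper's general model), the snippet below simulates
quiescent/active/refractory units with per-unit branching ratios spread
around the critical value and reports mean activity for several stimulus
rates; all parameters are illustrative.

    import numpy as np

    rng = np.random.default_rng(1)

    def simulate(n=2000, k=10, steps=400, h=0.001, spread=0.5):
        # Heterogeneous excitability: per-unit branching ratios spread around 1.
        sigma = 1.0 + spread * (2.0 * rng.random(n) - 1.0)
        p = sigma / k                          # per-synapse transmission probability
        nbr = rng.integers(0, n, size=(n, k))  # random k-neighbour wiring
        state = np.zeros(n, dtype=int)         # 0 quiescent, 1 active, 2 refractory
        rates = []
        for _ in range(steps):
            active = state == 1
            m = active[nbr].sum(axis=1)                # active neighbours per unit
            p_fire = 1.0 - (1.0 - h) * (1.0 - p) ** m  # external drive or neighbours
            fire = (state == 0) & (rng.random(n) < p_fire)
            state[state == 2] = 0   # refractory -> quiescent
            state[active] = 2       # active -> refractory
            state[fire] = 1         # quiescent -> active
            rates.append(fire.mean())
        return float(np.mean(rates))

    for h in (1e-4, 1e-3, 1e-2):
        print(f"stimulus rate h={h:g} -> mean activity {simulate(h=h):.4f}")

Sweeping the stimulus rate h and comparing homogeneous (spread=0) against
heterogeneous wiring is the kind of input-discrimination experiment the
abstract refers to.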
The brain: What is critical about it?
We review the recent proposal that the brain's most fascinating properties
are related to the fact that it always stays close to a second-order phase
transition. In such conditions, the collective of neuronal groups can reliably
generate robust and flexible behavior, because it is known that at the critical
point there is the largest abundance of metastable states to choose from. Here
we review the motivation, arguments and recent results, as well as further
implications of this view of the functioning brain.Comment: Proceedings of BIOCOMP2007 - Collective Dynamics: Topics on
Competition and Cooperation in the Biosciences. Vietri sul Mare, Italy (2007)
Optimal percentage of inhibitory synapses in multi-task learning
Performing multiple tasks in parallel is a typical feature of complex brains.
These are characterized by the coexistence of excitatory and inhibitory
synapses, whose percentage in mammals is measured to have a typical value of
20-30%. Here we investigate the parallel learning of multiple Boolean rules in
neuronal networks. We find that multi-task learning results from the
alternation of learning and forgetting of the individual rules. Interestingly,
a fraction of 30% inhibitory synapses optimizes the overall performance,
carving a complex backbone that supports information transmission with a
minimal shortest path length. We show that 30% inhibitory synapses is the
percentage that maximizes learning performance because it simultaneously
guarantees the network excitability necessary to express the response and the
variability required to limit the use of resources.Comment: 5 pages, 5 figures
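As a deliberately crude caricature of learning under a fixed
excitatory/inhibitory split (the paper studies neuronal networks, so this
single-unit perceptron toy and every parameter in it are assumptions made
only for illustration), the sketch below alternately trains on two random
Boolean labelings while clipping weights so a chosen fraction of synapses
stays inhibitory:

    import numpy as np

    rng = np.random.default_rng(2)

    def train(frac_inhibitory=0.3, n_in=20, n_patterns=30, epochs=200, lr=0.05):
        # Fix synapse signs: a chosen fraction inhibitory, the rest excitatory.
        sign = np.ones(n_in)
        sign[: int(frac_inhibitory * n_in)] = -1.0
        w = 0.1 * sign * rng.random(n_in)
        X = rng.integers(0, 2, size=(n_patterns, n_in)).astype(float)
        rules = [rng.integers(0, 2, n_patterns) for _ in range(2)]  # two tasks
        for _ in range(epochs):
            for y in rules:                     # alternate learning of the tasks
                pred = (X @ w > 0).astype(int)
                w += lr * (X * (y - pred)[:, None]).mean(axis=0)
                # Clip so excitatory weights stay >= 0, inhibitory stay <= 0.
                w = np.where(sign > 0, np.maximum(w, 0.0), np.minimum(w, 0.0))
        return [float(np.mean((X @ w > 0).astype(int) == y)) for y in rules]

    print("per-task accuracy at 30% inhibitory synapses:", train())

Varying frac_inhibitory traces, in miniature, the kind of sweep behind the
abstract's 30% optimum, though this toy need not peak at the same value.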