Sparse neural networks with large learning diversity
Coded recurrent neural networks with three levels of sparsity are introduced.
The first level is related to the size of the messages, which is much smaller than
the number of available neurons. The second is provided by a particular coding
rule, acting as a local constraint on the neural activity. The third is the low
final connection density of the network after the learning phase. Though the
proposed network is very simple, being based on binary neurons and binary
connections, it is able to learn a large number of messages and recall them, even
in the presence of strong erasures. The performance of the network is assessed
both as a classifier and as an associative memory.
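As a rough illustration of how such a network can be realised, the sketch below
stores each message as a binary clique: every symbol activates exactly one neuron
in its cluster (the local coding rule), learning connects the active neurons with
binary links, and erased symbols are recovered by counting connections to the
surviving neurons. The cluster and alphabet sizes, the scoring rule, and all names
are assumptions made for illustration, not the paper's implementation.

    import numpy as np

    # Assumed toy dimensions: messages of length 4 over an alphabet of 16 symbols,
    # i.e. 4 clusters of 16 binary neurons each (64 neurons in total).
    CLUSTERS, FANALS = 4, 16
    N = CLUSTERS * FANALS

    # Binary connection matrix: the only state learned by the network.
    W = np.zeros((N, N), dtype=bool)

    def neuron(cluster, symbol):
        # Local coding rule: exactly one neuron per cluster codes a given symbol.
        return cluster * FANALS + symbol

    def learn(message):
        # Store a message by fully interconnecting its active neurons (a binary clique).
        idx = [neuron(c, s) for c, s in enumerate(message)]
        for i in idx:
            for j in idx:
                if i != j:
                    W[i, j] = True

    def recall(partial):
        # Recover erased symbols (None) from the surviving ones.
        known = [neuron(c, s) for c, s in enumerate(partial) if s is not None]
        out = list(partial)
        for c, s in enumerate(partial):
            if s is None:
                # Score each candidate neuron of the erased cluster by how many known
                # active neurons it connects to; the stored clique makes the correct
                # symbol the maximiser.
                scores = [W[neuron(c, k), known].sum() for k in range(FANALS)]
                out[c] = int(np.argmax(scores))
        return out

    for m in [(3, 7, 1, 12), (5, 2, 9, 0)]:
        learn(m)
    print(recall((3, None, 1, None)))   # expected: [3, 7, 1, 12]

The third level of sparsity then shows up as the small fraction of entries of W
that end up set to True after learning, compared with a dense connection matrix.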
Truly Sparse Neural Networks at Scale
Recently, sparse training methods have started to be established as a de
facto approach for training and inference efficiency in artificial neural
networks. Yet, this efficiency holds only in theory. In practice, everyone uses a
binary mask to simulate sparsity, since typical deep learning software and
hardware are optimized for dense matrix operations. In this paper, we take an
orthogonal approach, and we show that we can train truly sparse neural networks
to harvest their full potential. To achieve this goal, we introduce three novel
contributions, specially designed for sparse neural networks: (1) a parallel
training algorithm and its corresponding sparse implementation from scratch,
(2) an activation function with non-trainable parameters to favour the gradient
flow, and (3) a hidden neuron importance metric to eliminate redundancies.
Altogether, we are able to break the record and train the largest neural network
ever trained in terms of representational power, reaching the size of a bat brain.
The results show that our approach achieves state-of-the-art performance while
opening the path toward an environmentally friendly artificial intelligence era.
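As a rough sketch of the "truly sparse" idea (under assumed dimensions, density,
activation, and importance definition, none of which are taken from the paper),
the snippet below stores the weights in a SciPy CSR matrix, so memory and the
forward product scale with the number of existing connections rather than with the
full layer size, and flags redundant hidden neurons with a simple importance score.

    import numpy as np
    from scipy import sparse

    rng = np.random.default_rng(0)

    def sparse_layer(n_in, n_out, density=0.01):
        # Truly sparse weights in CSR format: only existing connections are stored.
        return sparse.random(n_in, n_out, density=density, random_state=rng,
                             data_rvs=lambda k: rng.normal(0.0, 0.1, k)).tocsr()

    def forward(x, W):
        # Sparse-dense matrix product; no dense binary mask is ever materialised.
        return (W.T @ x.T).T

    def neuron_importance(W):
        # Illustrative (assumed) metric: a hidden neuron's importance is the summed
        # magnitude of its incoming connections; near-zero neurons are redundant.
        return np.asarray(np.abs(W).sum(axis=0)).ravel()

    W1 = sparse_layer(10_000, 2_000)        # ~200k stored weights vs. 20M dense
    x = rng.normal(size=(8, 10_000))
    h = np.maximum(forward(x, W1), 0.0)     # plain ReLU stands in for the paper's activation
    weakest = np.argsort(neuron_importance(W1))[:10]
    print(h.shape, "least important hidden units:", weakest)

Training at scale along these lines is mainly a matter of keeping the weights,
gradients, and weight updates in sparse format throughout, which is what would
allow the number of neurons to grow far beyond what dense frameworks can hold in
memory.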