BinaryConnect: Training Deep Neural Networks with binary weights during propagations
Deep Neural Networks (DNN) have achieved state-of-the-art results in a wide
range of tasks, with the best results obtained with large training sets and
large models. In the past, GPUs enabled these breakthroughs because of their
greater computational speed. In the future, faster computation at both training
and test time is likely to be crucial for further progress and for consumer
applications on low-power devices. As a result, there is much interest in
research and development of dedicated hardware for Deep Learning (DL). Binary
weights, i.e., weights which are constrained to only two possible values (e.g.
-1 or 1), would bring great benefits to specialized DL hardware by replacing
many multiply-accumulate operations by simple accumulations, as multipliers are
the most space and power-hungry components of the digital implementation of
neural networks. We introduce BinaryConnect, a method that consists of
training a DNN with binary weights during the forward and backward
propagations, while retaining the precision of the stored weights in which
gradients are accumulated. Like other dropout schemes, we show that
BinaryConnect acts as a regularizer, and we obtain near state-of-the-art results
with BinaryConnect on the permutation-invariant MNIST, CIFAR-10 and SVHN.
Comment: Accepted at NIPS 2015, 9 pages, 3 figures
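A minimal sketch may make the core idea concrete: the forward and backward passes use binarized weights, while the small gradient updates are accumulated in real-valued "shadow" weights clipped to [-1, 1]. The single softmax layer, shapes, and learning rate below are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

def binarize(w_real):
    # Deterministic binarization: constrain weights to {-1, +1}.
    return np.where(w_real >= 0, 1.0, -1.0)

# Real-valued "shadow" weights in which the gradient updates are accumulated.
w_real = rng.uniform(-1.0, 1.0, size=(784, 10))
lr = 0.01

def train_step(x, y_onehot):
    """One SGD step on a softmax classifier whose propagations use binary weights."""
    global w_real
    w_bin = binarize(w_real)               # binary weights used in forward/backward
    logits = x @ w_bin
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    grad_logits = probs - y_onehot         # softmax cross-entropy gradient
    grad_w = np.outer(x, grad_logits)
    w_real = np.clip(w_real - lr * grad_w, -1.0, 1.0)  # update and clip the real weights
```

Accumulating updates in `w_real` is what lets many tiny gradient steps eventually flip a binary weight; at test time either the binary or the clipped real-valued weights can be used.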
Mean Field Bayes Backpropagation: scalable training of multilayer neural networks with binary weights
Significant success has been reported recently using deep neural networks for
classification. Such large networks can be computationally intensive, even
after training is over. Implementing these trained networks in hardware chips
with a limited precision of synaptic weights may improve their speed and energy
efficiency by several orders of magnitude, thus enabling their integration into
small and low-power electronic devices. With this motivation, we develop a
computationally efficient learning algorithm for multilayer neural networks
with binary weights, assuming all the hidden neurons have a fan-out of one.
This algorithm, derived within a Bayesian probabilistic online setting, is
shown to work well for both synthetic and real-world problems, performing
comparably to algorithms with real-valued weights, while retaining
computational tractability.
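A rough, heavily simplified sketch of such a probabilistic binary-weight scheme (not the paper's mean-field derivation, and ignoring its fan-out-of-one assumption): keep one real-valued parameter per synapse encoding P(w = +1), propagate with the expected weight, and deploy the most probable binary weight. All names, shapes, and the learning rate are assumptions.

```python
import numpy as np

# One real-valued parameter per synapse; P(w = +1) = sigmoid(theta),
# so the expected binary weight is E[w] = tanh(theta / 2).
theta = np.zeros((784, 10))
lr = 0.1

def online_step(x, y_onehot):
    """One online update of the per-synapse parameters theta."""
    global theta
    w_mean = np.tanh(theta / 2.0)        # propagate with the expected weights
    logits = x @ w_mean
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    grad_logits = probs - y_onehot       # softmax cross-entropy gradient
    # Chain rule through E[w] = tanh(theta/2): dE[w]/dtheta = (1 - E[w]**2) / 2.
    grad_theta = np.outer(x, grad_logits) * (1.0 - w_mean**2) / 2.0
    theta -= lr * grad_theta             # only real-valued parameters are updated

def deploy_weights(theta):
    # The hardware receives binary weights: the most probable value of each synapse.
    return np.where(theta >= 0, 1.0, -1.0)
```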
Improving classification accuracy of feedforward neural networks for spiking neuromorphic chips
Deep Neural Networks (DNN) achieve human-level performance in many image
analytics tasks, but DNNs are mostly deployed to GPU platforms that consume a
considerable amount of power. New hardware platforms using lower precision
arithmetic achieve drastic reductions in power consumption. More recently,
brain-inspired spiking neuromorphic chips have achieved even lower power
consumption, on the order of milliwatts, while still offering real-time
processing.
However, to deploy DNNs to energy-efficient neuromorphic chips, the
incompatibility between the continuous neurons and synaptic weights of
traditional DNNs and the discrete spiking neurons and synapses of neuromorphic
chips needs to be overcome. Previous work has achieved this by training a
network to learn continuous probabilities and then deploying it to a
neuromorphic architecture, such as the IBM TrueNorth Neurosynaptic System, by
randomly sampling these probabilities.
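The sampling step described above can be pictured roughly as follows; the core size, probability matrix, and function name are illustrative assumptions rather than the TrueNorth tooling's actual API.

```python
import numpy as np

def sample_crossbar(p, rng=None):
    """Draw a binary synaptic crossbar from learned connection probabilities p in [0, 1]."""
    rng = np.random.default_rng() if rng is None else rng
    return (rng.random(p.shape) < p).astype(np.int8)

# Example: probabilities learned off-chip for one 256x256 core-sized crossbar.
p = np.random.uniform(0.0, 1.0, size=(256, 256))
crossbar = sample_crossbar(p)   # 0/1 connectivity ready for deployment
```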
The main contribution of this paper is a new learning algorithm that learns a
TrueNorth configuration ready for deployment. We achieve this by directly
training a binary hardware crossbar that accommodates the TrueNorth axon
configuration constraints and by proposing a different neuron model.
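As a loose illustration of training a binary crossbar directly, the sketch below uses a generic straight-through-style update on 0/1 connectivity with real-valued shadow weights; it does not model the TrueNorth axon constraints or the paper's neuron model, and all names, shapes, the loss, and the learning rate are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def binarize01(w_real):
    # Deterministic 0/1 thresholding of the shadow crossbar weights.
    return (w_real >= 0.5).astype(np.float64)

w_real = rng.uniform(0.0, 1.0, size=(256, 256))   # real-valued shadow crossbar
lr = 0.05

def train_step(x, target):
    """One straight-through-style update: forward with the binary crossbar,
    apply the gradient to the real-valued shadow weights."""
    global w_real
    b = binarize01(w_real)                 # binary crossbar used in the forward pass
    out = x @ b
    grad_out = out - target                # gradient of a squared-error loss
    grad_w = np.outer(x, grad_out)         # straight-through: ignore the thresholding
    w_real = np.clip(w_real - lr * grad_w, 0.0, 1.0)
```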
Results of our approach trained on electroencephalogram (EEG) data show a
significant improvement over previous work (76% vs. 86% accuracy) while
maintaining state-of-the-art performance on the MNIST handwritten digit data set.
Comment: IJCAI-2017. arXiv admin note: text overlap with arXiv:1605.0774