Training deep neural networks with low precision multiplications
Multipliers are the most space and power-hungry arithmetic operators of the
digital implementation of deep neural networks. We train a set of
state-of-the-art neural networks (Maxout networks) on three benchmark datasets:
MNIST, CIFAR-10 and SVHN. They are trained with three distinct formats:
floating point, fixed point and dynamic fixed point. For each of those datasets
and for each of those formats, we assess the impact of the precision of the
multiplications on the final error after training. We find that very low
precision is sufficient not just for running trained networks but also for
training them. For example, it is possible to train Maxout networks with
10-bit multiplications.
Comment: 10 pages, 5 figures, Accepted as a workshop contribution at ICLR 201
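The fixed-point formats studied above can be illustrated with a minimal quantized-multiplication sketch. The function names and the 10-bit width with 7 fractional bits are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def to_fixed(x, frac_bits, total_bits=10):
    # Quantize to signed fixed point: round to the nearest multiple of
    # 2^-frac_bits, then saturate to the range of `total_bits` signed bits.
    scale = 2.0 ** frac_bits
    q = np.round(x * scale)
    lim = 2.0 ** (total_bits - 1)
    return np.clip(q, -lim, lim - 1) / scale

def fixed_point_matmul(a, b, frac_bits=7, total_bits=10):
    # Only the multiplication operands are low precision; accumulation stays
    # in float, mirroring the common practice of using wide accumulators.
    return to_fixed(a, frac_bits, total_bits) @ to_fixed(b, frac_bits, total_bits)

w = np.array([[0.25, -0.5], [0.75, 0.1]])
x = np.array([1.0, 2.0])
print(fixed_point_matmul(w, x))
```

Dynamic fixed point would additionally adjust `frac_bits` per tensor as the magnitudes of weights, activations and gradients drift during training.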
BinaryConnect: Training Deep Neural Networks with binary weights during propagations
Deep Neural Networks (DNN) have achieved state-of-the-art results in a wide
range of tasks, with the best results obtained with large training sets and
large models. In the past, GPUs enabled these breakthroughs because of their
greater computational speed. In the future, faster computation at both training
and test time is likely to be crucial for further progress and for consumer
applications on low-power devices. As a result, there is much interest in
research and development of dedicated hardware for Deep Learning (DL). Binary
weights, i.e., weights which are constrained to only two possible values (e.g.
-1 or 1), would bring great benefits to specialized DL hardware by replacing
many multiply-accumulate operations by simple accumulations, as multipliers are
the most space and power-hungry components of the digital implementation of
neural networks. We introduce BinaryConnect, a method which consists in
training a DNN with binary weights during the forward and backward
propagations, while retaining precision of the stored weights in which
gradients are accumulated. We show that, like other dropout schemes,
BinaryConnect acts as a regularizer, and we obtain near state-of-the-art
results with BinaryConnect on permutation-invariant MNIST, CIFAR-10 and SVHN.
Comment: Accepted at NIPS 2015, 9 pages, 3 figure
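A minimal sketch of the BinaryConnect training step described above, assuming a caller-supplied `grad_fn`. The deterministic sign binarization shown here is one of the paper's variants; the function names and the learning rate are illustrative:

```python
import numpy as np

def binarize(w):
    # Deterministic binarization: sign(w) in {-1, +1} (sign(0) mapped to +1).
    return np.where(w >= 0.0, 1.0, -1.0)

def train_step(W, x, grad_fn, lr=0.01):
    # W holds the real-valued "shadow" weights in which gradients are
    # accumulated; the forward/backward pass sees only the binarized copy.
    Wb = binarize(W)
    g = grad_fn(Wb, x)            # gradient computed with the binary weights
    W = W - lr * g                # update accumulated in full precision
    return np.clip(W, -1.0, 1.0)  # keep the stored weights in [-1, 1]
```

Because the multiply in the forward and backward passes only sees values in {-1, +1}, every multiply-accumulate degenerates into an add or subtract, which is the hardware benefit the abstract points to.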
Reducing the precision and the number of multiplications needed to train a neural network
RÉSUMÉ (translated from French): Neural Networks (NNs) are state of the art for a large number of tasks, with the best results obtained with large datasets and large models. The computational speed of graphics cards is largely responsible for this progress. In the future, accelerating NNs during both the training and test phases will likely enable greater performance as well as more energy-efficient consumer applications. As a result, research on digital hardware dedicated to NNs is a timely topic. Digital hardware is mainly made of memories and arithmetic operators. Multipliers are by far the most transistor-hungry arithmetic operators in digital hardware dedicated to NNs. In our first article, we train a set of state-of-the-art NNs (Maxout networks) on three benchmark datasets: MNIST, CIFAR-10 and SVHN. They are trained with three distinct formats: floating point, fixed point and dynamic fixed point. For each of those datasets and for each of those formats, we assess the impact of the precision of the multiplications on the final error after training. We find that very low precision is sufficient not only for testing NNs but also for training them. For example, it is possible to train Maxout networks with 10-bit multiplications. Binary weights, i.e., weights constrained to only two possible values (e.g. -1 or 1), would greatly reduce the number of multiplications needed to train an NN. In our second article, we introduce BinaryConnect, a method which consists in training an NN with binary weights during the forward and backward propagations, while retaining the precision of the stored weights in which the gradients are accumulated. Like other Dropout variants, we show that BinaryConnect acts as a regularizer, and we obtain results close to the state of the art with BinaryConnect on permutation-invariant MNIST.
ABSTRACT: Deep Neural Networks (DNNs) have achieved state-of-the-art results in a wide range of tasks, with the best results obtained with large training sets and large models. In the past, GPUs enabled these breakthroughs because of their greater computational speed. In the future, faster computation at both training and test time is likely to be crucial for further progress and for consumer applications on low-power devices. As a result, there is much interest in research and development of dedicated hardware for Deep Learning (DL). Computer hardware is mainly made out of memories and arithmetic operators. Multipliers are by far the most space and power-hungry arithmetic operators of the digital implementation of neural networks.
In our first article, we train a set of state-of-the-art neural networks (Maxout networks) on three benchmark datasets: MNIST, CIFAR-10 and SVHN. They are trained with three distinct formats: floating point, fixed point and dynamic fixed point. For each of those datasets and for each of those formats, we assess the impact of the precision of the multiplications on the final error after training. We find that very low precision is sufficient not just for running trained networks but also for training them. For example, it is possible to train Maxout networks with 10-bit multiplications. Binary weights, i.e., weights which are constrained to only two possible values (e.g. -1 or 1), would greatly reduce the number of multiplications required to train a DNN. In our second article, we introduce BinaryConnect, a method which consists in training a DNN with binary weights during the forward and backward propagations, while retaining precision of the stored weights in which gradients are accumulated. Like other dropout schemes, we show that BinaryConnect acts as a regularizer, and we obtain near state-of-the-art results with BinaryConnect on the permutation-invariant MNIST
FFT-Based Deep Learning Deployment in Embedded Systems
Deep learning has delivered its powerfulness in many application domains,
especially in image and speech recognition. As the backbone of deep learning,
deep neural networks (DNNs) consist of multiple layers of various types with
hundreds to thousands of neurons. Embedded platforms are now becoming essential
for deep learning deployment due to their portability, versatility, and energy
efficiency. The large model size of DNNs, while providing excellent accuracy,
also burdens the embedded platforms with intensive computation and storage.
Researchers have investigated reducing DNN model size with negligible
accuracy loss. This work proposes a Fast Fourier Transform (FFT)-based DNN
training and inference model suitable for embedded platforms with reduced
asymptotic complexity of both computation and storage, making our approach
distinguished from existing approaches. We develop the training and inference
algorithms based on FFT as the computing kernel and deploy the FFT-based
inference model on embedded platforms, achieving extraordinary processing speed.
Comment: Design, Automation, and Test in Europe (DATE). For source code, please
contact Mahdi Nazemi at <[email protected]
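The abstract does not spell out the FFT formulation; a common way to obtain the reduced asymptotic complexity it mentions is to restrict weight matrices to (block-)circulant form, so a matrix-vector product becomes a circular convolution computed in O(n log n) via the convolution theorem. A sketch under that assumption (the function names are illustrative):

```python
import numpy as np

def circulant_matvec_fft(c, x):
    # y = C x, where C is the circulant matrix whose first column is c.
    # By the convolution theorem this is elementwise multiplication in the
    # frequency domain: O(n log n) instead of the O(n^2) dense product.
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

def circulant_matvec_naive(c, x):
    # Reference O(n^2) implementation: materialize C with C[i, j] = c[(i-j) % n].
    n = len(c)
    C = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])
    return C @ x
```

The storage win is the same factor: a circulant block is defined by one column, so an n-by-n block needs n parameters rather than n squared.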
In-situ Stochastic Training of MTJ Crossbar based Neural Networks
Owing to high device density, scalability and non-volatility, Magnetic Tunnel
Junction-based crossbars have garnered significant interest for implementing
the weights of an artificial neural network. The existence of only two stable
states in MTJs implies a high overhead of obtaining optimal binary weights in
software. We illustrate that the inherent parallelism in the crossbar structure
makes it highly appropriate for in-situ training, wherein the network is taught
directly on the hardware. It leads to significantly smaller training overhead
as the training time is independent of the size of the network, while also
circumventing the effects of alternate current paths in the crossbar and
accounting for manufacturing variations in the device. We show how the
stochastic switching characteristics of MTJs can be leveraged to perform
probabilistic weight updates using the gradient descent algorithm. We describe
how the update operations can be performed on crossbars both with and without
access transistors and perform simulations on them to demonstrate the
effectiveness of our techniques. The results reveal that stochastically trained
MTJ-crossbar NNs achieve a classification accuracy nearly the same as that of
real-valued-weight networks trained in software, and exhibit immunity to device
variations.
Comment: Accepted for poster presentation in the 2018 ACM/IEEE International
Symposium on Low Power Electronics and Design (ISLPED
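A toy sketch of the probabilistic weight-update idea described above. The mapping from gradient magnitude to switching probability and the `scale` parameter are illustrative assumptions, not the paper's device model:

```python
import numpy as np

rng = np.random.default_rng(42)

def stochastic_binary_update(W, grad, scale=1.0):
    # Each binary weight (+1 or -1) switches toward -sign(grad) with a
    # probability proportional to the gradient magnitude, emulating the
    # stochastic switching of an MTJ under a gradient-scaled programming pulse.
    p_switch = np.clip(scale * np.abs(grad), 0.0, 1.0)
    target = -np.sign(grad)                # direction gradient descent favors
    flip = (W != target) & (rng.random(W.shape) < p_switch)
    return np.where(flip, -W, W)
```

Averaged over many updates, the expected weight change follows the gradient, which is how a two-state device can still implement a gradient-descent-like rule.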
PatDNN: Achieving Real-Time DNN Execution on Mobile Devices with Pattern-based Weight Pruning
With the emergence of a spectrum of high-end mobile devices, many
applications that formerly required desktop-level computation capability are
being transferred to these devices. However, executing the inference of Deep
Neural Networks (DNNs) is still challenging considering high computation and
storage demands, especially if real-time performance with high accuracy is
needed. Weight pruning of DNNs has been proposed, but existing schemes represent two
extremes in the design space: non-structured pruning is fine-grained, accurate,
but not hardware friendly; structured pruning is coarse-grained,
hardware-efficient, but with higher accuracy loss. In this paper, we introduce
a new dimension, fine-grained pruning patterns inside the coarse-grained
structures, revealing a previously unknown point in design space. With the
higher accuracy enabled by fine-grained pruning patterns, the unique insight is
to use the compiler to re-gain and guarantee high hardware efficiency. In other
words, our method achieves the best of both worlds, and is desirable across
theory/algorithm, compiler, and hardware levels. The proposed PatDNN is an
end-to-end framework to efficiently execute DNNs on mobile devices with the help
of a novel model compression technique (pattern-based pruning based on extended
ADMM solution framework) and a set of thorough architecture-aware compiler- and
code generation-based optimizations (filter kernel reordering, compressed
weight storage, register load redundancy elimination, and parameter
auto-tuning). Evaluation results demonstrate that PatDNN outperforms three
state-of-the-art end-to-end DNN frameworks, TensorFlow Lite, TVM, and Alibaba
Mobile Neural Network with speedup up to 44.5x, 11.4x, and 7.1x, respectively,
with no accuracy compromise. Real-time inference of representative large-scale
DNNs (e.g., VGG-16, ResNet-50) can be achieved using mobile devices.
Comment: To be published in the Proceedings of the Twenty-Fifth International
Conference on Architectural Support for Programming Languages and Operating
Systems (ASPLOS 20
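The projection step of pattern-based pruning, choosing for each 3x3 kernel the mask that preserves the most weight magnitude, can be sketched as follows. The four patterns are hypothetical examples; PatDNN learns pattern assignments via its extended ADMM framework rather than this one-shot selection:

```python
import numpy as np

# Hypothetical 4-entry patterns for 3x3 kernels (1 = weight kept).
PATTERNS = [
    np.array([[0, 1, 0], [1, 1, 1], [0, 0, 0]]),
    np.array([[0, 0, 0], [1, 1, 1], [0, 1, 0]]),
    np.array([[0, 1, 0], [0, 1, 1], [0, 1, 0]]),
    np.array([[0, 1, 0], [1, 1, 0], [0, 1, 0]]),
]

def pattern_prune(kernel):
    # Pick the pattern retaining the largest L1 mass of the kernel,
    # then zero out every weight outside it.
    scores = [np.abs(kernel * p).sum() for p in PATTERNS]
    best = PATTERNS[int(np.argmax(scores))]
    return kernel * best
```

Because every kernel ends up with the same number of nonzeros drawn from a small pattern set, the compiler can generate dense, regular inner loops (the "re-gain hardware efficiency" point in the abstract) while the pruning itself stays fine-grained.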
Robust smoothing of left-censored time series data with a dynamic linear model to infer SARS-CoV-2 RNA concentrations in wastewater
Wastewater sampling for the detection and monitoring of SARS-CoV-2 has been developed and applied at an unprecedented pace; however, uncertainty remains when interpreting the measured viral RNA signals and their spatiotemporal variation. The proliferation of measurements that are below a quantifiable threshold, usually during non-endemic periods, poses a further challenge to interpretation and time-series analysis of the data. Inspired by research in the use of a custom Kalman smoother model to estimate the true level of SARS-CoV-2 RNA concentrations in wastewater, we propose an alternative left-censored dynamic linear model. Cross-validation of both models alongside a simple moving average, using data from 286 sewage treatment works across England, allows for a comprehensive validation of the proposed approach. The presented dynamic linear model is more parsimonious, has a faster computational time and is represented by a more flexible modelling framework than the equivalent Kalman smoother. Furthermore, we show how the use of wastewater data, transformed by such models, correlates more closely with regional case rate positivity as published by the Office for National Statistics (ONS) Coronavirus (COVID-19) Infection Survey. The modelled output is more robust and is therefore capable of better complementing traditional surveillance than untransformed data or a simple moving average, providing additional confidence and utility for public health decision making.
(Translated from French) Wastewater detection and monitoring of SARS-CoV-2 have been developed and deployed at an unprecedented pace, yet the interpretation of the measured viral RNA concentrations, and of their spatiotemporal variation, remains an open question. In particular, the large proportion of measurements below the limit of quantification, generally during non-endemic periods, poses a challenge for analysing these time series. Inspired by research that produced a custom Kalman smoother to estimate the true SARS-CoV-2 RNA concentrations in wastewater from such data, we propose a new left-censored dynamic linear model. Cross-validation of these smoothers, together with a simple moving average, on data from 286 sewage treatment works covering England, comprehensively validates the proposed approach. The presented model is more parsimonious, offers a more flexible modelling framework and requires less computation time than the equivalent Kalman smoother. The wastewater data smoothed in this way are moreover more strongly correlated with the regional case rate produced by the Office for National Statistics (ONS) Coronavirus Infection Survey. They prove more robust than the raw data, or data smoothed by a simple moving average, and are therefore better able to complement traditional surveillance, strengthening confidence in wastewater-based epidemiology and its utility for public health decision making.
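A minimal local-level dynamic linear model with a crude treatment of left-censoring, in which readings below the limit of quantification are handled as missing, can be sketched as follows. The paper's model handles censoring within the likelihood itself, which is more principled; the function name, the noise variances `q` and `r`, and the initialization are illustrative assumptions:

```python
import numpy as np

def local_level_filter(y, loq, q=1.0, r=1.0):
    # Local-level DLM: state_t = state_{t-1} + w_t (var q), y_t = state_t + v_t (var r).
    # Readings below the limit of quantification (loq) trigger only the predict
    # step, so uncertainty grows until the next quantifiable observation.
    m = y[0] if y[0] >= loq else loq   # crude initialization of the state mean
    P = 1.0                            # initial state variance
    out = []
    for obs in y:
        P = P + q                      # predict
        if obs >= loq:                 # update only with quantifiable readings
            K = P / (P + r)            # Kalman gain
            m = m + K * (obs - m)
            P = (1.0 - K) * P
        out.append(m)
    return np.array(out)
```

Running the filtered estimate forward like this already smooths over censored gaps; the full model in the paper additionally smooths backward over the whole series and uses the censored likelihood rather than discarding sub-LOQ information.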