Accelerating Training of Deep Neural Networks via Sparse Edge Processing
We propose a reconfigurable hardware architecture for deep neural networks
(DNNs) capable of online training and inference, which uses algorithmically
pre-determined, structured sparsity to significantly lower memory and
computational requirements. This novel architecture introduces the notion of
edge-processing to provide flexibility and combines junction pipelining and
operational parallelization to speed up training. The overall effect is to
reduce network complexity by factors up to 30x and training time by up to 35x
relative to GPUs, while maintaining high fidelity of inference results. This
has the potential to enable extensive parameter searches and development of the
largely unexplored theoretical foundation of DNNs. The architecture
automatically adapts itself to different network sizes given available hardware
resources. As proof of concept, we show results obtained for different bit
widths.
Comment: Presented at the 26th International Conference on Artificial Neural
Networks (ICANN) 2017 in Alghero, Italy.
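The claimed savings follow directly from the fixed, pre-determined fan-in: if every output neuron keeps only d of n possible input edges, both weight storage and multiply counts shrink by a factor of n/d. As a rough software illustration of that idea (the paper describes a hardware architecture; the edge_src connection rule and all constants below are assumptions made for this sketch, not the paper's actual pattern), a one-layer sparse forward pass could look like:

```c
/* Minimal sketch of a forward pass through one sparsely connected layer.
 * Assumption: each output neuron has a fixed fan-in FANIN, with input
 * indices generated by a simple deterministic rule (an interleaver-like
 * pattern chosen here purely for illustration). */
#include <stdio.h>

#define N_IN   64   /* input neurons                */
#define N_OUT  32   /* output neurons               */
#define FANIN   4   /* fixed connections per output */

/* Deterministic connectivity: k-th input feeding output neuron j. */
static int edge_src(int j, int k) {
    return (j * FANIN + k * 17) % N_IN;  /* 17: arbitrary coprime stride */
}

int main(void) {
    float x[N_IN], y[N_OUT];
    /* One weight per stored edge: N_OUT*FANIN values, not N_OUT*N_IN. */
    float w[N_OUT][FANIN];

    for (int i = 0; i < N_IN; ++i) x[i] = 1.0f;
    for (int j = 0; j < N_OUT; ++j)
        for (int k = 0; k < FANIN; ++k) w[j][k] = 0.01f * (k + 1);

    /* Sparse forward pass: each output touches only FANIN inputs, so
     * memory and multiplies drop by N_IN/FANIN (16x in this sketch). */
    for (int j = 0; j < N_OUT; ++j) {
        y[j] = 0.0f;
        for (int k = 0; k < FANIN; ++k)
            y[j] += w[j][k] * x[edge_src(j, k)];
    }

    printf("y[0] = %f\n", y[0]);
    return 0;
}
```

Because the connectivity is computable from (j, k) rather than stored, no index lists need to be kept in memory; this is the same kind of reduction the abstract quantifies as up to 30x in network complexity.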
What does fault tolerant Deep Learning need from MPI?
Deep Learning (DL) algorithms have become the de facto Machine Learning (ML)
approach for large scale data analysis. DL algorithms are computationally
expensive: even distributed DL implementations that use MPI require days of
training (model learning) time on commonly studied datasets. Long running DL
applications are thus susceptible to faults, requiring the development of a
fault tolerant system infrastructure in addition to fault tolerant DL
algorithms.
This raises an important question: What is needed from MPI for designing
fault tolerant DL implementations? In this paper, we address this problem for
permanent faults. We motivate the need for a fault tolerant MPI specification
by an in-depth consideration of recent innovations in DL algorithms and their
properties, which drive the need for specific fault tolerance features. We
present an in-depth discussion on the suitability of different parallelism
types (model, data, and hybrid); the need (or lack thereof) for checkpointing of
any critical data structures; and most importantly, consideration for several
fault tolerance proposals (user-level fault mitigation (ULFM), Reinit) in MPI
and their applicability to fault tolerant DL implementations. We leverage a
distributed memory implementation of Caffe, currently available under the
Machine Learning Toolkit for Extreme Scale (MaTEx). We implement our approaches
by extending MaTEx-Caffe to use a ULFM-based implementation. Our evaluation
using the ImageNet dataset with the AlexNet and GoogLeNet neural network
topologies demonstrates the effectiveness of the proposed fault tolerant DL
implementation using OpenMPI-based ULFM.
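For concreteness, here is a minimal sketch of the shrink-and-continue recovery pattern that ULFM enables for data-parallel gradient aggregation, assuming an OpenMPI build with ULFM support (mpi-ext.h provides the MPIX_ calls); the buffer sizes, iteration structure, and recovery policy are illustrative and not taken from MaTEx-Caffe:

```c
/* Sketch: data-parallel allreduce with ULFM shrink-and-continue recovery. */
#include <mpi.h>
#include <mpi-ext.h>   /* MPIX_Comm_revoke, MPIX_Comm_shrink (ULFM) */
#include <string.h>

#define GRAD_LEN 1024

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    MPI_Comm comm;
    MPI_Comm_dup(MPI_COMM_WORLD, &comm);
    /* Return error codes instead of aborting, so faults are observable. */
    MPI_Comm_set_errhandler(comm, MPI_ERRORS_RETURN);

    double grad[GRAD_LEN], sum[GRAD_LEN];

    for (int iter = 0; iter < 100; ++iter) {
        /* ... local forward/backward pass would fill grad[] here ... */
        memset(grad, 0, sizeof grad);

        int rc = MPI_Allreduce(grad, sum, GRAD_LEN, MPI_DOUBLE,
                               MPI_SUM, comm);
        if (rc != MPI_SUCCESS) {
            /* A rank died (permanent fault). Propagate the failure to all
             * survivors, rebuild the communicator without the dead ranks,
             * and keep training on the shrunk group. A full implementation
             * would call MPIX_Comm_agree here so every survivor takes the
             * same branch. */
            MPIX_Comm_revoke(comm);
            MPI_Comm shrunk;
            MPIX_Comm_shrink(comm, &shrunk);
            MPI_Comm_free(&comm);
            comm = shrunk;
            MPI_Comm_set_errhandler(comm, MPI_ERRORS_RETURN);
            --iter;      /* retry this iteration on the shrunk communicator */
            continue;
        }
        /* ... apply averaged gradients to the local model replica ... */
    }

    MPI_Comm_free(&comm);
    MPI_Finalize();
    return 0;
}
```

A production version would also re-partition the dataset shards across the shrunk communicator, which is where the paper's discussion of parallelism type and checkpointing needs comes in.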
- …