E-PUR: An Energy-Efficient Processing Unit for Recurrent Neural Networks
Recurrent Neural Networks (RNNs) are a key technology for emerging
applications such as automatic speech recognition, machine translation or image
description. Long Short-Term Memory (LSTM) networks are the most successful RNN
implementation, as they can learn long-term dependencies to achieve high
accuracy. Unfortunately, the recurrent nature of LSTM networks significantly
constrains the amount of parallelism and, hence, multicore CPUs and many-core
GPUs exhibit poor efficiency for RNN inference. In this paper, we present
E-PUR, an energy-efficient processing unit tailored to the requirements of LSTM
computation. The main goal of E-PUR is to support large recurrent neural
networks for low-power mobile devices. E-PUR provides an efficient hardware
implementation of LSTM networks that is flexible to support diverse
applications. One of its main novelties is a technique that we call Maximizing
Weight Locality (MWL), which improves the temporal locality of the memory
accesses for fetching the synaptic weights, reducing the memory requirements to
a large extent. Our experimental results show that E-PUR achieves real-time
performance for different LSTM networks, while reducing energy consumption by
orders of magnitude with respect to general-purpose processors and GPUs, and it
requires a very small chip area. Compared to a modern mobile SoC, an NVIDIA
Tegra X1, E-PUR provides an average energy reduction of 92x.
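The abstract describes MWL only at a high level. As a rough software analogue
of the reordering we understand it to perform (our own illustrative NumPy
sketch, not the paper's hardware design; all names here are hypothetical),
the input-dependent part of each gate can be evaluated for the whole sequence
gate by gate, so each input weight matrix is fetched once rather than once
per timestep:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_mwl_sketch(x, Wx, Wh, b, h0, c0):
    """x: (T, d_in) inputs; Wx[g]: (d_in, d_h), Wh[g]: (d_h, d_h),
    b[g]: (d_h,) for each gate g in ('i', 'f', 'g', 'o')."""
    T, d_h = x.shape[0], h0.shape[0]

    # Phase 1 (the reordering we attribute to MWL): the inputs x are
    # known in advance, so each input weight matrix Wx[g] is streamed
    # through ONCE and applied to every timestep, instead of being
    # re-fetched at each of the T steps.
    pre = {g: x @ Wx[g] + b[g] for g in ('i', 'f', 'g', 'o')}

    # Phase 2: the recurrent part is inherently sequential; only the
    # (smaller set of) recurrent matrices Wh[g] still have to be
    # re-read every step, which is the locality gain alluded to above.
    h, c = h0, c0
    hs = np.empty((T, d_h))
    for t in range(T):
        i = sigmoid(pre['i'][t] + h @ Wh['i'])
        f = sigmoid(pre['f'][t] + h @ Wh['f'])
        g = np.tanh(pre['g'][t] + h @ Wh['g'])
        o = sigmoid(pre['o'][t] + h @ Wh['o'])
        c = f * c + i * g
        h = o * np.tanh(c)
        hs[t] = h
    return hs, (h, c)
```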
On the efficient representation and execution of deep acoustic models
In this paper we present a simple and computationally efficient quantization
scheme that enables us to reduce the resolution of the parameters of a neural
network from 32-bit floating point values to 8-bit integer values. The proposed
quantization scheme leads to significant memory savings and enables the use of
optimized hardware instructions for integer arithmetic, thus significantly
reducing the cost of inference. Finally, we propose a "quantization aware"
training process that applies the proposed scheme during network training and
find that it allows us to recover most of the loss in accuracy introduced by
quantization. We validate the proposed techniques by applying them to a long
short-term memory-based acoustic model on an open-ended large vocabulary speech
recognition task.
Comment: Accepted conference paper at "The Annual Conference of the
International Speech Communication Association (Interspeech)", 2016.
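For concreteness, here is a minimal sketch of one common 8-bit affine
quantization scheme consistent with the abstract (the paper's exact scheme
may differ; the helper names are ours), including the round-trip that a
"quantization aware" forward pass would simulate:

```python
import numpy as np

def quantize_uint8(w):
    """Per-tensor affine quantization of float32 weights to uint8:
    w is approximated by q * scale + w_min."""
    w_min, w_max = float(w.min()), float(w.max())
    scale = (w_max - w_min) / 255.0 or 1.0  # guard constant tensors
    q = np.clip(np.round((w - w_min) / scale), 0, 255).astype(np.uint8)
    return q, scale, w_min

def dequantize(q, scale, w_min):
    return q.astype(np.float32) * scale + w_min

def fake_quant(w):
    # "Quantization aware" training, sketched: run the 8-bit round-trip
    # in the forward pass so the network learns to tolerate the rounding
    # error (gradients typically bypass the rounding, straight-through).
    return dequantize(*quantize_uint8(w))
```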
Two efficient lattice rescoring methods using recurrent neural network language models
An important part of the language modelling problem for automatic speech recognition (ASR) systems, and many other related applications, is to appropriately model long-distance context dependencies in natural languages. Hence, statistical language models (LMs) that can model longer-span history contexts, for example recurrent neural network language models (RNNLMs), have become increasingly popular for state-of-the-art ASR systems. As RNNLMs use a vector representation of complete history contexts, they are normally used to rescore N-best lists. Motivated by their intrinsic characteristics, two efficient lattice rescoring methods for RNNLMs are proposed in this paper. The first method uses an n-gram style clustering of history contexts. The second approach directly exploits the distance measure between recurrent hidden history vectors. Both methods produced 1-best performance comparable to a 10k-best rescoring baseline RNNLM system on two large vocabulary conversational telephone speech recognition tasks for US English and Mandarin Chinese. Consistent lattice size compression and recognition performance improvements after confusion network (CN) decoding were also obtained over the prefix tree structured N-best rescoring approach.

This work was supported by EPSRC under Grant EP/I031022/1 (Natural Speech Technology) and DARPA under the Broad Operational Language Translation and RATS programs. The work of X. Chen was supported by Toshiba Research Europe Ltd, Cambridge Research Lab.

This is the author accepted manuscript. The final version is available from IEEE via http://dx.doi.org/10.1109/TASLP.2016.255882
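As a rough illustration of the two rescoring approximations the abstract
describes (our own sketch; the clustering key, the Euclidean distance, and
all names here are illustrative assumptions, not the paper's exact
formulation):

```python
import numpy as np

def ngram_history_key(word_ids, n):
    """Method 1, sketched: cluster complete histories by their last n-1
    words, so lattice paths ending in the same truncated context share a
    single RNNLM state (an n-gram style approximation of the history)."""
    return tuple(word_ids[-(n - 1):])

def find_mergeable_state(cached, h_new, gamma):
    """Method 2, sketched: reuse a cached RNNLM state whose recurrent
    hidden history vector lies within distance gamma of the new one,
    instead of keeping both hypotheses alive during lattice expansion.
    `cached` is a list of (hidden_vector, rnnlm_state) pairs at the
    current lattice node; returns the shared state, or None to keep
    h_new as a new distinct history."""
    for h_old, state in cached:
        if np.linalg.norm(h_new - h_old) <= gamma:
            return state
    return None
```

In both cases the effect is the same: far fewer distinct RNNLM history
states need to be evaluated than in exhaustive k-best rescoring, which is
how the methods match a 10k-best baseline at lower cost.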