Neuron-level fuzzy memoization in RNNs
The final publication is available at ACM via http://dx.doi.org/10.1145/3352460.3358309

Recurrent Neural Networks (RNNs) are a key technology for applications such as automatic speech recognition and machine translation. Unlike conventional feed-forward DNNs, RNNs remember past information to improve the accuracy of future predictions, which makes them very effective for sequence-processing problems.
For each application run, each recurrent layer is executed many times to process a potentially large sequence of inputs (words, images, audio frames, etc.). In this paper, we make the observation that the output of a neuron exhibits small changes in consecutive invocations. We exploit this property to build a neuron-level fuzzy memoization scheme, which dynamically caches the output of each neuron and reuses it whenever the current output is predicted to be similar to a previously computed result, thus avoiding the corresponding computations.
The main challenge in this scheme is determining whether a neuron's output for the current input in the sequence will be similar to a recently computed result. To this end, we extend the recurrent layer with a much simpler Bitwise Neural Network (BNN), and show that the BNN and RNN outputs are highly correlated: if two BNN outputs are very similar, the corresponding outputs of the original RNN layer are likely to exhibit negligible changes. The BNN provides a low-cost and effective mechanism for deciding when fuzzy memoization can be applied with a small impact on accuracy.
We evaluate our memoization scheme on top of a state-of-the-art accelerator for RNNs, for a variety of neural networks from multiple application domains. We show that our technique avoids more than 24.2% of the computations, resulting in 18.5% energy savings and a 1.35x speedup on average.
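The idea lends itself to a compact illustration. The sketch below is a minimal NumPy rendering of neuron-level fuzzy memoization under assumed details, not the accelerator-level design from the paper: a cheap binary-weight surrogate of the neuron is evaluated first, and the full-precision output is recomputed only when the surrogate's output drifts beyond a threshold. The function names, the tanh activation, the sign-based binarization and the threshold value are illustrative assumptions.

```python
import numpy as np

def bnn_output(x, w_bin):
    # Cheap surrogate of the neuron: binarize the input and use {-1,+1}
    # weights; in hardware this reduces to an XNOR + popcount.
    return float(np.dot(np.sign(x), w_bin))

def fuzzy_memo_neuron(x, w, w_bin, cache, threshold):
    """Return the neuron output, reusing the cached value when the BNN
    surrogate predicts that the change would be negligible."""
    b = bnn_output(x, w_bin)
    if cache["bnn"] is not None and abs(b - cache["bnn"]) <= threshold:
        return cache["out"]              # memoization hit: skip the dot product
    out = np.tanh(np.dot(x, w))          # full-precision evaluation
    cache["bnn"], cache["out"] = b, out  # refresh the cached pair
    return out

# Usage example with hypothetical sizes and threshold.
rng = np.random.default_rng(0)
w = rng.standard_normal(256)
w_bin = np.sign(w)                        # binary weights derived from w
cache = {"bnn": None, "out": None}
x = rng.standard_normal(256) * 0.1
for t in range(100):                      # sequence of slowly varying inputs
    x += 0.01 * rng.standard_normal(256)  # consecutive inputs are similar
    y = fuzzy_memo_neuron(x, w, w_bin, cache, threshold=4.0)
```

In this toy version the cache holds a single (BNN output, full output) pair per neuron; consecutive, similar inputs then hit the cache and skip the full dot product, which is the effect the memoization scheme exploits.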
Boosting Handwriting Text Recognition in Small Databases with Transfer Learning
In this paper we address the offline handwriting text recognition (HTR) problem with reduced training datasets. Recent HTR solutions based on artificial neural networks achieve remarkable results on reference databases. These deep learning networks combine convolutional layers (CNN) with long short-term memory recurrent units (LSTM). In addition, connectionist temporal classification (CTC) is the key to avoiding segmentation at the character level, which greatly simplifies the labeling task. One of the main drawbacks of CNN-LSTM-CTC (CLC) solutions is that they need a considerable amount of transcribed text for every type of calligraphy, typically on the order of a few thousand lines. Furthermore, in some scenarios the text to transcribe is not that long, e.g. in the Washington database, and the CLC typically overfits with this reduced number of training samples. Our proposal is based on transfer learning (TL) from the parameters learned with a bigger database. We first investigate, for a reduced and fixed number of training samples (350 lines), how the learning from a large database, the IAM, can be transferred to the learning of the CLC on a reduced database, Washington. We focus on which layers of the network need not be re-trained, and conclude that the best solution is to re-train all CLC parameters, initialized to the values obtained after training the CLC on the larger database. We also investigate results when the training size is further reduced. The differences in CER are most remarkable when training with just 350 lines: a CER of 3.3% is achieved with TL, compared to 18.2% when training from scratch. As a byproduct, training times are considerably reduced. Similarly good results are obtained on the Parzival database when trained with this reduced number of lines and this new approach.
Comment: ICFHR 2018 Conference
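As a rough illustration of the re-training strategy the abstract settles on, the PyTorch sketch below initializes a small CNN-LSTM-CTC model from a checkpoint trained on the larger database and then fine-tunes every parameter on the smaller one. The architecture, layer sizes, checkpoint path and hyperparameters are placeholder assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class CLC(nn.Module):
    """Toy CNN-LSTM-CTC model for line images of height 64 (assumed size)."""
    def __init__(self, num_classes):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.lstm = nn.LSTM(input_size=64 * 16, hidden_size=256,
                            num_layers=2, bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * 256, num_classes)        # CTC output layer

    def forward(self, x):                                # x: (B, 1, 64, W)
        f = self.cnn(x)                                  # (B, C, H', W')
        b, c, h, w = f.shape
        f = f.permute(0, 3, 1, 2).reshape(b, w, c * h)   # one feature vector per image column
        out, _ = self.lstm(f)
        return self.fc(out)                              # per-column logits for CTC loss

# Transfer learning: initialize from the model trained on the large database
# (e.g. IAM) and re-train *all* parameters on the small one (e.g. Washington).
model = CLC(num_classes=80)
state = torch.load("clc_iam_pretrained.pt")              # hypothetical checkpoint path
own = model.state_dict()
# Copy only tensors whose names and shapes match; the output layer may differ
# if the character sets of the two databases are not identical.
own.update({k: v for k, v in state.items() if k in own and v.shape == own[k].shape})
model.load_state_dict(own)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # all layers trainable
ctc_loss = nn.CTCLoss(blank=0, zero_infinity=True)
```

Freezing the convolutional layers instead would only require setting `requires_grad = False` on `model.cnn.parameters()`, which is the kind of layer-wise variant the study compares against full re-training.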