Recurrent Neural Network Training with Dark Knowledge Transfer
Recurrent neural networks (RNNs), particularly long short-term memory (LSTM) networks,
have gained much attention in automatic speech recognition (ASR). Although some
success stories have been reported, training RNNs remains highly
challenging, especially with limited training data. Recent research found that
a well-trained model can be used as a teacher to train other child models, by
using the predictions generated by the teacher model as supervision. This
knowledge transfer learning approach has been employed to train simple neural networks with
a complex one, so that the final performance can reach a level that is
infeasible to obtain by regular training. In this paper, we employ the
knowledge transfer learning approach to train RNNs (specifically LSTMs) using a
deep neural network (DNN) model as the teacher. This differs from most
existing research on knowledge transfer learning, since the teacher (DNN)
is assumed to be weaker than the child (RNN); however, our experiments on an
ASR task showed that it works fairly well: without applying any tricks to the
learning scheme, this approach can train RNNs successfully even with limited
training data.
Comment: ICASSP 201
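The teacher–student supervision the abstract describes can be sketched as follows. This is a minimal illustration under assumed conventions, not the paper's implementation: the function names and the temperature parameter `T` (used to soften the teacher's predictions into "dark knowledge") are assumptions for the sketch.

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; higher T yields softer distributions,
    # exposing the teacher's "dark knowledge" about non-target classes.
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # Cross-entropy between the teacher's softened predictions (used as
    # supervision) and the student's softened predictions.
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    return -np.sum(p_teacher * np.log(p_student + 1e-12), axis=-1).mean()
```

In training, this loss (possibly mixed with the usual hard-label cross-entropy) would be minimized with respect to the student's parameters, so the student RNN learns to reproduce the teacher DNN's output distribution.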
On the Concepts of Subject, Object, Subjectivity and Objectivity 2
This article gives Prof. Zhiyong Dong’s own definitions of the concepts of Subject, Object, Subjectivity and Objectivity.
On the Concepts of Human Individual Practice and Human Collective Practice
This article gives Prof. Zhiyong Dong’s own definitions of Human Individual Practice and Human Collective Practice.
On The Concepts of Subject, Object, Subjectivity and Objectivity 1
This article gives Prof. Zhiyong Dong’s own definitions of the concepts of Subject, Object, Subjectivity and Objectivity.