Sequential Recurrent Neural Networks for Language Modeling
Feedforward Neural Network (FNN)-based language models estimate the probability of the next word based on the history of the last N words, whereas Recurrent Neural Networks (RNNs) perform the same task based only on the last word and some context information that cycles in the network. This paper presents a novel approach that bridges the gap between these two categories of networks. In particular, we propose an architecture that takes advantage of the explicit, sequential enumeration of the word history in the FNN structure while enhancing each word representation at the projection layer through recurrent context information that evolves in the network. The context integration is performed using an additional word-dependent weight matrix that is also learned during training. Extensive experiments conducted on the Penn Treebank (PTB) and the Large Text Compression Benchmark (LTCB) corpora showed a significant reduction in perplexity compared to state-of-the-art feedforward as well as recurrent neural network architectures.
Comment: published at INTERSPEECH 2016; 5 pages, 3 figures, 4 tables
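A minimal PyTorch sketch of the idea described in this abstract: an N-gram-style feedforward LM whose projection-layer embeddings are enhanced with a recurrent context vector. The per-word gate standing in for the word-dependent weight matrix, and all names and sizes, are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class SequentialRNNLM(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256, n_history=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Hypothetical stand-in for the word-dependent weights: a per-word
        # gate vector controlling how much recurrent context is mixed into
        # each position's embedding.
        self.context_gate = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.RNNCell(emb_dim, emb_dim)   # evolving context state
        self.hidden = nn.Linear(n_history * emb_dim, hid_dim)
        self.out = nn.Linear(hid_dim, vocab_size)
        self.n_history = n_history
        self.emb_dim = emb_dim

    def forward(self, history, context=None):
        # history: (batch, n_history) word ids, oldest word first
        if context is None:
            context = torch.zeros(history.size(0), self.emb_dim,
                                  device=history.device)
        enhanced = []
        for t in range(self.n_history):
            emb = self.embed(history[:, t])
            context = self.rnn(emb, context)        # context cycles word by word
            gate = torch.sigmoid(self.context_gate(history[:, t]))
            enhanced.append(emb + gate * context)   # context-enhanced projection
        h = torch.tanh(self.hidden(torch.cat(enhanced, dim=-1)))
        return self.out(h), context                 # next-word logits, new context

model = SequentialRNNLM(vocab_size=10000)
logits, ctx = model(torch.randint(0, 10000, (8, 4)))  # batch of 8 histories
```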
A Neural Network Approach for Mixing Language Models
The performance of Neural Network (NN)-based language models is steadily improving due to the emergence of new architectures that are able to learn different natural language characteristics. This paper presents a novel framework which shows that a significant improvement can be achieved by combining different existing heterogeneous models in a single architecture. This is done through 1) a feature layer, which separately learns different NN-based models, and 2) a mixture layer, which merges the resulting model features. In doing so, this architecture benefits from the learning capabilities of each model with no noticeable increase in the number of model parameters or the training time. Extensive experiments conducted on the Penn Treebank (PTB) and the Large Text Compression Benchmark (LTCB) corpora showed a significant reduction in perplexity compared to state-of-the-art feedforward as well as recurrent neural network architectures.
Comment: published at the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2017. arXiv admin note: text overlap with arXiv:1703.0806
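A hedged sketch of the two-layer design this abstract describes: several heterogeneous "feature" sub-models produce per-history features that a mixture layer merges before a shared softmax. The choice of branches (an FNN branch and an LSTM branch) and all sizes are assumptions for illustration.

```python
import torch
import torch.nn as nn

class MixtureLM(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, feat_dim=256, n_history=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Feature layer: two heterogeneous sub-models over the same history.
        self.fnn_branch = nn.Linear(n_history * emb_dim, feat_dim)
        self.rnn_branch = nn.LSTM(emb_dim, feat_dim, batch_first=True)
        # Mixture layer: merges the branch features with a learned linear map.
        self.mix = nn.Linear(2 * feat_dim, feat_dim)
        self.out = nn.Linear(feat_dim, vocab_size)

    def forward(self, history):
        # history: (batch, n_history) word ids
        e = self.embed(history)                            # (B, N, E)
        f_fnn = torch.tanh(self.fnn_branch(e.flatten(1)))  # FNN features
        rnn_out, _ = self.rnn_branch(e)
        f_rnn = rnn_out[:, -1]                             # last-step LSTM features
        mixed = torch.tanh(self.mix(torch.cat([f_fnn, f_rnn], dim=-1)))
        return self.out(mixed)                             # next-word logits

model = MixtureLM(vocab_size=10000)
logits = model(torch.randint(0, 10000, (8, 4)))
```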
Initialization of ReLUs for Dynamical Isometry
Deep learning relies on good initialization schemes and hyperparameter
choices prior to training a neural network. Random weight initializations
induce random network ensembles, which give rise to the trainability, training
speed, and sometimes also the generalization ability of an instance. In addition,
such ensembles provide theoretical insights into the space of candidate models
of which one is selected during training. The results obtained so far rely on
mean field approximations that assume infinite layer width and that study
average squared signals. We derive the joint signal output distribution
exactly, without mean field assumptions, for fully-connected networks with
Gaussian weights and biases, and analyze deviations from the mean field
results. For rectified linear units, we further discuss limitations of the standard initialization scheme, such as its lack of dynamical isometry, and propose a simple alternative that overcomes these limitations through initial parameter sharing.
Comment: NeurIPS 2019
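One concrete way initial parameter sharing can give a ReLU network exact dynamical isometry is the mirrored ("looks-linear") construction sketched below; whether this matches the paper's exact scheme is an assumption. Each first-layer neuron is paired with a sign-flipped twin, and the next layer recombines the pair via relu(u) - relu(-u) = u, so the network computes an exact orthogonal map at initialization.

```python
import torch
import torch.nn as nn

d = 8
B = torch.empty(d, d)
nn.init.orthogonal_(B)  # orthogonal base weights preserve signal norms

# First layer duplicates each neuron with its sign-flipped twin.
W1 = torch.cat([B, -B], dim=0)          # shape (2d, d)
# Second layer recombines each pair: relu(u) - relu(-u) = u.
W2 = torch.cat([B.t(), -B.t()], dim=1)  # shape (d, 2d)

x = torch.randn(5, d)
h = torch.relu(x @ W1.t())              # ReLU hidden layer
y = h @ W2.t()                          # y = B^T(relu(Bx) - relu(-Bx)) = x
print(torch.allclose(y, x, atol=1e-5))  # True: exact isometry at init
```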
TextGAIL: Generative Adversarial Imitation Learning for Text Generation
Generative Adversarial Networks (GANs) for text generation have recently received much criticism, as they perform worse than their MLE counterparts. We suspect the inferior performance of previous text GANs is due to the lack of a reliable guiding signal in their discriminators. To address this problem, we propose TextGAIL, a generative adversarial imitation learning framework for text generation that uses large pre-trained language models to provide more reliable reward guidance. Our approach uses a contrastive discriminator and proximal policy optimization (PPO) to stabilize and improve text generation performance. For evaluation, we conduct experiments on a diverse set of unconditional and conditional text generation tasks. Experimental results show that TextGAIL achieves better performance in terms of both quality and diversity than the MLE baseline. We also validate our intuition that TextGAIL's discriminator demonstrates the capability of providing reasonable rewards with an additional task.
Comment: AAAI 2021
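A toy sketch of the training signal described above: discriminator scores for (prompt, continuation) pairs serve as sequence-level rewards, and the generator is updated with PPO's clipped objective. TextGAIL itself uses large pre-trained LMs and a contrastive discriminator; the small tensors here only illustrate the update rule, and all values are dummy assumptions.

```python
import torch

def ppo_clipped_loss(logp_new, logp_old, reward, clip_eps=0.2):
    # Sequence-level importance ratio between current and rollout policies.
    ratio = torch.exp(logp_new - logp_old)
    # Centered reward plays the role of the advantage estimate.
    advantage = reward - reward.mean()
    unclipped = ratio * advantage
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantage
    # PPO maximizes the clipped surrogate, so the loss is its negation.
    return -torch.min(unclipped, clipped).mean()

# Dummy usage: pretend log-probs come from a generator LM and rewards are the
# discriminator's probability that each continuation is human-written.
logp_old = torch.tensor([-12.3, -9.1, -15.0])            # frozen rollout policy
logp_new = (logp_old + 0.1 * torch.randn(3)).requires_grad_(True)
reward = torch.tensor([0.7, 0.2, 0.9])                   # discriminator scores
loss = ppo_clipped_loss(logp_new, logp_old, reward)
loss.backward()                                          # gradient for the generator
```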