Character-Level Language Modeling with Deeper Self-Attention
LSTMs and other RNN variants have shown strong performance on character-level
language modeling. These models are typically trained using truncated
backpropagation through time, and it is common to assume that their success
stems from their ability to remember long-term contexts. In this paper, we show
that a deep (64-layer) transformer model with fixed context outperforms RNN
variants by a large margin, achieving state of the art on two popular
benchmarks: 1.13 bits per character on text8 and 1.06 on enwik8. To get good
results at this depth, we show that it is important to add auxiliary losses,
both at intermediate network layers and intermediate sequence positions.
Comment: 8 pages, 7 figures
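The auxiliary-loss idea lends itself to a compact sketch. Below is a minimal PyTorch illustration of the two kinds of losses the abstract mentions, predictions from intermediate layers and predictions at every sequence position; the tap schedule, loss weights, and module names are our assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeepCharTransformer(nn.Module):
    def __init__(self, vocab_size=256, d_model=512, n_layers=64, n_heads=8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
            for _ in range(n_layers)
        )
        self.head = nn.Linear(d_model, vocab_size)  # shared prediction head
        self.aux_every = 16  # tap an auxiliary loss every 16 layers (assumption)

    def forward(self, x, targets):
        T = x.size(1)
        # Causal mask keeps the fixed-context model autoregressive.
        mask = torch.triu(torch.full((T, T), float("-inf")), diagonal=1)
        h, loss = self.embed(x), 0.0
        for i, layer in enumerate(self.layers):
            h = layer(h, src_mask=mask)
            is_last = i == len(self.layers) - 1
            if is_last or (i + 1) % self.aux_every == 0:
                # Each tapped layer predicts the next character at every
                # sequence position, not only the final one.
                logits = self.head(h)
                w = 1.0 if is_last else 0.5  # down-weight auxiliary terms
                loss = loss + w * F.cross_entropy(
                    logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
        return loss
```

The extra gradient paths from the tapped layers are what make training a 64-layer stack tractable; at inference only the final head is needed.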
Language Modeling with Deep Transformers
We explore deep autoregressive Transformer models in language modeling for
speech recognition. We focus on two aspects. First, we revisit Transformer
model configurations specifically for language modeling. We show that
well-configured Transformer models outperform our baseline models based on a
shallow stack of LSTM recurrent neural network layers. We carry out experiments
on the open-source LibriSpeech 960hr task, for both 200K vocabulary word-level
and 10K byte-pair encoding subword-level language modeling. We apply our
word-level models to conventional hybrid speech recognition by lattice
rescoring, and the subword-level models to attention based encoder-decoder
models by shallow fusion. Second, we show that deep Transformer language models
do not require positional encoding. Positional encoding is an essential
augmentation for the self-attention mechanism, which is otherwise invariant to
sequence ordering. However, in an autoregressive setup, as is the case for
language modeling, the amount of information increases along the position
dimension, which is itself a positional signal. The analysis of attention weights
shows that deep autoregressive self-attention models can automatically make use
of such positional information. We find that removing the positional encoding
even slightly improves the performance of these models.
Comment: To appear in the proceedings of INTERSPEECH 2019
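The claim that causal structure alone supplies a positional signal can be made concrete with a small sketch (PyTorch; the toy setup is our assumption, not the paper's code): with a causal mask and no positional encoding, repeated occurrences of the same token still get different representations, because each occurrence attends over a different-length prefix.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
d_model, n_heads, T = 32, 4, 5
attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
emb = nn.Embedding(10, d_model)

# Token 3 appears at positions 0, 2, and 4; no positional encoding is added.
x = emb(torch.tensor([[3, 5, 3, 7, 3]]))
causal = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
out, _ = attn(x, x, x, attn_mask=causal)

# Without the mask, every occurrence of token 3 would get the same output
# (self-attention is permutation invariant); with it, each occurrence sees
# a different prefix, so position leaks in through the mask alone.
print(torch.allclose(out[0, 0], out[0, 2]))  # False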
Adaptive Attention Span in Transformers
We propose a novel self-attention mechanism that can learn its optimal
attention span. This allows us to significantly extend the maximum context size
used in Transformers, while maintaining control over their memory footprint and
computational time. We show the effectiveness of our approach on the task of
character level language modeling, where we achieve state-of-the-art
performance on text8 and enwik8 using a maximum context of 8k characters.
Comment: Accepted to ACL 2019
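The core mechanism is a soft mask over attention weights whose extent is itself learned. The sketch below follows the commonly cited formulation of this approach; the ramp value and usage comments are illustrative assumptions.

```python
import torch

def span_mask(distances, z, ramp=32.0):
    # m_z(x) = clamp((ramp + z - x) / ramp, 0, 1): weight 1 within the learned
    # span z, fading linearly to 0 over `ramp` positions beyond it.
    return torch.clamp((ramp + z - distances) / ramp, 0.0, 1.0)

# One learnable span parameter per attention head; an L1 penalty on z during
# training keeps each head's span (and thus memory and compute) as short as
# the task allows, while heads that need long context can grow toward 8k.
z = torch.nn.Parameter(torch.tensor(100.0))
distances = torch.arange(8192, dtype=torch.float32)  # distance to each past token
weights = span_mask(distances, z)  # multiply into attention scores, renormalize
```

Because the mask is differentiable in z, span length is trained by ordinary backpropagation rather than treated as a fixed hyperparameter.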
What do Neural Machine Translation Models Learn about Morphology?
Neural machine translation (MT) models obtain state-of-the-art performance
while maintaining a simple, end-to-end architecture. However, little is known
about what these models learn about source and target languages during the
training process. In this work, we analyze the representations learned by
neural MT models at various levels of granularity and empirically evaluate the
quality of the representations for learning morphology through extrinsic
part-of-speech and morphological tagging tasks. We conduct a thorough
investigation along several parameters: word-based vs. character-based
representations, depth of the encoding layer, the identity of the target
language, and encoder vs. decoder representations. Our data-driven,
quantitative evaluation sheds light on important aspects of the neural MT
system and its ability to capture word structure.
Comment: Updated decoder experiments
SALSA-TEXT: Self-Attentive Latent Space Based Adversarial Text Generation
Inspired by the success of the self-attention mechanism and the Transformer
architecture in sequence transduction and image generation applications, we
propose novel self-attention-based architectures to improve the performance of
adversarial latent-code-based schemes in text generation. Adversarial latent
code-based text generation has recently gained attention due to its promising
results. In this paper, we take a step toward fortifying the architectures used
in these setups, specifically AAE and ARAE, and benchmark these two latent
code-based methods built on adversarial setups. In our experiments, the Google
sentence compression dataset is used to compare our method with these baselines
using various objective and subjective measures. The experiments demonstrate
that the proposed (self-)attention-based models outperform the state of the art
in adversarial code-based text generation.
Comment: 10 pages, 3 figures, under review at ICLR 2019
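To make the setup concrete: an AAE-style scheme autoencodes a sentence through a latent code while a discriminator pushes the code distribution toward a prior, and the paper's contribution is swapping self-attention into that pipeline. The sketch below is our illustrative assumption of such an architecture; module sizes, pooling, and names are not from the paper.

```python
import torch
import torch.nn as nn

class SelfAttnAAE(nn.Module):
    def __init__(self, vocab, d_model=256, z_dim=64, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, n_layers)
        self.to_z = nn.Linear(d_model, z_dim)    # sentence -> latent code
        self.critic = nn.Sequential(             # discriminator over codes
            nn.Linear(z_dim, 128), nn.ReLU(), nn.Linear(128, 1))

    def encode(self, tokens):
        h = self.encoder(self.embed(tokens))
        return self.to_z(h.mean(dim=1))          # mean-pool to a single code

# Adversarial regularization: the critic learns to separate encoded codes
# from samples of a prior (e.g. a standard normal); the encoder learns to
# fool it, matching the code distribution to the prior. A decoder (omitted)
# reconstructs the sentence from z.
```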
Game-Based Teaching Methodology and Empathy in Ethics Education
This article describes the experience of a group of educators participating in a graduate course in ethics. Role-playing games, and the work accompanying that play, were the predominant methodology employed in the course. An accompanying research study investigated the lived experiences of the course participants. Themes that emerged from interview data included student engagement, participants’ applications, empathy development, and reactions to professor modeling.