4 research outputs found
Character-Level Language Modeling with Deeper Self-Attention
LSTMs and other RNN variants have shown strong performance on character-level
language modeling. These models are typically trained using truncated
backpropagation through time, and it is common to assume that their success
stems from their ability to remember long-term contexts. In this paper, we show
that a deep (64-layer) transformer model with fixed context outperforms RNN
variants by a large margin, achieving state of the art on two popular
benchmarks: 1.13 bits per character on text8 and 1.06 on enwik8. To get good
results at this depth, we show that it is important to add auxiliary losses,
both at intermediate network layers and intermediate sequence positions.
Comment: 8 pages, 7 figures
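
The abstract describes two kinds of auxiliary losses: prediction heads attached to intermediate layers, and a next-character loss applied at every sequence position rather than only the last. Below is a minimal, hedged sketch of that idea in PyTorch; the layer count, dimensions, head placement (every 4th layer), and the 0.5 auxiliary weight are illustrative placeholders, not the paper's exact recipe or schedule.

# Illustrative sketch only; hyperparameters and head placement are assumptions,
# not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CharTransformerLM(nn.Module):
    def __init__(self, vocab_size=256, d_model=512, n_heads=8,
                 n_layers=12, context_len=512, aux_every=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Parameter(torch.zeros(context_len, d_model))
        self.layers = nn.ModuleList([
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
            for _ in range(n_layers)
        ])
        # A softmax head on the final layer plus heads on some intermediate
        # layers, each contributing an auxiliary loss.
        self.heads = nn.ModuleDict({
            str(i): nn.Linear(d_model, vocab_size)
            for i in range(n_layers)
            if (i + 1) % aux_every == 0 or i == n_layers - 1
        })
        self.n_layers = n_layers

    def forward(self, x):
        # x: (batch, seq) character ids; the causal mask enforces the
        # fixed, left-to-right context window.
        seq = x.size(1)
        mask = torch.triu(torch.full((seq, seq), float('-inf'),
                                     device=x.device), diagonal=1)
        h = self.embed(x) + self.pos[:seq]
        logits_per_head = {}
        for i, layer in enumerate(self.layers):
            h = layer(h, src_mask=mask)
            if str(i) in self.heads:
                logits_per_head[i] = self.heads[str(i)](h)
        return logits_per_head

def loss_fn(logits_per_head, targets, n_layers=12, aux_weight=0.5):
    # Cross-entropy is computed at every sequence position for every head;
    # intermediate-layer heads are down-weighted relative to the final head.
    total = 0.0
    for i, logits in logits_per_head.items():
        ce = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                             targets.reshape(-1))
        total = total + (1.0 if i == n_layers - 1 else aux_weight) * ce
    return total

Training would minimize loss_fn on (input, shifted-target) character pairs; at evaluation time only the final layer's head is needed, which matches the general pattern of auxiliary losses used purely as a training aid.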
English Machine Reading Comprehension Datasets: A Survey
This paper surveys 60 English Machine Reading Comprehension datasets, with a
view to providing a convenient resource for other researchers interested in
this problem. We categorize the datasets according to their question and answer
form and compare them across various dimensions including size, vocabulary,
data source, method of creation, human performance level, and first question
word. Our analysis reveals that Wikipedia is by far the most common data source
and that there is a relative lack of why, when, and where questions across
datasets.
Comment: Will appear at EMNLP 2021. Dataset survey paper: 9 pages, 5 figures, 2 tables + attachment