Language Modeling with Deep Transformers
We explore deep autoregressive Transformer models in language modeling for
speech recognition. We focus on two aspects. First, we revisit Transformer
model configurations specifically for language modeling. We show that
well-configured Transformer models outperform our baseline models based on a
shallow stack of LSTM recurrent neural network layers. We carry out experiments
on the open-source LibriSpeech 960hr task, for both 200K vocabulary word-level
and 10K byte-pair encoding subword-level language modeling. We apply our
word-level models to conventional hybrid speech recognition by lattice
rescoring, and the subword-level models to attention-based encoder-decoder
models by shallow fusion. Second, we show that deep Transformer language models
do not require positional encoding. Positional encoding is an essential
augmentation for the self-attention mechanism, which is otherwise invariant to
sequence ordering. However, in an autoregressive setup, as is the case for
language modeling, the amount of information increases along the position
dimension, which is a positional signal in its own right. An analysis of the
attention weights
shows that deep autoregressive self-attention models can automatically make use
of such positional information. We find that removing the positional encoding
even slightly improves the performance of these models.
Comment: To appear in the proceedings of INTERSPEECH 2019
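
To make the positional-encoding observation concrete, here is a minimal sketch (in PyTorch; the class name and model sizes are invented, and this is not the authors' implementation) of an autoregressive Transformer language model with no positional encoding. The causal mask alone breaks the permutation invariance of self-attention, since the representation at step t can only depend on tokens up to t.

    import torch
    import torch.nn as nn

    class DecoderOnlyLM(nn.Module):
        # Autoregressive Transformer LM with NO positional encoding: the causal
        # mask means step t attends only to steps <= t, so order information is
        # implicitly available and can be exploited by a deep enough stack.
        def __init__(self, vocab_size, d_model=512, n_heads=8, n_layers=6):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, d_model)
            layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
            self.stack = nn.TransformerEncoder(layer, n_layers)
            self.out = nn.Linear(d_model, vocab_size)

        def forward(self, tokens):  # tokens: (batch, time) integer ids
            t = tokens.size(1)
            # standard causal mask: -inf above the diagonal, 0 elsewhere
            causal = torch.triu(torch.full((t, t), float("-inf")), diagonal=1)
            h = self.embed(tokens)  # note: no positional encoding is added here
            return self.out(self.stack(h, mask=causal))  # next-token logits

    logits = DecoderOnlyLM(vocab_size=10000)(torch.randint(0, 10000, (2, 16)))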
Enriching Rare Word Representations in Neural Language Models by Embedding Matrix Augmentation
Neural language models (NLMs) achieve strong generalization capability by
learning dense representations of words and using them to estimate probability
distributions. However, learning the representations of rare words is a
challenging problem that causes the NLM to produce unreliable probability
estimates. To address this problem, we propose a method to enrich the
representations of rare words in a pre-trained NLM and consequently improve its
probability estimation performance. The proposed method augments the word
embedding matrices of a pre-trained NLM while keeping all other parameters
unchanged.
Specifically, our method updates the embedding vectors of rare words using
embedding vectors of other semantically and syntactically similar words. To
evaluate the proposed method, we enrich the rare street names in the
pre-trained NLM and use it to rescore 100-best hypotheses output from the
Singapore English speech recognition system. The enriched NLM reduces the word
error rate by 6% relative and improves the recognition accuracy of the rare
words by 16% absolute as compared to the baseline NLM.
Comment: 5 pages, 2 figures, accepted to INTERSPEECH 2019
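
The core update is simple to sketch (a minimal illustration assuming a plain embedding matrix; the mixing weight alpha and the function name are hypothetical, and the paper's exact weighting scheme may differ): the rare word's vector is pulled toward the centroid of the vectors of its similar words, while all other model parameters stay frozen.

    import numpy as np

    def enrich_rare_embedding(emb, rare_id, similar_ids, alpha=0.5):
        # Pull a rare word's embedding toward the centroid of semantically and
        # syntactically similar words; alpha (illustrative) trades off the
        # original vector against the centroid.
        centroid = emb[similar_ids].mean(axis=0)
        emb[rare_id] = (1.0 - alpha) * emb[rare_id] + alpha * centroid
        return emb

    # e.g. enrich a rare street name using frequent, similar words as neighbors
    emb = np.random.randn(1000, 64).astype(np.float32)  # toy embedding matrix
    emb = enrich_rare_embedding(emb, rare_id=42, similar_ids=[7, 99, 310])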
Advances in All-Neural Speech Recognition
This paper advances the design of CTC-based all-neural (or end-to-end) speech
recognizers. We propose a novel symbol inventory, and a novel iterated-CTC
method in which a second system is used to transform a noisy initial output
into a cleaner version. We present a number of stabilization and initialization
methods we have found useful in training these networks. We evaluate our system
on the commonly used NIST 2000 conversational telephony test set, and
significantly exceed the previously published performance of similar systems,
both with and without the use of an external language model and decoding
technology.
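
For context on what such a recognizer emits, the first-pass output that iterated-CTC would then clean up can be sketched generically (this is textbook greedy CTC decoding, not the paper's system or its symbol inventory):

    import torch

    BLANK = 0  # conventional CTC blank index; the paper's inventory differs

    def ctc_greedy_decode(log_probs):
        # Greedy CTC decoding: take the best symbol per frame, collapse
        # consecutive repeats, then drop blanks. log_probs: (time, vocab).
        best = log_probs.argmax(dim=-1).tolist()
        out, prev = [], None
        for s in best:
            if s != prev and s != BLANK:
                out.append(s)
            prev = s
        return out

    frames = torch.randn(50, 30).log_softmax(-1)  # toy per-frame posteriors
    print(ctc_greedy_decode(frames))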
Optimization of supply diversity for the self-assembly of simple objects in two and three dimensions
The field of algorithmic self-assembly is concerned with the design and
analysis of self-assembly systems from a computational perspective, that is,
from the perspective of mathematical problems whose study may give insight into
the natural processes through which elementary objects self-assemble into more
complex ones. One of the main problems of algorithmic self-assembly is the
minimum tile set problem (MTSP), which asks for a collection of elementary
object types (called tiles) from which an object of a pre-established shape
can self-assemble. Such a collection is to be as concise as
possible, thus minimizing supply diversity, while satisfying a set of stringent
constraints having to do with the termination and other properties of the
self-assembly process from its tile types. We present a study of what we think
is the first practical approach to MTSP. Our study starts with the introduction
of an evolutionary heuristic to tackle MTSP and includes results from extensive
experimentation with the heuristic on the self-assembly of simple objects in
two and three dimensions. The heuristic we introduce combines classic elements
from the field of evolutionary computation with a problem-specific variant of
Pareto dominance into a multi-objective approach to MTSP.
Comment: Minor typos corrected
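
The multi-objective ingredient is standard Pareto dominance, which is easy to state concretely (a generic sketch, not the paper's problem-specific variant; the two objectives shown, tile-type count and a constraint-violation score, are illustrative):

    def dominates(a, b):
        # a Pareto-dominates b (minimization): no worse in every objective
        # and strictly better in at least one.
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_front(population):
        # Keep the non-dominated objective vectors, e.g. tuples of
        # (number of tile types, constraint-violation score).
        return [p for p in population
                if not any(dominates(q, p) for q in population if q != p)]

    print(pareto_front([(3, 0.1), (2, 0.4), (4, 0.0), (3, 0.2)]))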
On the efficient numerical solution of lattice systems with low-order couplings
We apply the Quasi Monte Carlo (QMC) and recursive numerical integration
methods to evaluate the Euclidean, discretized-time path integral for the
quantum mechanical anharmonic oscillator and a topological quantum mechanical
rotor model. For the anharmonic oscillator both methods outperform standard
Markov Chain Monte Carlo methods and show significantly improved error
scaling. For the quantum mechanical rotor, however, we could not find a
successful way of employing QMC. On the other hand, the recursive numerical
integration method works extremely well for this model and shows at least
exponentially fast error scaling.
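
As a toy illustration of the QMC side (a sketch with an invented lattice size, couplings, and integration box, not the paper's setup), the partition function of the discretized anharmonic oscillator can be estimated by averaging the Boltzmann weight exp(-S) over scrambled Sobol points mapped onto lattice paths:

    import numpy as np
    from scipy.stats import qmc

    # Discretized Euclidean action with periodic boundary conditions:
    # S(x) = sum_t [ (x_{t+1}-x_t)^2/(2a) + a*(0.5*mu2*x_t^2 + lam*x_t^4) ]
    N, a, mu2, lam, L = 8, 0.5, 1.0, 1.0, 4.0  # sites, spacing, couplings, box

    def action(x):  # x: (n_points, N)
        kin = ((np.roll(x, -1, axis=1) - x) ** 2).sum(axis=1) / (2 * a)
        pot = a * (0.5 * mu2 * x ** 2 + lam * x ** 4).sum(axis=1)
        return kin + pot

    sobol = qmc.Sobol(d=N, scramble=True, seed=0)
    u = sobol.random_base2(m=14)        # 2^14 low-discrepancy points in [0,1)^N
    x = (2 * u - 1) * L                 # map onto paths in [-L, L]^N
    Z = (2 * L) ** N * np.exp(-action(x)).mean()  # plain (Q)MC volume average
    print(Z)

This only illustrates the sampling scheme; the error-scaling comparison in the paper also requires the Markov Chain Monte Carlo and recursive-integration counterparts.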