Masked Language Model Scoring
Pretrained masked language models (MLMs) require finetuning for most NLP
tasks. Instead, we evaluate MLMs out of the box via their pseudo-log-likelihood
scores (PLLs), which are computed by masking tokens one by one. We show that
PLLs outperform scores from autoregressive language models like GPT-2 in a
variety of tasks. By rescoring ASR and NMT hypotheses, RoBERTa reduces an
end-to-end LibriSpeech model's WER by 30% relative and adds up to +1.7 BLEU on
state-of-the-art baselines for low-resource translation pairs, with further
gains from domain adaptation. We attribute this success to PLL's unsupervised
expression of linguistic acceptability without a left-to-right bias, greatly
improving on scores from GPT-2 (+10 points on island effects, NPI licensing in
BLiMP). One can finetune MLMs to give scores without masking, enabling
computation in a single inference pass. In all, PLLs and their associated
pseudo-perplexities (PPPLs) enable plug-and-play use of the growing number of
pretrained MLMs; e.g., we use a single cross-lingual model to rescore
translations in multiple languages. We release our library for language model
scoring at https://github.com/awslabs/mlm-scoring.
Comment: ACL 2020 camera-ready (presented July 2020).
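To make the scoring procedure concrete, here is a minimal sketch of pseudo-log-likelihood computation with an off-the-shelf RoBERTa model via Hugging Face transformers. It illustrates the mask-one-token-at-a-time idea rather than the released mlm-scoring library's API; the model name and example sentences are placeholders.

```python
# Minimal sketch of pseudo-log-likelihood (PLL) scoring with a masked LM,
# using Hugging Face transformers rather than the released mlm-scoring library.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")
model.eval()

def pseudo_log_likelihood(sentence: str) -> float:
    """Sum of log P(token | rest of sentence), masking one position at a time."""
    enc = tokenizer(sentence, return_tensors="pt")
    input_ids = enc["input_ids"][0]
    total = 0.0
    with torch.no_grad():
        # Skip the special tokens at positions 0 and -1 (<s>, </s>).
        for i in range(1, input_ids.size(0) - 1):
            masked = input_ids.clone()
            masked[i] = tokenizer.mask_token_id
            logits = model(masked.unsqueeze(0)).logits[0, i]
            log_probs = torch.log_softmax(logits, dim=-1)
            total += log_probs[input_ids[i]].item()
    return total

# Rescoring intuition: a higher PLL suggests a more acceptable hypothesis.
print(pseudo_log_likelihood("the cat sat on the mat"))
print(pseudo_log_likelihood("the cat sat on the mats on"))
```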
Server-side Rescoring of Spoken Entity-centric Knowledge Queries for Virtual Assistants
On-device Virtual Assistants (VAs) powered by Automatic Speech Recognition
(ASR) require effective knowledge integration for the challenging task of recognizing entity-rich queries. In this paper, we conduct an empirical study of modeling strategies for server-side rescoring of spoken information-domain queries using various categories of Language Models (LMs), including N-gram word LMs and sub-word neural LMs. We investigate the combination of on-device and server-side signals and demonstrate significant WER improvements of 23%-35% on several entity-centric query subpopulations when server-side LMs are integrated, compared to performing ASR on-device only. We also compare LMs trained on domain data with a GPT-3 variant offered by OpenAI as a baseline. Furthermore, we show that fusing multiple server-side LMs trained from scratch most effectively combines the complementary strengths of each model and integrates knowledge learned from domain-specific data into a VA ASR system.
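As a rough illustration of how on-device and server-side scores might be combined for N-best rescoring, here is a log-linear fusion sketch; the scorer names, weights, and function signature are assumptions, not the configuration studied in the paper.

```python
# Illustrative log-linear fusion of first-pass (on-device) scores with
# several server-side LM scores for N-best rescoring. The weights and
# scorer names are placeholders, not the paper's configuration.
from typing import Callable, Dict, List

def rescore_nbest(
    hypotheses: List[str],
    first_pass_scores: List[float],                  # e.g., on-device AM+LM log scores
    server_lms: Dict[str, Callable[[str], float]],   # name -> log-prob scorer
    weights: Dict[str, float],                       # interpolation weight per scorer
) -> str:
    """Return the hypothesis with the best fused score."""
    best_hyp, best_score = None, float("-inf")
    for hyp, fp in zip(hypotheses, first_pass_scores):
        score = weights.get("first_pass", 1.0) * fp
        for name, scorer in server_lms.items():
            score += weights.get(name, 0.0) * scorer(hyp)
        if score > best_score:
            best_hyp, best_score = hyp, score
    return best_hyp
```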
Low-rank Adaptation of Large Language Model Rescoring for Parameter-Efficient Speech Recognition
We propose a neural language modeling system based on low-rank adaptation
(LoRA) for speech recognition output rescoring. Although pretrained language
models (LMs) like BERT have shown superior performance in second-pass
rescoring, the high computational cost of scaling up the pretraining stage and
adapting the pretrained models to specific domains limits their practical use in
rescoring. Here we present a method based on low-rank decomposition to train a
rescoring BERT model and adapt it to new domains using only a fraction (0.08%)
of the pretrained parameters. These inserted matrices are optimized through a
discriminative training objective along with a correlation-based regularization
loss. The proposed low-rank adaptation Rescore-BERT (LoRB) architecture is
evaluated on LibriSpeech and internal datasets, with training times reduced by factors between 3.6 and 5.4.
Comment: Accepted to IEEE ASRU 2023. 8 pages.
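A minimal sketch of the low-rank adaptation idea follows, assuming the Hugging Face peft library and a BERT-style hypothesis scorer. The rank, target modules, and the simplified MWER-style objective are illustrative choices, and the correlation-based regularization term is omitted.

```python
# Sketch: wrapping a BERT-style rescoring model with LoRA adapters via `peft`.
# Only the inserted low-rank matrices are trained; the base weights stay frozen.
import torch
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, get_peft_model

base = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=1  # one score per hypothesis (placeholder model)
)
lora_cfg = LoraConfig(r=8, lora_alpha=16, target_modules=["query", "value"])
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # prints the small trainable fraction

def mwer_like_loss(scores: torch.Tensor, word_errors: torch.Tensor) -> torch.Tensor:
    """Expected relative word errors over an N-best list under softmax-normalized scores."""
    probs = torch.softmax(scores, dim=-1)                      # (batch, n_best)
    relative = word_errors - word_errors.mean(dim=-1, keepdim=True)
    return (probs * relative).sum(dim=-1).mean()
```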
Scaling Laws for Discriminative Speech Recognition Rescoring Models
Recent studies have found that model performance has a smooth power-law
relationship, or scaling laws, with training data and model size, for a wide
range of problems. These scaling laws allow one to choose nearly optimal data
and model sizes. We study whether this scaling property is also applicable to
second-pass rescoring, which is an important component of speech recognition
systems. We focus on RescoreBERT as the rescoring model, which uses a
pre-trained Transformer-based architecture fine-tuned with an ASR
discriminative loss. Using such a rescoring model, we show that the word error
rate (WER) follows a scaling law for over two orders of magnitude as training
data and model size increase. In addition, we find that a pre-trained model requires less data than a randomly initialized model of the same size, the difference representing effective data transferred from the pre-training step. This effective data transferred is also found to follow a scaling law with data and model size.
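For intuition, a power law of the form WER(N) ≈ a·N^(-b) can be fit with a straight line in log-log space; the data points below are made up purely for illustration.

```python
# Sketch: fitting a power law WER(N) ≈ a * N**(-b) to illustrative data points
# via a straight-line fit in log-log space. The numbers are hypothetical.
import numpy as np

data_sizes = np.array([1e5, 1e6, 1e7, 1e8])   # training tokens (hypothetical)
wers = np.array([12.0, 9.5, 7.6, 6.1])        # measured WER (hypothetical)

# log(WER) = log(a) - b * log(N)
slope, intercept = np.polyfit(np.log(data_sizes), np.log(wers), deg=1)
a, b = np.exp(intercept), -slope
print(f"WER ≈ {a:.2f} * N^(-{b:.3f})")

# Extrapolate only within the regime where the law was observed to hold.
print("predicted WER at 1e9 tokens:", a * (1e9) ** (-b))
```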
Discriminative Speech Recognition Rescoring with Pre-trained Language Models
Second pass rescoring is a critical component of competitive automatic speech
recognition (ASR) systems. Large language models have demonstrated their ability to use pre-trained information for better rescoring of ASR hypotheses. Discriminative training, which directly optimizes the minimum word error rate (MWER) criterion, typically improves rescoring. In this study,
we propose and explore several discriminative fine-tuning schemes for
pre-trained LMs. We propose two architectures based on different pooling
strategies over output embeddings and compare them with probability-based MWER training. We
conduct detailed comparisons between pre-trained causal and bidirectional LMs
in discriminative settings. Experiments on LibriSpeech demonstrate that all
MWER training schemes are beneficial, giving additional gains of up to 8.5% WER.
Proposed pooling variants achieve lower latency while retaining most
improvements. Finally, our study concludes that bidirectionality is better
utilized with discriminative training.
Comment: ASRU 2023.
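To illustrate what the pooling variants might look like, here is a hedged sketch of a rescorer that maps either the [CLS] embedding or the mean of the output embeddings to a single hypothesis score; the class, model name, and linear head are assumptions, not the paper's exact architectures.

```python
# Two pooling strategies for turning a pre-trained LM's output embeddings
# into a single rescoring score per hypothesis (illustrative only).
import torch
import torch.nn as nn
from transformers import AutoModel

class PooledRescorer(nn.Module):
    def __init__(self, name: str = "bert-base-uncased", pooling: str = "cls"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(name)
        self.pooling = pooling
        self.head = nn.Linear(self.encoder.config.hidden_size, 1)

    def forward(self, input_ids, attention_mask):
        hidden = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        if self.pooling == "cls":
            pooled = hidden[:, 0]                           # [CLS] embedding
        else:                                               # mean over non-padding tokens
            mask = attention_mask.unsqueeze(-1).float()
            pooled = (hidden * mask).sum(1) / mask.sum(1).clamp(min=1e-9)
        return self.head(pooled).squeeze(-1)                # one score per hypothesis
```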
Personalization for BERT-based Discriminative Speech Recognition Rescoring
Recognition of personalized content remains a challenge in end-to-end speech
recognition. We explore three novel approaches that use personalized content in
a neural rescoring step to improve recognition: gazetteers, prompting, and a
cross-attention based encoder-decoder model. We use internal de-identified
en-US data from interactions with a virtual voice assistant supplemented with
personalized named entities to compare these approaches. On a test set with
personalized named entities, we show that each of these approaches improves word error rate (WER) by over 10% relative to a neural rescoring baseline. We also show that, on this test set, natural language prompts can improve WER by 7% without any training and with only a marginal loss in generalization. Overall, gazetteers were found to perform best, with a 10% improvement in WER, while also improving WER on a general test set by 1%.
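As a purely illustrative sketch of the prompting approach, one could prepend a natural-language list of a user's personalized entities before scoring each hypothesis with any LM-based scorer; the prompt template and function names here are assumptions, not the paper's method.

```python
# Illustrative only: bias hypothesis scoring toward a user's personalized
# entities by prepending a natural-language prompt before scoring.
from typing import Callable, List

def prompted_score(
    hypothesis: str,
    personal_entities: List[str],
    lm_log_prob: Callable[[str], float],   # any LM scorer, e.g. a PLL function
) -> float:
    prompt = "The user often mentions: " + ", ".join(personal_entities) + ". "
    return lm_log_prob(prompt + hypothesis)

# Example (hypothetical): favor a contact name from the user's address book.
# score = prompted_score("call aditi sharma", ["Aditi Sharma"], pseudo_log_likelihood)
```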
Generative Speech Recognition Error Correction with Large Language Models and Task-Activating Prompting
We explore the ability of large language models (LLMs) to act as speech
recognition post-processors that perform rescoring and error correction. Our
first focus is on instruction prompting to let LLMs perform these tasks without fine-tuning, for which we evaluate different prompting schemes: zero- and few-shot in-context learning, and a novel task-activating prompting method that combines causal instructions and demonstrations to increase the effective context window.
Next, we show that rescoring only by in-context learning with frozen LLMs
achieves results that are competitive with rescoring by domain-tuned LMs, using
a pretrained first-pass recognition system and rescoring output on two
out-of-domain tasks (ATIS and WSJ). By combining prompting techniques with fine-tuning, we achieve error rates below the N-best oracle level, showcasing the generalization power of the LLMs.
Comment: Accepted to IEEE Automatic Speech Recognition and Understanding (ASRU) 2023. 8 pages.
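A minimal sketch of zero-shot error correction from an N-best list via an instruction prompt is shown below; the prompt wording, model name, and use of the OpenAI client are assumptions, and the paper's task-activating prompting is considerably more elaborate.

```python
# Sketch of zero-shot ASR error correction from an N-best list using an
# instruction prompt. Model name and prompt text are illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def correct_from_nbest(nbest: list[str]) -> str:
    numbered = "\n".join(f"{i + 1}. {h}" for i, h in enumerate(nbest))
    prompt = (
        "The following are candidate transcriptions of the same utterance, "
        "ranked by an ASR system. Produce the most likely correct transcription, "
        "fixing errors if needed. Output only the transcription.\n" + numbered
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip()
```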
HyPoradise: An Open Baseline for Generative Speech Recognition with Large Language Models
Advancements in deep neural networks have allowed automatic speech
recognition (ASR) systems to attain human parity on several publicly available
clean speech datasets. However, even state-of-the-art ASR systems experience
performance degradation when confronted with adverse conditions, as a
well-trained acoustic model is sensitive to variations in the speech domain,
e.g., background noise. Intuitively, humans address this issue by relying on
their linguistic knowledge: the meaning of ambiguous spoken terms is usually
inferred from contextual cues, thereby reducing the dependency on the auditory
system. Inspired by this observation, we introduce the first open-source
benchmark to utilize external large language models (LLMs) for ASR error
correction, where N-best decoding hypotheses provide informative elements for
true transcription prediction. This approach is a paradigm shift from the
traditional language model rescoring strategy that can only select one
candidate hypothesis as the output transcription. The proposed benchmark
contains a novel dataset, HyPoradise (HP), encompassing more than 334,000 pairs
of N-best hypotheses and corresponding accurate transcriptions across prevalent
speech domains. Given this dataset, we examine three types of LLM-based error correction techniques with varying amounts of labeled hypothesis-transcription pairs, which yield significant word error rate (WER) reductions. Experimental evidence demonstrates that the proposed techniques achieve a breakthrough by surpassing the upper bound of traditional re-ranking-based methods. More surprisingly, an LLM with a reasonable prompt and its generative capability can even correct tokens that are missing from the N-best list. We
make our results publicly accessible, with reproducible pipelines and released pre-trained models, thus providing a new evaluation paradigm for ASR error correction with LLMs.
Comment: Accepted to NeurIPS 2023 (Datasets and Benchmarks Track), 24 pages.
Added the first Mandarin and code-switching (zh-cn and en-us) results from
the LLM-based generative ASR error correction to Table 8 on Page 2
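To compare a correction method against the N-best oracle bound that this benchmark emphasizes, one can compute corpus-level WER as sketched below; the (hypotheses, reference) pair layout mirrors the dataset description but is an assumption, as is the correction function, and jiwer is used for WER.

```python
# Sketch: corrected-output WER versus the N-best oracle bound.
# `pairs` is assumed to be an iterable of (nbest_hypotheses, reference) tuples.
import jiwer

def oracle_wer(nbest: list[str], reference: str) -> float:
    """Best achievable WER by picking the single closest hypothesis."""
    return min(jiwer.wer(reference, hyp) for hyp in nbest)

def corpus_wer(pairs, correct_fn) -> tuple[float, float]:
    """Return (corrected WER, oracle WER) averaged over the corpus."""
    corrected, oracle = [], []
    for nbest, reference in pairs:
        corrected.append(jiwer.wer(reference, correct_fn(nbest)))
        oracle.append(oracle_wer(nbest, reference))
    n = len(corrected)
    return sum(corrected) / n, sum(oracle) / n
```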
- …