FastCorrect: Fast Error Correction with Edit Alignment for Automatic Speech Recognition
Error correction techniques have been used to refine the output sentences
from automatic speech recognition (ASR) models and achieve a lower word error
rate (WER) than original ASR outputs. Previous works usually use a
sequence-to-sequence model to correct an ASR output sentence autoregressively,
which causes large latency and cannot be deployed in online ASR services. A
straightforward solution to reduce latency, inspired by non-autoregressive
(NAR) neural machine translation, is to use an NAR sequence generation model
for ASR error correction, which, however, comes at the cost of significantly
increased ASR error rate. In this paper, observing distinctive error patterns
and correction operations (i.e., insertion, deletion, and substitution) in ASR,
we propose FastCorrect, a novel NAR error correction model based on edit
alignment. In training, FastCorrect aligns each source token from an ASR output
sentence to the target tokens from the corresponding ground-truth sentence
based on the edit distance between the source and target sentences, and
extracts the number of target tokens corresponding to each source token during
editing/correction, which is then used to train a length predictor and to
adjust the source tokens to match the length of the target sentence for
parallel generation. In inference, the token number predicted by the length
predictor is used to adjust the source tokens for target sequence generation.
Experiments on the public AISHELL-1 dataset and an internal industrial-scale
ASR dataset show the effectiveness of FastCorrect for ASR error correction: 1)
it speeds up the inference by 6-9 times and maintains the accuracy (8-14% WER
reduction) compared with the autoregressive correction model; and 2) it
outperforms the popular NAR models adopted in neural machine translation and
text editing by a large margin.
Comment: NeurIPS 2021. Code URL: https://github.com/microsoft/NeuralSpeec
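The edit alignment described in the abstract can be sketched roughly as follows: compute the edit distance between the ASR hypothesis and the ground-truth sentence, backtrace one optimal path, and count how many target tokens each source token maps to (these counts supervise the length predictor). This is a simplified sketch only; the paper's actual method additionally disambiguates between multiple optimal edit paths (e.g., using token n-gram statistics), which is omitted here.

```python
def edit_alignment(src, tgt):
    """For each source token, return the number of target tokens it
    corresponds to under one minimum-edit-distance alignment."""
    n, m = len(src), len(tgt)
    # Standard Levenshtein DP table.
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = i
    for j in range(m + 1):
        dp[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if src[i - 1] == tgt[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # delete source token
                           dp[i][j - 1] + 1,        # insert target token
                           dp[i - 1][j - 1] + cost) # keep / substitute

    # Backtrace one optimal path, counting target tokens per source token.
    durations = [0] * n
    i, j = n, m
    while i > 0 or j > 0:
        if (i > 0 and j > 0 and
                dp[i][j] == dp[i - 1][j - 1]
                + (0 if src[i - 1] == tgt[j - 1] else 1)):
            durations[i - 1] += 1        # match or substitution: 1 target token
            i, j = i - 1, j - 1
        elif i > 0 and dp[i][j] == dp[i - 1][j] + 1:
            i -= 1                       # deletion: source maps to 0 tokens
        else:
            durations[max(i - 1, 0)] += 1  # insertion: attach to left neighbour
            j -= 1
    return durations
```

A duration of 0 tells the model to drop a source token, and a duration of 2 tells it to expand one source token into two target positions, so the adjusted source matches the target length for parallel (non-autoregressive) generation. In inference, a trained length predictor supplies these counts instead of the ground truth.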
Toward Practical Automatic Speech Recognition and Post-Processing: a Call for Explainable Error Benchmark Guideline
Automatic speech recognition (ASR) outcomes serve as input for downstream
tasks, substantially impacting the satisfaction level of end-users. Hence, the
diagnosis and enhancement of the vulnerabilities present in the ASR model bear
significant importance. However, traditional evaluation methodologies of ASR
systems generate a singular, composite quantitative metric, which fails to
provide comprehensive insight into specific vulnerabilities. This lack of
detail extends to the post-processing stage, resulting in further obfuscation
of potential weaknesses. Despite an ASR model's ability to recognize utterances
accurately, subpar readability can negatively affect user satisfaction, giving
rise to a trade-off between recognition accuracy and user-friendliness. To
effectively address this, it is imperative to consider both the speech-level,
crucial for recognition accuracy, and the text-level, critical for
user-friendliness. Consequently, we propose the development of an Error
Explainable Benchmark (EEB) dataset. This dataset, while considering both
speech- and text-level, enables a granular understanding of the model's
shortcomings. Our proposition provides a structured pathway for a more
`real-world-centric' evaluation, a marked shift away from abstracted,
traditional methods, allowing for the detection and rectification of nuanced
system weaknesses, ultimately aiming for an improved user experience.
Comment: Accepted for Data-centric Machine Learning Research (DMLR) Workshop
at ICML 202
Speakerly: A Voice-based Writing Assistant for Text Composition
We present Speakerly, a new real-time voice-based writing assistance system
that helps users with text composition across various use cases such as emails,
instant messages, and notes. The user can interact with the system through
instructions or dictation, and the system generates a well-formatted and
coherent document. We describe the system architecture and detail how we
address the various challenges while building and deploying such a system at
scale. More specifically, our system uses a combination of small, task-specific
models as well as pre-trained language models for fast and effective text
composition while supporting a variety of input modes for better usability.
Comment: Accepted at EMNLP 2023 Industry Trac