Exploring the Integration of Large Language Models into Automatic Speech Recognition Systems: An Empirical Study
This paper explores the integration of Large Language Models (LLMs) into
Automatic Speech Recognition (ASR) systems to improve transcription accuracy.
The increasing sophistication of LLMs, with their in-context learning
capabilities and instruction-following behavior, has drawn significant
attention in the field of Natural Language Processing (NLP). Our primary focus
is to investigate the potential of using an LLM's in-context learning
capabilities to enhance the performance of ASR systems, which currently face
challenges such as ambient noise, speaker accents, and complex linguistic
contexts. We designed a study using the Aishell-1 and LibriSpeech datasets,
with ChatGPT and GPT-4 serving as benchmarks for LLM capabilities.
Unfortunately, our initial experiments did not yield promising results,
indicating the difficulty of leveraging LLMs' in-context learning for ASR
applications. Despite further exploration with varied settings and models, the
corrected sentences from the LLMs frequently resulted in higher Word Error
Rates (WER), demonstrating the limitations of LLMs in speech applications. This
paper provides a detailed overview of these experiments, their results, and
implications, establishing that using LLMs' in-context learning capabilities to
correct potential errors in speech recognition transcriptions remains a
challenging task at the current stage.
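The abstract's central metric is Word Error Rate. As a point of reference, WER is the word-level edit distance (substitutions + insertions + deletions) between a hypothesis and a reference transcript, divided by the reference length. A minimal sketch (not the paper's evaluation code; function name and implementation are illustrative):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level edit distance / reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)
```

An LLM "correction" that paraphrases rather than fixes the transcript adds substitutions and insertions, which is exactly how it can *raise* WER even when the output reads fluently.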
Robust audiovisual speech recognition using noise-adaptive linear discriminant analysis
© 2016 IEEE. Automatic speech recognition (ASR) has become a widespread and convenient mode of human-machine interaction, but it is still not sufficiently reliable under highly noisy or reverberant conditions. One option for achieving far greater robustness is to include another modality that is unaffected by acoustic noise, such as video information. Currently the most successful approaches for such audiovisual ASR systems, coupled hidden Markov models (HMMs) and turbo decoding, both allow for slight asynchrony between audio and video features, and significantly improve recognition rates in this way. However, both typically still neglect residual errors in the estimation of audio features, so-called observation uncertainties. This paper compares two strategies for adding these observation uncertainties into the decoder, and shows that significant recognition rate improvements are achievable for both coupled HMMs and turbo decoding.