ASR Error Detection via Audio-Transcript Entailment
Despite the improved performance of the latest Automatic Speech Recognition
(ASR) systems, transcription errors remain unavoidable. These errors can
have a considerable impact in critical domains such as healthcare, where ASR is
used to assist clinical documentation. Detecting ASR errors is therefore a critical
first step in preventing further error propagation to downstream applications.
To this end, we propose a novel end-to-end approach for ASR error detection
using audio-transcript entailment. To the best of our knowledge, we are the
first to frame this problem as an end-to-end entailment task between the audio
segment and its corresponding transcript segment. Our intuition is that there
should be a bidirectional entailment between the audio and the transcript when there is
no recognition error, and that this entailment breaks down when an error occurs. The proposed model utilizes an acoustic
encoder and a linguistic encoder to model the speech and transcript
respectively. The encoded representations of both modalities are fused to
predict the entailment. Since doctor-patient conversations are used in our
experiments, a particular emphasis is placed on medical terms. Our proposed
model achieves classification error rates (CER) of 26.2% on all transcription
errors and 23% on medical errors specifically, improving upon a strong
baseline by 12% and 15.4%, respectively.
Comment: Accepted to Interspeech 202
Adapting an Unadaptable ASR System
As speech recognition model sizes and training data requirements grow, it is
increasingly common for systems to be available only via APIs from online
service providers rather than through direct access to the models themselves. In
this scenario it is challenging to adapt systems to a specific target domain.
To address this problem we consider the recently released OpenAI Whisper ASR as
an example of a large-scale ASR system to assess adaptation methods. An error
correction based approach is adopted, as this does not require access to the
model, but can be trained from either 1-best or N-best outputs that are
normally available via the ASR API. LibriSpeech is used as the primary target
domain for adaptation. The generalization ability of the system is then evaluated
along two distinct dimensions: first, whether the form of the correction model is
portable to other speech recognition domains, and second, whether it can be
used with ASR models of a different architecture.
Comment: submitted to INTERSPEECH
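An API-level error-correction model of the kind described here could be realized, for example, by fine-tuning a sequence-to-sequence model to map ASR hypotheses to reference transcripts. The sketch below uses a T5-style model on 1-best output; the model choice, prefix, and training pair are hypothetical and not taken from the paper.

```python
# Sketch of training a text-only correction model on 1-best ASR hypotheses,
# assuming a T5-style seq2seq model. The paper's exact correction architecture
# and its handling of N-best lists may differ.
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# Hypothetical training pair: ASR 1-best hypothesis -> reference transcript.
hypothesis = "correct asr: the cat sat on the matt"
reference = "the cat sat on the mat"

inputs = tokenizer(hypothesis, return_tensors="pt")
labels = tokenizer(reference, return_tensors="pt").input_ids

# One gradient step on the correction objective.
loss = model(**inputs, labels=labels).loss
loss.backward()
optimizer.step()

# At inference time the corrector rewrites new API outputs,
# requiring no access to the underlying ASR model.
corrected = model.generate(
    **tokenizer("correct asr: she red the book", return_tensors="pt")
)
print(tokenizer.decode(corrected[0], skip_special_tokens=True))
```

The appeal of this design is that it treats the ASR system as a black box: only its transcripts are needed, so the same corrector can in principle be retrained for a new domain or reused with a different upstream recognizer.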