OxfordVGG Submission to the EGO4D AV Transcription Challenge
This report presents the technical details of our submission to the EGO4D
Audio-Visual (AV) Automatic Speech Recognition Challenge 2023 from the
OxfordVGG team. We present WhisperX, a system for efficient speech
transcription of long-form audio with word-level time alignment, along with two
publicly available text normalisers. Our final submission obtained a Word Error
Rate (WER) of 56.0% on the challenge test set, ranking 1st on the leaderboard.
All baseline code and models are available at
https://github.com/m-bain/whisperX. Comment: Technical Report
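As a point of reference for the metric reported above, the following is a minimal sketch of how Word Error Rate is computed as a word-level edit distance; the challenge pipeline additionally applies the released text normalisers before scoring, which this sketch omits.

```python
# Minimal sketch: Word Error Rate as word-level edit distance between a
# reference and a hypothesis transcript. Illustrative only; real scoring
# normalises the text first.

def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            # substitution, deletion, insertion
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

print(wer("take the pan off the hob", "take a pan of the hob"))  # 2/6 ≈ 0.33
```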
Look, listen and recognise: character-aware audio-visual subtitling
The goal of this paper is automatic character-aware subtitle generation. Given a video and a minimal amount of metadata, we propose an audio-visual method that generates a full transcript of the dialogue, with precise speech timestamps and the speaking character identified. The key idea is to first use audio-visual cues to select a set of high-precision audio exemplars for each character, and then use these exemplars to classify all speech segments by speaker identity. Notably, the method does not require face detection or tracking. We evaluate the method over a variety of TV sitcoms, including Seinfeld, Frasier and Scrubs. We envision this system being useful for the automatic generation of subtitles to improve the accessibility of the vast amount of videos available on modern streaming services. Project page: https://www.robots.ox.ac.uk/~vgg/research/look-listen-recognise
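The classification stage described above can be sketched as nearest-exemplar matching over speaker embeddings. The embedding model, exemplar counts, and decision rule below are illustrative assumptions rather than the paper's exact method.

```python
# Sketch: assign each speech segment to the character whose exemplar
# embeddings it is most similar to (cosine similarity).
import numpy as np

def classify_segments(segment_embs: np.ndarray,        # (num_segments, dim)
                      exemplars: dict[str, np.ndarray]  # character -> (num_exemplars, dim)
                      ) -> list[str]:
    def normalise(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)

    labels = []
    for seg in normalise(segment_embs):
        # score each character by the best similarity to any of its exemplars
        scores = {name: float((normalise(ex) @ seg).max())
                  for name, ex in exemplars.items()}
        labels.append(max(scores, key=scores.get))
    return labels

# toy usage with random embeddings in place of a real speaker encoder
rng = np.random.default_rng(0)
exemplars = {"Jerry": rng.normal(size=(5, 256)), "Elaine": rng.normal(size=(5, 256))}
segments = rng.normal(size=(10, 256))
print(classify_segments(segments, exemplars))
```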
WhisperX: Time-Accurate Speech Transcription of Long-Form Audio
Large-scale, weakly-supervised speech recognition models, such as Whisper,
have demonstrated impressive results on speech recognition across domains and
languages. However, their application to long audio transcription via buffered
or sliding window approaches is prone to drifting, hallucination and
repetition, and their sequential nature precludes batched transcription.
Further, the timestamps corresponding to each utterance are prone to
inaccuracies, and word-level timestamps are not available out-of-the-box. To
overcome these challenges, we present WhisperX, a time-accurate speech
recognition system with word-level timestamps utilising voice activity
detection and forced phoneme alignment. In doing so, we demonstrate
state-of-the-art performance on long-form transcription and word segmentation
benchmarks. Additionally, we show that pre-segmenting audio with our proposed
VAD Cut & Merge strategy improves transcription quality and enables a
twelve-fold transcription speedup via batched inference. Comment: Accepted to INTERSPEECH 2023
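The pre-segmentation idea can be illustrated with a simplified merge step: adjacent voice-activity segments are packed into chunks no longer than the ASR model's 30-second input window so that the chunks can be transcribed in parallel. This sketch omits the "Cut" half of the strategy (splitting over-long speech regions at low-activity points) and is not the paper's exact rule.

```python
# Sketch: merge VAD speech segments into chunks that fit the model's
# input window, cutting only in detected silence between segments.
# Assumes at least one segment, sorted by start time.

def merge_vad_segments(segments: list[tuple[float, float]],
                       max_chunk_s: float = 30.0) -> list[tuple[float, float]]:
    chunks = []
    cur_start, cur_end = segments[0]
    for start, end in segments[1:]:
        if end - cur_start <= max_chunk_s:
            cur_end = end                      # extend the current chunk
        else:
            chunks.append((cur_start, cur_end))  # close it and start a new one
            cur_start, cur_end = start, end
    chunks.append((cur_start, cur_end))
    return chunks

vad = [(0.0, 4.2), (5.1, 12.7), (13.0, 29.5), (31.0, 44.0)]
print(merge_vad_segments(vad))  # [(0.0, 29.5), (31.0, 44.0)]
```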
With a Little Help from my Temporal Context: Multimodal Egocentric Action Recognition
In egocentric videos, actions occur in quick succession. We capitalise on an
action's temporal context and propose a method that learns to attend to
surrounding actions in order to improve recognition performance. To incorporate
the temporal context, we propose a transformer-based multimodal model that
ingests video and audio as input modalities, with an explicit language model
providing action sequence context to enhance the predictions. We test our
approach on the EPIC-KITCHENS and EGTEA datasets, reporting state-of-the-art
performance. Our ablations showcase the advantage of utilising temporal
context, as well as of incorporating the audio modality and using a language
model to rescore predictions. Code and models at: https://github.com/ekazakos/MTCN. Comment: Accepted at BMVC 2021
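The rescoring idea can be sketched as a log-linear combination of the audio-visual model's scores with a sequence prior over actions. The bigram prior and interpolation weight below are illustrative assumptions; the paper uses a learned language model over action sequences.

```python
# Sketch: rescore per-action predictions with a prior conditioned on the
# previous action in the sequence.
import numpy as np

def rescore(av_log_probs: np.ndarray,         # (num_classes,) log p(action | video, audio)
            prev_action: int,
            lm_log_probs: np.ndarray,         # (num_classes, num_classes) log p(action | prev)
            lm_weight: float = 0.5) -> int:
    combined = av_log_probs + lm_weight * lm_log_probs[prev_action]
    return int(np.argmax(combined))

# toy example with 3 actions: "wash", "cut", "peel"
av = np.log(np.array([0.40, 0.35, 0.25]))
lm = np.log(np.array([[0.1, 0.6, 0.3],        # after "wash", "cut" is likely
                      [0.2, 0.2, 0.6],
                      [0.5, 0.3, 0.2]]))
print(rescore(av, prev_action=0, lm_log_probs=lm))  # 1: context flips the prediction
```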
Spot the conversation: speaker diarisation in the wild
The goal of this paper is speaker diarisation of videos collected 'in the
wild'. We make three key contributions. First, we propose an automatic
audio-visual diarisation method for YouTube videos. Our method consists of
active speaker detection using audio-visual methods and speaker verification
using self-enrolled speaker models. Second, we integrate our method into a
semi-automatic dataset creation pipeline which significantly reduces the number
of hours required to annotate videos with diarisation labels. Finally, we use
this pipeline to create a large-scale diarisation dataset called VoxConverse,
collected from 'in the wild' videos, which we will release publicly to the
research community. Our dataset contains overlapping speech, a large and
diverse speaker pool, and challenging background conditions. Comment: The dataset will be available for download from
http://www.robots.ox.ac.uk/~vgg/data/voxceleb/voxconverse.html. The
development set will be released in July 2020, and the test set will be
released in October 2020.
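The verification stage can be sketched as assignment against self-enrolled speaker models: models are seeded from segments where the active speaker is confidently detected, and the remaining speech segments are either matched to the closest model or spawn a new one. The centroid models and threshold below are assumptions, not the paper's exact enrolment procedure.

```python
# Sketch: diarisation by speaker verification against self-enrolled models.
import numpy as np

def diarise(seed_embs: list[np.ndarray],      # one embedding per confidently-detected speaker
            segment_embs: np.ndarray,         # (num_segments, dim) remaining speech segments
            threshold: float = 0.6) -> list[int]:
    models = [e / np.linalg.norm(e) for e in seed_embs]  # unit-norm centroid per speaker
    labels = []
    for emb in segment_embs:
        emb = emb / np.linalg.norm(emb)
        sims = [float(m @ emb) for m in models]
        best = int(np.argmax(sims))
        if sims[best] >= threshold:
            labels.append(best)               # verified against an enrolled speaker
        else:
            models.append(emb)                # unmatched segment: enrol a new speaker
            labels.append(len(models) - 1)
    return labels
```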
The VoxCeleb speaker recognition challenge: a retrospective
The VoxCeleb Speaker Recognition Challenges (VoxSRC) were a series of challenges and workshops that ran annually from 2019 to 2023. The challenges primarily evaluated the tasks of speaker recognition and diarisation under various settings, including closed and open training data, as well as supervised, self-supervised, and semi-supervised training for domain adaptation. The challenges also provided publicly available training and evaluation datasets for each task and setting, with new test sets released each year. In this paper, we provide a review of these challenges that covers: what they explored; the methods developed by the challenge participants and how these evolved; and the current state of the field for speaker verification and diarisation. We chart the progress in performance over the five instalments of the challenge on a common evaluation dataset and provide a detailed analysis of how each year's special focus affected participants' performance. This paper is aimed both at researchers who want an overview of the speaker recognition and diarisation field, and at challenge organisers who want to benefit from the successes and avoid the mistakes of the VoxSRC challenges. We end with a discussion of the current strengths of the field and open challenges.