Target-Speaker Voice Activity Detection: a Novel Approach for Multi-Speaker Diarization in a Dinner Party Scenario
Speaker diarization for real-life scenarios is an extremely challenging
problem. Widely used clustering-based diarization approaches perform rather
poorly in such conditions, mainly due to the limited ability to handle
overlapping speech. We propose a novel Target-Speaker Voice Activity Detection
(TS-VAD) approach, which directly predicts the activity of each speaker in each
time frame. The TS-VAD model takes conventional speech features (e.g., MFCC)
along with an i-vector for each speaker as inputs. A set of binary
classification output layers produces the activity of each speaker. I-vectors can be estimated
iteratively, starting with a strong clustering-based diarization. We also
extend the TS-VAD approach to the multi-microphone case using a simple
attention mechanism on top of hidden representations extracted from the
single-channel TS-VAD model. Moreover, post-processing strategies for the
predicted speaker activity probabilities are investigated. Experiments on the
CHiME-6 unsegmented data show that TS-VAD achieves state-of-the-art results
outperforming the baseline x-vector-based system by more than 30% absolute
Diarization Error Rate (DER).
Comment: Accepted to Interspeech 202
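The core TS-VAD idea described above — concatenating frame-level features with each target speaker's i-vector and scoring per-speaker activity through shared layers with sigmoid outputs — can be illustrated with a toy NumPy sketch. This is a minimal illustration, not the paper's actual architecture: the single linear-plus-tanh layer, the feature/i-vector dimensions, and the random weights are all illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyTSVAD:
    """Toy TS-VAD: concatenate each frame's features with a target
    speaker's i-vector and score that speaker's activity with a
    shared (randomly initialized, untrained) network."""

    def __init__(self, feat_dim, ivec_dim, hidden=16, seed=0):
        g = np.random.default_rng(seed)
        self.W1 = 0.1 * g.normal(size=(feat_dim + ivec_dim, hidden))
        self.w2 = 0.1 * g.normal(size=hidden)

    def forward(self, feats, ivectors):
        # feats: (T, feat_dim) frame features; ivectors: (S, ivec_dim)
        T = feats.shape[0]
        per_speaker = []
        for ivec in ivectors:
            # Tile the i-vector across all T frames and concatenate.
            x = np.concatenate([feats, np.tile(ivec, (T, 1))], axis=1)
            h = np.tanh(x @ self.W1)
            per_speaker.append(sigmoid(h @ self.w2))
        # (T, S): one activity probability per frame per target speaker.
        return np.stack(per_speaker, axis=1)

rng = np.random.default_rng(1)
feats = rng.normal(size=(50, 40))   # 50 frames of 40-dim features (MFCC stand-in)
ivecs = rng.normal(size=(4, 100))   # 4 target speakers, 100-dim i-vectors
model = TinyTSVAD(feat_dim=40, ivec_dim=100)
p = model.forward(feats, ivecs)
print(p.shape)  # (50, 4)
```

Because every output is an independent sigmoid, overlapping speech is handled naturally: two speakers can both have high activity probability on the same frame, which is exactly what clustering-based diarization cannot express.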
Streaming Speaker-Attributed ASR with Token-Level Speaker Embeddings
This paper presents a streaming speaker-attributed automatic speech
recognition (SA-ASR) model that can recognize "who spoke what" with low latency
even when multiple people are speaking simultaneously. Our model is based on
token-level serialized output training (t-SOT), which was recently proposed to
transcribe multi-talker speech in a streaming fashion. To further recognize
speaker identities, we propose an encoder-decoder based speaker embedding
extractor that can estimate a speaker representation for each recognized token
not only from non-overlapping speech but also from overlapping speech. The
proposed speaker embedding, named t-vector, is extracted synchronously with the
t-SOT ASR model, enabling joint execution of speaker identification (SID) or
speaker diarization (SD) with multi-talker transcription at low latency.
We evaluate the proposed model for a joint task of ASR and SID/SD by using
LibriSpeechMix and LibriCSS corpora. The proposed model achieves substantially
better accuracy than a prior streaming model and shows comparable or sometimes
even superior results to the state-of-the-art offline SA-ASR model.
Comment: Submitted to Interspeech 202
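The two ingredients above — t-SOT's serialization of overlapping speech into a single token stream, and per-token speaker assignment from token-level embeddings — can be sketched in a toy form. This is a hedged illustration only: the channel-change token name, the event-based serializer, and cosine-similarity assignment against enrolled profiles are simplifying assumptions, not the paper's model (which extracts t-vectors with an encoder-decoder network synchronously with the ASR decoder).

```python
import numpy as np

def serialize_tsot(utts, cc="<cc>"):
    """Toy t-SOT-style serialization: merge tokens from possibly
    overlapping utterances into one stream ordered by emission time,
    inserting a channel-change token whenever the channel switches.
    utts: list of (channel, [(time, token), ...])."""
    events = sorted((t, ch, tok) for ch, toks in utts for t, tok in toks)
    out, prev_ch = [], None
    for _, ch, tok in events:
        if prev_ch is not None and ch != prev_ch:
            out.append(cc)
        out.append(tok)
        prev_ch = ch
    return out

def assign_speakers(t_vectors, profiles):
    """Assign each token a speaker ID by cosine similarity between its
    token-level embedding and enrolled speaker profile vectors."""
    tv = t_vectors / np.linalg.norm(t_vectors, axis=1, keepdims=True)
    pr = profiles / np.linalg.norm(profiles, axis=1, keepdims=True)
    return (tv @ pr.T).argmax(axis=1)

# Two utterances overlapping in time on two virtual channels.
utt_a = (0, [(0.0, "hello"), (0.4, "there")])
utt_b = (1, [(0.2, "good"), (0.6, "morning")])
stream = serialize_tsot([utt_a, utt_b])
print(stream)  # ['hello', '<cc>', 'good', '<cc>', 'there', '<cc>', 'morning']
```

In the actual system the token stream and the t-vectors are produced jointly in a streaming fashion, so the speaker label for each recognized token is available with low latency rather than after an offline clustering pass.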