Latent Class Model with Application to Speaker Diarization
In this paper, we apply a latent class model (LCM) to the task of speaker
diarization. LCM is similar to Patrick Kenny's variational Bayes (VB) method in
that it uses soft information and avoids premature hard decisions in its
iterations. In contrast to the VB method, which is based on a generative model,
LCM provides a framework allowing both generative and discriminative models.
The discriminative property is realized through the use of i-vector (Ivec),
probabilistic linear discriminant analysis (PLDA), and a support vector
machine (SVM) in this work. Systems denoted as LCM-Ivec-PLDA, LCM-Ivec-SVM, and
LCM-Ivec-Hybrid are introduced. In addition, three further improvements are
applied to enhance performance: 1) adding neighbor windows to extract more
speaker information from each short segment; 2) using a hidden Markov model to
discourage frequent speaker change points; and 3) using agglomerative
hierarchical clustering for initialization and to provide hard and soft
priors, overcoming sensitivity to initialization. Experiments on the National
Institute of Standards and Technology Rich Transcription 2009 speaker
diarization database, under the condition of a single distant microphone, show
that the diarization error rate (DER) of the proposed methods has substantial
relative improvements compared with mainstream systems. Compared to the VB
method, the relative improvements of LCM-Ivec-PLDA, LCM-Ivec-SVM, and
LCM-Ivec-Hybrid systems are 23.5%, 27.1%, and 43.0%, respectively. Experiments
on our collected database, CALLHOME97, CALLHOME00 and SRE08 short2-summed trial
conditions also show that the proposed LCM-Ivec-Hybrid system has the best
overall performance.
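As a minimal sketch of the soft-decision idea behind LCM (not the paper's exact model), the toy iteration below maintains per-segment posteriors over latent speaker classes and updates soft class priors instead of committing to hard assignments early; the random score matrix stands in for hypothetical PLDA or SVM segment-vs-speaker scores:

```python
import numpy as np

def lcm_soft_assign(scores, n_iters=10):
    """Toy latent-class iteration. `scores[s, k]` is a hypothetical
    segment-vs-speaker log-likelihood (e.g. from PLDA or an SVM).
    Soft posteriors replace premature hard clustering decisions."""
    n_seg, n_spk = scores.shape
    priors = np.full(n_spk, 1.0 / n_spk)          # soft prior over classes
    for _ in range(n_iters):
        # E-step: posterior responsibility of each speaker for each segment
        log_post = scores + np.log(priors)
        log_post -= log_post.max(axis=1, keepdims=True)
        post = np.exp(log_post)
        post /= post.sum(axis=1, keepdims=True)
        # M-step: update class priors from the soft counts
        priors = post.mean(axis=0)
    return post

rng = np.random.default_rng(0)
post = lcm_soft_assign(rng.normal(size=(20, 3)))  # 20 segments, 3 speakers
```

Each row of `post` is a proper distribution over speakers, so downstream steps (HMM smoothing, re-scoring with neighbor windows) can keep working with soft information.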
Predicting continuous conflict perception with Bayesian Gaussian processes
Conflict is one of the most important phenomena of social life, but it is still largely neglected by the computing community. This work proposes an approach
that detects common conversational social signals (loudness, overlapping speech,
etc.) and predicts the conflict level perceived by human observers in continuous,
non-categorical terms. The proposed regression approach is fully Bayesian and
adopts Automatic Relevance Determination to identify the social signals that most influence the prediction outcome. The experiments are performed on the SSPNet Conflict Corpus, a publicly available collection of 1430 clips extracted from televised political debates (roughly 12 hours of material for 138 subjects in total). The results show that it is possible to achieve a correlation close to 0.8 between actual and predicted conflict perception.
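Automatic Relevance Determination can be illustrated with scikit-learn's Gaussian process regressor and an anisotropic RBF kernel (a sketch, not the paper's model, with synthetic features standing in for the corpus's social signals): learning one length-scale per input dimension lets irrelevant features be assigned large length-scales during marginal-likelihood optimization, while relevant ones get small length-scales.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
# Three synthetic "social signal" features; only the first drives the target.
X = rng.normal(size=(80, 3))
y = 2.0 * X[:, 0] + 0.05 * rng.normal(size=80)

# One length-scale per input dimension = ARD.
kernel = RBF(length_scale=np.ones(3))
gpr = GaussianProcessRegressor(kernel=kernel, alpha=1e-2).fit(X, y)

# After fitting, the relevant feature gets the smallest learned length-scale.
scales = gpr.kernel_.length_scale
```

Inspecting `scales` after training is the ARD step: features with large learned length-scales barely move the kernel and can be read as uninformative for the prediction.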
The CHiME-7 Challenge: System Description and Performance of NeMo Team's DASR System
We present the NVIDIA NeMo team's multi-channel speech recognition system for
the 7th CHiME Challenge Distant Automatic Speech Recognition (DASR) Task,
focusing on the development of a multi-channel, multi-speaker speech
recognition system tailored to transcribe speech from distributed microphones
and microphone arrays. The system comprises the following
integral modules: the Speaker Diarization Module, Multi-channel Audio Front-End
Processing Module, and the ASR Module. These components collectively establish
a cascading system, meticulously processing multi-channel and multi-speaker
audio input. Moreover, this paper highlights the comprehensive optimization
process that significantly enhanced our system's performance. Our team's
submission is largely based on NeMo toolkits and will be publicly available.
TOLD: A Novel Two-Stage Overlap-Aware Framework for Speaker Diarization
Recently, end-to-end neural diarization (EEND) has been introduced and achieves
promising results in speaker-overlapped scenarios. In EEND, speaker diarization
is formulated as a multi-label prediction problem, where speaker activities are
estimated independently and their dependencies are not well modeled. To
overcome these disadvantages, we employ the power set encoding to reformulate
speaker diarization as a single-label classification problem and propose the
overlap-aware EEND (EEND-OLA) model, in which speaker overlaps and dependency
can be modeled explicitly. Inspired by the success of two-stage hybrid systems,
we further propose a novel Two-stage OverLap-aware Diarization framework (TOLD)
by involving a speaker overlap-aware post-processing (SOAP) model to
iteratively refine the diarization results of EEND-OLA. Experimental results
show that, compared with the original EEND, the proposed EEND-OLA achieves a
14.39% relative improvement in terms of diarization error rates (DER), and
utilizing SOAP provides another 19.33% relative improvement. As a result, our
method TOLD achieves a DER of 10.14% on the CALLHOME dataset, which, to the
best of our knowledge, is a new state-of-the-art result on this benchmark.
Comment: Accepted by ICASSP202
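The power-set reformulation can be sketched as follows (a toy illustration, not TOLD's actual implementation): instead of K independent binary speaker activities per frame, each frame receives a single label drawn from the 2^K subsets of simultaneously active speakers, so overlap becomes an explicit class rather than an unmodeled coincidence.

```python
from itertools import combinations

def powerset_labels(max_speakers):
    """Enumerate the power set of active-speaker subsets, so an
    overlapped frame (e.g. speakers {0, 2} both talking) maps to one
    class instead of independent per-speaker binary labels."""
    classes = []
    for k in range(max_speakers + 1):
        classes.extend(combinations(range(max_speakers), k))
    return {subset: idx for idx, subset in enumerate(classes)}

labels = powerset_labels(3)        # 2**3 = 8 classes for 3 speakers
overlap_class = labels[(0, 2)]     # single label for an overlapped frame
silence_class = labels[()]         # the empty subset covers silence
```

With this encoding, a standard single-label classifier (softmax over 2^K classes) models speaker dependency jointly, which is the property the multi-label EEND formulation lacks.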
SALSA: A Novel Dataset for Multimodal Group Behavior Analysis
Studying free-standing conversational groups (FCGs) in unstructured social
settings (e.g., a cocktail party) is gratifying due to the wealth of information
available at the group (mining social networks) and individual (recognizing
native behavioral and personality traits) levels. However, analyzing social
scenes involving FCGs is also highly challenging due to the difficulty in
extracting behavioral cues such as target locations, speaking activity, and
head/body pose amid crowdedness and extreme occlusions. To
this end, we propose SALSA, a novel dataset facilitating multimodal and
Synergetic sociAL Scene Analysis, and make two main contributions to research
on automated social interaction analysis: (1) SALSA records social interactions
among 18 participants in a natural, indoor environment for over 60 minutes,
under the poster presentation and cocktail party contexts presenting
difficulties in the form of low-resolution images, lighting variations,
numerous occlusions, reverberations and interfering sound sources; (2) To
alleviate these problems we facilitate multimodal analysis by recording the
social interplay using four static surveillance cameras and sociometric badges
worn by each participant, each badge comprising a microphone, accelerometer, Bluetooth,
and infrared sensors. In addition to raw data, we also provide annotations
concerning individuals' personalities as well as their position, head and body
orientation, and F-formation information over the entire event duration. Through
extensive experiments with state-of-the-art approaches, we show (a) the
limitations of current methods and (b) how the recorded multiple cues
synergetically aid automatic analysis of social interactions. SALSA is
available at http://tev.fbk.eu/salsa.
Comment: 14 pages, 11 figures
Deep Self-Supervised Hierarchical Clustering for Speaker Diarization
State-of-the-art speaker diarization systems use agglomerative hierarchical
clustering (AHC), which clusters previously learned neural embeddings. While
the clustering approach attempts to identify speaker clusters, the AHC
algorithm does not involve any further learning. In
this paper, we propose a novel algorithm for hierarchical clustering which
combines speaker clustering with a representation learning framework.
The proposed approach is based on principles of self-supervised learning where
the self-supervision is derived from the clustering algorithm. The
representation learning network is trained with a regularized triplet loss
using the clustering solution at the current step while the clustering
algorithm uses the deep embeddings from the representation learning step. By
combining the self-supervision-based representation learning with the
clustering algorithm, we show that the proposed algorithm improves
significantly (29% relative improvement) over the AHC algorithm with cosine
similarity for a speaker diarization task on the CALLHOME dataset. In
addition, the proposed approach also improves over the state-of-the-art system
with a PLDA affinity matrix, with a 10% relative improvement in DER.
Comment: 5 pages, Accepted in Interspeech 202
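For context, the non-learning AHC-with-cosine-similarity baseline that this work improves on can be sketched with SciPy (toy Gaussian "embeddings" stand in for real neural speaker embeddings):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
# Toy embeddings: two well-separated "speakers", 10 segments each,
# centered on orthogonal unit vectors in a 16-dim space.
emb = np.vstack([rng.normal(0.0, 0.1, (10, 16)) + v
                 for v in (np.eye(16)[0], np.eye(16)[1])])

# Agglomerative hierarchical clustering on pairwise cosine distances;
# no learning happens here, which is the limitation the paper addresses.
dist = pdist(emb, metric="cosine")
tree = linkage(dist, method="average")
clusters = fcluster(tree, t=2, criterion="maxclust")
```

In the self-supervised variant described above, a clustering solution like `clusters` would instead supply triplets for the representation network, whose refreshed embeddings feed the next clustering round.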