Graph-based Multi-View Fusion and Local Adaptation: Mitigating Within-Household Confusability for Speaker Identification
Speaker identification (SID) in the household scenario (e.g., for smart
speakers) is an important but challenging problem due to the limited number of
labeled (enrollment) utterances, confusable voices, and demographic imbalance.
Conventional speaker recognition systems generalize from a large random sample
of speakers, causing the recognition to underperform for households drawn from
specific cohorts or otherwise exhibiting high confusability. In this work, we
propose a graph-based semi-supervised learning approach to improve
household-level SID accuracy and robustness with locally adapted graph
normalization and multi-signal fusion with multi-view graphs. Unlike other work
on household SID, fairness, and signal fusion, this work focuses on speaker
label inference (scoring) and provides a simple solution to realize
household-specific adaptation and multi-signal fusion without tuning the
embeddings or training a fusion network. Experiments on the VoxCeleb dataset
demonstrate that our approach consistently improves the performance across
households with different customer cohorts and degrees of confusability.Comment: To appear in Interspeech 2022. arXiv admin note: text overlap with
arXiv:2106.0820
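The abstract describes scoring by graph-based semi-supervised label propagation with per-household graph normalization and multi-view fusion. Below is a minimal sketch of that general recipe; the function names, the kNN sparsification, the averaging fusion rule, and the hyperparameters (k, alpha) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def build_graph(emb, k=5):
    """kNN cosine-similarity graph over L2-normalized speaker embeddings."""
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sim = emb @ emb.T
    np.fill_diagonal(sim, 0.0)
    mask = np.zeros_like(sim)
    top = np.argsort(-sim, axis=1)[:, :k]          # k strongest neighbours per node
    np.put_along_axis(mask, top, 1.0, axis=1)
    return np.clip(sim * np.maximum(mask, mask.T), 0.0, None)

def normalize(W):
    """Symmetric normalization D^{-1/2} W D^{-1/2}. Computed per household,
    so the graph is scaled to each household's own local statistics."""
    d_inv = 1.0 / np.sqrt(np.maximum(W.sum(axis=1), 1e-8))
    return W * d_inv[:, None] * d_inv[None, :]

def fuse_views(graphs):
    """Multi-view fusion: average the normalized graphs built from different
    signals/embedding extractors (an illustrative fusion rule)."""
    return sum(normalize(W) for W in graphs) / len(graphs)

def propagate(S, Y0, alpha=0.9, iters=50):
    """Label propagation: Y <- alpha * S @ Y + (1 - alpha) * Y0,
    where Y0 is one-hot for enrollment rows and all-zero for test rows."""
    Y = Y0.copy()
    for _ in range(iters):
        Y = alpha * (S @ Y) + (1 - alpha) * Y0
    return Y.argmax(axis=1)                        # inferred speaker per utterance
```

Note that everything here operates on fixed embeddings at scoring time, which matches the abstract's claim that no embedding tuning or fusion-network training is required.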
Robust Speaker Recognition Using Speech Enhancement And Attention Model
In this paper, a novel architecture for speaker recognition is proposed by
cascading speech enhancement and speaker processing. Its aim is to improve
speaker recognition performance when speech signals are corrupted by noise.
Instead of processing speech enhancement and speaker recognition separately,
the two modules are integrated into one framework and jointly optimised using
deep neural networks. Furthermore, to increase robustness against noise, a
multi-stage attention mechanism is employed to highlight speaker-related
features learned from contextual information in the time and frequency domains.
To evaluate the speaker identification and verification performance of the
proposed approach, we test it on VoxCeleb1, one of the most widely used
benchmark datasets. Moreover, the robustness of the proposed approach is also
tested on VoxCeleb1 data corrupted by three types of interference (general
noise, music, and babble) at different signal-to-noise ratio (SNR) levels. The
results show that the proposed approach using speech enhancement and
multi-stage attention models outperforms two strong baselines without them in
most acoustic conditions in our experiments.
Comment: Accepted by Odyssey 202
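To make the cascade concrete, here is a minimal PyTorch sketch of the overall shape: a mask-based enhancement front end feeding a speaker encoder with attentive pooling, all in one graph so both parts can be optimised jointly. The module names, layer sizes, and single-stage attention are simplifying assumptions; the paper's enhancement network and multi-stage attention are more elaborate.

```python
import torch
import torch.nn as nn

class MaskEnhancer(nn.Module):
    """Toy enhancement front end: predicts a time-frequency mask that is
    applied to the noisy features (illustrative stand-in)."""
    def __init__(self, n_mels=80, hidden=128):
        super().__init__()
        self.rnn = nn.LSTM(n_mels, hidden, batch_first=True, bidirectional=True)
        self.mask = nn.Sequential(nn.Linear(2 * hidden, n_mels), nn.Sigmoid())

    def forward(self, x):                # x: (batch, time, n_mels)
        h, _ = self.rnn(x)
        return x * self.mask(h)          # masked (enhanced) features

class AttentivePooling(nn.Module):
    """Attention over time: a weighted mean of frame features, so the
    utterance embedding emphasizes speaker-informative frames."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, h):                # h: (batch, time, dim)
        w = torch.softmax(self.score(h), dim=1)
        return (w * h).sum(dim=1)        # (batch, dim)

class JointSpeakerNet(nn.Module):
    """Enhancement and speaker embedding cascaded in one network so both
    are trained jointly with the speaker classification loss."""
    def __init__(self, n_mels=80, hidden=128, emb_dim=192, n_speakers=1251):
        super().__init__()                      # 1,251 = VoxCeleb1 speakers
        self.enhance = MaskEnhancer(n_mels, hidden)
        self.encoder = nn.LSTM(n_mels, hidden, batch_first=True)
        self.pool = AttentivePooling(hidden)
        self.embed = nn.Linear(hidden, emb_dim)
        self.classify = nn.Linear(emb_dim, n_speakers)

    def forward(self, x):
        h, _ = self.encoder(self.enhance(x))
        e = self.embed(self.pool(h))
        return e, self.classify(e)       # embedding for verification, logits for ID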
Speaker diarization of multi-party conversations using participants role information: political debates and professional meetings
Speaker diarization aims at inferring who spoke when in an audio stream and involves two simultaneous unsupervised tasks: (1) estimating the number of speakers, and (2) associating speech segments with each speaker. Most recent efforts in the domain have addressed the problem using machine learning techniques or statistical methods (for a review see [11]), ignoring the fact that the data consists of instances of human conversations.
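For reference, the generic clustering baseline that the abstract contrasts with (before adding participant role information) can be sketched in a few lines: clustering per-segment embeddings with a distance threshold handles both tasks at once, since the number of speakers falls out of the clustering rather than being fixed in advance. The threshold value is illustrative.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def diarize(seg_emb, threshold=0.7):
    """Cluster per-segment speaker embeddings. With n_clusters=None and a
    distance_threshold, the speaker count is estimated implicitly (task 1)
    while segments are grouped by speaker (task 2)."""
    emb = seg_emb / np.linalg.norm(seg_emb, axis=1, keepdims=True)
    clus = AgglomerativeClustering(
        n_clusters=None,
        distance_threshold=threshold,
        metric="cosine",      # scikit-learn >= 1.2; older versions use affinity=
        linkage="average",
    )
    labels = clus.fit_predict(emb)
    return labels, labels.max() + 1   # segment labels, estimated #speakers
```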