
    Adaptive speaker diarization of broadcast news based on factor analysis

    The introduction of factor analysis techniques in a speaker diarization system enhances its performance by facilitating the use of speaker-specific information, by improving the suppression of nuisance factors such as phonetic content, and by facilitating various forms of adaptation. This paper describes a state-of-the-art iVector-based diarization system which employs factor analysis and adaptation at all levels. The diarization modules relevant for this work are the speaker segmentation, which searches for speaker boundaries, and the speaker clustering, which aims at grouping speech segments of the same speaker. The speaker segmentation relies on speaker factors which are extracted on a frame-by-frame basis using eigenvoices. We incorporate soft voice activity detection into this extraction process by applying speech posteriors, since the speaker change detection should be based on speaker information only and should disregard non-speech frames. Potential speaker boundaries are inserted at positions where rapid changes in speaker factors are observed. By employing Mahalanobis distances, the effect of the phonetic content can be further reduced, which results in more accurate speaker boundaries. This iVector-based segmentation significantly outperforms more common segmentation methods based on the Bayesian Information Criterion (BIC) or speech activity marks. The speaker clustering employs two-step Agglomerative Hierarchical Clustering (AHC): after initial BIC clustering, the second clustering stage is realized by either an iVector Probabilistic Linear Discriminant Analysis (PLDA) system or Cosine Distance Scoring (CDS) of extracted speaker factors. The segmentation system is made adaptive on a file-by-file basis by iterating the diarization process with eigenvoice matrices adapted (unsupervised) on the output of the previous iteration. Assuming that for most use cases material similar to the recording in question is readily available, unsupervised domain adaptation of the speaker clustering is possible as well. We obtain this by expanding the eigenvoice matrix used during speaker factor extraction for the CDS clustering stage with a small set of new eigenvoices that, in combination with the initial generic eigenvoices, models the recurring speakers and acoustic conditions more accurately. Experiments on the COST278 multilingual broadcast news database show that adaptive speaker segmentation produces significantly more accurate speaker boundaries, which in turn results in more accurate clustering. The obtained speaker error rate (SER) can be further reduced by another 13% relative, to 7.4%, via domain adaptation of the CDS clustering. (C) 2017 Elsevier Ltd. All rights reserved.
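    The frame-level boundary search described above can be illustrated with a short sketch. The code below is a simplified, assumption-based example and not the authors' implementation: per-frame speaker factors in a left and a right sliding window are compared with a Mahalanobis distance, and candidate speaker boundaries are placed at local maxima of the resulting distance curve. The window length, the shared covariance estimate, and the peak-picking threshold are illustrative choices.

```python
# Sketch of speaker-change detection on per-frame speaker factors
# (illustrative parameters; not the paper's exact procedure).
import numpy as np


def detect_boundaries(speaker_factors, win=100, threshold=2.0):
    """speaker_factors: (n_frames, dim) array of per-frame speaker factors."""
    n_frames, _ = speaker_factors.shape
    # Shared covariance over the whole recording; its (pseudo-)inverse whitens
    # the comparison so that directions dominated by nuisance variability
    # (e.g. phonetic content) contribute less to the distance.
    cov = np.cov(speaker_factors, rowvar=False)
    cov_inv = np.linalg.pinv(cov)

    distances = np.zeros(n_frames)
    for t in range(win, n_frames - win):
        left = speaker_factors[t - win:t].mean(axis=0)
        right = speaker_factors[t:t + win].mean(axis=0)
        diff = left - right
        distances[t] = np.sqrt(diff @ cov_inv @ diff)

    # Keep local maxima that exceed the threshold as candidate boundaries.
    boundaries = [
        t for t in range(1, n_frames - 1)
        if distances[t] > threshold
        and distances[t] >= distances[t - 1]
        and distances[t] >= distances[t + 1]
    ]
    return boundaries, distances
```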
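    The second-stage clustering by Cosine Distance Scoring can be sketched in a similar hedged way (again an assumption, not the paper's code): each first-stage BIC cluster is represented by one length-normalized speaker-factor vector, and agglomerative clustering with average linkage merges clusters whose cosine distance stays below a stopping threshold. The threshold value is illustrative.

```python
# Sketch of CDS-based agglomerative clustering of per-cluster speaker factors.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist


def cds_cluster(cluster_factors, stop_threshold=0.4):
    """cluster_factors: (n_clusters, dim) speaker factors, one per BIC cluster."""
    # Length normalisation makes cosine scoring depend on direction only,
    # suppressing differences caused by segment duration or energy.
    norms = np.linalg.norm(cluster_factors, axis=1, keepdims=True)
    normed = cluster_factors / np.maximum(norms, 1e-10)

    dists = pdist(normed, metric="cosine")    # condensed pairwise distances
    tree = linkage(dists, method="average")   # agglomerative merging
    # Stop merging once the cosine distance exceeds the threshold.
    return fcluster(tree, t=stop_threshold, criterion="distance")
```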

    Model-based speech/non-speech segmentation of a heterogeneous multilingual TV broadcast collection

    Multimedia Information Retrieval systems normally comprise a preprocessor that performs a speech/non-speech (SNS) segmentation of the audio stream. The goal of such a segmentation is to divide the audio into intervals that need a lexical transcription and intervals that just need some categorization in terms of jingle, applause, etc. In this paper, a baseline SNS system that was trained on monolingual broadcast news (BN) data is evaluated on a multilingual BN corpus and on a heterogeneous corpus composed of diverse TV shows, including discussions, soaps, animation films, etc. It appears that the system exhibits serious deficiencies when confronted with such out-of-domain data. Especially the heterogeneous corpus, characterized by many short speaker turns and a rich palette of non-speech intervals, turns out to be challenging. However, employing a proper SNS information criterion, it is demonstrated that enhancing the acoustic representation of the audio, creating a richer music model, and performing a file-wise adaptation of the acoustic models can significantly increase the performance. Complex architectures permitting explicit duration modeling and re-segmentation of the speech parts after speaker change detection, on the other hand, do not seem to help.
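    A minimal sketch of the model-based SNS idea, under the assumption of one Gaussian mixture model per class trained on labelled feature frames (e.g. MFCCs): frames of a new recording are scored by both models, and a median filter smooths the frame-level decisions into contiguous speech and non-speech intervals. The class setup, feature choice, model sizes, and smoothing length are illustrative and not taken from the paper.

```python
# Sketch of GMM-based speech/non-speech frame labelling with smoothing.
import numpy as np
from scipy.ndimage import median_filter
from sklearn.mixture import GaussianMixture


def train_sns_models(speech_feats, nonspeech_feats, n_components=32):
    """Train one GMM per class on (n_frames, dim) feature arrays."""
    speech_gmm = GaussianMixture(n_components, covariance_type="diag").fit(speech_feats)
    nonspeech_gmm = GaussianMixture(n_components, covariance_type="diag").fit(nonspeech_feats)
    return speech_gmm, nonspeech_gmm


def label_frames(feats, speech_gmm, nonspeech_gmm, smooth=51):
    """Return a boolean speech (True) / non-speech (False) decision per frame."""
    ll_speech = speech_gmm.score_samples(feats)
    ll_nonspeech = nonspeech_gmm.score_samples(feats)
    decisions = ll_speech > ll_nonspeech
    # Median filtering removes isolated frame flips and yields longer,
    # more plausible speech and non-speech intervals.
    return median_filter(decisions.astype(int), size=smooth).astype(bool)
```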