4 research outputs found

    Towards a better integration of written names for unsupervised speakers identification in videos

    Existing methods for unsupervised identification of speakers in TV broadcast usually rely on the output of a speaker diarization module and try to name each cluster using names provided by another source of information: we call this "late naming". Written names extracted from title blocks tend to lead to high-precision identification, but they cannot correct errors made during the clustering step. In this paper, we extend our previous "late naming" approach in two ways: "integrated naming" and "early naming". While "late naming" relies on a speaker diarization module optimized for speaker diarization, "integrated naming" jointly optimizes speaker diarization and name propagation in terms of identification errors. "Early naming" modifies the speaker diarization module by adding constraints that prevent two clusters with different written names from being merged. While "integrated naming" yields identification performance similar to "late naming" (with better precision), "early naming" improves over this baseline both in identification error rate and in the stability of the clustering stopping criterion.
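The "early naming" constraint described in this abstract can be sketched as a cannot-link rule inside agglomerative clustering: the closest pair of clusters is merged at each step, except when both clusters already carry different written names. This is an illustrative sketch only (toy distance function and data structures, not the authors' system):

```python
# Hedged sketch of "early naming": agglomerative clustering in which two
# clusters labelled with different written names may never be merged.
# `clusters` is a list of (segment_id_set, name_or_None) pairs; `distance`
# is any cluster-to-cluster distance; both are illustrative assumptions.

def early_naming_clustering(clusters, distance, threshold):
    clusters = list(clusters)
    while len(clusters) > 1:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                (ids_i, name_i), (ids_j, name_j) = clusters[i], clusters[j]
                # Cannot-link constraint: skip pairs with conflicting names.
                if name_i and name_j and name_i != name_j:
                    continue
                d = distance(ids_i, ids_j)
                if d < threshold and (best is None or d < best[0]):
                    best = (d, i, j)
        if best is None:          # no mergeable pair left: stop
            break
        _, i, j = best
        (ids_i, name_i), (ids_j, name_j) = clusters[i], clusters[j]
        merged = (ids_i | ids_j, name_i or name_j)  # propagate the name
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
        clusters.append(merged)
    return clusters
```

With this rule, an unnamed cluster can still absorb a named one (and inherit its name), but two differently named clusters stay apart however acoustically close they are, which is what stabilizes the stopping criterion.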

    Extracting true speaker identities from transcriptions

    Automatic speaker diarization generally produces a generic label such as spkr1 rather than the true identity of the speaker. Recently, two approaches based on lexical rules were proposed to extract the true identity of the speaker from transcriptions of the audio recording, without any a priori acoustic information: one uses n-grams, the other semantic classification trees (SCT). The latter was proposed by the authors of this paper. In this paper, the two methods are compared in experiments carried out on French broadcast news recordings from the ESTER 2005 evaluation campaign, on both manual and automatic transcriptions. On manual transcriptions, the n-gram-based approach can be more precise, but on automatic transcriptions the SCT-based approach gives significantly better results in terms of both recall and precision.
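The lexical-rule idea in this abstract can be illustrated with a toy n-gram-style extractor: word templates around a detected full name decide whether the name refers to the previous, current, or next speaker turn. The patterns below are invented for illustration; they are not the ESTER rules or the authors' rule set:

```python
import re

# Toy sketch of lexical-rule identity extraction from a transcript turn.
# Each template (assumed, not from the paper) pairs a surrounding n-gram
# pattern with the turn the captured name is taken to identify.
PATTERNS = [
    (re.compile(r"thank you,?\s+([A-Z][a-z]+ [A-Z][a-z]+)", re.I), "previous"),
    (re.compile(r"over to\s+([A-Z][a-z]+ [A-Z][a-z]+)", re.I), "next"),
    (re.compile(r"\bI am\s+([A-Z][a-z]+ [A-Z][a-z]+)"), "current"),
]

def extract_identities(turn_text):
    """Return (full_name, target_turn) pairs found in one transcript turn."""
    hits = []
    for pattern, target in PATTERNS:
        for match in pattern.finditer(turn_text):
            hits.append((match.group(1), target))
    return hits
```

A rule base like this is brittle on ASR output (recognition errors break the surface patterns), which is one reason a learned classifier such as an SCT can win on automatic transcriptions.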

    Speaker role recognition and speaker interactions for the structuring of audiovisual documents

    We present a system for the automatic structuring of audiovisual recordings, based on speaker role recognition and speech interaction zone detection. In a first stage, we propose a method for detecting and characterizing temporal sequences, called "interaction zones", that potentially contain conversations between speakers. The second stage of our system recognizes speaker roles: anchorman, journalist, and other. Our contribution to automatic role recognition rests on the hypothesis that speaker roles are accessible through low-level features drawn from the temporal organization of speaker turns, from the acoustic environments in which speakers appear, and from several prosodic parameters (pitch and speech rate). In a final stage, we combine speaker role information with the detected interaction zones to produce two descriptive layers of document content. The first layer segments the recordings into zones of four types: news, meeting, transition, and interlude. The second layer classifies the spoken interaction zones into four categories: debate, interview, chronicle, and relay. Each stage of the system is validated by a large number of experiments on the corpora of the EPAC project and the ESTER evaluation campaign.
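The role-recognition stage described above maps low-level features to one of three roles. A minimal way to picture such a mapping is a nearest-centroid classifier over a small feature vector; the feature set and centroid values below are illustrative assumptions, not the paper's model:

```python
# Hedged sketch of speaker role recognition from low-level features.
# Feature vector: (mean turn duration in s, turns per minute, mean pitch in Hz).
# Centroids are toy values chosen for illustration only.
ROLE_CENTROIDS = {
    "anchorman":  (8.0, 6.0, 120.0),   # short, frequent turns
    "journalist": (20.0, 2.0, 140.0),  # long report-style turns
    "other":      (12.0, 1.0, 180.0),  # occasional guest turns
}

def classify_role(features):
    """Assign a speaker's feature vector to the nearest role centroid."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(ROLE_CENTROIDS, key=lambda role: sq_dist(features, ROLE_CENTROIDS[role]))
```

In practice the features would be normalized and the classifier trained, but the sketch shows the core claim: roles are separable from turn-taking statistics and prosody alone, with no lexical information.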