
    Combining Spectral Representations for Large Vocabulary Continuous Speech Recognition

    In this paper we investigate the combination of complementary acoustic feature streams in large vocabulary continuous speech recognition (LVCSR). We have explored the use of acoustic features obtained using a pitch-synchronous analysis, STRAIGHT, in combination with conventional features such as mel frequency cepstral coefficients. Pitch-synchronous acoustic features are of particular interest when used with vocal tract length normalisation (VTLN), which is known to be affected by the fundamental frequency. We have combined these spectral representations directly at the acoustic feature level using heteroscedastic linear discriminant analysis (HLDA) and at the system level using ROVER. We evaluated this approach on three LVCSR tasks: dictated newspaper text (WSJCAM0), conversational telephone speech (CTS), and multiparty meeting transcription. The CTS and meeting transcription experiments were both evaluated using standard NIST test sets and evaluation protocols. Our results indicate that combining conventional and pitch-synchronous acoustic feature sets using HLDA results in a consistent, significant decrease in word error rate across all three tasks. Combining at the system level using ROVER resulted in a further significant decrease in word error rate.
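
    The feature-level combination described above can be pictured as concatenating the two frame-synchronous streams and learning a dimension-reducing discriminative projection. Below is a minimal sketch, assuming hypothetical `mfcc_feats` and `straight_feats` arrays with per-frame state labels; plain LDA from scikit-learn stands in for true HLDA, which additionally models class-dependent covariances by maximum likelihood.

```python
# Sketch of feature-level stream combination plus a discriminative
# projection. LDA is a simplified stand-in for HLDA; the array and
# label names are hypothetical, not from the paper.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def combine_streams(mfcc_feats, straight_feats, labels, out_dim=39):
    """Concatenate two frame-synchronous feature streams and project
    the joint vectors down to out_dim discriminative dimensions.
    out_dim must not exceed min(n_classes - 1, joint dimension)."""
    joint = np.hstack([mfcc_feats, straight_feats])  # (frames, d1 + d2)
    lda = LinearDiscriminantAnalysis(n_components=out_dim)
    return lda.fit_transform(joint, labels)
```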

    Physiologically-Motivated Feature Extraction Methods for Speaker Recognition

    Speaker recognition has received a great deal of attention from the speech community, and significant gains in robustness and accuracy have been obtained over the past decade. However, the features used for identification are still primarily representations of overall spectral characteristics, and thus the models are primarily phonetic in nature, differentiating speakers based on overall pronunciation patterns. This creates difficulties in terms of the amount of enrollment data and the complexity of the models required to cover the phonetic space, especially in tasks such as identification where enrollment and testing data may not have similar phonetic coverage. This dissertation introduces new features based on vocal source characteristics intended to capture physiological information related to the laryngeal excitation energy of a speaker. These features, including RPCC, GLFCC and TPCC, represent unique characteristics of speech production not represented in current state-of-the-art speaker identification systems. The proposed features are evaluated through three experimental paradigms: cross-lingual speaker identification, cross-song-type avian speaker identification, and mono-lingual speaker identification. The experimental results show that the proposed features provide information about speaker characteristics that is significantly different in nature from the phonetically-focused information present in traditional spectral features. The incorporation of the proposed glottal source features offers significant overall improvement to the robustness and accuracy of speaker identification tasks.
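
    One common way to expose laryngeal excitation information is to inverse-filter the speech and take cepstral coefficients of the residual. The sketch below illustrates that generic idea only; the exact RPCC, GLFCC and TPCC definitions are not reproduced here, and the frame length and model orders are illustrative assumptions.

```python
# Generic LP-residual cepstrum, in the spirit of (but not identical to)
# the glottal source features named above. Parameters are illustrative.
import numpy as np
import scipy.signal
import librosa

def residual_cepstrum(frame, lpc_order=16, n_coeffs=13):
    """Inverse-filter a windowed speech frame to approximate the
    glottal excitation, then return low-order real cepstral
    coefficients of that residual."""
    a = librosa.lpc(frame, order=lpc_order)           # LP coefficients, a[0] == 1
    residual = scipy.signal.lfilter(a, [1.0], frame)  # excitation estimate
    spectrum = np.abs(np.fft.rfft(residual)) + 1e-10  # avoid log(0)
    cepstrum = np.fft.irfft(np.log(spectrum))         # real cepstrum
    return cepstrum[:n_coeffs]
```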

    Speech/music discrimination and a study of new parameters and models for a speaker identification system in the context of telephone conference calls

    Deploying automatic speech understanding systems that can operate under real-world conditions requires reproducing certain human abilities. Beyond the ability to understand speech even when it is corrupted by noise, we are able to hold a conversation involving several speakers. This latter ability is tied to the fact that we implicitly identify the speakers. Such speaker characterisation allows us, for example, to hold telephone conversations in conference mode. In addition to recognising vocabulary or identifying the speaker, we are also able to distinguish segments of music (alternating with speech, in the background, etc.) that may appear when one of the parties is placed on hold. Starting from this context, we set out to develop a system capable, on the one hand, of discriminating between speech and music segments and, on the other hand, of identifying the speaker under telephone conditions in conference mode with handset variability. In other words, this thesis addresses two topics in speech processing. The first concerns the search for new parameters to improve the performance of algorithms that identify speakers over the telephone. The second is devoted to proposing new approaches for discriminating between speech, music, and sung music. For speaker identification, we first present a study that characterises the speaker using glottis-synchronous AM-FM parameters extracted at the output of a cochlear filterbank. The goal is to find new parameters that are more robust to noise and to the variability of telephone handsets. As results, the proposed system obtained scores nearly identical to those of the reference system. The best performance was recorded when the system used a parallel architecture composed of two recognisers based respectively on MFCC and AM-FM parameters. In the same framework, we proposed a new modelling technique that takes into account the temporal dependence between the excitation source and the vocal tract. With short test durations, we obtained better performance than the classical approach; however, as the test duration increases, all the proposed systems perform roughly the same. For speech/music discrimination, we proposed two systems. The first uses three parametric models trained respectively on speech, music, and sung music, without applying any normalisation to the parameter vectors; on a 100 ms test duration, it achieved an average recognition rate of 93.77%. The second system requires no training and relies simply on a threshold to perform the classification.
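
    The glottis-synchronous AM-FM parameters mentioned above rest on demodulating each cochlear-filterbank channel into an instantaneous amplitude and an instantaneous frequency. A minimal sketch of that demodulation step follows, assuming the channel has already been band-pass filtered; the gammatone filterbank and the glottal synchronisation are omitted, and names are illustrative.

```python
# AM-FM demodulation of one narrow-band filterbank channel via the
# analytic signal. Filterbank design and glottal alignment omitted.
import numpy as np
from scipy.signal import hilbert

def am_fm_demodulate(channel, fs):
    """Return instantaneous amplitude (AM) and instantaneous
    frequency in Hz (FM) of a narrow-band signal."""
    analytic = hilbert(channel)
    am = np.abs(analytic)                     # envelope
    phase = np.unwrap(np.angle(analytic))
    fm = np.diff(phase) * fs / (2.0 * np.pi)  # one sample shorter than am
    return am, fm
```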

    Glottal-synchronous speech processing

    Glottal-synchronous speech processing is a field of speech science in which the pseudoperiodicity of voiced speech is exploited. Traditionally, speech processing involves segmenting and processing short speech frames of predefined length; this may fail to exploit the inherent periodic structure of voiced speech, which glottal-synchronous speech frames have the potential to harness. Glottal-synchronous frames are often derived from the glottal closure instants (GCIs) and glottal opening instants (GOIs). The SIGMA algorithm was developed for the detection of GCIs and GOIs from the electroglottograph (EGG) signal with a measured accuracy of up to 99.59%. For GCI and GOI detection from speech signals, the YAGA algorithm provides a measured accuracy of up to 99.84%. Multichannel speech-based approaches are shown to be more robust to reverberation than single-channel algorithms. The GCIs are applied to real-world applications including speech dereverberation, where the SNR is improved by up to 5 dB, and prosodic manipulation, where the importance of voicing detection in glottal-synchronous algorithms is demonstrated by subjective testing. The GCIs are further exploited in a new area of data-driven speech modelling, providing new insights into speech production and a set of tools to aid deployment in real-world applications. The technique is shown to be applicable in areas of speech coding, identification and artificial bandwidth extension of telephone speech.
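
    The core idea of glottal-synchronous framing is simple to state: instead of fixed-length windows, each analysis frame is delimited by consecutive GCIs, so it spans a whole number of pitch periods. A small illustrative sketch follows, assuming GCI sample indices have already been obtained (e.g. by SIGMA or YAGA, which are not reimplemented here).

```python
# Variable-length, pitch-period-aligned framing from GCI indices.
# GCI detection itself is assumed to have been done elsewhere.
import numpy as np

def glottal_synchronous_frames(speech, gcis, periods=2):
    """Yield frames spanning `periods` pitch periods, each delimited
    by glottal closure instants given as sample indices."""
    for i in range(len(gcis) - periods):
        start, end = gcis[i], gcis[i + periods]
        yield speech[start:end]
```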

    A Parametric Sound Object Model for Sound Texture Synthesis

    This thesis deals with the analysis and synthesis of sound textures based on parametric sound objects. An overview is provided of the acoustic and perceptual principles of textural acoustic scenes, and technical challenges for analysis and synthesis are considered. Four essential processing steps for sound texture analysis are identified, and existing sound texture systems are reviewed, using the four-step model as a guideline. A theoretical framework for analysis and synthesis is proposed. A parametric sound object synthesis (PSOS) model is introduced, which is able to describe individual recorded sounds through a fixed set of parameters. The model, which applies to harmonic and noisy sounds, is an extension of spectral modeling and uses spline curves to approximate spectral envelopes, as well as the evolution of parameters over time. In contrast to standard spectral modeling techniques, this representation uses the concept of objects instead of concatenated frames, and it provides a direct mapping between sounds of different length. Methods for automatic and manual conversion are shown. An evaluation is presented in which the ability of the model to encode a wide range of different sounds has been examined. Although there are aspects of sounds that the model cannot accurately capture, such as polyphony and certain types of fast modulation, the results indicate that high-quality synthesis can be achieved for many different acoustic phenomena, including instruments and animal vocalizations. In contrast to many other forms of sound encoding, the parametric model facilitates various techniques of machine learning and intelligent processing, including sound clustering and principal component analysis. Strengths and weaknesses of the proposed method are reviewed, and possibilities for future development are discussed.
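
    One ingredient of the PSOS model, approximating a spectral envelope with a spline so that a sound reduces to a small fixed parameter set, can be sketched as follows. The smoothing value and function names are illustrative assumptions, not the thesis's actual settings.

```python
# Compact spline encoding of a log-magnitude spectral envelope and its
# reconstruction. Smoothing and names are illustrative assumptions.
import numpy as np
from scipy.interpolate import splrep, splev

def spline_envelope(freqs, magnitudes, smooth=1.0):
    """Fit a cubic smoothing spline to a log-magnitude spectrum.
    `freqs` must be strictly increasing. Returns the (knots,
    coefficients, degree) tuple: the stored parameter set."""
    return splrep(freqs, np.log(magnitudes + 1e-10), s=smooth)

def render_envelope(tck, freqs):
    """Evaluate the stored spline back to a magnitude envelope."""
    return np.exp(splev(freqs, tck))
```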