
    Physiologically-Motivated Feature Extraction Methods for Speaker Recognition

    Speaker recognition has received a great deal of attention from the speech community, and significant gains in robustness and accuracy have been obtained over the past decade. However, the features used for identification are still primarily representations of overall spectral characteristics, and thus the models are primarily phonetic in nature, differentiating speakers based on overall pronunciation patterns. This creates difficulties in terms of the amount of enrollment data and the complexity of the models required to cover the phonetic space, especially in tasks such as identification, where enrollment and testing data may not have similar phonetic coverage. This dissertation introduces new features based on vocal source characteristics, intended to capture physiological information related to the laryngeal excitation energy of a speaker. These features, including RPCC, GLFCC, and TPCC, represent unique characteristics of speech production not represented in current state-of-the-art speaker identification systems. The proposed features are evaluated through three experimental paradigms: cross-lingual speaker identification, cross-song-type avian speaker identification, and mono-lingual speaker identification. The experimental results show that the proposed features provide information about speaker characteristics that is significantly different in nature from the phonetically focused information present in traditional spectral features. The incorporation of the proposed glottal source features offers significant overall improvement to the robustness and accuracy of speaker identification tasks.
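The exact definitions of RPCC, GLFCC, and TPCC are given in the dissertation itself. As a rough illustration of the general idea of vocal-source features, the sketch below (an assumption for illustration, not the thesis's actual feature definitions) computes cepstral coefficients of the linear-prediction residual, i.e., of an inverse-filtered excitation estimate; all names, orders, and constants are chosen here for the example.

```python
import numpy as np

def lpc(x, order):
    """Autocorrelation-method LPC via the Levinson-Durbin recursion.
    Returns the inverse-filter coefficients [1, a1, ..., ap]."""
    r = np.correlate(x, x, mode="full")[len(x) - 1 : len(x) + order]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + a[1:i] @ r[i - 1:0:-1]
        k = -acc / err                      # reflection coefficient
        a[1:i + 1] = a[1:i + 1] + k * a[i - 1::-1]
        err *= (1.0 - k * k)
    return a

def source_cepstral_features(frame, order=10, n_ceps=12):
    """Cepstral coefficients of the LP residual -- a stand-in for
    source-based features in the spirit of (but not identical to) RPCC."""
    a = lpc(frame, order)
    # Inverse filtering removes the vocal-tract envelope, leaving
    # an estimate of the laryngeal excitation.
    residual = np.convolve(frame, a)[: len(frame)]
    spectrum = np.abs(np.fft.rfft(residual, 512)) + 1e-9
    ceps = np.fft.irfft(np.log(spectrum))   # real cepstrum of the residual
    return ceps[:n_ceps]
```

The point of the design is that the residual carries mostly excitation (source) information, so cepstra computed from it are largely complementary to the spectral-envelope cepstra used in conventional systems.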

    Improving speaker recognition by biometric voice deconstruction

    Person identification, especially in critical environments, has always been a subject of great interest. However, it has gained a new dimension in a world threatened by a new kind of terrorism that uses social networks (e.g., YouTube) to broadcast its message. In this new scenario, classical identification methods (such as fingerprints or face recognition) have had to be replaced by alternative biometric characteristics such as voice, as sometimes this is the only feature available. The present study benefits from the advances achieved in recent years in understanding and modeling voice production. The paper hypothesizes that a gender-dependent characterization of speakers, combined with a set of features derived from the components resulting from the deconstruction of the voice into its glottal source and vocal tract estimates, will enhance recognition rates compared with classical approaches. A general description of the main hypothesis and of the methodology followed to extract the gender-dependent extended biometric parameters is given. Experimental validation is carried out both on a database recorded under highly controlled acoustic conditions and on recordings made over a mobile phone network under non-controlled acoustic conditions.
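As a hedged illustration of what "deconstruction of the voice into its glottal source and vocal tract estimates" can look like, here is a minimal two-stage inverse-filtering sketch loosely modeled on IAIF-style processing. The paper's actual method is not reproduced here; the two-stage structure, model orders, and function names are all assumptions made for the example.

```python
import numpy as np

def lpc(x, order):
    """Autocorrelation-method LPC (Levinson-Durbin); returns [1, a1, ..., ap]."""
    r = np.correlate(x, x, mode="full")[len(x) - 1 : len(x) + order]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + a[1:i] @ r[i - 1:0:-1]
        k = -acc / err
        a[1:i + 1] = a[1:i + 1] + k * a[i - 1::-1]
        err *= (1.0 - k * k)
    return a

def deconstruct_voice(frame, fs=8000):
    """Split a voiced frame into a vocal-tract filter estimate and a
    glottal-source waveform estimate (simplified two-stage inverse filtering)."""
    # Stage 1: a 1st-order model captures the coarse glottal spectral tilt.
    tilt = lpc(frame, 1)
    pre = np.convolve(frame, tilt)[: len(frame)]
    # Stage 2: estimate the vocal tract from the tilt-compensated signal;
    # order ~ fs/1000 + 2 is a common rule of thumb, assumed here.
    vt = lpc(pre, 2 + fs // 1000)
    # Inverse-filter the original frame with the vocal-tract estimate:
    # the result approximates the glottal flow derivative.
    dglottis = np.convolve(frame, vt)[: len(frame)]
    glottal_flow = np.cumsum(dglottis)     # crude integration to glottal flow
    return vt, glottal_flow
```

Biometric parameters (as in the paper's "extended" feature set) would then be computed separately from `vt` (tract-related) and `glottal_flow` (source-related), which is what makes the decomposition useful for recognition.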

    Models and analysis of vocal emissions for biomedical applications: 5th International Workshop: December 13-15, 2007, Firenze, Italy

    The MAVEBA Workshop proceedings, published every two years, collect the scientific papers presented as oral and poster contributions during the conference. The main subjects are the development of theoretical and mechanical models as an aid to the study of the main phonatory dysfunctions, and biomedical engineering methods for the analysis of voice signals and images in support of clinical diagnosis and the classification of vocal pathologies. The Workshop has the sponsorship of Ente Cassa Risparmio di Firenze, COST Action 2103, the Biomedical Signal Processing and Control journal (Elsevier), and the IEEE Biomedical Engineering Soc. Special issues of international journals have been, and will be, published collecting selected papers from the conference.

    A Comparison Between STRAIGHT, Glottal, and Sinusoidal Vocoding in Statistical Parametric Speech Synthesis

    Speech is a fundamental method of human communication that allows conveying information between people. Even though the linguistic content is commonly regarded as the main information in speech, the signal contains a richness of other information, such as prosodic cues that shape the intended meaning of a sentence. This information is largely generated by the quasi-periodic glottal excitation: the acoustic speech excitation airflow, originating from the lungs, that makes the vocal folds oscillate in the production of voiced speech. By regulating the sub-glottal pressure and the tension of the vocal folds, humans learn to affect the characteristics of the glottal excitation, for example to signal the emotional state of the speaker. Glottal inverse filtering (GIF) is an estimation method for the glottal excitation of a recorded speech signal. Various cues about the speech signal, such as the mode of phonation, can be detected and analyzed from an estimate of the glottal flow, both instantaneously and as a function of time. Aside from its use in fundamental speech research, such as phonetics, recent advances in GIF and machine learning enable a wider variety of GIF applications, such as emotional speech synthesis and the detection of paralinguistic information. However, GIF is a difficult inverse problem in which the target algorithm output is generally unattainable with direct measurements. Thus, the algorithms and their evaluation need to rely on prior assumptions about the properties of the speech signal. A common thread in most of the studies in this thesis is the estimation of the vocal tract transfer function (the key problem in GIF) by temporally weighting the optimization criterion in GIF so that the effect of the main excitation peak is attenuated.
This thesis studies GIF from various perspectives, including the development of two new GIF methods that improve performance over state-of-the-art methods, and furthers basic research in the automated estimation of the glottal excitation. The estimation of the GIF-based vocal tract transfer function for formant tracking and perceptually weighted speech envelope estimation is also studied. The central speech technology application of GIF addressed in the thesis is the use of GIF-based spectral envelope models and glottal excitation waveforms as target training data for the generative neural network models used in statistical parametric speech synthesis. The obtained results show that even though the presented studies improve on the previous methodology for all voice types, GIF-based speech processing continues to mainly benefit male voices in speech synthesis applications.
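The temporal weighting idea described in the abstract above, attenuating the optimization criterion so that the main excitation peak contributes less, can be sketched as a weighted linear prediction problem. This is a minimal illustration under stated assumptions: covariance-style normal equations and a hand-made weight function that simply dips near given excitation instants. The thesis's actual methods and weight designs are not reproduced here.

```python
import numpy as np

def attenuating_weights(n, peaks, width=8, floor=0.1):
    """Weight function equal to 1 except in a window of +-`width` samples
    around each excitation instant, where it drops to `floor`
    (an illustrative design, not the thesis's)."""
    w = np.ones(n)
    for p in peaks:
        w[max(0, p - width) : min(n, p + width)] = floor
    return w

def weighted_lp(x, order, w):
    """Predictor minimizing sum_n w[n] * e[n]^2, where e is the prediction
    error; solved via weighted covariance-style normal equations."""
    N = len(x)
    # Lagged data matrix: column k holds x[n - (k + 1)] for n = order..N-1.
    X = np.zeros((N - order, order))
    for k in range(order):
        X[:, k] = x[order - 1 - k : N - 1 - k]
    y, ww = x[order:], w[order:]
    A = (X * ww[:, None]).T @ X          # weighted normal-equation matrix
    b = (X * ww[:, None]).T @ y
    coeffs = np.linalg.solve(A, b)
    return np.concatenate(([1.0], -coeffs))  # inverse-filter form [1, -a1, ...]
```

Down-weighting samples near the main excitation peak keeps the large glottal-closure residual from biasing the vocal-tract estimate, which is the intuition behind the thesis's weighting approach.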

    Models and Analysis of Vocal Emissions for Biomedical Applications

    The MAVEBA Workshop proceedings, published every two years, collect the scientific papers presented as oral and poster contributions during the conference. The main subjects are the development of theoretical and mechanical models as an aid to the study of the main phonatory dysfunctions, and biomedical engineering methods for the analysis of voice signals and images in support of clinical diagnosis and the classification of vocal pathologies.

    Models and Analysis of Vocal Emissions for Biomedical Applications

    The Models and Analysis of Vocal Emissions with Biomedical Applications (MAVEBA) workshop came into being in 1999 from a keenly felt need to share know-how, objectives, and results among areas that until then had seemed quite distinct, such as bioengineering, medicine, and singing. MAVEBA deals with all aspects of the study of the human voice, with applications ranging from the neonate to the adult and elderly. Over the years, the initial topics have grown and spread into other areas of research, such as occupational voice disorders, neurology, rehabilitation, and image and video analysis. MAVEBA takes place every two years in Firenze, Italy.