141 research outputs found

    Experimental study of nasality with particular reference to Brazilian Portuguese


    Listener Tolerance of Nasality: A Dialectal and Comparative Perspective

    1st Prize, "Language in the Mind" Category, 23rd Denman Undergraduate Research Forum. Perception of nasality, the perceptual correlate of the degree of nasal resonance (NR) in speech, may be affected by continuous exposure to high levels of NR through oral-aural feedback in speakers with higher-than-average NR whose speech is not necessarily pathologically hypernasal. Such heightened NR may induce a listener tolerance of nasality for these speakers. Academic Majors: Speech and Hearing Science; Linguistics.

    Nasality in automatic speaker verification


    Histograms of Points, Orientations, and Dynamics of Orientations Features for Hindi Online Handwritten Character Recognition

    A set of features independent of character stroke direction and order variations is proposed for online handwritten character recognition. A method is developed that spatially maps features such as the coordinates of points, the orientations of strokes at those points, and the dynamics of those orientations as a function of the points' coordinate values, and computes histograms of these features over different regions of the spatial map. Features used in other studies to train character recognition classifiers, such as spatio-temporal features, the discrete Fourier transform, the discrete cosine transform, the discrete wavelet transform, spatial features, and histograms of oriented gradients, are considered for comparison. Support vector machines (SVMs) are chosen as the classifier for comparing classification performance across the different feature sets. The character datasets used for training and testing consist of online handwritten samples of 96 different Hindi characters, with 12832 samples in the training dataset and 2821 in the testing dataset. SVM classifiers trained with the proposed features achieve the highest classification accuracy, 92.9%, compared with SVM classifiers trained with the other features and tested on the same testing dataset. The proposed features therefore have better character-discriminative capability than the other features considered for comparison. Comment: 21 pages, 12 JPG figures.
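
    As a rough illustration of the histogram-style features and SVM classifier described above, the following Python sketch builds coarse position and orientation histograms from an online handwriting sample and trains an SVM. The grid size, bin counts, and SVM settings are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def histogram_features(strokes, grid=4, n_orient_bins=8):
    """strokes: list of (N_i, 2) numpy arrays of pen-tip coordinates for one character."""
    pts = np.vstack(strokes).astype(float)
    # Normalize translation and scale with a single factor so stroke angles are preserved.
    mins = pts.min(axis=0)
    scale = (pts.max(axis=0) - mins).max() + 1e-9
    norm_strokes = [(np.asarray(s, dtype=float) - mins) / scale for s in strokes]
    pts = np.vstack(norm_strokes)
    # Histogram of point positions over a grid x grid spatial map.
    pos_hist, _, _ = np.histogram2d(pts[:, 0], pts[:, 1], bins=grid,
                                    range=[[0, 1], [0, 1]])
    # Histogram of stroke orientations (angles between consecutive points).
    diffs = np.vstack([np.diff(s, axis=0) for s in norm_strokes if len(s) > 1])
    angles = np.arctan2(diffs[:, 1], diffs[:, 0])
    orient_hist, _ = np.histogram(angles, bins=n_orient_bins, range=(-np.pi, np.pi))
    feat = np.concatenate([pos_hist.ravel(), orient_hist.astype(float)])
    return feat / (np.linalg.norm(feat) + 1e-9)

def train_svm(samples, labels):
    """samples: list of characters, each a list of stroke arrays; labels: class ids."""
    feats = np.stack([histogram_features(s) for s in samples])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
    return clf.fit(feats, labels)
```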

    A comparative analysis of Chakma and English Vowels

    This thesis is submitted in partial fulfillment of the requirements for the degree of Bachelor of Arts in English, 2013. Cataloged from PDF version of thesis. Includes bibliographical references (page 37). This study presents a comparative analysis of English and Chakma vowel phonemes, through which the distinctive features of the two languages emerge. Its main concern is the similarities and dissimilarities between Chakma and English vowels. Chakma vowels are compared with English vowels in terms of vowel description, diphthongs, phonemic contrasts, vowel length, nasalization, and vowel stress. The vowels are described by their articulatory movements, their positions within words, and so on. Additionally, phonemic contrasts are illustrated through vowel contrasts in initial, medial, and final position. In the last part of the paper, an acoustic analysis is presented to chart the positions of the Chakma vowels during articulation. S. M. Mohibul Hasan, B.A. in English.
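
    The acoustic analysis mentioned above typically charts vowels by their first two formants (F1/F2). A minimal Python sketch of such a measurement using LPC via librosa is given below; the file name, sampling rate, and LPC order are illustrative assumptions, and this is not the analysis procedure used in the thesis.

```python
import numpy as np
import librosa

def first_formants(wav_path, order=12, n_formants=2):
    """Estimate approximate formant frequencies (Hz) of a recorded vowel segment."""
    y, sr = librosa.load(wav_path, sr=16000)   # load the vowel recording
    y = y * np.hamming(len(y))                 # window the segment
    a = librosa.lpc(y, order=order)            # LPC polynomial coefficients
    roots = np.roots(a)
    roots = roots[np.imag(roots) > 0]          # keep one root per conjugate pair
    freqs = np.sort(np.angle(roots) * sr / (2 * np.pi))
    freqs = freqs[freqs > 90]                  # discard near-DC roots
    return freqs[:n_formants]                  # approximate F1 and F2

# f1, f2 = first_formants("chakma_vowel_a.wav")  # hypothetical file; plot F1 vs. F2 to place the vowel
```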

    Glottal-Source Spectral Biometry for Voice Characterization

    The biometric signature derived from the estimation of the power spectral density singularities of a speaker's glottal source is described in the present work. It consists of the collection of peak-trough profiles found in the spectral density, as related to the biomechanics of the vocal folds. Samples of parameter estimates from a set of 100 normophonic (pathology-free) speakers are produced. Mapping the speakers' samples onto a manifold defined by Principal Component Analysis and clustering them with k-means on the most relevant principal components shows a separation of speakers by gender. This means that the proposed signature conveys relevant speaker meta-information, which may be useful in security and forensic applications where contextual side information is considered relevant.
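
    A minimal sketch of the projection-and-clustering step described above (PCA followed by k-means on the most relevant components), assuming a speakers-by-parameters matrix of glottal-source spectral-profile features; the variable names and the choice of two components and two clusters are illustrative assumptions, not the paper's setup.

```python
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def cluster_speakers(X, n_components=2, n_clusters=2):
    """X: array of shape (n_speakers, n_parameters) of glottal-source features."""
    Xs = StandardScaler().fit_transform(X)                    # standardize parameters
    Z = PCA(n_components=n_components).fit_transform(Xs)      # most relevant PCs
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=0).fit_predict(Z)            # cluster in PC space
    return Z, labels  # compare labels against speaker gender to check the separation
```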

    Models and Analysis of Vocal Emissions for Biomedical Applications

    The proceedings of the MAVEBA Workshop, held on a biennial basis, collect the scientific papers presented as oral and poster contributions during the conference. The main subjects are: the development of theoretical and mechanical models as an aid to the study of the main phonatory dysfunctions, as well as biomedical engineering methods for the analysis of voice signals and images as a support to the clinical diagnosis and classification of vocal pathologies.

    Glottal flow characteristics in vowels produced by speakers with heart failure

    Heart failure (HF) is one of the most life-threatening diseases globally. HF is an under-diagnosed condition, and more screening tools are needed to detect it. A few recent studies have suggested that HF also affects the functioning of the speech production mechanism by causing edema in the vocal folds and by impairing lung function. It has not yet been studied whether these possible effects of HF on the speech production mechanism are large enough to cause acoustically measurable differences that distinguish speech produced by HF patients from that produced by healthy speakers. Therefore, the goal of the present study was to compare speech production between HF patients and healthy controls by focusing on the excitation signal generated at the level of the vocal folds, the glottal flow. The glottal flow was computed from speech using the quasi-closed phase glottal inverse filtering method, and the estimated flow was parameterized with 12 glottal parameters. The sound pressure level (SPL) was measured from speech as an additional parameter. The statistical analyses conducted on the parameters indicated that most of the glottal parameters and SPL were significantly different between the HF patients and healthy controls. The results showed that the HF patients generally produced a more rounded glottal pulse and a lower SPL than the healthy controls, indicating incomplete glottal closure and inappropriate leakage of air through the glottis. The results observed in this preliminary study indicate that glottal features are capable of distinguishing speakers with HF from healthy controls. Therefore, the study suggests that glottal features constitute a potential feature extraction approach which should be taken into account in future large-scale investigations of the automatic detection of HF from speech. Peer reviewed.
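
    The group comparison described above can be sketched as follows: each glottal parameter (and SPL) is tested for a difference between HF patients and healthy controls. This is only an illustrative sketch; the parameter names and the use of Mann-Whitney U tests are assumptions, not necessarily the statistics applied in the study.

```python
from scipy.stats import mannwhitneyu

def compare_groups(hf_params, control_params, names, alpha=0.05):
    """hf_params, control_params: arrays of shape (n_speakers, n_parameters);
    names: labels for the parameters (e.g. glottal parameters plus SPL)."""
    results = {}
    for j, name in enumerate(names):
        stat, p = mannwhitneyu(hf_params[:, j], control_params[:, j],
                               alternative="two-sided")
        results[name] = (p, p < alpha)   # p-value and significance flag per parameter
    return results
```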

    Analysis, Vocal-tract modeling, and Automatic Detection of Vowel Nasalization

    The aim of this work is to clearly understand the salient features of nasalization and the sources of acoustic variability in nasalized vowels, and to suggest Acoustic Parameters (APs) for the automatic detection of vowel nasalization based on this knowledge. Possible applications in automatic speech recognition, speech enhancement, speaker recognition and clinical assessment of nasal speech quality have made the detection of vowel nasalization an important problem to study. Although several researchers in the past have found a number of acoustical and perceptual correlates of nasality, automatically extractable APs that work well in a speaker-independent manner are yet to be found. In this study, vocal tract area functions for one American English speaker, recorded using Magnetic Resonance Imaging, were used to simulate and analyze the acoustics of vowel nasalization, and to understand the variability due to velar coupling area, asymmetry of nasal passages, and the paranasal sinuses. Based on this understanding and an extensive survey of past literature, several automatically extractable APs were proposed to distinguish between oral and nasalized vowels. Nine APs with the best discrimination capability were selected from this set through Analysis of Variance. The performance of these APs was tested on several databases with different sampling rates, recording conditions and languages. Accuracies of 96.28%, 77.90% and 69.58% were obtained by using these APs on StoryDB, TIMIT and WS96/97 databases, respectively, in a Support Vector Machine classifier framework. To my knowledge, these results are the best anyone has achieved on this task. These APs were also tested in a cross-language task to distinguish between oral and nasalized vowels in Hindi. An overall accuracy of 63.72% was obtained on this task. Further, the accuracy for phonemically nasalized vowels, 73.40%, was found to be much higher than the accuracy of 53.48% for coarticulatorily nasalized vowels. This result suggests not only that the same APs can be used to capture both phonemic and coarticulatory nasalization, but also that the duration of nasalization is much longer when vowels are phonemically nasalized. This language and category independence is very encouraging since it shows that these APs are really capturing relevant information.
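
    A minimal sketch of the classification pipeline described above: rank candidate acoustic parameters with a one-way ANOVA F-test, keep the nine best, and classify oral versus nasalized vowels with an SVM. The feature extraction itself is not shown; X is assumed to be a matrix of candidate APs per vowel token and y the oral/nasalized labels, and the SVM settings and cross-validation scheme are assumptions rather than the study's exact setup.

```python
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

def nasalization_classifier(X, y, k_best=9):
    """X: (n_tokens, n_candidate_APs) feature matrix; y: 0 = oral, 1 = nasalized."""
    clf = make_pipeline(StandardScaler(),
                        SelectKBest(f_classif, k=k_best),  # ANOVA-based AP selection
                        SVC(kernel="rbf", C=1.0))          # SVM on the selected APs
    acc = cross_val_score(clf, X, y, cv=5).mean()          # 5-fold CV accuracy
    return clf.fit(X, y), acc
```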

    Speech Communication

    Contains table of contents for Part IV, table of contents for Section 1, an introduction, reports on seven research projects, and a list of publications. Funding: C.J. Lebel Fellowship; Dennis Klatt Memorial Fund; National Institutes of Health Grants T32-DC00005, R01-DC00075, F32-DC00015, R01-DC00266, P01-DC00361, and R01-DC00776; National Science Foundation Grants IRI 89-10561, IRI 88-05680, and INT 90-2471