
    Pitch-scaled estimation of simultaneous voiced and turbulence-noise components in speech


    Models and Analysis of Vocal Emissions for Biomedical Applications

The Models and Analysis of Vocal Emissions for Biomedical Applications (MAVEBA) workshop came into being in 1999 from the keenly felt need to share know-how, objectives and results between areas that had until then seemed quite distinct, such as bioengineering, medicine and singing. MAVEBA deals with all aspects of the study of the human voice, with applications ranging from the neonate to the adult and elderly. Over the years the initial issues have grown and spread into other areas of research, such as occupational voice disorders, neurology, rehabilitation, and image and video analysis. MAVEBA takes place every two years, always in Firenze, Italy.

    Automatic prosodic analysis for computer aided pronunciation teaching

Correct pronunciation of spoken language requires the appropriate modulation of acoustic characteristics of speech to convey linguistic information at a suprasegmental level. Such prosodic modulation is a key aspect of spoken language and is an important component of foreign language learning, for purposes of both comprehension and intelligibility. Computer aided pronunciation teaching involves automatic analysis of the speech of a non-native talker in order to provide a diagnosis of the learner's performance in comparison with the speech of a native talker. This thesis describes research undertaken to automatically analyse the prosodic aspects of speech for computer aided pronunciation teaching. It is necessary to describe the suprasegmental composition of a learner's speech in order to characterise significant deviations from a native-like prosody, and to offer some kind of corrective diagnosis. Phonological theories of prosody aim to describe the suprasegmental composition of speech…
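As a flavour of the low-level measurements that automatic prosodic analysis rests on, here is a minimal sketch (not the system described in the abstract) of fundamental-frequency estimation for a single voiced frame using the autocorrelation method; the sampling rate, frame length and search band are illustrative assumptions:

```python
import numpy as np

def f0_autocorr(frame, fs, fmin=60, fmax=400):
    """Crude F0 estimate for one voiced frame from the autocorrelation peak,
    searched over lags corresponding to the [fmin, fmax] band."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]  # lags >= 0
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = lo + int(np.argmax(r[lo:hi]))
    return fs / lag

fs = 16000
t = np.arange(int(0.04 * fs)) / fs
frame = np.sin(2 * np.pi * 120 * t)        # 40 ms of a 120 Hz tone
print(round(f0_autocorr(frame, fs), 1))    # close to 120 Hz
```

A real prosodic front end would add voicing detection and smoothing of the F0 contour across frames; this only shows the per-frame measurement.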

    Making accurate formant measurements: an empirical investigation of the influence of the measurement tool, analysis settings and speaker on formant measurements

The aim of this thesis is to provide guidance and information that will assist forensic speech scientists, and phoneticians generally, in making more accurate formant measurements, using commonly available speech analysis tools. Formants are an important speech feature that is often examined in forensic casework, and is used widely in many other areas within the field of phonetics. However, the performance of software currently used by analysts has not been subject to detailed investigation. This thesis reports on a series of experiments that examine the influence that the analysis tools, analysis settings and speakers have on formant measurements. The influence of these three factors was assessed by examining formant measurement errors and their behaviour. This was done using both synthetic and real speech. The synthetic speech was generated with known formant values so that the measurement errors could be calculated precisely. To investigate the influence of different speakers on measurement performance, synthetic speakers were created with different third formant structures and with different glottal source signals. These speakers’ synthetic vowels were analysed using Praat’s normal formant measuring tool across a range of LPC orders. The real speech was from a subset of 186 speakers from the TIMIT corpus. The measurements from these speakers were compared with a set of hand-corrected reference formant values to establish the performance of four measurement tools across a range of analysis parameters and measurement strategies. The analysis of the measurement errors explored the relationships between the analysis tools, the analysis parameters and the speakers, and also examined how the errors varied over the vowel space. LPC order was found to have the greatest influence on the magnitude of the errors, and their overall behaviour was closely associated with the underlying measurement process used by the tools.
The performance of the formant trackers tended to be better than the simple Praat measuring tool, and allowing the LPC order to vary across tokens improved the performance for all tools. The performance was found to differ across speakers, and for each real speaker, the best performance was obtained when the measurements were made with a range of LPC orders, rather than being restricted to just one. The most significant guidance that arises from the results is that analysts should have an understanding of the basis of LPC analysis and know how it is applied to obtain formant measurements in the software that they use. They should also understand the influence of LPC order and the other analysis parameters concerning formant tracking. This will enable them to select the most appropriate settings and avoid making unreliable measurements.
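As background to the LPC-based measurement process the abstract discusses, here is a minimal sketch of how formant frequencies can be read off the roots of an LPC polynomial; the synthetic vowel, its resonance frequencies and the LPC order are illustrative assumptions, not the thesis's actual experimental setup:

```python
import numpy as np

def lpc_coeffs(x, order):
    """Autocorrelation-method LPC: solve the normal equations for the predictor."""
    r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]  # lags 0..order
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])
    return np.concatenate(([1.0], -a))        # A(z) = 1 - sum a_k z^-k

def formants_from_lpc(a, fs):
    """Formant frequency estimates (Hz) from the angles of the LPC poles."""
    roots = np.roots(a)
    roots = roots[np.imag(roots) > 0]         # keep one of each conjugate pair
    freqs = np.angle(roots) * fs / (2 * np.pi)
    return np.sort(freqs[freqs > 90])         # discard near-DC roots

fs = 8000
t = np.arange(int(0.05 * fs)) / fs
# Hypothetical synthetic vowel: damped resonances near 700 Hz and 1200 Hz
x = (np.exp(-60 * t) * np.sin(2 * np.pi * 700 * t)
     + 0.5 * np.exp(-80 * t) * np.sin(2 * np.pi * 1200 * t))
f = formants_from_lpc(lpc_coeffs(x * np.hamming(len(x)), 8), fs)
print(np.round(f).astype(int))
```

The LPC order matters in exactly the way the thesis describes: too low an order and a pole has to cover two resonances, too high and spurious poles appear between the true formants.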

Comparing acoustic glottal feature extraction methods with simultaneously recorded high-speed video features for clinically obtained data

Accurate methods for glottal feature extraction include the use of high-speed video imaging (HSVI). There have been previous attempts to extract these features from the acoustic recording alone; however, none of these methods compare their results with an objective method such as HSVI. This thesis tests these acoustic methods against a large, diverse population of 46 subjects. Two previously studied acoustic methods, as well as one introduced in this thesis, were compared against two video methods, area and displacement, for open quotient (OQ) estimation. The area comparison proved to be somewhat ambiguous and challenging due to thresholding effects. The displacement comparison, which is based on glottal edge tracking, proved to be a more robust comparison method than the area. The first acoustic method's OQ estimate had a relatively small average error of 8.90% and the second method had a relatively large average error of -59.05% compared to the displacement OQ. The newly proposed method had a relatively small error of -13.75% when compared to the displacement OQ. Although the acoustic methods showed relatively high error, they had some success, and they may be utilized to augment the features collected by HSVI for more accurate glottal feature estimation.
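The open quotient itself is a simple ratio once glottal opening and closing instants are known: the fraction of each pitch period for which the glottis is open. A minimal sketch, using hypothetical GOI/GCI timestamps rather than any of the extraction methods compared above:

```python
def open_quotient(gcis, gois):
    """Per-cycle open quotient: open-phase duration over pitch period.
    A cycle runs from one glottal closure instant (GCI) to the next; the
    glottis reopens at the glottal opening instant (GOI) inside the cycle."""
    oq = []
    for i in range(len(gcis) - 1):
        period = gcis[i + 1] - gcis[i]
        inside = [g for g in gois if gcis[i] < g < gcis[i + 1]]
        if inside:
            oq.append((gcis[i + 1] - inside[0]) / period)
    return oq

# Hypothetical marks (seconds) for a steady 100 Hz voice opening mid-cycle
gcis = [0.000, 0.010, 0.020, 0.030]
gois = [0.005, 0.015, 0.025]
print(open_quotient(gcis, gois))   # each cycle gives an OQ of about 0.5
```

The hard part, and the subject of the thesis, is obtaining reliable GCI/GOI (or equivalent open-phase) estimates from the acoustic or video signal in the first place; the ratio itself is trivial.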

    Experimental Advancements in Speech Signal Reduction Techniques

Electrical Engineering

    Glottal-synchronous speech processing

Glottal-synchronous speech processing is a field of speech science where the pseudoperiodicity of voiced speech is exploited. Traditionally, speech processing involves segmenting and processing short speech frames of predefined length; this may fail to exploit the inherent periodic structure of voiced speech which glottal-synchronous speech frames have the potential to harness. Glottal-synchronous frames are often derived from the glottal closure instants (GCIs) and glottal opening instants (GOIs). The SIGMA algorithm was developed for the detection of GCIs and GOIs from the electroglottograph signal with a measured accuracy of up to 99.59%. For GCI and GOI detection from speech signals, the YAGA algorithm provides a measured accuracy of up to 99.84%. Multichannel speech-based approaches are shown to be more robust to reverberation than single-channel algorithms. The GCIs are applied to real-world applications including speech dereverberation, where SNR is improved by up to 5 dB, and to prosodic manipulation, where the importance of voicing detection in glottal-synchronous algorithms is demonstrated by subjective testing. The GCIs are further exploited in a new area of data-driven speech modelling, providing new insights into speech production and a set of tools to aid deployment into real-world applications. The technique is shown to be applicable in areas of speech coding, identification and artificial bandwidth extension of telephone speech.
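A common way to exploit detected GCIs, sketched below under illustrative assumptions (a synthetic 100 Hz signal and evenly spaced marks, not the output of SIGMA or YAGA), is to cut two-period frames centred on each closure instant, the framing used in PSOLA-style glottal-synchronous processing:

```python
import numpy as np

def glottal_synchronous_frames(x, gcis):
    """Cut frames spanning two pitch periods, centred on each interior GCI,
    so frame boundaries follow the voicing cycle rather than a fixed hop."""
    return [x[gcis[i - 1]:gcis[i + 1]] for i in range(1, len(gcis) - 1)]

fs = 8000
x = np.sin(2 * np.pi * 100 * np.arange(fs // 10) / fs)   # 100 ms of 100 Hz voicing
gcis = list(range(0, len(x), fs // 100))                 # hypothetical GCIs every 10 ms
frames = glottal_synchronous_frames(x, gcis)
print(len(frames), len(frames[0]))   # → 8 160
```

Because each frame covers exactly two periods, frame length adapts automatically when the pitch changes, which is what fixed-length segmentation fails to do.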
    • 

    corecore