4 research outputs found

    A structured speech model with continuous hidden dynamics and prediction-residual training for tracking vocal tract resonances

    A novel approach is developed for efficient and accurate tracking of vocal tract resonances, which are natural frequencies of the resonator from larynx to lips, in fluent speech. The tracking algorithm is based on a version of the structured speech model consisting of continuous-valued hidden dynamics and a piecewise-linearized prediction function from resonance frequencies and bandwidths to LPC cepstra. We present details of the piecewise linearization design process and an adaptive training technique for the parameters that characterize the prediction residuals. An iterative tracking algorithm is described and evaluated that embeds both the prediction-residual training and the piecewise linearization design in an adaptive Kalman filtering framework. Experiments on tracking vocal tract resonances in Switchboard speech data demonstrate high accuracy in the results, as well as the effectiveness of residual training embedded in the algorithm. Our approach differs from traditional formant trackers in that it provides meaningful results even during consonantal closures when the supra-laryngeal source may cause no spectral prominences in speech acoustics.
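    The core of the tracking framework described above is a predict/update cycle over a hidden resonance state. The following is a minimal sketch of one such Kalman filtering step, assuming a state of resonance frequencies and a linearized observation matrix `H` standing in for the paper's piecewise-linearized cepstral mapping; all matrices and values here are hypothetical illustrations, not the paper's actual model.

    ```python
    import numpy as np

    def kalman_step(x, P, z, F, Q, H, R):
        """One predict/update cycle: x is the hidden resonance state,
        z the observed (cepstral) vector, F/Q the hidden dynamics,
        H/R the linearized observation model and residual covariance."""
        # Predict the hidden dynamics forward one frame
        x_pred = F @ x
        P_pred = F @ P @ F.T + Q
        # Update with the linearized observation of the current frame
        S = H @ P_pred @ H.T + R              # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
        x_new = x_pred + K @ (z - H @ x_pred)
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new
    ```

    In the paper's setting this step would be iterated per frame, with `H` re-selected from the piecewise linearization and `R` adapted by the residual training; here it only illustrates the filtering skeleton.
    
    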

    ARTICULATORY INFORMATION FOR ROBUST SPEECH RECOGNITION

    Current Automatic Speech Recognition (ASR) systems fail to perform nearly as well as humans due to their lack of robustness against speech variability and noise contamination. The goal of this dissertation is to investigate these critical robustness issues, put forth different ways to address them, and finally present an ASR architecture based upon these robustness criteria. Acoustic variations adversely affect the performance of current phone-based ASR systems, in which speech is modeled as `beads-on-a-string', where the beads are the individual phone units. While phone units are distinct in the cognitive domain, they vary in the physical domain, and their variation occurs due to a combination of factors including speech style, speaking rate, etc.; a phenomenon commonly known as `coarticulation'. Traditional ASR systems address such coarticulatory variations by using contextualized phone units such as triphones. Articulatory phonology accounts for coarticulatory variations by modeling speech as a constellation of constricting actions known as articulatory gestures. In such a framework, speech variations such as coarticulation and lenition are accounted for by gestural overlap in time and gestural reduction in space. To realize a gesture-based ASR system, articulatory gestures have to be inferred from the acoustic signal. At the initial stage of this research, a study was performed using synthetically generated speech to obtain a proof of concept that articulatory gestures can indeed be recognized from the speech signal. It was observed that having vocal tract constriction trajectories (TVs) as an intermediate representation facilitated the gesture recognition task from the speech signal. Presently no natural speech database contains articulatory gesture annotation; hence an automated iterative time-warping architecture is proposed that can annotate any natural speech database with articulatory gestures and TVs. 
Two natural speech databases, X-ray microbeam and Aurora-2, were annotated, where the former was used to train a TV estimator and the latter was used to train a Dynamic Bayesian Network (DBN) based ASR architecture. The DBN architecture used two sets of observations: (a) acoustic features in the form of mel-frequency cepstral coefficients (MFCCs) and (b) TVs (estimated from the acoustic speech signal). In this setup the articulatory gestures were modeled as hidden random variables, hence eliminating the necessity for explicit gesture recognition. Word recognition results using the DBN architecture indicate that articulatory representations not only can help to account for coarticulatory variations but can also significantly improve the noise robustness of the ASR system.
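The two-stream observation idea above can be illustrated with a toy forward pass in which a hidden state emits both an acoustic (MFCC-like) feature and an articulatory (TV-like) feature, and their log-likelihoods are combined per frame. Everything here, the Gaussian emissions, the stream weight `w`, and the two-state model, is a hypothetical sketch, not the dissertation's actual DBN.

```python
import numpy as np

def log_gauss(x, mu, var):
    """Log density of a scalar Gaussian observation."""
    return -0.5 * (np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

def forward(trans, mfcc_obs, tv_obs, mfcc_params, tv_params, w=0.5):
    """Log-domain forward pass with a factored observation model:
    log p(obs|state) = w*log p(mfcc|state) + (1-w)*log p(tv|state)."""
    n_states = trans.shape[0]
    alpha = np.full(n_states, -np.log(n_states))  # uniform log prior
    for m, t in zip(mfcc_obs, tv_obs):
        obs_ll = np.array([
            w * log_gauss(m, *mfcc_params[s])
            + (1 - w) * log_gauss(t, *tv_params[s])
            for s in range(n_states)
        ])
        # Standard forward recursion in the log domain
        alpha = np.array([
            np.logaddexp.reduce(alpha + np.log(trans[:, j]))
            for j in range(n_states)
        ]) + obs_ll
    return np.logaddexp.reduce(alpha)  # total sequence log-likelihood
```

A sequence whose acoustic and articulatory streams agree on the hidden state should score higher than one whose streams conflict, which is one intuition for why the added TV stream can stabilize recognition under noise.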

    Making accurate formant measurements: an empirical investigation of the influence of the measurement tool, analysis settings and speaker on formant measurements

    The aim of this thesis is to provide guidance and information that will assist forensic speech scientists, and phoneticians generally, in making more accurate formant measurements using commonly available speech analysis tools. Formants are an important speech feature that is often examined in forensic casework and used widely in many other areas within the field of phonetics. However, the performance of software currently used by analysts has not been subject to detailed investigation. This thesis reports on a series of experiments that examine the influence that the analysis tools, analysis settings and speakers have on formant measurements. The influence of these three factors was assessed by examining formant measurement errors and their behaviour. This was done using both synthetic and real speech. The synthetic speech was generated with known formant values so that the measurement errors could be calculated precisely. To investigate the influence of different speakers on measurement performance, synthetic speakers were created with different third formant structures and with different glottal source signals. These speakers’ synthetic vowels were analysed using Praat’s normal formant measuring tool across a range of LPC orders. The real speech was from a subset of 186 speakers from the TIMIT corpus. The measurements from these speakers were compared with a set of hand-corrected reference formant values to establish the performance of four measurement tools across a range of analysis parameters and measurement strategies. The analysis of the measurement errors explored the relationships between the analysis tools, the analysis parameters and the speakers, and also examined how the errors varied over the vowel space. LPC order was found to have the greatest influence on the magnitude of the errors, and their overall behaviour was closely associated with the underlying measurement process used by the tools. 
The performance of the formant trackers tended to be better than that of the simple Praat measuring tool, and allowing the LPC order to vary across tokens improved the performance of all tools. The performance was found to differ across speakers, and for each real speaker, the best performance was obtained when the measurements were made with a range of LPC orders, rather than being restricted to just one. The most significant guidance that arises from the results is that analysts should have an understanding of the basis of LPC analysis and know how it is applied to obtain formant measurements in the software that they use. They should also understand the influence of LPC order and the other analysis parameters concerning formant tracking. This will enable them to select the most appropriate settings and avoid making unreliable measurements.
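The LPC-based measurement process the thesis examines ultimately reduces to finding the roots of an LPC polynomial and converting them to frequencies and bandwidths. The following is a small numerical sketch of that root-to-formant step, assuming a hypothetical 10 kHz sample rate and toy resonance values; real tools such as Praat add windowing, pre-emphasis, and LPC estimation (e.g. the Burg method) on top of this.

```python
import numpy as np

fs = 10000.0  # sample rate in Hz (hypothetical)

def poles_for(freq, bw):
    """Complex-conjugate pole pair for a resonance at freq (Hz)
    with bandwidth bw (Hz)."""
    r = np.exp(-np.pi * bw / fs)          # pole radius sets bandwidth
    theta = 2 * np.pi * freq / fs         # pole angle sets frequency
    return [r * np.exp(1j * theta), r * np.exp(-1j * theta)]

def roots_to_formants(roots):
    """Convert LPC polynomial roots to (frequency, bandwidth) pairs."""
    formants = []
    for z in roots:
        if z.imag > 0:  # keep one root of each conjugate pair
            f = np.angle(z) * fs / (2 * np.pi)
            b = -np.log(np.abs(z)) * fs / np.pi
            formants.append((f, b))
    return sorted(formants)

# Build an LPC denominator from two known resonances, then recover them
a = np.poly(poles_for(500.0, 60.0) + poles_for(1500.0, 90.0))
print(roots_to_formants(np.roots(a)))  # approximately [(500, 60), (1500, 90)]
```

The sensitivity to LPC order that the thesis reports is visible in this picture: the order fixes how many pole pairs the polynomial can hold, so too low an order merges neighbouring resonances and too high an order introduces spurious ones.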