Dysarthric speech analysis and automatic recognition using phase based representations
Dysarthria is a neurological speech impairment that usually results in the loss of motor speech control due to muscular atrophy and poor coordination of the articulators. Dysarthric speech is more difficult to model with machine learning algorithms, owing to inconsistencies in the acoustic signal and to limited amounts of training data. This study reports a new approach for the analysis and representation of dysarthric speech, and applies it to improve automatic speech recognition (ASR) performance.
The Zeros of the Z-Transform (ZZT) are investigated for dysarthric vowel segments. The analysis shows evidence of a phase-based acoustic phenomenon responsible for the way the distribution of zero patterns relates to speech intelligibility. It is then investigated whether such phase-based artefacts can be systematically exploited to understand their association with intelligibility.
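As a rough illustration of the ZZT representation (a sketch under simple assumptions, not the authors' implementation): the ZZT of a windowed speech frame is the set of roots of the polynomial whose coefficients are the frame's samples, so it can be computed directly with a polynomial root finder. The damped sinusoid below merely stands in for a vowel segment.

```python
import numpy as np

def zzt(frame):
    """Zeros of the Z-Transform of a frame x[0..N-1]:
    the roots of X(z) = sum_n x[n] * z^(N-1-n)."""
    # Leading zero samples would produce spurious roots; drop them first.
    frame = np.trim_zeros(frame, 'f')
    return np.roots(frame)

# Toy example: a damped sinusoid standing in for a vowel segment.
fs = 16000
t = np.arange(512) / fs
frame = np.exp(-40 * t) * np.sin(2 * np.pi * 300 * t)
zeros = zzt(frame)

# For intelligibility analysis, one inspects how these zeros are
# distributed around the unit circle.
radii = np.abs(zeros)
```

An N-sample frame yields N-1 zeros; patterns in their radii and angles relative to the unit circle form the representation studied in the abstract.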
A metric based on the phase slope deviation (PSD) observed in the unwrapped phase spectrum of dysarthric vowel segments is introduced. The metric compares the slopes of dysarthric vowels against those of typical vowels. The PSD shows a strong, nearly linear correspondence with speaker intelligibility, a relationship shown to hold for two separate databases of dysarthric speakers. A systematic procedure for correcting the underlying phase deviations yields a significant improvement in ASR performance for speakers with severe and moderate dysarthria.
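A minimal sketch of a PSD-style comparison, assuming the simplest possible definition (the paper's exact formulation is not reproduced here, and the function names are hypothetical): fit a line to the unwrapped phase spectrum of each vowel and compare the fitted slopes.

```python
import numpy as np

def phase_slope(frame, n_fft=1024):
    """Slope of the unwrapped phase spectrum, via a least-squares line fit."""
    spec = np.fft.rfft(frame, n_fft)
    phase = np.unwrap(np.angle(spec))
    bins = np.arange(len(phase))
    slope, _ = np.polyfit(bins, phase, 1)
    return slope

def phase_slope_deviation(test_frame, reference_frame):
    """Hypothetical PSD-style score: absolute difference between the
    phase slopes of a test vowel and a typical (reference) vowel."""
    return abs(phase_slope(test_frame) - phase_slope(reference_frame))

# Demo: identical frames give zero deviation; a pure delay shifts
# the phase slope and so produces a nonzero deviation.
x = np.random.default_rng(0).standard_normal(256)
delayed = np.concatenate([np.zeros(5), x])
```

A pure delay of d samples adds a linear phase term of slope -2*pi*d/n_fft per bin, which is exactly the kind of slope difference this score picks up.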
In addition, information encoded in the phase component of the Fourier transform of dysarthric speech is exploited through the group delay spectrum, whose properties are found to represent disordered speech more effectively than the magnitude spectrum. Dysarthric ASR performance was significantly improved using phase-based cepstral features compared with conventional MFCCs. A combined approach, utilising the benefits of both PSD corrections and phase-based features, was found to surpass all previous results on the UASPEECH database of dysarthric speech.
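The group delay spectrum mentioned above is the negative frequency derivative of the phase. A standard way to compute it without explicit phase unwrapping uses the identity tau(w) = (X_R*Y_R + X_I*Y_I) / |X|^2, where X is the FFT of the frame and Y is the FFT of the time-weighted frame n*x[n]. This sketch shows that computation only; the cepstral features in the abstract would then be derived from this spectrum rather than from the magnitude spectrum.

```python
import numpy as np

def group_delay_spectrum(frame, n_fft=1024):
    """Group delay tau(w) = -d(phase)/dw, computed without unwrapping:
        tau(w) = (X_R*Y_R + X_I*Y_I) / |X|^2,
    where X = FFT(x[n]) and Y = FFT(n * x[n]).
    A small epsilon guards against division by zero at spectral nulls."""
    n = np.arange(len(frame))
    X = np.fft.rfft(frame, n_fft)
    Y = np.fft.rfft(n * frame, n_fft)
    return (X.real * Y.real + X.imag * Y.imag) / (np.abs(X) ** 2 + 1e-10)

# Sanity check: an impulse delayed by 3 samples has a constant
# group delay of 3 samples at every frequency.
impulse = np.zeros(16)
impulse[3] = 1.0
gd = group_delay_spectrum(impulse)
```

The impulse check is a convenient way to verify the implementation, since a pure delay has a flat group delay equal to the delay itself.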
Segmental Eigenvoice With Delicate Eigenspace for Improved Speaker Adaptation
Eigenvoice techniques have been proposed to provide rapid speaker adaptation with very limited adaptation data, but performance may saturate when more adaptation data become available. This is because these techniques establish an eigenspace of reduced dimensionality by properly utilizing the a priori knowledge from the large quantity of training data. The reduced dimensionality of the eigenspace requires less adaptation data to estimate the model parameters for the new speaker, but it also makes it harder to obtain more precise models from more adaptation data. In this paper, a new segmental eigenvoice approach is proposed, in which the eigenspace is further segmented into N sub-eigenspaces by properly classifying the model parameters into N clusters. These N sub-eigenspaces help to construct a more delicate eigenspace and more precise models when more adaptation data are available. It is shown that there can be at least mixture-based, model-based and feature-based segmental eigenvoice approaches. Not only can improved performance be obtained, but these different approaches can also be properly integrated to offer better performance. Two further approaches leading to improved segmental eigenvoice techniques with even better performance are also proposed. The experiments were performed with b
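The classical (non-segmental) eigenvoice idea the abstract builds on can be sketched as follows, under simple assumptions and with hypothetical function names: each training speaker's model means are stacked into a supervector, PCA over those supervectors yields the eigenvoices, and a new speaker is adapted by least-squares estimation of a few eigenvoice weights from whatever dimensions the adaptation data covers. The segmental variant would partition the parameter dimensions into N clusters and run this procedure per cluster.

```python
import numpy as np

def build_eigenvoices(supervectors, k):
    """PCA over training-speaker supervectors (each speaker's model
    means stacked into one long vector). Returns the mean supervector
    and the top-k principal directions (the eigenvoices)."""
    mean = supervectors.mean(axis=0)
    _, _, vt = np.linalg.svd(supervectors - mean, full_matrices=False)
    return mean, vt[:k]

def adapt(mean, eigenvoices, observed, mask):
    """Estimate the new speaker's eigenvoice weights by least squares
    from the observed dimensions only, then reconstruct the full
    supervector as mean + weights @ eigenvoices."""
    A = eigenvoices[:, mask].T
    w, *_ = np.linalg.lstsq(A, observed - mean[mask], rcond=None)
    return mean + w @ eigenvoices

# Toy data: 20 "speakers" whose supervectors lie in a 2-D subspace.
rng = np.random.default_rng(1)
basis = rng.standard_normal((2, 10))
train = rng.standard_normal((20, 2)) @ basis + 5.0
mean, ev = build_eigenvoices(train, k=2)

# Adapt to a new speaker having observed only 6 of the 10 dimensions.
true_sv = np.array([0.5, -1.0]) @ basis + 5.0
mask = np.arange(6)
est_sv = adapt(mean, ev, true_sv[mask], mask)
```

Because only k weights are estimated, very little adaptation data suffices; the saturation problem the abstract describes arises because those same k weights are all the model can ever adjust, which is what the N per-cluster eigenspaces are meant to relax.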