Articulatory and bottleneck features for speaker-independent ASR of dysarthric speech
The rapid population aging has stimulated the development of assistive devices that provide personalized medical support to people suffering from various etiologies. One prominent clinical application is a computer-assisted speech training system that enables personalized speech therapy for patients with communication disorders in their home environment. Such a system relies on robust automatic speech recognition (ASR) technology to provide accurate articulation feedback. With the long-term aim of developing off-the-shelf ASR systems that can be incorporated in clinical contexts without prior speaker information, we compare the ASR performance of speaker-independent bottleneck and articulatory features on dysarthric speech, used in conjunction with dedicated neural network-based acoustic models that have been shown to be robust against spectrotemporal deviations. We report the ASR performance of these systems on two dysarthric speech datasets of different characteristics to quantify the achieved performance gains. Despite the remaining performance gap between dysarthric and normal speech, significant improvements are reported on both datasets using speaker-independent ASR architectures.
Comment: to appear in Computer Speech & Language - https://doi.org/10.1016/j.csl.2019.05.002 - arXiv admin note: substantial text overlap with arXiv:1807.1094
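Bottleneck features of the kind compared above are activations taken from a deliberately narrow layer of a neural network trained on a frame classification task. A minimal sketch of the extraction step, with hypothetical layer sizes and untrained random weights standing in for a trained network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer sizes: 40-dim filterbank input -> 512 hidden -> 13-dim
# bottleneck. Weights are random here; a real extractor would be trained on
# frame-level targets and then truncated at the bottleneck layer.
W1 = rng.standard_normal((40, 512)) * 0.05
b1 = np.zeros(512)
W2 = rng.standard_normal((512, 13)) * 0.05   # narrow bottleneck layer
b2 = np.zeros(13)

def bottleneck_features(frames):
    """Map (T, 40) filterbank frames to (T, 13) bottleneck activations."""
    h = np.maximum(frames @ W1 + b1, 0.0)    # ReLU hidden layer
    return h @ W2 + b2                       # linear bottleneck outputs

feats = bottleneck_features(rng.standard_normal((100, 40)))
print(feats.shape)  # (100, 13)
```

The low-dimensional activations then replace or augment the acoustic features fed to the downstream acoustic model.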
UNIT-DSR: Dysarthric Speech Reconstruction System Using Speech Unit Normalization
Dysarthric speech reconstruction (DSR) systems aim to automatically convert dysarthric speech into normal-sounding speech. The technology eases communication with speakers affected by this neuromotor disorder and enhances their social inclusion. NED-based (Neural Encoder-Decoder) systems have significantly improved the intelligibility of the reconstructed speech compared with GAN-based (Generative Adversarial Network) approaches, but they remain limited by training inefficiency caused by the cascaded pipeline and the auxiliary tasks of the content encoder, which may in turn affect the quality of reconstruction. Inspired by self-supervised speech representation learning and discrete speech units, we propose a Unit-DSR system, which harnesses the powerful domain-adaptation capacity of HuBERT to improve training efficiency and utilizes speech units to constrain the dysarthric content restoration in a discrete linguistic space. Compared with NED approaches, the Unit-DSR system consists only of a speech unit normalizer and a Unit HiFi-GAN vocoder, which is considerably simpler, without cascaded sub-modules or auxiliary tasks. Results on the UASpeech corpus indicate that Unit-DSR outperforms competitive baselines in terms of content restoration, reaching a 28.2% relative average word error rate reduction when compared to original dysarthric speech, and shows robustness against speed perturbation and noise.
Comment: Accepted to ICASSP 202
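The "relative word error rate reduction" reported above is computed against the WER of the original dysarthric speech. A small sketch of the arithmetic, with hypothetical WER values chosen only to illustrate a 28.2% relative drop:

```python
def relative_wer_reduction(wer_before, wer_after):
    """Relative reduction of WER, e.g. 0.500 -> 0.359 is a 28.2% drop.

    Note this differs from the absolute reduction (14.1 percentage
    points in the same example). The input values here are hypothetical.
    """
    return (wer_before - wer_after) / wer_before

print(round(relative_wer_reduction(0.500, 0.359), 3))  # 0.282
```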
Visual units and confusion modelling for automatic lip-reading
Automatic lip-reading (ALR) is a challenging task because the visual speech signal is known to be missing some important information, such as voicing. We propose an approach to ALR that acknowledges that this information is missing but assumes that it is substituted or deleted in a systematic way that can be modelled. We describe a system that learns such a model and then incorporates it into decoding, which is realised as a cascade of weighted finite-state transducers. Our results show a small but statistically significant improvement in recognition accuracy. We also investigate the issue of suitable visual units for ALR, and show that visemes are sub-optimal, not because they introduce lexical ambiguity, but because the reduction in modelling units entailed by their use reduces accuracy.
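The lexical ambiguity that visemes introduce comes from their many-to-one relation to phonemes: sounds that differ only in features invisible on the lips (voicing, nasality) map to the same visual class. A toy illustration with a hypothetical phoneme-to-viseme table (the class names and the mapping are illustrative, not a standard inventory):

```python
# Hypothetical many-to-one phoneme -> viseme map: /p/, /b/ and /m/ share
# one visual class because voicing and nasality are not visible on the lips.
PHONEME_TO_VISEME = {
    "p": "Bilabial", "b": "Bilabial", "m": "Bilabial",
    "f": "Labiodental", "v": "Labiodental",
    "ae": "OpenVowel", "t": "Alveolar", "d": "Alveolar",
}

def to_visemes(phonemes):
    """Collapse a phoneme sequence to its viseme sequence."""
    return [PHONEME_TO_VISEME[p] for p in phonemes]

# "pat", "bat" and "mat" collapse to the same viseme string:
print(to_visemes(["p", "ae", "t"]) == to_visemes(["b", "ae", "t"]))  # True
```

Under such a mapping, distinct words become homophenous, which is exactly the ambiguity the confusion model in the paper is designed to handle systematically.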
SYNTHESIZING DYSARTHRIC SPEECH USING MULTI-SPEAKER TTS FOR DYSARTHRIC SPEECH RECOGNITION
Dysarthria is a motor speech disorder often characterized by reduced speech intelligibility through slow, uncoordinated control of speech production muscles. Automatic speech recognition (ASR) systems may help dysarthric talkers communicate more effectively. However, robust dysarthria-specific ASR requires a significant amount of training speech, which is not readily available for dysarthric talkers.
In this dissertation, we investigate dysarthric speech augmentation and synthesis methods. To better understand differences in the prosodic and acoustic characteristics of dysarthric spontaneous speech at varying severity levels, a comparative study between typical and dysarthric speech was conducted. These characteristics are important components for dysarthric speech modeling, synthesis, and augmentation. For augmentation, prosodic transformation and time-feature masking have been proposed. For dysarthric speech synthesis, this dissertation has introduced a modified neural multi-talker TTS by adding a dysarthria severity level coefficient and a pause insertion model to synthesize dysarthric speech at varying severity levels. In addition, we have extended this work by using a label propagation technique to create more meaningful control variables, such as a continuous Respiration, Laryngeal and Tongue (RLT) parameter, even for datasets that only provide discrete dysarthria severity level information. This approach increases the controllability of the system, so we are able to generate dysarthric speech over a broader severity range.
To evaluate their effectiveness for synthesis of training data, dysarthria-specific speech recognition was used. Results show that a DNN-HMM model trained on additional synthetic dysarthric speech achieves a WER improvement of 12.2% compared to the baseline, and that the addition of the severity level and pause insertion controls decreases WER by a further 6.5%, showing the effectiveness of adding these parameters. Overall results on the TORGO database demonstrate that using dysarthric synthetic speech to increase the amount of dysarthric-patterned speech for training has a significant impact on dysarthric ASR systems.
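One way to read the severity-coefficient and pause-insertion controls described above is as extra scalars appended to the conditioning input of a multi-talker TTS model. A minimal sketch of that conditioning step, with a hypothetical embedding size and stand-in values (the real model's architecture and parameterization are not specified here):

```python
import numpy as np

def conditioning_vector(speaker_embedding, severity, pause_prob):
    """Append a dysarthria severity coefficient in [0, 1] and a pause
    insertion probability to a speaker embedding, so one multi-talker
    model can be steered across severity levels. Sizes are illustrative."""
    return np.concatenate([speaker_embedding, [severity, pause_prob]])

spk = np.zeros(8)  # stand-in 8-dim speaker embedding
cond = conditioning_vector(spk, severity=0.75, pause_prob=0.3)
print(cond.shape)  # (10,)
```

Varying the severity scalar continuously is what allows synthetic training data to be generated across a range of severities rather than only at the discrete labels present in the corpus.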
An innovative speech-based user interface for smart homes and IoT solutions to help people with speech and motor disabilities
A better use of the increasing functional capabilities of home automation systems and Internet of Things (IoT) devices to support the needs of users with disabilities is the subject of a research project currently conducted by Area Ausili (Assistive Technology Area), a department of Polo Tecnologico Regionale Corte Roncati of the Local Health Trust of Bologna (Italy), in collaboration with the AIAS Ausilioteca Assistive Technology (AT) Team. The main aim of the project is to develop experimental low-cost systems for environmental control through simplified and accessible user interfaces. Many of the activities are focused on automatic speech recognition and are developed in the framework of the CloudCAST project. In this paper we report on the first technical achievements of the project and discuss possible future developments and applications within and outside CloudCAST.
Gammatonegram Representation for End-to-End Dysarthric Speech Processing Tasks: Speech Recognition, Speaker Identification, and Intelligibility Assessment
Dysarthria is a disability that causes a disturbance in the human speech system and reduces the quality and intelligibility of a person's speech. Because of this effect, normal speech processing systems cannot work properly on impaired speech. This disability is usually associated with physical disabilities. Therefore, designing a system that can perform tasks in a smart home by receiving voice commands can be a significant achievement. In this work, we introduce the gammatonegram as an effective method to represent audio files with discriminative details, which is used as input for a convolutional neural network. In other words, we convert each speech file into an image and propose an image recognition system to classify speech in different scenarios. The proposed CNN is based on transfer learning from the pre-trained AlexNet. In this research, the efficiency of the proposed system for speech recognition, speaker identification, and intelligibility assessment is evaluated. According to the results on the UA dataset, the proposed speech recognition system achieved 91.29% accuracy in speaker-dependent mode, the speaker identification system acquired 87.74% accuracy in text-dependent mode, and the intelligibility assessment system achieved 96.47% accuracy in two-class mode. Finally, we propose a multi-network speech recognition system that works fully automatically. This system is arranged in a cascade with the two-class intelligibility assessment system, whose output activates one of the speech recognition networks. This architecture achieves a word recognition rate (WRR) of 92.3%. The source code of this paper is available.
Comment: 12 pages, 8 figures
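The cascade described in the last abstract routes each utterance through a two-class intelligibility assessor, whose decision selects which recognition network decodes it. A minimal routing sketch with stub classifiers (the threshold, field names, and stubs are hypothetical; a real system would run the gammatonegram CNN at each stage):

```python
def assess_intelligibility(utterance):
    """Stub two-class assessor; a real system would classify the
    gammatonegram image with the pre-trained CNN."""
    return "low" if utterance.get("severity", 0.0) > 0.5 else "high"

def recognise(utterance):
    """Route the utterance to the recogniser matched to its class."""
    branch = assess_intelligibility(utterance)
    if branch == "low":
        return f"low-intelligibility net decodes {utterance['id']}"
    return f"high-intelligibility net decodes {utterance['id']}"

print(recognise({"id": "utt1", "severity": 0.8}))
```

Specializing each recognition network to one intelligibility class is what lets the fully automatic cascade recover most of the accuracy of the speaker-dependent systems.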