
    Surface electromyographic control of a novel phonemic interface for speech synthesis

    Many individuals with minimal movement capabilities use AAC to communicate. These individuals require both an interface with which to construct a message (e.g., a grid of letters) and an input modality with which to select targets. This study evaluated the interaction of two such systems: (a) an input modality using surface electromyography (sEMG) of spared facial musculature, and (b) an onscreen interface from which users select phonemic targets. These systems were evaluated in two experiments: (a) participants without motor impairments used the systems during a series of eight training sessions, and (b) one individual who uses AAC used the systems for two sessions. Both the phonemic interface and the electromyographic cursor show promise for future AAC applications.
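    The abstract does not describe an implementation. As an illustration only, the sketch below shows one common way an sEMG cursor of this kind can be driven: rectified-and-smoothed amplitude estimates from opposing facial muscle channels are mapped to a 2D cursor velocity with a resting dead zone. The channel roles, gain, and threshold are assumptions made for the example, not the authors' parameters.

```python
import numpy as np

def emg_envelope(x, fs=1000, win_ms=100):
    """Moving-RMS envelope of one raw sEMG channel (a common amplitude estimate)."""
    n = max(1, int(fs * win_ms / 1000))
    x = np.asarray(x, dtype=float) - np.mean(x)                  # remove DC offset
    mean_sq = np.convolve(x ** 2, np.ones(n) / n, mode="same")   # windowed mean of squares
    return np.sqrt(mean_sq)

def envelopes_to_velocity(env_left, env_right, env_up, env_down,
                          gain=5.0, threshold=0.05):
    """Map four opposing muscle activations to a 2D cursor velocity.
    Channel roles, gain, and dead-zone threshold are illustrative assumptions."""
    def drive(e):
        return max(0.0, e - threshold)   # dead zone so resting muscle tone does not move the cursor
    vx = gain * (drive(env_right) - drive(env_left))
    vy = gain * (drive(env_up) - drive(env_down))
    return vx, vy

# Toy usage: four channels of baseline noise, with a brief activation burst on the
# "right" channel, should yield a rightward velocity at the time of the burst.
fs = 1000
rng = np.random.default_rng(0)
channels = {name: rng.normal(0.0, 0.02, fs) for name in ("left", "right", "up", "down")}
channels["right"][200:400] += 0.3 * np.sin(2 * np.pi * 80 * np.arange(200) / fs)
env_at_burst = {k: emg_envelope(v, fs)[300] for k, v in channels.items()}
print(envelopes_to_velocity(env_at_burst["left"], env_at_burst["right"],
                            env_at_burst["up"], env_at_burst["down"]))
```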

    Deep Learning for Processing Electromyographic Signals: a Taxonomy-based Survey

    Deep Learning (DL) has recently been employed to build smart systems that perform remarkably well in a wide range of tasks, such as image recognition, machine translation, and self-driving cars. In several fields, considerable improvements in computing hardware and the increasing need for big-data analytics have boosted DL work. In recent years, physiological signal processing has strongly benefited from deep learning, and there has been an exponential increase in the number of studies concerning the processing of electromyographic (EMG) signals using DL methods. This phenomenon is mostly explained by the current limitations of myoelectrically controlled prostheses as well as the recent release of large EMG recording datasets, e.g. Ninapro. This growing trend has inspired us to seek out and review recent papers focusing on processing EMG signals using DL methods. A systematic literature search of the Scopus database for papers published between January 2014 and March 2019 was carried out, and sixty-five papers were chosen for review after full-text analysis. The bibliometric research revealed that the reviewed papers can be grouped into four main categories according to the final application of the EMG signal analysis: Hand Gesture Classification, Speech and Emotion Classification, Sleep Stage Classification, and Other Applications. The review process also confirmed the increasing publication trend: the number of papers published in 2018 is four times that of the year before. As expected, most of the analyzed papers (≈60%) concern the identification of hand gestures, thus supporting our hypothesis. Finally, it is worth reporting that the convolutional neural network (CNN) is the most widely used topology among the DL architectures involved; approximately sixty percent of the reviewed articles consider a CNN.
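    Since the survey identifies the CNN as the most widely used topology, a minimal sketch may help fix ideas. The model below is a generic 1D CNN over fixed-length multi-channel EMG windows written in PyTorch; the channel count, window length, and number of gesture classes are arbitrary assumptions and are not taken from Ninapro or any reviewed paper.

```python
import torch
import torch.nn as nn

class EMGConvNet(nn.Module):
    """Small 1D CNN over (batch, channels, time) EMG windows."""
    def __init__(self, n_channels=8, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=9, padding=4),
            nn.BatchNorm1d(32), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=9, padding=4),
            nn.BatchNorm1d(64), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                 # x: (batch, n_channels, n_samples)
        h = self.features(x).squeeze(-1)  # -> (batch, 64)
        return self.classifier(h)         # unnormalized class scores

# Toy usage: a batch of 4 windows, 8 channels, 200 samples each.
model = EMGConvNet(n_channels=8, n_classes=10)
logits = model(torch.randn(4, 8, 200))
print(logits.shape)  # torch.Size([4, 10])
```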

    Acoustic, myoelectric, and aerodynamic parameters of euphonic and dysphonic voices: a systematic review of clinical studies

    At present, there is no clinical consensus on what constitutes normal and dysphonic voices. For many years, efforts have been made to establish a consensus on the terminology related to normal and pathological voices in order to facilitate communication between professionals in the field of voice. Aim: to systematically review the literature in order to compare, with greater precision, the measurable and objective acoustic, aerodynamic, and surface electromyographic parameters of normal and dysphonic voices. Methods: The PRISMA 2020 methodology was used as the review protocol, together with the PICO procedure, to answer the research question across six databases. Results: In total, 467 articles were found. After duplicate records were removed from the selection, the inclusion and exclusion criteria were applied and 19 articles were eligible. A qualitative synthesis of the included studies is presented in terms of their methodology and results. Conclusions: Studying the acoustic, aerodynamic, and electromyographic parameters with more precision, in both normal and dysphonic voices, will allow health professionals working in the field of voice (speech therapy, otorhinolaryngology, phoniatrics, etc.) to establish a detailed diagnostic consensus on vocal pathology, enhancing communication and the generalization of results worldwide.

    EMG-to-Speech: Direct Generation of Speech from Facial Electromyographic Signals

    The general objective of this work is the design, implementation, improvement and evaluation of a system that uses surface electromyographic (EMG) signals and directly synthesizes an audible speech output: EMG-to-speech
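    The abstract states only the high-level goal. One frequently used pattern in this line of work, offered here purely as an assumed illustration rather than the dissertation's actual pipeline, is frame-wise regression from EMG features to acoustic features (e.g., mel-spectrogram frames), which a separately trained vocoder then converts to a waveform. The sketch shows only that regression stage, with made-up feature dimensions.

```python
import torch
import torch.nn as nn

class EMGToAcoustic(nn.Module):
    """Frame-wise regressor from stacked EMG features to acoustic features
    (e.g., mel-spectrogram frames). All dimensions are illustrative assumptions."""
    def __init__(self, emg_dim=120, acoustic_dim=80, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(emg_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, acoustic_dim),
        )

    def forward(self, x):        # x: (batch, frames, emg_dim)
        return self.net(x)       # -> (batch, frames, acoustic_dim)

# Toy training step on random data; a real system would pass the predicted
# frames to a vocoder to obtain the audible speech output.
model = EMGToAcoustic()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
emg = torch.randn(2, 100, 120)       # 2 utterances, 100 frames of EMG features
target = torch.randn(2, 100, 80)     # parallel acoustic frames
opt.zero_grad()
loss = nn.functional.mse_loss(model(emg), target)
loss.backward()
opt.step()
print(float(loss))
```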

    The Method of Indirect Restoration of Human Communicative Function

    This work substantiates a method for the indirect restoration of human communicative function using specialized technical means. It has been established that the communicative function can be restored indirectly through proper processing of the electroencephalographic and electromyographic signals that arise during the execution of this function.

    Velum movement detection based on surface electromyography for speech interface

    Conventional speech communication systems do not perform well in the absence of an intelligible acoustic signal. Silent Speech Interfaces enable speech communication to take place with speech-handicapped users and in noisy environments. However, since no acoustic signal is available, information on nasality may be absent, which is an important and relevant characteristic of several languages, particularly European Portuguese. In this paper we propose a non-invasive method, using surface electromyography (EMG) electrodes positioned in the face and neck regions, to explore the existence of useful information about velum movement. The applied procedure takes advantage of Real-Time Magnetic Resonance Imaging (RT-MRI) data, collected from the same speakers, to interpret and validate the EMG data. By ensuring compatible scenario conditions and proper alignment between the EMG and RT-MRI data, we are able to estimate when the velum moves and the probable type of movement under a nasality occurrence. Overall results of this experiment revealed interesting and distinct characteristics in the EMG signal when a nasal vowel is uttered and showed that it is possible to detect velum movement, particularly with sensors positioned below the ear, between the mastoid process and the mandible, in the upper neck region.
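    As a rough illustration of the detection step described above (and only that step; the RT-MRI alignment and validation are omitted), the sketch below thresholds a smoothed EMG envelope to flag candidate movement intervals. The robust threshold factor and minimum-duration value are assumptions made for the example, not the procedure used in the paper.

```python
import numpy as np

def detect_movement(env, fs_env, threshold=None, min_dur_s=0.05):
    """Flag intervals where a smoothed EMG envelope exceeds a threshold, as a
    crude proxy for muscle (here, velum-related) activity. The threshold rule
    and minimum duration are illustrative assumptions."""
    env = np.asarray(env, dtype=float)
    if threshold is None:
        med = np.median(env)
        mad = np.median(np.abs(env - med))
        threshold = med + 8 * mad          # robust baseline plus margin (assumed factor)
    active = env > threshold
    min_len = int(min_dur_s * fs_env)
    events, start = [], None
    for i, is_active in enumerate(active):
        if is_active and start is None:
            start = i
        elif not is_active and start is not None:
            if i - start >= min_len:
                events.append((start / fs_env, i / fs_env))
            start = None
    if start is not None and len(active) - start >= min_len:
        events.append((start / fs_env, len(active) / fs_env))
    return events  # list of (onset_s, offset_s) in seconds

# Toy usage: a flat envelope with one activity burst between 1.0 s and 1.4 s.
fs_env = 100
rng = np.random.default_rng(1)
env = 0.02 + rng.normal(0.0, 0.002, 3 * fs_env)
env[100:140] += 0.2
print(detect_movement(env, fs_env))   # roughly [(1.0, 1.4)]
```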

    Speech Communication

    Contains research objectives, summaries of research on three research projects, and reports on three research projects. National Institutes of Health (Grant 5 RO1 NS04332-12); U.S. Navy Office of Naval Research (Contract ONR N00014-67-A-0204-0069); Joint Services Electronics Program (Contract DAAB07-74-C-0630); National Institutes of Health (Grant 2 RO1 NS04332-11)