
    Adaptations in equine appendicular muscle activity and movement occur during induced fore- and hindlimb lameness: An electromyographic and kinematic evaluation

    The relationship between lameness-related adaptations in equine appendicular motion and muscle activation is poorly understood and has not been studied objectively. The aim of this study was to compare the activity of selected fore- and hindlimb muscles, and the movement of the joints they act on, between baseline and induced forelimb (iFL) and hindlimb (iHL) lameness. Three-dimensional kinematic data and surface electromyography (sEMG) data from the forelimb (triceps brachii, latissimus dorsi) and hindlimb (superficial gluteal, biceps femoris, semitendinosus) muscles were collected bilaterally and synchronously from clinically non-lame horses (n = 8) trotting over ground (baseline). Data collections were repeated during iFL and iHL conditions (grade 2-3/5 on the AAEP scale), induced on separate days using a modified horseshoe. Motion asymmetry parameters and continuous joint and pro-retraction angles for each limb were calculated from the kinematic data. Normalized average rectified value (ARV) and muscle activation onset, offset and activity duration were calculated from the sEMG signals. Mixed-model analysis and statistical parametric mapping compared discrete and continuous variables, respectively, between conditions (α = 0.05). Asymmetry parameters reflected the degree of iFL and iHL. ARV increased across muscles following iFL and iHL, except in the non-lame-side forelimb muscles, where it significantly decreased following iFL. Significant, limb-specific changes in sEMG ARV and activation timings reflected changes in joint angles and phasic shifts of the limb movement cycle following iFL and iHL. Muscular adaptations during iFL and iHL are detectable using sEMG and primarily involve increased bilateral activity and phasic activation shifts that reflect known compensatory movement patterns for reducing weight-bearing on the lame limb. With further research and development, sEMG may provide a valuable diagnostic aid for quantifying the underlying neuromuscular adaptations to equine lameness, which are undetectable through human observation alone.
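
    The normalized ARV is a standard amplitude descriptor for sEMG. As a rough illustration of the processing such a study implies, the sketch below band-pass filters one sEMG burst, rectifies it, and averages it, then normalizes against a baseline reference; the filter order, band edges (20-450 Hz) and normalization scheme are assumptions for illustration, not the authors' exact pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def average_rectified_value(emg, fs, band=(20.0, 450.0)):
    """Band-pass filter, full-wave rectify, and average one sEMG burst."""
    nyq = fs / 2.0
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="band")
    filtered = filtfilt(b, a, emg)      # zero-phase band-pass filtering
    return np.mean(np.abs(filtered))    # rectification, then averaging

def normalized_arv(stride_emg, baseline_strides, fs):
    """Express one stride's ARV relative to the mean baseline ARV."""
    ref = np.mean([average_rectified_value(s, fs) for s in baseline_strides])
    return average_rectified_value(stride_emg, fs) / ref
```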

    EMG-to-Speech: Direct Generation of Speech from Facial Electromyographic Signals

    The general objective of this work is the design, implementation, improvement and evaluation of a system that uses surface electromyographic (EMG) signals to directly synthesize an audible speech output: EMG-to-speech.
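
    At its core, such a system learns a frame-wise mapping from EMG features to acoustic features that a vocoder can render as audio. A minimal sketch of that mapping is given below (PyTorch); the stacked-feature input size, network shape and mel-spectrogram target are illustrative assumptions, not the thesis's actual architecture.

```python
import torch.nn as nn

class EMGToSpectrogram(nn.Module):
    """Frame-wise regression: stacked EMG features -> mel-spectrogram frame.

    A separate vocoder (not shown) would convert the predicted
    mel-spectrogram frames into an audible waveform.
    """
    def __init__(self, n_in=275, n_mels=80, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_in, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_mels),
        )

    def forward(self, emg_frames):      # (batch, n_in)
        return self.net(emg_frames)     # (batch, n_mels)
```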

    Advances in materials strategies, circuit designs, and informatics for wearable, flexible and stretchable electronics with medical and robotic applications

    The future of medical electronics should be flexible, stretchable and skin-integrated. While modern electronics become increasingly smaller, faster and more energy efficient, their designs remain bulky and rigid due to materials and processing limitations. The miniaturization of health monitoring devices in wearable form represents significant progress towards the next generation of medical electronics. However, key challenges remain in these wearable electronics: medical-grade sensing precision, reliable wireless powering, and materials strategies for skin integration. Here, I present a series of systematic studies, spanning materials strategies, circuit design and signal processing, on skin-mounted wearable electronic devices. Several types of Epidermal Electronic Systems (EES) are developed for applications in dermatology, cardiology, rehabilitation, and wireless powering. For skin hydration measurement, fundamental studies of electrode configurations and skin-electrode impedance reveal the optimal sensor design. Furthermore, wireless operation of the hydration sensor was made possible through direct integration on skin and on porous substrates that collect and analyze sweat. Additionally, I present an epidermal multi-functional sensing platform that provides a control-feedback loop through electromyogram recording and current stimulation, and a mechano-acoustic device that captures vibrations from muscle, heart, and throat for use as a diagnostic tool or human-machine interface. I developed a modularized epidermal radio-frequency energy-transfer device to eliminate batteries and power cables for wearable electronics. Finally, I present a clinical study that validates a commercialized EES on patients with nerve disorders for electromyography monitoring during peripheral nerve and spinal cord surgeries.
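
    The skin-electrode impedance studies mentioned above boil down to estimating a complex impedance at a probe frequency from simultaneously sampled voltage and current. The sketch below does this with lock-in style demodulation; the sampling rate and probe frequency in the usage note are arbitrary assumptions, not values from the thesis.

```python
import numpy as np

def impedance_at(v, i, fs, f0):
    """Complex impedance at probe frequency f0 from sampled v(t) and i(t).

    Projects both signals onto a complex reference tone (lock-in
    demodulation); common scale factors cancel in the ratio.
    """
    n = np.arange(len(v))
    ref = np.exp(-2j * np.pi * f0 * n / fs)   # complex reference tone
    return np.sum(v * ref) / np.sum(i * ref)  # voltage phasor / current phasor

# Example with assumed values:
# z = impedance_at(v, i, fs=10_000, f0=100.0)
# magnitude, phase = abs(z), np.angle(z, deg=True)
```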

    Multimodal silent speech interfaces for European Portuguese based on articulation

    Joint MAPi Doctorate in Informatics.
    The concept of silent speech, when applied to Human-Computer Interaction (HCI), describes a system which allows for speech communication in the absence of an acoustic signal. By analyzing data gathered during different parts of the human speech production process, Silent Speech Interfaces (SSI) allow users with speech impairments to communicate with a system. SSI can also be used in the presence of environmental noise, and in situations in which privacy, confidentiality, or non-disturbance are important. Nonetheless, despite recent advances, the performance and usability of silent speech systems still have much room for improvement. Better performance would enable their application in relevant areas such as Ambient Assisted Living. It is therefore necessary to extend our understanding of the capabilities and limitations of silent speech modalities and to enhance their joint exploration. Thus, in this thesis, we have established several goals: (1) SSI language expansion to support European Portuguese (EP); (2) overcoming identified limitations of current SSI techniques in detecting EP nasality; (3) developing a multimodal HCI approach for SSI based on non-invasive modalities; and (4) exploring more direct measures, acquired from more invasive/obtrusive modalities, in the multimodal SSI for EP, to be used as ground truth on articulation processes, enhancing our comprehension of other modalities. To achieve these goals and to support our research in this area, we have created a multimodal SSI framework that fosters leveraging modalities and combining information, supporting research in multimodal SSI. The proposed framework goes beyond the data acquisition process itself, including methods for online and offline synchronization, multimodal data processing, feature extraction, feature selection, analysis, classification and prototyping. Examples of applicability are provided for each stage of the framework. These include articulatory studies for HCI, the development of a multimodal SSI based on less invasive modalities, and the use of ground truth information coming from more invasive/obtrusive modalities to overcome the limitations of other modalities. In the work presented here, we also apply existing SSI methods to EP for the first time, noting that nasal sounds may cause inferior performance in some modalities. In this context, we propose a non-invasive solution for the detection of nasality based on a single Surface Electromyography sensor, suitable for inclusion in a multimodal SSI.
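
    A single-sensor nasality detector of the kind proposed here reduces to extracting short-time amplitude features from one sEMG channel and classifying each window as nasal or oral. The sketch below uses RMS and waveform-length features with an SVM (scikit-learn); the window sizes, feature set and classifier are illustrative assumptions, not the thesis's actual method.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def semg_window_features(x, fs, win=0.05, hop=0.01):
    """Short-time RMS and waveform length from a single sEMG channel."""
    w, h = int(win * fs), int(hop * fs)
    feats = []
    for start in range(0, len(x) - w, h):
        seg = x[start:start + w]
        rms = np.sqrt(np.mean(seg ** 2))     # amplitude
        wl = np.sum(np.abs(np.diff(seg)))    # waveform length (complexity)
        feats.append([rms, wl])
    return np.asarray(feats)

# Hypothetical training set: per-window features X and nasal/oral labels y.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
# clf.fit(X, y)
# clf.predict(semg_window_features(new_recording, fs))
```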

    Dysarthric speech analysis and automatic recognition using phase based representations

    Dysarthria is a neurological speech impairment which usually results in the loss of motor speech control due to muscular atrophy and poor coordination of the articulators. Dysarthric speech is more difficult to model with machine learning algorithms, due to inconsistencies in the acoustic signal and limited amounts of training data. This study reports a new approach for the analysis and representation of dysarthric speech, and applies it to improve ASR performance. The Zeros of the Z-Transform (ZZT) are investigated for dysarthric vowel segments, revealing evidence of a phase-based acoustic phenomenon that governs how the distribution of zero patterns relates to speech intelligibility. It is then investigated whether such phase-based artefacts can be systematically exploited to understand their association with intelligibility. A metric is introduced based on the phase slope deviation (PSD) observed in the unwrapped phase spectrum of dysarthric vowel segments; it compares the differences between the slopes of dysarthric vowels and typical vowels. The PSD shows a strong and nearly linear correspondence with the intelligibility of the speaker, and this is shown to hold for two separate databases of dysarthric speakers. A systematic procedure for correcting the underlying phase deviations results in a significant improvement in ASR performance for speakers with severe and moderate dysarthria. In addition, information encoded in the phase component of the Fourier transform of dysarthric speech is exploited through the group delay spectrum, whose properties are found to represent disordered speech more effectively than the magnitude spectrum. Dysarthric ASR performance was significantly improved using phase-based cepstral features in comparison to conventional MFCCs. A combined approach utilising the benefits of PSD corrections and phase-based features was found to surpass all previously reported performance on the UASPEECH database of dysarthric speech.
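
    Both phase representations used here can be stated concretely. The sketch below computes the ZZT of a speech frame as polynomial roots, and the group delay spectrum via the standard identity tau(w) = Re(Y(w)/X(w)) with Y the DFT of n*x[n], which avoids explicit phase unwrapping; the FFT size is illustrative, and this is general signal-processing practice rather than the thesis's exact implementation.

```python
import numpy as np

def zzt(frame):
    """Zeros of the Z-transform: roots of the polynomial whose
    coefficients are the (windowed) frame samples, frame[0] != 0."""
    return np.roots(frame)

def group_delay_spectrum(frame, n_fft=512, eps=1e-8):
    """Group delay tau(w) = Re(Y(w)/X(w)), with Y = DFT of n*x[n]."""
    x = np.asarray(frame, dtype=float)[:n_fft]
    n = np.arange(len(x))
    X = np.fft.rfft(x, n_fft)
    Y = np.fft.rfft(n * x, n_fft)
    return (X.real * Y.real + X.imag * Y.imag) / (np.abs(X) ** 2 + eps)
```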