1,273 research outputs found

    Development of a methodology based on reinforcement learning and a Fuzzy Inference System for identifying and minimizing contaminants in sEMG signals, with application to the identification of hand-arm segment movements

    The incessant search for new technologies that improve human quality of life has guided academic research throughout history. This can be seen in the evolution of transportation, communication devices, and even services such as banking. However, for people with motor disabilities, especially those who have undergone amputation or lack part of the upper limb, achieving better living conditions is closely tied to freedom and independence. To meet this need, many researchers have worked on algorithms that predict hand-arm segment movements from electromyography signals for prosthesis control, aiming to increase the number of degrees of freedom of the device. For such systems to be efficient and highly accurate, however, the level of interference and noise, which is inevitably present in electromyography recordings due to the instrumentation, the environment, physiological aspects, and other factors, must be as low as possible. In this context, some works have aimed to minimize the effect of interference on the classifier, but all of those covered by the literature review require an offline training stage, are not adaptable to variations of the EMG signal, and/or depend on the signals of other measurement channels to mitigate the degrading effect. In view of this, this thesis proposal presents a methodology based on Reinforcement Learning and a Fuzzy Inference System for detecting contaminants in electromyography recordings, identifying their type, and attenuating their effect, with application to upper-limb gesture recognition systems. The methodology is founded on an agent-environment model with the following elements: environment (muscle electrical activity), state (a set of 6 features extracted from the EMG signal), actions (application of filters/procedures specific to reducing the impact of each interference), and agent (the controller that identifies the type of contamination and executes the appropriate action). Each action taken by the agent receives a reward, which is determined from the action's impact on the signal features (state) by means of a Fuzzy Inference System. Training, performed with the Actor-Critic method, consists of obtaining an action policy that maximizes the long-term expected reward. In an offline experiment, an accuracy of 92.96% was achieved in identifying 4 types of contaminants (electrocardiography (ECG) interference, motion artifact, electromagnetic interference from the power grid, and Gaussian white noise), and 69.5% when a clean-signal class was also considered. In addition, a case study simulating online training of the agent showed that the adopted Transfer Learning model removed the need for data previously acquired from the user and accelerated the learning process, properties that are fundamental for implementing any system online. There are therefore indications that the SIF-ACRL does have the potential to be implemented online.
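As a rough illustration of the agent-environment loop described above, the sketch below (Python/NumPy) wires a softmax actor-critic policy over a discrete set of denoising actions to a placeholder reward function standing in for the Fuzzy Inference System. The action names, learning rates, and the single-feature reward rule are assumptions for illustration, not the thesis implementation.

```python
import numpy as np

# Hypothetical setup: state = 6 EMG features, actions = contaminant-specific
# filters, reward = (placeholder) fuzzy score of the filtered signal.
N_FEATURES = 6
ACTIONS = ["ecg_filter", "artifact_filter", "powerline_notch", "wavelet_denoise", "no_op"]

actor_w = np.zeros((N_FEATURES, len(ACTIONS)))  # policy parameters
critic_w = np.zeros(N_FEATURES)                 # state-value parameters
alpha_actor, alpha_critic, gamma = 0.01, 0.05, 0.9

def policy(state):
    """Softmax policy over the denoising actions."""
    logits = state @ actor_w
    p = np.exp(logits - logits.max())
    return p / p.sum()

def fuzzy_reward(features_before, features_after):
    """Stand-in for the Fuzzy Inference System: rewards the reduction of a
    single noise-sensitive feature (an assumption, not the real rule base)."""
    return float(features_before[0] - features_after[0])

def actor_critic_step(state, next_state, action, reward):
    """One TD(0) actor-critic update of the policy and value parameters."""
    td_error = reward + gamma * (next_state @ critic_w) - state @ critic_w
    critic_w[:] += alpha_critic * td_error * state
    probs = policy(state)
    grad = -np.outer(state, probs)   # grad of log-softmax w.r.t. actor_w
    grad[:, action] += state
    actor_w[:] += alpha_actor * td_error * grad

# One illustrative interaction with a synthetic state transition:
s = np.random.rand(N_FEATURES)
a = np.random.choice(len(ACTIONS), p=policy(s))
s_next = np.random.rand(N_FEATURES)  # features after applying ACTIONS[a]
actor_critic_step(s, s_next, a, fuzzy_reward(s, s_next))
```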

    A Deep Learning Sequential Decoder for Transient High-Density Electromyography in Hand Gesture Recognition Using Subject-Embedded Transfer Learning

    Hand gesture recognition (HGR) has gained significant attention due to the increasing use of AI-powered human-computer interfaces that can interpret the deep spatiotemporal dynamics of biosignals from the peripheral nervous system, such as surface electromyography (sEMG). These interfaces have a range of applications, including the control of extended reality, agile prosthetics, and exoskeletons. However, the natural variability of sEMG among individuals has led researchers to focus on subject-specific solutions. Deep learning methods, which often have complex structures, are particularly data-hungry and can be time-consuming to train, making them less practical for subject-specific applications. In this paper, we propose and develop a generalizable, sequential decoder of transient high-density sEMG (HD-sEMG) that achieves 73% average accuracy on 65 gestures for partially observed subjects through subject-embedded transfer learning, leveraging pre-knowledge of HGR acquired during pre-training. The use of transient HD-sEMG before gesture stabilization allows us to predict gestures with the ultimate goal of counterbalancing system control delays. The results show that the proposed generalized models significantly outperform subject-specific approaches, especially when the training data is limited and there is a large number of gesture classes. By building on pre-knowledge and incorporating a multiplicative subject-embedded structure, our method achieves more than a 13% improvement in average accuracy across partially observed subjects with minimal data availability. This work highlights the potential of HD-sEMG and demonstrates the benefits of modeling common patterns across users to reduce the need for large amounts of data for new users, enhancing practicality.
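A minimal sketch of what a multiplicative subject-embedded structure can look like is given below (PyTorch): a shared encoder of HD-sEMG windows is gated element-wise by a learned per-subject embedding before gesture classification. The channel count, feature dimension, and layer choices are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class SubjectEmbeddedDecoder(nn.Module):
    """Shared HD-sEMG encoder whose features are scaled (multiplicatively gated)
    by a per-subject embedding before the gesture classifier."""
    def __init__(self, n_channels=128, n_subjects=20, n_gestures=65, feat_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
            nn.Linear(64, feat_dim),
            nn.ReLU(),
        )
        self.subject_emb = nn.Embedding(n_subjects, feat_dim)  # one vector per subject
        self.classifier = nn.Linear(feat_dim, n_gestures)

    def forward(self, emg, subject_id):
        feats = self.encoder(emg)                            # (batch, feat_dim)
        gate = torch.sigmoid(self.subject_emb(subject_id))   # (batch, feat_dim)
        return self.classifier(feats * gate)                 # subject-modulated logits

# Example: a batch of 4 HD-sEMG windows (128 channels, 200 samples each).
model = SubjectEmbeddedDecoder()
emg = torch.randn(4, 128, 200)
subject_id = torch.randint(0, 20, (4,))
logits = model(emg, subject_id)   # (4, 65)
```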

    Elderly Fall Detection Systems: A Literature Survey

    Falling is among the most damaging events elderly people may experience. With the ever-growing aging population, there is an urgent need for the development of fall detection systems. Thanks to the rapid development of sensor networks and the Internet of Things (IoT), human-computer interaction using sensor fusion has been regarded as an effective approach to fall detection. In this paper, we provide a literature survey of work conducted on elderly fall detection using sensor networks and IoT. Although various existing studies focus on fall detection with individual sensors, such as wearable devices and depth cameras, the performance of these systems is still not satisfactory, as they mostly suffer from high false alarm rates. The literature shows that fusing the signals of different sensors can yield higher accuracy and fewer false alarms, while improving the robustness of such systems. We approach this survey from different perspectives, including data collection, data transmission, sensor fusion, data analysis, security, and privacy. We also review the available benchmark data sets that have been used to quantify the performance of the proposed methods. The survey is meant to provide researchers in the field of elderly fall detection using sensor networks with a summary of progress achieved to date and to identify areas where further effort would be beneficial.

    From Unimodal to Multimodal: improving the sEMG-Based Pattern Recognition via deep generative models

    Multimodal hand gesture recognition (HGR) systems can achieve higher recognition accuracy than unimodal ones. However, acquiring multimodal gesture recognition data typically requires users to wear additional sensors, thereby increasing hardware costs. This paper proposes a novel generative approach to improve Surface Electromyography (sEMG)-based HGR accuracy via virtual Inertial Measurement Unit (IMU) signals. Specifically, we first trained a deep generative model, based on the intrinsic correlation between forearm sEMG signals and forearm IMU signals, to generate virtual forearm IMU signals from the input forearm sEMG signals. The sEMG signals and virtual IMU signals were then fed into a multimodal Convolutional Neural Network (CNN) model for gesture recognition. To evaluate the performance of the proposed approach, we conducted experiments on 6 databases, including 5 publicly available databases and our own collected database of 28 subjects performing 38 gestures, containing both sEMG and IMU data. The results show that our proposed approach outperforms the sEMG-based unimodal HGR method (with increases of 2.15%-13.10%), demonstrating that incorporating virtual IMU signals generated by deep generative models can significantly enhance the accuracy of sEMG-based HGR. The proposed approach represents a successful attempt to transition from unimodal HGR to multimodal HGR without additional sensor hardware.
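The pipeline described above can be sketched as two modules: a generator that synthesizes virtual IMU signals from sEMG, and a two-branch CNN that fuses the measured sEMG with the generated IMU stream. The PyTorch sketch below is illustrative only; the layer sizes, channel counts, and gesture count are assumptions rather than the paper's exact models.

```python
import torch
import torch.nn as nn

class VirtualIMUGenerator(nn.Module):
    """Maps a forearm sEMG window to a virtual IMU window of the same length."""
    def __init__(self, emg_channels=8, imu_channels=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(emg_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, imu_channels, kernel_size=5, padding=2),
        )

    def forward(self, emg):          # (batch, emg_channels, time)
        return self.net(emg)         # (batch, imu_channels, time)

class MultimodalHGR(nn.Module):
    """Two-branch CNN fusing real sEMG features with virtual IMU features."""
    def __init__(self, emg_channels=8, imu_channels=6, n_gestures=38):
        super().__init__()
        self.emg_branch = nn.Sequential(nn.Conv1d(emg_channels, 32, 5, padding=2),
                                        nn.ReLU(), nn.AdaptiveAvgPool1d(1), nn.Flatten())
        self.imu_branch = nn.Sequential(nn.Conv1d(imu_channels, 32, 5, padding=2),
                                        nn.ReLU(), nn.AdaptiveAvgPool1d(1), nn.Flatten())
        self.classifier = nn.Linear(64, n_gestures)

    def forward(self, emg, virtual_imu):
        fused = torch.cat([self.emg_branch(emg), self.imu_branch(virtual_imu)], dim=1)
        return self.classifier(fused)

# Example: at inference time only sEMG is measured; the IMU modality is synthesized.
gen, clf = VirtualIMUGenerator(), MultimodalHGR()
emg = torch.randn(2, 8, 400)
logits = clf(emg, gen(emg))   # (2, 38)
```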

    SleepEEGNet: Automated Sleep Stage Scoring with Sequence to Sequence Deep Learning Approach

    The electroencephalogram (EEG) is a commonly used signal for monitoring brain activity and diagnosing sleep disorders. Manual sleep stage scoring is a time-consuming task for sleep experts and is limited by inter-rater reliability. In this paper, we propose an automatic sleep stage annotation method called SleepEEGNet using a single-channel EEG signal. SleepEEGNet is composed of deep convolutional neural networks (CNNs) to extract time-invariant features and frequency information, and a sequence-to-sequence model to capture the complex long- and short-term context dependencies between sleep epochs and scores. In addition, to reduce the effect of the class imbalance problem present in the available sleep datasets, we applied novel loss functions so that misclassification errors are penalized equally across sleep stages during training. We evaluated the proposed method on different single EEG channels (the Fpz-Cz and Pz-Oz channels) from the Physionet Sleep-EDF datasets published in 2013 and 2018. The evaluation results demonstrate that the proposed method achieved the best annotation performance compared to the current literature, with an overall accuracy of 84.26%, a macro F1-score of 79.66%, and a Cohen's Kappa coefficient of 0.79. Our model is ready to be tested on more sleep EEG signals and to aid sleep specialists in arriving at an accurate diagnosis. The source code is available at https://github.com/SajadMo/SleepEEGNet
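The class-imbalance remedy mentioned above can be illustrated with a standard inverse-frequency weighting of the cross-entropy loss; this is a common stand-in, not necessarily the paper's novel loss formulation, and the stage proportions in the example are made up.

```python
import numpy as np
import torch
import torch.nn as nn

def balanced_class_weights(labels, n_classes=5):
    """Inverse-frequency weights so every sleep stage contributes comparably
    to the loss (a standard remedy; the paper's exact loss may differ)."""
    counts = np.bincount(labels, minlength=n_classes).astype(float)
    weights = counts.sum() / (n_classes * np.maximum(counts, 1.0))
    return torch.tensor(weights, dtype=torch.float32)

# Example with the five AASM stages (W, N1, N2, N3, REM) and invented proportions:
labels = np.random.choice(5, size=10000, p=[0.2, 0.05, 0.45, 0.15, 0.15])
criterion = nn.CrossEntropyLoss(weight=balanced_class_weights(labels))
```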

    Deep Learning for Processing Electromyographic Signals: a Taxonomy-based Survey

    Deep Learning (DL) has recently been employed to build smart systems that perform incredibly well in a wide range of tasks, such as image recognition, machine translation, and self-driving cars. In several fields, considerable improvements in computing hardware and the increasing need for big data analytics have boosted DL work, and in recent years physiological signal processing has strongly benefited from deep learning. In general, there has been an exponential increase in the number of studies concerning the processing of electromyographic (EMG) signals using DL methods. This phenomenon is mostly explained by the current limitations of myoelectrically controlled prostheses as well as the recent release of large EMG recording datasets, e.g. Ninapro. This growing trend inspired us to seek out and review recent papers focusing on processing EMG signals using DL methods. Referring to the Scopus database, a systematic literature search of papers published between January 2014 and March 2019 was carried out, and sixty-five papers were chosen for review after a full-text analysis. The bibliometric research revealed that the reviewed papers can be grouped into four main categories according to the final application of the EMG signal analysis: Hand Gesture Classification, Speech and Emotion Classification, Sleep Stage Classification, and Other Applications. The review process also confirmed the increasing trend in the number of published papers: the number published in 2018 is four times that of the year before. As expected, most of the analyzed papers (≈60%) concern the identification of hand gestures, thus supporting our hypothesis. Finally, it is worth reporting that the convolutional neural network (CNN) is the most used topology among the DL architectures involved; approximately sixty percent of the reviewed articles employ a CNN.

    On the Utility of Representation Learning Algorithms for Myoelectric Interfacing

    Electrical activity produced by muscles during voluntary movement is a reflection of the firing patterns of relevant motor neurons and, by extension, of the latent motor intent driving the movement. Once transduced via electromyography (EMG) and converted into digital form, this activity can be processed to provide an estimate of the original motor intent and is as such a feasible basis for non-invasive efferent neural interfacing. EMG-based motor intent decoding has so far received the most attention in the field of upper-limb prosthetics, where alternative means of interfacing are scarce and the utility of better control is apparent. Whereas myoelectric prostheses have been available since the 1960s, available EMG control interfaces still lag behind the mechanical capabilities of the artificial limbs they are intended to steer, a gap at least partially due to limitations in current methods for translating EMG into appropriate motion commands. As the relationship between EMG signals and concurrent effector kinematics is highly non-linear and apparently stochastic, finding ways to accurately extract and combine relevant information from across electrode sites is still an active area of inquiry. This dissertation comprises an introduction and eight papers that explore issues afflicting the status quo of myoelectric decoding and possible solutions, all related through their use of learning algorithms and deep Artificial Neural Network (ANN) models. Paper I presents a Convolutional Neural Network (CNN) for multi-label movement decoding of high-density surface EMG (HD-sEMG) signals. Inspired by the successful use of CNNs in Paper I and the work of others, Paper II presents a method for the automatic design of CNN architectures for use in myocontrol. Paper III introduces an ANN architecture, with an accompanying training framework, from which simultaneous and proportional control emerges. Paper IV introduces a dataset of HD-sEMG signals for use with learning algorithms. Paper V applies a Recurrent Neural Network (RNN) model to decode finger forces from intramuscular EMG. Paper VI introduces a Transformer model for myoelectric interfacing that does not need additional training data to function with previously unseen users. Paper VII compares the performance of a Long Short-Term Memory (LSTM) network to that of classical pattern recognition algorithms. Lastly, Paper VIII describes a framework for synthesizing EMG from multi-articulate gestures, intended to reduce training burden.
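As a small illustration of the multi-label movement decoding explored in Paper I, the PyTorch sketch below uses independent sigmoid outputs with a per-label binary loss so that several degrees of freedom can be active simultaneously; the channel count, label count, and network size are assumptions, not the paper's model.

```python
import torch
import torch.nn as nn

# Multi-label decoding: each output unit flags one movement component, so
# several can be predicted active at once (hence a per-label binary loss).
decoder = nn.Sequential(
    nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(32, 8),                      # 8 independent movement labels
)
criterion = nn.BCEWithLogitsLoss()         # per-label binary loss, not softmax

semg = torch.randn(4, 16, 256)             # (batch, channels, samples), illustrative sizes
targets = torch.randint(0, 2, (4, 8)).float()
loss = criterion(decoder(semg), targets)
```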