
    Surface Electromyography and Artificial Intelligence for Human Activity Recognition - A Systematic Review on Methods, Emerging Trends, Applications, Challenges, and Future Implementation

    Human activity recognition (HAR) has become increasingly popular in recent years due to its potential to meet the growing needs of various industries. Electromyography (EMG) is essential in various clinical and biological settings. It is a measurement that helps clinicians diagnose conditions affecting muscle activation patterns and monitor patients’ progress in rehabilitation, disease diagnosis, motion intention recognition, and related areas. This review summarizes the research literature on HAR with EMG. Over recent years, the integration of Artificial Intelligence (AI) has catalyzed remarkable advancements in the classification of biomedical signals, with a particular focus on EMG data. Firstly, this review meticulously curates a wide array of research papers that have contributed significantly to the evolution of EMG-based activity recognition. By surveying the existing literature, we provide an insightful overview of the key findings and innovations that have propelled this field forward. The review explores the various approaches used to preprocess EMG signals, including noise reduction, baseline correction, filtering, and normalization, which ensure that the EMG data are suitably prepared for subsequent analysis. In addition, we unravel the multitude of techniques employed to extract meaningful features from raw EMG data, encompassing both time-domain and frequency-domain features. These techniques are fundamental to achieving a comprehensive characterization of muscle activity patterns. Furthermore, we provide an extensive overview of both Machine Learning (ML) and Deep Learning (DL) classification methods, showcasing their respective strengths, limitations, and real-world applications in recognizing diverse human activities from EMG signals. In examining the hardware infrastructure for HAR with EMG, the synergy between hardware and software is underscored as paramount for enabling real-time monitoring. Finally, we also identify open issues and future research directions that may point to new lines of inquiry for ongoing research toward EMG-based detection.
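    As a rough illustration of the preprocessing and feature-extraction steps surveyed above, the sketch below band-pass filters, baseline-corrects, and normalizes one EMG channel, then computes a few common time-domain and frequency-domain features. The filter settings (20-450 Hz at a 1 kHz sampling rate) and the particular feature set are illustrative assumptions, not values prescribed by the review.

```python
# Minimal sketch of an EMG preprocessing and feature-extraction pipeline.
# Assumed parameters: 1 kHz sampling rate, 20-450 Hz band-pass; the review
# surveys many variants and does not fix these exact settings.
import numpy as np
from scipy.signal import butter, filtfilt, welch

def preprocess_emg(raw, fs=1000.0, band=(20.0, 450.0)):
    """Baseline-correct, band-pass filter, and amplitude-normalize one channel."""
    x = raw - np.mean(raw)                        # baseline (DC offset) correction
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    x = filtfilt(b, a, x)                         # zero-phase band-pass filtering
    return x / (np.max(np.abs(x)) + 1e-12)        # peak-amplitude normalization

def emg_features(x, fs=1000.0):
    """A few common time-domain and frequency-domain EMG features."""
    mav = np.mean(np.abs(x))                      # mean absolute value
    rms = np.sqrt(np.mean(x ** 2))                # root mean square
    zc = np.sum(np.diff(np.sign(x)) != 0)         # zero crossings
    wl = np.sum(np.abs(np.diff(x)))               # waveform length
    f, pxx = welch(x, fs=fs, nperseg=min(256, len(x)))
    mnf = np.sum(f * pxx) / np.sum(pxx)           # mean frequency of the spectrum
    return {"MAV": mav, "RMS": rms, "ZC": zc, "WL": wl, "MNF": mnf}

# Usage on a synthetic signal, just to show the flow:
rng = np.random.default_rng(0)
print(emg_features(preprocess_emg(rng.normal(size=2000))))
```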

    FabricTouch: A Multimodal Fabric Assessment Touch Gesture Dataset to Slow Down Fast Fashion

    Touch exploration of fabric is used to evaluate its properties, and it could further be leveraged to understand a consumer’s sensory experience and preferences so as to support them in real time in making careful clothing purchase decisions. In this paper, we open up opportunities to explore the use of technology to provide such support with our FabricTouch dataset, i.e., a multimodal dataset of fabric assessment touch gestures. The dataset consists of bilateral forearm movement and muscle activity data captured while 15 people explored 114 different garments in total to evaluate them according to 5 properties (warmth, thickness, smoothness, softness, and flexibility). The dataset further includes subjective ratings of the garments with respect to each property and ratings of the pleasure experienced in exploring the garment through touch. We further report baseline work on automatic detection. Our results suggest that it is possible to recognise the type of fabric property that a consumer is exploring based on their touch behaviour. We obtained a mean F1 score of 0.61 for unseen garments, for the 5 types of fabric property. The results also highlight the possibility of additionally recognizing the consumer’s subjective rating of the fabric when the property being rated is known, with a mean F1 score of 0.97 for unseen subjects, for 3 rating levels.
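    The baseline results above are reported for unseen garments and unseen subjects, which points to a grouped cross-validation protocol. Below is a minimal sketch of such an evaluation producing a mean F1 score over held-out groups; the random-forest classifier and the synthetic features are placeholders, not the models or data from the paper.

```python
# Sketch of a leave-one-group-out evaluation that yields a mean F1 score over
# unseen groups (garments or subjects). The classifier and synthetic data are
# illustrative assumptions, not the FabricTouch baseline itself.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))          # movement/EMG feature vectors
y = rng.integers(0, 5, size=300)        # 5 fabric-property labels
groups = rng.integers(0, 15, size=300)  # e.g. one group per participant

scores = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups):
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X[train_idx], y[train_idx])
    pred = clf.predict(X[test_idx])
    scores.append(f1_score(y[test_idx], pred, average="macro"))

print(f"mean F1 over unseen groups: {np.mean(scores):.2f}")
```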

    Development of a new robust hybrid automata algorithm based on surface electromyography (SEMG) signal for instrumented wheelchair control

    An instrumented wheelchair operated by surface electromyography (sEMG) is one alternative for assisting persons with impairments in their mobility. sEMG is chosen because of its good accuracy and the ease of placing the electrodes. Motor neurons transmit electrical potentials to muscle fibres to perform isometric, concentric, or eccentric contractions. These electrical changes, called Motor Unit Action Potentials (MUAP), can be acquired and amplified by electrodes placed on the targeted muscles and then recorded and analysed using sEMG devices. However, commercially available sEMG data acquisition devices cost up to USD 2,100, a drawback for persons with impairments, many of whom face financial difficulties because they are unable to work as before. In addition, these are closed-source systems that cannot be modified to improve accuracy or add features. Open-source systems such as Arduino have limited specifications, so they can only apply non-pattern-recognition control methods, which are simpler and easier than pattern recognition; however, their classification accuracy is lower, and they cannot be applied to larger numbers of participants from different backgrounds and genders. This research aims to develop an open-source, Arduino-based sEMG data acquisition device and to formulate a hybrid automata algorithm to differentiate MUAP activity during wheelchair propulsion. Adding a hybrid automata algorithm that can run both pattern-recognition and non-pattern-recognition control methods increases the accuracy in differentiating forward-stroke from hand-return activity. Electrodes were placed on the Biceps (BIC), Triceps (TRI), Extensor (EXT), and Flexor (FIX), and MUAP activity was recorded for 30 healthy persons. The experimental results were then validated against simulation results from the OpenSim biomechanical modelling software. The mean, standard deviation (SD), confidence interval (CI), and maximum point difference (MPD) of the MUAP were calculated and used as thresholds for the non-pattern-recognition control method in a method-selection experiment, while pattern recognition used a Probability Density Function (PDF) to determine MUAP according to the type of activity. A total of ten control methods, derived from population and individual data, were tested on another 10 healthy persons to evaluate the algorithm’s performance. Each control method was assessed with a misclassification matrix, examining the True Positives (TP) and False Negatives (FN) of the power-assist system activation period. The developed sEMG data acquisition device, operated by an Arduino MEGA 2560 and Myoware muscle sensors with a sampling rate above 400 Hz, successfully recorded MUAP from four arm muscles, with an average data latency of 2.5 ms for the device to record, analyse, validate, and issue commands to activate the power-assist system. Data obtained from the device show that the most active muscle during wheelchair propulsion is TRI, followed by BIC, matching the OpenSim simulation results. In the method-selection experiment, an average accuracy of 96.28% was achieved, and a different control method was selected by the misclassification matrix for each person. The selected method serves as the control method to activate the power-assist system and is chosen based on conditions set in the algorithm.
    These findings indicate that an open-source Arduino board is capable of running real-time pattern-recognition and non-pattern-recognition control methods with classification accuracy of up to 99.48%, even though it is known as just a microcontroller with limited capacity for running complex classifiers. At the same time, a device costing less than USD 200 with a 400 Hz sampling rate is as good as closed-source devices that carry an expensive price tag. The algorithm evaluation shows that one control method cannot fit all persons, as proven in the method-selection experiment: different persons are best suited by different control methods. Lastly, BIC and TRI can serve as reference muscles for activating an assistive device in an instrumented wheelchair that uses propulsion as the indication.
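    To make the non-pattern-recognition side of this scheme concrete, the sketch below derives per-muscle thresholds from calibration statistics and labels an analysis window as a forward stroke when the biceps/triceps envelopes exceed their thresholds. The mean + k*SD rule, the value k = 1.5, and the window statistics are assumptions for illustration, not the thesis's exact formulation.

```python
# Minimal sketch of a threshold-based (non-pattern-recognition) control rule:
# per-muscle thresholds come from calibration statistics, and a window is
# labelled "forward_stroke" when TRI or BIC activity exceeds its threshold.
import numpy as np

MUSCLES = ("BIC", "TRI", "EXT", "FIX")

def calibrate_thresholds(calibration, k=1.5):
    """calibration: dict muscle -> 1-D array of rectified EMG amplitudes."""
    return {m: np.mean(v) + k * np.std(v) for m, v in calibration.items()}

def classify_window(window, thresholds):
    """window: dict muscle -> 1-D array of samples for one analysis window."""
    envelope = {m: np.mean(np.abs(v)) for m, v in window.items()}
    active = (envelope["TRI"] > thresholds["TRI"]
              or envelope["BIC"] > thresholds["BIC"])
    return "forward_stroke" if active else "hand_return"

# Usage with synthetic resting calibration data and one active window:
rng = np.random.default_rng(0)
calib = {m: np.abs(rng.normal(0.1, 0.02, 500)) for m in MUSCLES}
thr = calibrate_thresholds(calib)
window = {m: np.abs(rng.normal(0.3 if m in ("BIC", "TRI") else 0.1, 0.02, 100))
          for m in MUSCLES}
print(classify_window(window, thr))  # expected: forward_stroke
```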

    A multimodal human-robot sign language interaction framework applied in social robots

    Deaf-mute people face many difficulties in daily interactions with hearing people through spoken language. Sign language is an important means of expression and communication for them. Therefore, breaking the communication barrier between the deaf-mute and hearing communities is significant for facilitating their integration into society. To help them integrate into social life better, we propose a multimodal Chinese sign language (CSL) gesture interaction framework based on social robots. The CSL gesture information, including both static and dynamic gestures, is captured from two different modal sensors. A wearable Myo armband and a Leap Motion sensor are used to collect human arm surface electromyography (sEMG) signals and hand 3D vectors, respectively. The two modalities of gesture data are preprocessed and fused to improve recognition accuracy and to reduce the processing time cost of the network before being sent to the classifier. Since the inputs of the proposed framework are temporal gesture sequences, a long short-term memory (LSTM) recurrent neural network is used to classify these input sequences. Comparative experiments are performed on an NAO robot to test our method. Moreover, our method can effectively improve CSL gesture recognition accuracy and has potential applications in a variety of gesture interaction scenarios, not only in social robots.
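    The fusion-then-LSTM idea described above can be sketched very compactly: per-frame sEMG features and Leap Motion hand vectors are concatenated frame by frame and fed to a recurrent classifier. The feature dimensions, the concatenation-based fusion, and the single-layer LSTM below are assumptions for illustration, not the paper's exact architecture.

```python
# Sketch of an LSTM classifier over fused gesture sequences: Myo sEMG frames
# and Leap Motion hand-vector frames are concatenated before the recurrence.
import torch
import torch.nn as nn

class GestureLSTM(nn.Module):
    def __init__(self, emg_dim=8, leap_dim=15, hidden=64, num_classes=10):
        super().__init__()
        self.lstm = nn.LSTM(emg_dim + leap_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, emg_seq, leap_seq):
        fused = torch.cat([emg_seq, leap_seq], dim=-1)  # frame-level fusion
        _, (h_n, _) = self.lstm(fused)                  # final hidden state
        return self.head(h_n[-1])                       # class logits

# Usage on random tensors: a batch of 4 gestures, 100 frames each.
emg = torch.randn(4, 100, 8)
leap = torch.randn(4, 100, 15)
print(GestureLSTM()(emg, leap).shape)  # torch.Size([4, 10])
```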

    Evaluation of models for gesture recognition in biometric signals for a user with reduced mobility

    This paper compares the results of three computational models (pattern recognition, hidden Markov models, and bag of features) for recognizing the hand gestures of a user with reduced mobility using biometric signal processing. The evaluation of the models included eight gestures co-designed with a person with reduced mobility. The models were evaluated using a cross-validation scheme, calculating sensitivity and precision metrics, on a data set of ten repetitions of each gesture. It can be concluded that the bag-of-features model achieved the best performance considering the two metrics under evaluation; the traditional pattern recognition model, using support vector machines, produced the most stable results; and the hidden Markov models had the lowest performance.
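    The comparison protocol described above (cross-validation with sensitivity and precision) can be sketched as follows. Only a support-vector classifier stands in here; the hidden Markov model and bag-of-features pipelines are not reproduced, and the synthetic data set (8 gestures x 10 repetitions, 16 features) is an assumption chosen to match the described data size.

```python
# Sketch of cross-validated sensitivity (macro recall) and precision for one
# candidate model; the data and the SVM hyperparameters are placeholders.
import numpy as np
from sklearn.model_selection import cross_validate
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 16))        # 8 gestures x 10 repetitions, 16 features
y = np.repeat(np.arange(8), 10)      # gesture labels

scores = cross_validate(
    SVC(kernel="rbf", C=1.0),
    X, y, cv=5,
    scoring={"sensitivity": "recall_macro", "precision": "precision_macro"},
)
print("mean sensitivity:", scores["test_sensitivity"].mean())
print("mean precision:  ", scores["test_precision"].mean())
```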

    A standardised and cost-effective VR approach for powered wheelchair training

    Mastering wheelchair driving skills is essential for the safety of wheelchair users (WUs), yet the acquisition of these skills can be challenging, and training resources can be costly or unavailable. Technologies such as virtual reality (VR) have grown in popularity, as they can provide a motivating training environment without the risks found in real-life training. However, these approaches often deploy navigation controllers that differ from the ones WUs actually use, and they do not use a standardised approach to assessing the acquisition of skills. We propose a VR training system based on the wheelchair skills training program (WSTP) that uses a sensor device that can be retrofitted to any joystick and communicates wirelessly with a head-mounted display. In this paper, we present a first validation study with fourteen able-bodied participants, split between a VR test group and a non-VR control group. To determine the acquisition of skills, participants complete tasks in real life before and after the VR training, where completion time and length of joystick movements are measured. We also assess our system using heart-rate measurements, the WSTP questionnaire, the simulator sickness questionnaire, and the igroup presence questionnaire. We found that the VR training facilitates the acquisition of skills for more challenging tasks; thus, our system has the potential to be used for training the skills of powered wheelchair users, with the benefit of conducting the training safely and in a low-cost setup.
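    The two skill measures mentioned above, completion time and total length of joystick movement, can be computed from timestamped joystick samples as in the sketch below. The logging format (seconds plus x/y deflections) is an assumption; the paper's instrumentation details are not reproduced here.

```python
# Sketch of the completion-time and joystick-path-length metrics, assuming
# timestamped (x, y) joystick samples recorded during one task.
import numpy as np

def joystick_metrics(timestamps, positions):
    """timestamps: (N,) seconds; positions: (N, 2) joystick x/y deflections."""
    completion_time = timestamps[-1] - timestamps[0]
    path_length = np.sum(np.linalg.norm(np.diff(positions, axis=0), axis=1))
    return completion_time, path_length

# Usage with synthetic samples at roughly 50 Hz:
t = np.linspace(0.0, 12.0, 600)
pos = np.column_stack([np.sin(t), np.cos(t)]) * 0.5
print(joystick_metrics(t, pos))
```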