    Towards Open-set Gesture Recognition via Feature Activation Enhancement and Orthogonal Prototype Learning

    Gesture recognition is a foundational task in human-machine interaction (HMI). While there has been significant progress in gesture recognition based on surface electromyography (sEMG), accurately recognizing only predefined gestures within a closed set is still inadequate in practice. A robust system must also effectively discern and reject unknown gestures of no interest. Numerous methods based on prototype learning (PL) have been proposed to tackle this open-set recognition (OSR) problem, but they do not fully exploit the inherent distinctions between known and unknown classes. In this paper, we propose a more effective PL method leveraging two novel and inherent distinctions: feature activation level and projection inconsistency. Specifically, the Feature Activation Enhancement Mechanism (FAEM) widens the gap in feature activation values between known and unknown classes. Furthermore, we introduce Orthogonal Prototype Learning (OPL) to construct multiple perspectives: OPL projects a sample along orthogonal directions to maximize the distinction between its two projections, so that unknown samples are projected near the clusters of different known classes while known samples maintain intra-class similarity. The proposed method simultaneously achieves accurate closed-set classification of predefined gestures and effective rejection of unknown gestures. Extensive experiments demonstrate its efficacy and superiority in open-set gesture recognition based on sEMG.
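
    As a rough, illustrative sketch of the prototype-learning ideas summarized above (not the authors' FAEM/OPL implementation), the snippet below attaches per-class prototypes to two projection heads that an orthogonality penalty keeps distinct, and flags a sample as unknown when its two views disagree on the nearest prototype or lie far from every prototype. All layer sizes, the rejection heuristic, and the distance threshold are assumptions.

```python
# Minimal sketch of prototype-based open-set rejection with two projection
# heads kept approximately orthogonal. Dimensions, the rejection rule, and the
# threshold are illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn

class OrthogonalPrototypeHead(nn.Module):
    def __init__(self, feat_dim=128, num_classes=10):
        super().__init__()
        # Two projection matrices; an orthogonality penalty keeps them distinct.
        self.proj_a = nn.Linear(feat_dim, feat_dim, bias=False)
        self.proj_b = nn.Linear(feat_dim, feat_dim, bias=False)
        # One learnable prototype per known class in each projected space.
        self.protos_a = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.protos_b = nn.Parameter(torch.randn(num_classes, feat_dim))

    def orthogonality_penalty(self):
        # Encourage W_a W_b^T ~ 0 so the two heads view the feature
        # from (approximately) orthogonal directions.
        cross = self.proj_a.weight @ self.proj_b.weight.t()
        return cross.pow(2).mean()

    def forward(self, feats):
        za, zb = self.proj_a(feats), self.proj_b(feats)
        # Distances from each sample to every class prototype in each view.
        da = torch.cdist(za, self.protos_a)   # (batch, num_classes)
        db = torch.cdist(zb, self.protos_b)
        return da, db

def reject_unknown(da, db, dist_thresh=5.0):
    """Flag a sample as unknown if its two views disagree on the nearest
    prototype, or if it is far from every prototype (illustrative heuristic)."""
    pred_a, pred_b = da.argmin(dim=1), db.argmin(dim=1)
    min_dist = torch.minimum(da.min(dim=1).values, db.min(dim=1).values)
    return (pred_a != pred_b) | (min_dist > dist_thresh)
```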

    The Effect of Space-filling Curves on the Efficiency of Hand Gesture Recognition Based on sEMG Signals

    Over the past few years, deep learning (DL) has revolutionized the field of data analysis. Not only have the algorithmic paradigms changed, but performance in various classification and prediction tasks has also improved significantly with respect to the state of the art, especially in computer vision. The progress made in computer vision has spilled over into many other domains, such as biomedical engineering. Some recent works address surface electromyography (sEMG)-based hand gesture recognition, often framed as an image classification problem and solved with tools such as Convolutional Neural Networks (CNNs). This paper extends our previous work on applying the Hilbert space-filling curve to generate image representations from multi-electrode sEMG signals, by investigating how the Hilbert curve compares to the Peano and Z-order space-filling curves. The proposed space-filling mappings are evaluated on a variety of network architectures and in some cases yield a classification improvement of at least 3% when used to structure the inputs before feeding them into the original network architectures.
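
    For readers unfamiliar with the mapping, the sketch below shows one simple way to lay a window of a single sEMG channel onto a 2-D grid along a Hilbert curve, so that samples adjacent in time stay adjacent in the image. The window length, image size, and the idea of stacking per-electrode images into a multi-channel input are illustrative assumptions, not the paper's exact preprocessing.

```python
# Minimal sketch of mapping a 1-D sEMG window onto a 2-D image along a
# Hilbert curve. Window length and image size are illustrative assumptions.
import numpy as np

def hilbert_d2xy(n, d):
    """Convert index d on an n x n Hilbert curve (n a power of two) to (x, y),
    using the standard iterative conversion."""
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                      # rotate the quadrant if needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def signal_to_hilbert_image(window, size=16):
    """Place size*size consecutive samples of one sEMG channel on a
    size x size grid following the Hilbert curve."""
    assert len(window) >= size * size
    img = np.zeros((size, size), dtype=np.float32)
    for d in range(size * size):
        x, y = hilbert_d2xy(size, d)
        img[y, x] = window[d]
    return img

# Example: a 256-sample window from one electrode becomes a 16x16 image;
# stacking the images of all electrodes yields a multi-channel CNN input.
image = signal_to_hilbert_image(np.random.randn(256), size=16)
```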

    Short-Term Load Forecasting for Industrial Customers Based on TCN-LightGBM

    Accurate and rapid load forecasting for industrial customers plays a crucial role in modern power systems. Due to the variability of industrial customers' activities, individual industrial loads are usually too volatile to forecast accurately. In this paper, a short-term load forecasting model for industrial customers based on the Temporal Convolutional Network (TCN) and Light Gradient Boosting Machine (LightGBM) is proposed. First, a fixed-length sliding time window is adopted to reconstruct the electrical features. Next, the TCN is used to extract hidden information and long-term temporal relationships from the input features, which include electrical features, a meteorological feature, and date features. Finally, a state-of-the-art LightGBM model forecasts the industrial customers' loads from these features. The effectiveness of the proposed model is demonstrated on datasets from different industries in China, Australia, and Ireland. Multiple experiments and comparisons with existing models show that the proposed model provides accurate load forecasting results.
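
    The sketch below outlines the two-stage pipeline described above under assumed settings: a fixed-length sliding window turns the load series into supervised samples, a small stack of dilated causal 1-D convolutions stands in for the TCN feature extractor (left untrained here purely for illustration), and LightGBM regresses the next-step load on the extracted features. The window length, layer sizes, and the univariate toy series are assumptions, not the paper's configuration.

```python
# Minimal sketch of a TCN -> LightGBM forecasting pipeline. All settings are
# illustrative assumptions; the TCN is untrained and the series is synthetic.
import numpy as np
import torch
import torch.nn as nn
import lightgbm as lgb

def sliding_windows(series, window=96, horizon=1):
    """Reconstruct a 1-D load series into (input windows, next-step targets)."""
    X, y = [], []
    for i in range(len(series) - window - horizon + 1):
        X.append(series[i:i + window])
        y.append(series[i + window + horizon - 1])
    return np.asarray(X, dtype=np.float32), np.asarray(y, dtype=np.float32)

class TinyTCN(nn.Module):
    """Two dilated causal conv layers; left-padding keeps the output causal."""
    def __init__(self, channels=16):
        super().__init__()
        self.conv1 = nn.Conv1d(1, channels, kernel_size=3, dilation=1)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size=3, dilation=2)

    def forward(self, x):                       # x: (batch, 1, window)
        x = torch.relu(self.conv1(nn.functional.pad(x, (2, 0))))
        x = torch.relu(self.conv2(nn.functional.pad(x, (4, 0))))
        return x[:, :, -1]                      # last time step as feature vector

# Toy example on a synthetic load curve.
load = np.sin(np.linspace(0, 60, 3000)) + 0.1 * np.random.randn(3000)
X, y = sliding_windows(load)
tcn = TinyTCN().eval()
with torch.no_grad():
    feats = tcn(torch.from_numpy(X).unsqueeze(1)).numpy()

# LightGBM regresses the next-step load on the TCN features.
model = lgb.LGBMRegressor(n_estimators=200)
model.fit(feats[:-200], y[:-200])
pred = model.predict(feats[-200:])
```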

    Deep Learning Methods for Hand Gesture Recognition via High-Density Surface Electromyogram (HD-sEMG) Signals

    Hand Gesture Recognition (HGR) using surface electromyogram (sEMG) signals can be considered one of the most important technologies for building efficient Human-Machine Interface (HMI) systems. In particular, sEMG-based hand gesture recognition has been a topic of growing interest for developing assistive systems that improve the quality of life of individuals with amputated limbs. Generally speaking, myoelectric prosthetic devices work by classifying patterns in the collected sEMG signals and synthesizing the intended gestures. While conventional myoelectric control systems, e.g., on/off or direct-proportional control, have potential advantages, challenges such as the limited Degrees of Freedom (DoF) due to crosstalk have led to the emergence of data-driven solutions. More specifically, to improve the efficiency, intuitiveness, and control performance of hand prosthetic systems, several Artificial Intelligence (AI) algorithms, ranging from conventional Machine Learning (ML) models to highly complex Deep Neural Network (DNN) architectures, have been designed for sEMG-based hand gesture recognition in myoelectric prosthetic devices. In this thesis, we first review the literature on hand gesture recognition methods and elaborate on recently proposed Deep Learning/Machine Learning (DL/ML) models. Then, the High-Density sEMG (HD-sEMG) dataset used in this work is introduced and the rationale behind our focus on this particular type of sEMG data is explained. We then develop a Vision Transformer (ViT)-based model for gesture recognition with HD-sEMG signals and evaluate its performance under different conditions such as variable window sizes, number of electrode channels, and model complexity. We compare its performance with that of two conventional ML algorithms and one DL algorithm typically adopted in this domain. Furthermore, we introduce another capability of the proposed framework for instantaneous training, namely its ability to classify hand gestures from a single frame of HD-sEMG data. Following that, we introduce the idea of integrating the macroscopic and microscopic neural drive information obtained from HD-sEMG data into a hybrid ViT-based framework for gesture recognition, which outperforms a standalone ViT architecture in terms of classification accuracy. Here, microscopic neural drive information (also called Motor Unit Spike Trains) refers to the neural commands sent by the brain and spinal cord to individual muscle fibers, extracted from HD-sEMG signals using Blind Source Separation (BSS) algorithms. Finally, we design an alternative and novel hand gesture recognition model based on the less-explored topic of Spiking Neural Networks (SNNs), which performs spatio-temporal gesture recognition in an event-based fashion. As opposed to classical DNN architectures, SNNs have the capacity to imitate the human brain's cognitive function by using biologically inspired models of neurons and synapses, making them more biologically explainable and computationally efficient.
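
    To make the ViT-based formulation concrete, the sketch below treats each time step of an HD-sEMG window (one frame of the electrode grid) as a token for a small transformer encoder and pools the encoded tokens for gesture classification. The grid size, tokenization scheme, embedding width, and classifier head are illustrative assumptions and do not reproduce the thesis architecture.

```python
# Minimal sketch of a transformer encoder over HD-sEMG frames. All sizes and
# the per-frame tokenization are illustrative assumptions.
import torch
import torch.nn as nn

class TinySEMGTransformer(nn.Module):
    def __init__(self, electrodes=64, window=32, dim=64, num_gestures=10):
        super().__init__()
        # Each time step (one 64-electrode frame) becomes one token.
        self.embed = nn.Linear(electrodes, dim)
        self.pos = nn.Parameter(torch.zeros(1, window, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, num_gestures)

    def forward(self, x):                       # x: (batch, window, electrodes)
        tokens = self.embed(x) + self.pos       # linear embedding + positions
        encoded = self.encoder(tokens)          # (batch, window, dim)
        return self.head(encoded.mean(dim=1))   # pooled gesture logits

# Example: a batch of 8 windows, each 32 frames of a 64-electrode grid.
logits = TinySEMGTransformer()(torch.randn(8, 32, 64))
```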