Optimal Clustering under Uncertainty
Classical clustering algorithms typically either lack an underlying
probability framework to make them predictive or focus on parameter estimation
rather than defining and minimizing a notion of error. Recent work addresses
these issues by developing a probabilistic framework based on the theory of
random labeled point processes and characterizing a Bayes clusterer that
minimizes the number of misclustered points. The Bayes clusterer is analogous
to the Bayes classifier. Whereas determining a Bayes classifier requires full
knowledge of the feature-label distribution, deriving a Bayes clusterer
requires full knowledge of the point process. When uncertain of the point
process, one would like to find a robust clusterer that is optimal over the
uncertainty, just as one may find optimal robust classifiers with uncertain
feature-label distributions. Herein, we derive an optimal robust clusterer by
first finding an effective random point process that incorporates all
randomness within its own probabilistic structure and from which a Bayes
clusterer can be derived that provides an optimal robust clusterer relative to
the uncertainty. This is analogous to the use of effective class-conditional
distributions in robust classification. After evaluating the performance of
robust clusterers on synthetic Gaussian mixture models, we apply the
framework to granular imaging, where we use the asymptotic
granulometric moment theory for granular images to relate robust clustering
theory to the application.
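As a rough sketch of the construction described above (the notation here is ours, not necessarily the paper's): the robust clusterer is the Bayes clusterer of an effective process obtained by averaging the uncertainty class over a prior, so minimizing the expected error over the uncertainty and minimizing the Bayes error of the effective process coincide.

```latex
% Notation is ours: {\Lambda_\theta : \theta \in \Theta} is the uncertainty
% class of labeled point processes, \pi a prior over \Theta, and
% \varepsilon(\psi) the number of misclustered points under clusterer \psi.
\psi^{*}_{\Theta}
  = \arg\min_{\psi} \int_{\Theta}
      \mathbb{E}_{\Lambda_{\theta}}\big[\varepsilon(\psi)\big]\,
      \pi(\theta)\,d\theta
  = \arg\min_{\psi}
      \mathbb{E}_{\Lambda_{\mathrm{eff}}}\big[\varepsilon(\psi)\big],
\qquad
\Lambda_{\mathrm{eff}} = \int_{\Theta} \Lambda_{\theta}\,\pi(\theta)\,d\theta .
```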
A User-Specific Hand Gesture Recognition Model Based on Feed-Forward Neural Networks, EMGs, and Correction of Sensor Orientation
Hand gesture recognition systems have several applications, including in medicine and engineering. A gesture recognition system should identify the class, time, and duration of a gesture executed by a user. Gesture recognition systems based on electromyography (EMG) produce good results when the EMG sensor is placed in the same orientation for training and testing. However, when the orientation of the sensor changes between training and testing, which is very common in practice, the classification and recognition accuracies degrade significantly. In this work, we propose a system for recognizing, in real time, five gestures of the right hand. These gestures are the same ones recognized by the proprietary system of the Myo armband. The proposed system is based on a shallow feed-forward artificial neural network. This network takes as input the covariances between the channels of an EMG and the result of a bag of five functions applied to each channel of an EMG. To correct the rotation of the EMG sensor, we also present an algorithm based on finding the channel of maximum energy in a set of synchronization EMGs, which in this work correspond to the gesture wave-out. The classification and recognition accuracies obtained here show that the recognition system, together with the algorithm for correcting the orientation, allows a user to wear the EMG sensor in different orientations for training and testing without a significant reduction in performance. Finally, to make the results reported in this paper reproducible, we have made the code and the dataset publicly available.
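A minimal sketch of the feature vector this abstract describes, in Python. The exact five functions in the bag are not listed here, so the choices below (mean, standard deviation, RMS, mean absolute value, waveform length) are illustrative assumptions:

```python
import numpy as np

# Hedged sketch: the abstract does not name the five functions, so these
# common EMG time-domain statistics stand in for the actual bag.
FUNCS = [np.mean,
         np.std,
         lambda x: np.sqrt(np.mean(x ** 2)),        # root mean square
         lambda x: np.mean(np.abs(x)),              # mean absolute value
         lambda x: np.sum(np.abs(np.diff(x)))]      # waveform length

def emg_features(emg: np.ndarray) -> np.ndarray:
    """emg: (n_samples, n_channels) window of a multichannel EMG."""
    cov = np.cov(emg, rowvar=False)                 # channel-by-channel covariance
    upper = cov[np.triu_indices_from(cov)]          # unique covariance entries
    per_channel = np.array([f(emg[:, c])
                            for c in range(emg.shape[1])
                            for f in FUNCS])
    return np.concatenate([upper, per_channel])     # input to the shallow ANN
```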
Hand Gesture Recognition Using Automatic Feature Extraction and Deep Learning Algorithms with Memory
Gesture recognition is widely used to express emotions or to communicate with other people or machines. Hand gesture recognition is a problem of great interest to researchers because it is a high-dimensional pattern recognition problem. The high dimensionality of the problem directly affects the performance of machine learning models and can be addressed through feature selection and feature extraction. In this sense, we evaluate models with manual feature extraction and with automatic feature extraction. The manual feature extraction was performed using statistical functions of central tendency, while the automatic extraction was performed by means of a CNN and a BiLSTM. These features were also evaluated with classifiers such as Softmax, ANN, and SVM. The best-performing model was the combination of BiLSTM and ANN (BiLSTM-ANN), with an accuracy of 99.9912%.
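A minimal Keras sketch of a BiLSTM-ANN pipeline of the kind described: the BiLSTM performs the automatic feature extraction and a small dense network classifies. Layer sizes, window length, and class count are illustrative assumptions, not the paper's settings:

```python
import tensorflow as tf

# Illustrative sizes, not taken from the paper.
N_STEPS, N_CHANNELS, N_CLASSES = 200, 8, 6

model = tf.keras.Sequential([
    tf.keras.Input(shape=(N_STEPS, N_CHANNELS)),
    # Automatic feature extraction: the BiLSTM summarizes the EMG window.
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    # ANN classification head on the learned features.
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```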
Analysis and Evaluation of Feature Selection and Feature Extraction Methods
Hand gestures are widely used in human-to-human and human-to-machine communication, so hand gesture recognition is a topic of great interest. Hand gesture recognition is closely related to pattern recognition, where overfitting can occur when there are many predictors relative to the size of the training set. It is therefore necessary to reduce the dimensionality of the feature vectors through feature selection techniques. In addition, the need for portability in hand gesture recognition systems limits the use of deep learning algorithms. In this sense, we propose a study of feature selection and extraction methods for use with traditional machine learning algorithms. The feature selection methods analyzed are maximum relevance and minimum redundancy (MRMR), Sequential, neighborhood component analysis without parameters (NCAsp), neighborhood component analysis with parameters (NCAp), Relief-F, and decision tree (DT). We also analyze the behavior of the feature selection methods in terms of classification and recognition accuracy and processing time. The feature selection methods were fed the outputs of seventeen feature extraction functions and return a score proportional to each function's importance; the functions are then ranked according to their scores, and the resulting features are fed to machine learning algorithms such as Artificial Neural Networks (ANN), Support Vector Machine (SVM), K-Nearest Neighbor (KNN), and Decision Tree (DT). This work demonstrates that all of the feature selection methods evaluated achieve better accuracy with ANN. In addition, the combination and number of feature extraction functions influence the accuracy and processing time.
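A minimal sketch of the score-then-rank scheme described above. MRMR, Relief-F, and NCA are not reproduced here; scikit-learn's mutual information stands in as the scoring function, and each column of X is assumed to hold the output of one feature extraction function:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def rank_features(X: np.ndarray, y: np.ndarray, k: int) -> np.ndarray:
    """Return the column indices of the k highest-scoring features."""
    scores = mutual_info_classif(X, y)   # importance score per feature
    order = np.argsort(scores)[::-1]     # rank high-to-low
    return order[:k]

# Usage sketch: X_sel = X[:, rank_features(X, y, k=5)], then feed X_sel
# to a traditional classifier such as an ANN, SVM, KNN, or DT.
```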
Sign Language Recognition Using Wearable Electronics: Implementing k-Nearest Neighbors with Dynamic Time Warping and Convolutional Neural Network Algorithms
We propose a sign language recognition system based on wearable electronics and two different classification algorithms. The wearable electronics consist of a sensory glove and inertial measurement units that gather finger, wrist, and arm/forearm movements. The classifiers were k-Nearest Neighbors with Dynamic Time Warping (a non-parametric method) and Convolutional Neural Networks (a parametric method). Ten sign-words from the Italian Sign Language were considered: cose, grazie, and maestra, together with words with international meaning such as google, internet, jogging, pizza, television, twitter, and ciao. Each sign was repeated one hundred times by seven people (five males and two females) aged 29–54 y ± 10.34 (SD). The classifiers performed with an accuracy of 96.6% ± 3.4 (SD) for k-Nearest Neighbors plus Dynamic Time Warping and 98.0% ± 2.0 (SD) for the Convolutional Neural Networks. Our wearable electronics are among the most complete reported, and the classifiers performed at the top in comparison with other relevant works in the literature.
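A minimal sketch of the non-parametric classifier: 1-nearest-neighbor under a plain dynamic time warping distance. Window constraints and the actual value of k used in the paper are omitted:

```python
import numpy as np

def dtw(a: np.ndarray, b: np.ndarray) -> float:
    """Unconstrained DTW distance between two (T, d) sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])   # frame-wise distance
            D[i, j] = cost + min(D[i - 1, j],            # best warping path
                                 D[i, j - 1],
                                 D[i - 1, j - 1])
    return float(D[n, m])

def knn_dtw_predict(query, templates, labels):
    """templates: list of glove/IMU sequences; k = 1 for brevity."""
    dists = [dtw(query, t) for t in templates]
    return labels[int(np.argmin(dists))]
```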
Hand Gesture Recognition Using EMG-IMU Signals and Deep Q-Networks
Hand gesture recognition (HGR) systems based on electromyography (EMG) signals and inertial measurement unit (IMU) signals have been studied for different applications in recent years. Most commonly, cutting-edge HGR methods are based on supervised machine learning. However, reinforcement learning (RL) techniques have shown potential benefits that make them a viable option for classifying EMGs. Methods based on RL have several advantages, such as promising classification performance and online learning from experience. In this work, we developed an HGR system made up of the following stages: pre-processing, feature extraction, classification, and post-processing. For the classification stage, we built an RL-based agent capable of learning to classify and recognize eleven hand gestures (five static and six dynamic) using a deep Q-network (DQN) algorithm based on EMG and IMU information. The proposed system uses a feed-forward artificial neural network (ANN) to represent the agent's policy. We carried out the same experiments with two different sensors, the Myo armband and the G-force sensor, to compare their performance. We performed experiments using training, validation, and test set distributions, and the results were evaluated for user-specific HGR models. The final accuracy results demonstrate that the best model reached up to 97.50% ± 1.13% and 88.15% ± 2.84% for classification and recognition, respectively, for static gestures, and 98.95% ± 0.62% and 90.47% ± 4.57% for classification and recognition, respectively, for dynamic gestures with the Myo armband sensor. The results obtained in this work demonstrate that RL methods such as the DQN are capable of learning a policy from online experience to classify and recognize static and dynamic gestures using EMG and IMU signals.
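A minimal PyTorch sketch of the classification-as-RL framing described above: the state is an EMG-IMU feature vector, each action is a gesture label, and the reward signals whether the prediction was correct. Network sizes and the epsilon value are illustrative assumptions; with one-step episodes the DQN target reduces to the immediate reward:

```python
import random
import torch
import torch.nn as nn

N_FEATURES, N_GESTURES, EPS = 40, 11, 0.1   # illustrative sizes

q_net = nn.Sequential(nn.Linear(N_FEATURES, 64), nn.ReLU(),
                      nn.Linear(64, N_GESTURES))
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def dqn_step(state: torch.Tensor, true_label: int) -> None:
    """One online learning step on a single labeled EMG-IMU window."""
    with torch.no_grad():
        greedy = int(q_net(state).argmax())
    # Epsilon-greedy action selection over the gesture labels.
    action = random.randrange(N_GESTURES) if random.random() < EPS else greedy
    reward = 1.0 if action == true_label else -1.0
    # One-step episode: the TD target is just the reward.
    loss = (q_net(state)[action] - reward) ** 2
    opt.zero_grad(); loss.backward(); opt.step()
```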
Recognition of Hand Gestures Based on EMG Signals with Deep and Double-Deep Q-Networks
In recent years, hand gesture recognition (HGR) technologies that use electromyography (EMG) signals have been of considerable interest for developing human–machine interfaces. Most state-of-the-art HGR approaches are based mainly on supervised machine learning (ML). However, the use of reinforcement learning (RL) techniques to classify EMGs is still a new and open research topic. Methods based on RL have some advantages, such as promising classification performance and online learning from the user's experience. In this work, we propose a user-specific HGR system based on an RL agent that learns to characterize EMG signals from five different hand gestures using deep Q-network (DQN) and double-deep Q-network (Double-DQN) algorithms. Both methods use a feed-forward artificial neural network (ANN) to represent the agent policy. We also performed additional tests adding a long short-term memory (LSTM) layer to the ANN to analyze and compare its performance. We performed experiments using training, validation, and test sets from our public dataset, EMG-EPN-612. The final accuracy results demonstrate that the best model was DQN without LSTM, obtaining classification and recognition accuracies of up to 90.37% ± 10.7% and 82.52% ± 10.9%, respectively. The results obtained in this work demonstrate that RL methods such as DQN and Double-DQN can obtain promising results for classification and recognition problems based on EMG signals.
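A minimal sketch contrasting the two bootstrapped targets the abstract compares; q_net and target_net stand for online and target copies of the ANN policy network, and are assumptions of this sketch rather than the paper's code:

```python
import torch

def dqn_target(r: float, s_next: torch.Tensor, gamma: float,
               target_net) -> torch.Tensor:
    # Standard DQN: the target network both selects and evaluates
    # the next action, which tends to overestimate Q-values.
    return r + gamma * target_net(s_next).max()

def double_dqn_target(r: float, s_next: torch.Tensor, gamma: float,
                      q_net, target_net) -> torch.Tensor:
    # Double-DQN: the online network selects the action and the target
    # network evaluates it, reducing the max-operator overestimation bias.
    a_star = q_net(s_next).argmax()
    return r + gamma * target_net(s_next)[a_star]
```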
CNN-LSTM and post-processing for EMG-based hand gesture recognition
Hand Gesture Recognition (HGR) using electromyography (EMG) signals is a challenging problem due to the variability and noise in the signals across individuals. This study addresses this challenge by examining the effect of a post-processing algorithm, which filters the sequence of predictions and removes spurious labels, on the performance of an HGR model based on spectrograms and Convolutional Neural Networks (CNN). The study also compares CNN vs. CNN-LSTM to assess the influence of the memory cells on the model. The EMG-EPN-612 dataset, which contains EMG measurements of 5 hand gestures from 612 subjects, was used for training and testing. The results showed that the post-processing algorithm increased the recognition accuracy by 41.86% for the CNN model and 24.77% for the CNN-LSTM model. The inclusion of the memory cells increased accuracy by 3.29%, but at the cost of 53 times more learnable parameters. The CNN-LSTM model with post-processing achieved a mean recognition accuracy of 90.55% (SD = 9.45%). These findings suggest new paths for research in HGR architectures beyond the traditional focus on the classification and feature extraction stages. For reproducibility, we have made the source code publicly available on GitHub.
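A minimal sketch of one common form of such post-processing: dropping label runs shorter than a minimum length from the frame-wise prediction sequence. The run-length threshold and the rest label are illustrative assumptions, not necessarily the paper's algorithm:

```python
def filter_predictions(labels: list, min_run: int = 3, rest: int = 0) -> list:
    """Replace label runs shorter than min_run with the rest label."""
    out, i = [], 0
    while i < len(labels):
        j = i
        while j < len(labels) and labels[j] == labels[i]:
            j += 1                                   # extend the current run
        run = labels[i:j]
        out.extend(run if len(run) >= min_run else [rest] * len(run))
        i = j
    return out

# Usage sketch: a one-frame flicker of label 1 and a too-short tail of
# label 2 are treated as spurious and mapped to rest (0):
# filter_predictions([0, 0, 2, 2, 2, 1, 2, 2]) -> [0, 0, 2, 2, 2, 0, 0, 0]
```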
Educational Impact on Ecuadorian University Students Due to the COVID-19 Context
The mobility restrictions imposed in different countries due to the SARS-CoV-2 pandemic have hugely impacted different areas of the world. In this work, the impacts on social areas of Ecuadorian university students, such as education, economy, physical and mental health, and access to telecommunications, are analyzed. For this work, in a snapshot between May and September 2020, 1841 students from 6 public and 5 private universities in Ecuador were surveyed through 47 questions, which were grouped into 7 mutually exclusive dimensions. Partial Least Squares Structural Equation Modeling (PLS-SEM) was used to analyze the correlations between the responses to the questions and the relations between dimensions. The dimensional relations were used to analyze how students perceive online classes, teachers' preparation, their mood, and the impact of decreasing family income on their learning process. Among the most important results, we found that 63.78% of students want to return to on-site classes regardless of their Internet connection conditions and available learning tools (computers, tablets, or cellphones). The results also show that family income has influenced students' Internet access, Internet connection, technological resources for online learning, and mood. Regarding the relations between variables, we found that older students and students in higher semesters think that online classes are better than face-to-face classes and want to continue with online education.
An Energy-Based Method for Orientation Correction of EMG Bracelet Sensors in Hand Gesture Recognition Systems
Hand gesture recognition (HGR) systems using electromyography (EMG) bracelet-type sensors are currently largely preferred over other HGR technologies. However, bracelets are susceptible to electrode rotation, causing a decrease in HGR performance. In this work, HGR systems with an algorithm for orientation correction are proposed. The proposed orientation correction method is based on computing the maximum-energy channel using a synchronization gesture. The channels of the EMG are then rearranged into a new sequence that starts with the maximum-energy channel, and this new sequence of channels is used for both training and testing. After the EMG channels are rearranged, the signal passes through the following stages: pre-processing, feature extraction, classification, and post-processing. We implemented user-specific and user-general HGR models based on a common architecture that is robust to rotations of the EMG bracelet. Four experiments were performed, taking into account two different metrics, the classification and recognition accuracy, for both models implemented in this work, where each model was evaluated with and without rotation of the bracelet. The classification accuracy measures how well a model predicted which gesture is contained somewhere in a given EMG, whereas the recognition accuracy measures how well a model predicted when it occurred, how long it lasted, and which gesture is contained in a given EMG. The results of the experiments executed without and with orientation correction show an increase in performance from 44.5% to 81.2% for classification and from 43.3% to 81.3% for recognition in user-general models, while in user-specific models the results show an increase from 39.8% to 94.9% for classification and from 38.8% to 94.2% for recognition. The results obtained in this work show that the proposed method for orientation correction makes the performance of an HGR system robust to rotations of the EMG bracelet.
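A minimal sketch of the orientation-correction step as described: compute the maximum-energy channel on a synchronization-gesture EMG, then start the channel sequence there. Since the bracelet's electrodes form a ring, this amounts to a circular shift; the function names and array layout are this sketch's assumptions:

```python
import numpy as np

def correct_orientation(sync_emg: np.ndarray, emg: np.ndarray) -> np.ndarray:
    """sync_emg, emg: (n_samples, n_channels) with channels in ring order."""
    energy = np.sum(sync_emg ** 2, axis=0)   # per-channel signal energy
    k = int(np.argmax(energy))               # maximum-energy channel
    # Circular shift so the rearranged sequence starts at channel k; the
    # same shift is applied to both training and testing EMGs.
    return np.roll(emg, -k, axis=1)
```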