
    Convolutional Neural Network Array for Sign Language Recognition using Wearable IMUs

    Advancements in gesture recognition algorithms have led to significant growth in sign language translation. By making use of efficient intelligent models, signs can be recognized with precision. The proposed work presents a novel one-dimensional Convolutional Neural Network (CNN) array architecture for recognition of signs from Indian sign language using signals recorded from a custom-designed wearable IMU device. The IMU device makes use of a tri-axial accelerometer and gyroscope. The signals recorded using the IMU device are segregated on the basis of their context, such as whether they correspond to signing of a general sentence or an interrogative sentence. The array comprises two individual CNNs, one classifying the general sentences and the other classifying the interrogative sentences. The performance of the individual CNNs in the array architecture is compared to that of a conventional CNN classifying the unsegregated dataset. Peak classification accuracies of 94.20% for general sentences and 95.00% for interrogative sentences achieved with the proposed CNN array, in comparison to 93.50% for the conventional CNN, assert the suitability of the proposed approach.
    Comment: https://doi.org/10.1109/SPIN.2019.871174
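    The two-branch design described above is concrete enough to sketch. Below is a minimal, hypothetical PyTorch illustration of such a 1D CNN array, with one network per sentence context fed by six IMU channels (tri-axial accelerometer plus gyroscope); the layer sizes, window length, class count and routing function are assumptions for illustration, not details taken from the paper.

        # Hypothetical sketch of the two-CNN "array": one 1D CNN per sentence
        # context (general vs. interrogative), each fed six IMU channels.
        # Layer sizes, window length and class counts are illustrative guesses.
        import torch
        import torch.nn as nn

        class Sign1DCNN(nn.Module):
            def __init__(self, n_channels=6, n_classes=10, window=200):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
                    nn.MaxPool1d(2),
                    nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
                    nn.MaxPool1d(2),
                )
                self.classifier = nn.Linear(64 * (window // 4), n_classes)

            def forward(self, x):                    # x: (batch, channels, samples)
                return self.classifier(self.features(x).flatten(1))

        # The "array": route each segregated recording to the matching CNN.
        general_cnn = Sign1DCNN()
        interrogative_cnn = Sign1DCNN()

        def classify(signal, is_interrogative):
            model = interrogative_cnn if is_interrogative else general_cnn
            return model(signal).argmax(dim=1)

        example = torch.randn(1, 6, 200)             # one 200-sample IMU window
        print(classify(example, is_interrogative=False))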

    Investigation of Human Emotion Pattern Based on EEG Signal Using Wavelet Families and Correlation Feature Selection

    Emotions are one of the advantages given by God to human beings compared to other living creatures, and they play an important role in human life. Many studies have been conducted to recognize human emotions using physiological measurements, one of which is the electroencephalograph (EEG). However, previous research has not established which wavelet families perform best or which channels are optimal for recognizing human emotions. In this paper, power features are extracted using several wavelet families, namely Daubechies, Symlets, and Coiflets, combined with the Correlation Feature Selection (CFS) method to select the best features from the alpha, beta, gamma, and theta frequency bands. According to the results, Coiflets are the wavelet family with the best accuracy for emotion recognition. The use of CFS feature selection improves the accuracy of the results from 81% to 93%, and the five most dominant channels for the power features of the alpha and gamma bands are T8, T7, C5, CP5, and TP7. Hence, it can be concluded that the temporal region of the left brain is more dominant in the recognition of human emotions.
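    As a rough illustration of this pipeline (not the authors' code), the sketch below computes sub-band power features with PyWavelets and ranks them by their correlation with the class label. Note that this is a simplified per-feature ranking rather than Hall's full CFS subset search, and the wavelet name, decomposition level, channel count and top-k value are assumptions.

        # Sketch: wavelet sub-band power features + correlation-based ranking.
        # This ranks features individually; Hall's full CFS evaluates subsets.
        import numpy as np
        import pywt

        def band_powers(eeg, wavelet="coif3", level=4):
            """eeg: (n_channels, n_samples) -> one power value per channel and sub-band."""
            feats = []
            for ch in eeg:
                coeffs = pywt.wavedec(ch, wavelet, level=level)
                feats.extend(np.mean(c ** 2) for c in coeffs)   # power per sub-band
            return np.array(feats)

        def rank_by_correlation(X, y, top_k=20):
            """Return indices of the top_k features most correlated with the label."""
            corrs = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
            return np.argsort(-np.array(corrs))[:top_k]

        # Usage with random placeholder data (40 trials, 14 channels, 1024 samples).
        X = np.stack([band_powers(np.random.randn(14, 1024)) for _ in range(40)])
        y = np.random.randint(0, 2, size=40)
        selected = rank_by_correlation(X, y)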

    Affective computing applied to emotional assessment in gastronomic contexts

    This applied research project focuses on capturing the emotional states of people interacting in gastronomic contexts through a multimodal framework capable of recording both physiological and biometric data, which helps reflect the emotional state with greater fidelity than can be obtained when only a few data-collection methods or surveys are used. The aim is to obtain information in a multimodal context in order to assess the degree of pleasure or rejection, and the intensity, that different meals or foods produce in a person.
    Track: Innovation in software systems. Red de Universidades con Carreras en Informática

    A Linear Predictive Coding Filtering Method for Time-resolved Morphology of EEG Activity

    This paper introduces a new time-resolved spectral analysis method based on Linear Predictive Coding (LPC) that is particularly suited to the study of the dynamics of EEG (electroencephalography) activity. The spectral dynamics of EEG signals can be challenging to analyse as they contain multiple frequency components and are often corrupted by noise. The LPC Filtering (LPCF) method described here processes the LPC poles to generate a series of reduced-order filter transfer functions which can accurately estimate the dominant frequencies. The LPCF method is a parameterized time-frequency method that is suitable for identifying the dominant frequencies of multiple-component signals (e.g. EEG signals). We define bias and frequency resolution metrics to assess the ability of the LPCF method to estimate the frequencies. The experimental results show that the LPCF method can reduce the bias of the LPC estimates in the low and high frequency bands and improve the frequency resolution. Furthermore, the LPCF method is less sensitive to the filter order and has a higher tolerance of noise compared to the LPC method. Finally, we apply the LPCF method to a real EEG signal, where it can identify the dominant frequency in each frequency band and significantly reduce the redundant estimates of the LPC method.
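    To make the pole-based idea concrete, the sketch below shows a generic LPC workflow in NumPy: fit an autoregressive model to a signal segment via the Yule-Walker equations and read candidate dominant frequencies from the pole angles. This is not the LPCF pole-processing procedure itself; the model order, sampling rate and test signal are assumptions.

        # Generic LPC sketch: Yule-Walker fit, then dominant frequencies from poles.
        import numpy as np

        def lpc_coefficients(x, order):
            """Autocorrelation-method LPC; returns A(z) = [1, a1, ..., a_order]."""
            x = x - np.mean(x)
            r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]
            R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
            a = np.linalg.solve(R, -r[1:order + 1])
            return np.concatenate(([1.0], a))

        def dominant_frequencies(x, fs, order=10):
            poles = np.roots(lpc_coefficients(x, order))
            poles = poles[np.imag(poles) > 0]        # keep one of each conjugate pair
            freqs = np.angle(poles) * fs / (2 * np.pi)
            # Poles closest to the unit circle correspond to the strongest peaks.
            return freqs[np.argsort(-np.abs(poles))]

        fs = 256.0
        t = np.arange(0, 2, 1 / fs)
        x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 22 * t)
        x += 0.1 * np.random.randn(len(t))
        print(dominant_frequencies(x, fs)[:2])       # expected near 10 Hz and 22 Hz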

    Human-robot interaction in the context of affective computing, associating emotional states with a robot's behavior

    Preliminary results are presented from the development of a human-robot emotional interaction framework that supports configuring emotional states, the type of robot (physical or virtual), and the actions associated with each emotional state. For this project, several systems were integrated, most notably the Emotion Detection Asset software, which recognizes emotions from facial expressions captured with a webcam or imported from an image file; a user interface through which different configurations can be made; and physical robots (Roboreptile) and/or virtual robots for representing or executing actions in response to the emotions captured from the human. Finally, tests are carried out with proprietary emotion recognition software. The first section, "Introduction", presents the general characteristics of the affective computing field, the categorical approach to emotions, multimodal and unimodal models, and emotions, and closes with a comparative synthesis of specific work on emotions and robots. The second section briefly states the problem, the third section presents the developed solution, the fourth section presents the preliminary tests, and finally the fifth section states the conclusions and future lines of work.
    XV Workshop Innovación en Sistemas de Software (WISS). Red de Universidades con Carreras en Informática
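    As a highly simplified sketch of the framework's final stage (emotion label in, robot action out), the snippet below shows a configurable emotion-to-action mapping. The emotion labels and action names are hypothetical placeholders; the actual framework relies on the Emotion Detection Asset and physical or virtual robots.

        # Hypothetical emotion-to-action configuration; labels and actions are placeholders.
        ACTIONS = {
            "happy": "approach",
            "angry": "retreat",
            "sad": "slow_idle",
            "neutral": "idle",
        }

        def action_for(emotion: str) -> str:
            """Map a detected emotion label to the robot action configured for it."""
            return ACTIONS.get(emotion, "idle")

        print(action_for("happy"))    # -> "approach"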

    Graph Attention Based Spatial Temporal Network for EEG Signal Representation

    Graph attention networks (GATs) based architectures have proved to be powerful at implicitly learning relationships between adjacent nodes in a graph. For electroencephalogram (EEG) signals, however, it is also essential to highlight electrode locations or underlying brain regions which are active when a particular event related potential (ERP) is evoked. Moreover, it is often im-portant to identify corresponding EEG signal time segments within which the ERP is activated. We introduce a GAT Inspired Spatial Temporal (GIST) net-work that uses multilayer GAT as its base for three attention blocks: edge atten-tions, followed by node attention and temporal attention layers, which focus on relevant brain regions and time windows for better EEG signal classification performance, and interpretability. We assess the capability of the architecture by using publicly available Transcranial Electrical Stimulation (TES), neonatal pain (NP) and DREAMER EEG datasets. With these datasets, the model achieves competitive performance. Most importantly, the paper presents atten-tion visualisation and suggests ways of interpreting them for EEG signal under-standing