16 research outputs found

    Classification of Human Emotions from EEG Signals using Statistical Features and Neural Network

    A statistics-based system for classifying human emotions from electroencephalogram (EEG) signals is proposed in this paper. The data used in this study were acquired with EEG from six human subjects whose emotions were elicited by stimuli. The paper also proposes an emotion-stimulation experiment using visual stimuli. From the EEG data, a total of six statistical features are computed, and a back-propagation neural network is applied to classify the emotions. In an experiment classifying five emotions (anger, sadness, surprise, happiness, and neutral), an overall classification rate as high as 95% is achieved
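    The pipeline this abstract describes — statistical features of an EEG window fed to a back-propagation neural network — can be sketched as follows. The specific six features and the network architecture are assumptions for illustration (the abstract does not list them), and the data here is synthetic:

```python
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.neural_network import MLPClassifier  # trained via back-propagation

def eeg_features(window):
    """Six illustrative statistical features of one EEG window (assumed set)."""
    return np.array([
        window.mean(),
        window.std(),
        np.mean(np.abs(window - window.mean())),  # mean absolute deviation
        window.var(),
        skew(window),
        kurtosis(window),
    ])

rng = np.random.default_rng(0)
emotions = ["anger", "sadness", "surprise", "happiness", "neutral"]
# Synthetic stand-in for windowed EEG recordings: 100 windows per emotion class.
X = np.vstack([eeg_features(rng.normal(loc=i, size=256))
               for i in range(5) for _ in range(100)])
y = np.repeat(emotions, 100)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X, y)
print(clf.score(X, y))  # training accuracy on the synthetic data
```

    On real EEG, the windows would come from the stimulus-locked recordings and accuracy would be estimated on held-out trials rather than the training set.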

    Computing emotion awareness through galvanic skin response and facial electromyography

    To improve human-computer interaction (HCI), computers need to recognize and respond properly to their user’s emotional state. This is a fundamental application of affective computing, which relates to, arises from, or deliberately influences emotion. As a first step toward a system that recognizes the emotions of individual users, this research focuses on how emotional experiences are expressed in six parameters (i.e., mean, absolute deviation, standard deviation, variance, skewness, and kurtosis) of non-baseline-corrected physiological measurements of the galvanic skin response (GSR) and of three electromyography signals: frontalis (EMG1), corrugator supercilii (EMG2), and zygomaticus major (EMG3). The 24 participants were asked to watch film scenes of 120 seconds, which they rated afterward. These ratings enabled us to distinguish four categories of emotions: negative, positive, mixed, and neutral. The skewness and kurtosis of the GSR, the skewness of EMG2, and four parameters of EMG3 discriminate between the four emotion categories. This holds despite the coarse time windows used. Moreover, rapid processing of the signals proved to be possible. This enables tailored HCI facilitated by an emotional awareness of systems
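    The six parameters named above are straightforward to compute per channel; a minimal sketch on synthetic signals (the 32 Hz sampling rate is a hypothetical choice, not taken from the paper):

```python
import numpy as np
from scipy.stats import skew, kurtosis

PARAMS = ["mean", "abs_dev", "std", "var", "skewness", "kurtosis"]

def six_parameters(signal):
    """The six parameters named in the abstract, for one physiological channel."""
    s = np.asarray(signal, dtype=float)
    return {
        "mean": s.mean(),
        "abs_dev": np.mean(np.abs(s - s.mean())),  # mean absolute deviation
        "std": s.std(),
        "var": s.var(),
        "skewness": skew(s),
        "kurtosis": kurtosis(s),
    }

# One 120-second film scene, four channels (GSR + EMG1..EMG3), hypothetical 32 Hz.
rng = np.random.default_rng(1)
channels = {name: rng.normal(size=120 * 32)
            for name in ["GSR", "EMG1", "EMG2", "EMG3"]}
features = {ch: six_parameters(x) for ch, x in channels.items()}
print(len(features) * len(PARAMS))  # 24 parameters per scene
```

    The discriminative subset the paper identifies (GSR skewness/kurtosis, EMG2 skewness, four EMG3 parameters) would then be selected from this 24-value description of each scene.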

    Affective Man-Machine Interface: Unveiling human emotions through biosignals

    As has been known for centuries, humans exhibit an electrical profile. This profile is altered by various psychological and physiological processes, which can be measured through biosignals, e.g., electromyography (EMG) and electrodermal activity (EDA). These biosignals can reveal our emotions and, as such, can serve as an advanced man-machine interface (MMI) for empathic consumer products. However, such an MMI requires the correct classification of biosignals into emotion classes. This chapter starts with an introduction to biosignals for emotion detection. Next, a state-of-the-art review of automatic emotion classification is presented, along with guidelines for affective MMI. Subsequently, research is presented that explores the use of EDA and three facial EMG signals to determine neutral, positive, negative, and mixed emotions, using recordings of 21 people. A range of techniques is tested, resulting in a generic framework for automated emotion classification with up to 61.31% correct classification of the four emotion classes, without the need for personal profiles. Among various other directives for future research, the results emphasize the need for parallel processing of multiple biosignals

    Interpreting Deep Learning Features for Myoelectric Control: A Comparison with Handcrafted Features

    The research in myoelectric control systems primarily focuses on extracting discriminative representations from the electromyographic (EMG) signal by designing handcrafted features. Recently, deep learning techniques have been applied to the challenging task of EMG-based gesture recognition. The adoption of these techniques slowly shifts the focus from feature engineering to feature learning. However, the black-box nature of deep learning makes it hard to understand the type of information learned by the network and how it relates to handcrafted features. Additionally, due to the high variability in EMG recordings between participants, deep features tend to generalize poorly across subjects using standard training methods. Consequently, this work introduces a new multi-domain learning algorithm, named ADANN, which significantly enhances (p=0.00004) inter-subject classification accuracy by an average of 19.40% compared to standard training. Using ADANN-generated features, the main contribution of this work is to provide the first topological data analysis of EMG-based gesture recognition for the characterisation of the information encoded within a deep network, using handcrafted features as landmarks. This analysis reveals that handcrafted features and the learned features (in the earlier layers) both try to discriminate between all gestures, but do not encode the same information to do so. Furthermore, convolutional network visualization techniques reveal that learned features tend to ignore the most activated channel during gesture contraction, which is in stark contrast with the prevalence of handcrafted features designed to capture amplitude information. Overall, this work paves the way for hybrid feature sets by providing a clear guideline of complementary information encoded within learned and handcrafted features. (The first two authors shared first authorship; the last three authors shared senior authorship. 32 pages.)
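    The amplitude information that handcrafted features emphasize can be illustrated with mean absolute value (MAV), a classic EMG amplitude feature; identifying the most activated channel is exactly what the learned features above are found to ignore. A sketch on a synthetic multi-channel recording (channel count and scales are assumptions):

```python
import numpy as np

def mav(emg):
    """Mean absolute value per channel, a classic amplitude-based EMG feature."""
    return np.mean(np.abs(emg), axis=-1)

rng = np.random.default_rng(3)
n_channels, n_samples = 8, 400
emg = rng.normal(scale=0.1, size=(n_channels, n_samples))  # resting channels
emg[2] += rng.normal(scale=1.0, size=n_samples)            # channel 2 activated

amplitudes = mav(emg)
most_active = int(np.argmax(amplitudes))
print(most_active)  # the channel an amplitude feature highlights during contraction
```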

    Subjects taught in VR

    Affective Computing in the Area of Autism

    The prevalence rate of Autism Spectrum Disorders (ASD) is increasing at an alarming rate (1 in 68 children). With this increase comes the need for early diagnosis of ASD, timely intervention, and an understanding of the conditions that can be comorbid with ASD. Understanding comorbid anxiety and its interaction with emotion comprehension and production in ASD is a growing and multifaceted area of research. Recognizing and producing contingent emotional expressions is a complex task, which is even more difficult for individuals with ASD. First, I investigate the arousal experienced by adolescents with ASD in a group therapy setting. In this study I identify the instances in which physiological arousal is experienced by adolescents with ASD ("have-it"), examine whether the facial expressions of these adolescents indicate their arousal ("show-it"), and determine whether the adolescents are self-aware of this arousal ("know-it"). To establish a relationship across these three components of emotion expression and recognition, a multi-modal approach to data collection is used. Machine learning techniques are used to determine whether still video images of facial expressions can predict electrodermal activity (EDA) data. Implications for the understanding of emotion and social-communication difficulties in ASD, as well as future targets for intervention, are discussed. Second, it is hypothesized that a well-designed intervention technique aids the overall development of children with ASD by improving their level of functioning. I designed, validated, and evaluated a mobile-based intervention for teaching social skills to children with ASD. Last, I present the research goals behind an mHealth-based screening tool for early diagnosis of ASD in toddlers. This tool is designed to help people from low-income groups who have limited access to resources, without burdening physicians, their staff, or insurance companies

    A mobile application for improving the user's mental state

    This bachelor's thesis considers and analyzes an algorithm for determining human emotional states. A review of methods for working with and analyzing human emotions is presented. Convolutional neural networks were selected to implement the system, and an experimental plan was formed to collect training data. System modules that determine a person's emotional state were implemented, and the quality of the resulting system was analyzed. The modules are implemented using the Java programming language and the Keras framework; the resulting software product is a mobile application for the Android operating system. The results indicate that further work to improve the system is worthwhile, so as to reach an accuracy sufficient for practical application
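    The core operation of the convolutional networks chosen in this thesis can be illustrated in a few lines of numpy. This is a sketch of a single 1-D convolution, ReLU, and max-pooling stage; the thesis's actual Keras architecture is not given in the abstract, and the filter here is an illustrative edge detector, not a learned one:

```python
import numpy as np

def conv1d_relu_pool(signal, kernel, pool=2):
    """One CNN stage: valid 1-D convolution, ReLU, then max-pooling."""
    conv = np.correlate(signal, kernel, mode="valid")  # filter response
    relu = np.maximum(conv, 0.0)
    trimmed = relu[: len(relu) // pool * pool]
    return trimmed.reshape(-1, pool).max(axis=1)       # downsample by `pool`

signal = np.array([0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0])
kernel = np.array([1.0, -1.0])   # illustrative edge-detecting filter
out = conv1d_relu_pool(signal, kernel)
print(out)  # → [1. 1. 1.]
```

    In a trained network, many such filters are learned per layer and stacked; Keras's `Conv1D`/`MaxPooling1D` layers implement the same pattern at scale.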

    A system for analyzing biomedical signals to monitor the state of drivers

    Master's thesis: 120 pp., 22 figures, 26 tables, 2 appendices, 57 sources. Topic: "A system for analyzing biomedical signals to monitor the state of drivers". This work considers and analyzes an algorithm for determining a person's emotional state from an electrocardiogram (ECG). A review of methods for working with human emotions and analyzing the ECG is carried out. Convolutional neural networks were selected to implement the system, and an experiment was designed to collect a training dataset. System modules that estimate a person's emotional state from the ECG were implemented, and the quality of the resulting system was analyzed. The modules are implemented using the Python and Java programming languages and the Keras framework. The results indicate that further work to improve the system is worthwhile, so as to reach an accuracy sufficient for practical application. The purpose of the study is to create a system for recognizing emotions from a biomedical signal. The object of research is models of the human emotional state. The subject of research is methods for identifying a person's emotional state from biosignals. Scientific novelty: algorithms for identifying a person's emotional state from a biomedical signal using convolutional neural networks are proposed. Practical value: an application was developed that acquires a biomedical signal in real time and determines the emotional state with an accuracy of up to 95%
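    Real-time classification of a streaming biomedical signal typically rests on sliding-window segmentation, with each window passed to the trained classifier. A hedged numpy sketch; the sampling rate, window length, and step size are assumptions for illustration, not the thesis's values:

```python
import numpy as np

def sliding_windows(stream, window, step):
    """Segment a 1-D signal stream into overlapping windows for a classifier."""
    starts = range(0, len(stream) - window + 1, step)
    return np.stack([stream[s:s + window] for s in starts])

fs = 250                                              # hypothetical ECG rate, Hz
stream = np.sin(np.linspace(0, 40 * np.pi, fs * 10))  # 10 s synthetic "ECG"
windows = sliding_windows(stream, window=2 * fs, step=fs // 2)
print(windows.shape)  # each row would be fed to the trained CNN
```

    Overlapping steps (here 0.5 s on 2 s windows) let the system emit a fresh prediction several times per second while each prediction still sees a full window of context.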

    Development of an EMG-based Muscle Health Model for Elbow Trauma Patients

    Musculoskeletal (MSK) conditions are a leading cause of pain and disability worldwide. Rehabilitation is critical for recovery from these conditions and for the prevention of long-term disability. Robot-assisted therapy has been demonstrated to improve stroke rehabilitation in terms of efficiency and patient adherence. However, there are no wearable robot-assisted solutions for patients with MSK injuries. One of the limiting factors is the lack of appropriate models that allow the use of biosignals as an interface input. Furthermore, there are no models to discern the health of MSK patients as they progress through their therapy. This thesis describes the design, data collection, analysis, and validation of a novel muscle health model for elbow trauma patients. Surface electromyography (sEMG) data sets were collected from the injured arms of elbow trauma patients performing 10 upper-limb motions. The data were assessed and compared to sEMG data collected from the patients' contralateral healthy limbs. A statistical analysis was conducted to identify trends relating the sEMG signals to muscle health, and sEMG-based classification models for muscle health were developed. Relevant sEMG features were identified and combined into feature sets for the classification models. The classifiers were used to distinguish between two levels of health: healthy and injured (50% baseline accuracy rate). Classification models based on individual motions achieved cross-validation accuracies of 48.2--79.6%. Following feature selection and optimization of the models, cross-validation accuracies of up to 82.1% were achieved. This work suggests that an EMG-based model of muscle health could be implemented in a rehabilitative elbow brace to assess patients recovering from MSK elbow trauma. However, more research is necessary to improve the accuracy and specificity of the classification models
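    The feature-selection-then-classification step described above can be sketched with scikit-learn; this is a hedged example on synthetic data, since the thesis's sEMG feature sets and classifier family are not specified in the abstract. Selection is placed inside the pipeline so that cross-validation does not leak test data into the selection step:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(4)
X = rng.normal(size=(120, 20))    # placeholder sEMG feature vectors
y = rng.integers(0, 2, size=120)  # healthy (0) vs injured (1), 50% baseline
X[:, 0] += 2 * y                  # make one feature informative

# Keep the k most discriminative features, then classify.
model = make_pipeline(SelectKBest(f_classif, k=5), SVC())
scores = cross_val_score(model, X, y, cv=5)
print(scores.mean())  # cross-validation accuracy on the synthetic data
```

    Sweeping `k` (and the classifier's hyperparameters) over such cross-validated scores is one standard way to realize the "feature selection and optimization" the thesis reports.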