21 research outputs found

    A Physiological Approach to Affective Computing

    Infrared Camera-Based Non-contact Measurement of Brain Activity From Pupillary Rhythms

    Pupillary responses are associated with affective processing, cognitive function, perception, memory, attention, and other brain activities involving neural pathways. The present study aimed to develop a noncontact system for measuring brain activity based on pupillary rhythms using an infrared web camera. Electroencephalogram (EEG) signals and pupil images of 70 undergraduate volunteers (35 female, 35 male) were recorded in response to sound stimuli designed to evoke arousal, relaxation, happiness, sadness, or neutral responses. The study successfully developed a real-time system that could detect EEG spectral indices (relative power: low beta in FP1; mid beta in FP1; SMR in FP1; beta in F3; high beta in F8; gamma in P4; mu in C4) from pupillary rhythms using the synchronization phenomenon at the harmonic frequency (1/100 f) between pupil and brain oscillations. This method was effective in measuring and evaluating brain activity with a simple, low-cost, noncontact system and may be an alternative to previous methods used to evaluate brain activity.
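
    A minimal sketch of the 1/100-harmonic mapping described above, assuming a 30 fps camera and illustrative band edges (not the authors' exact parameters): power in a pupillary band at 1/100 of an EEG band's frequency is taken as a proxy for that band's relative EEG power.

    import numpy as np
    from scipy.signal import welch

    FS = 30.0  # assumed camera frame rate (Hz)
    # EEG bands (Hz); dividing the edges by 100 gives the pupillary bands
    EEG_BANDS = {"low_beta": (12, 15), "mid_beta": (15, 20), "gamma": (30, 50)}

    def pupillary_band_power(pupil_diameter: np.ndarray) -> dict:
        """Relative pupil-oscillation power in each 1/100-scaled EEG band."""
        # long segments are needed to resolve these very slow bands
        freqs, psd = welch(pupil_diameter, fs=FS,
                           nperseg=min(len(pupil_diameter), 4096))
        total = np.trapz(psd, freqs)
        out = {}
        for name, (lo, hi) in EEG_BANDS.items():
            mask = (freqs >= lo / 100.0) & (freqs <= hi / 100.0)
            out[name] = np.trapz(psd[mask], freqs[mask]) / total
        return out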

    Heart Rate Estimated from Body Movements at Six Degrees of Freedom by Convolutional Neural Networks

    Cardiac activity can now be monitored continuously in daily life thanks to advanced medical instruments built with microelectromechanical system (MEMS) technology. Seismocardiography (SCG) has been considered a low-burden way to measure cardiac activity, but its application in daily life has been limited. The most important issue for SCG is overcoming motion artifacts caused by the sensitivity of the motion sensor. Although novel adaptive filters for noise cancellation have been developed, they depend on the researcher's subjective decisions. Convolutional neural networks (CNNs) can extract significant features from data automatically, without such subjective decisions, and have therefore recently begun to replace hand-crafted signal processing. Thus, this study aimed to develop a novel method to enhance heart rate estimation from thoracic movement using CNNs. Thoracic movement was measured via six-axis accelerometer and gyroscope signals from a wearable sensor that can be worn by simply clipping it onto clothes. The dataset was collected from 30 participants (15 males, 15 females) under 12 measurement conditions defined by two physical conditions (relaxed and aroused), three body postures (sitting, standing, and supine), and six movement speeds (3.2, 4.5, 5.8, 6.4, 8.5, and 10.3 km/h). The motion data (six-axis accelerometer and gyroscope signals) and the heart rate derived from the electrocardiogram (ECG) served as the input data and labels, respectively. The CNN model was based on VGG Net and optimized by testing different network depths and data augmentation settings. An ensemble of VGG-16 without data augmentation and VGG-19 with data augmentation was selected as the optimal architecture for generalization. As a result, the proposed method showed higher accuracy than the previous signal-processing-based SCG method in most measurement conditions. The three main contributions are as follows: (1) the CNN model enhanced heart rate estimation through automatic feature extraction from the data; (2) the proposed method was compared with the previous SCG method based on signal processing; and (3) the method was tested under 12 measurement conditions related to daily motion for a more practical application.
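
    A minimal sketch (PyTorch) of the kind of ensemble described above: two VGG-style 1-D CNNs over six-axis motion windows whose heart-rate predictions are averaged. The window length, channel widths, and block depths are assumptions for illustration, not the paper's exact architecture.

    import torch
    import torch.nn as nn

    def vgg_block(in_ch, out_ch, n_conv):
        """n_conv stacked 3-tap convolutions followed by 2x downsampling."""
        layers = []
        for _ in range(n_conv):
            layers += [nn.Conv1d(in_ch, out_ch, 3, padding=1), nn.ReLU()]
            in_ch = out_ch
        layers.append(nn.MaxPool1d(2))
        return layers

    class MotionHRNet(nn.Module):
        """VGG-style regressor: 6-axis motion window -> heart rate (bpm)."""
        def __init__(self, convs_per_block):
            super().__init__()
            layers, in_ch = [], 6
            for out_ch, n in zip([64, 128, 256, 512], convs_per_block):
                layers += vgg_block(in_ch, out_ch, n)
                in_ch = out_ch
            self.features = nn.Sequential(*layers)
            self.head = nn.Sequential(nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                                      nn.Linear(512, 1))

        def forward(self, x):                  # x: (batch, 6, time)
            return self.head(self.features(x))

    # Ensemble: average a shallower and a deeper variant, echoing the
    # VGG-16 / VGG-19 pairing reported above.
    net16 = MotionHRNet([2, 2, 3, 3])
    net19 = MotionHRNet([2, 2, 4, 4])
    x = torch.randn(8, 6, 512)                 # 8 windows of 512 samples
    hr = (net16(x) + net19(x)) / 2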

    Vagal Tone Differences in Empathy Level Elicited by Different Emotions and a Co-Viewer

    Empathy can bring different benefits depending on the kind of emotion people empathize with. For example, empathy with negative emotions can raise donations to charity, while empathy with positive emotions can increase participation in remote education. However, few studies have focused on the physiological differences that depend on which emotions people empathize with. Furthermore, a co-viewer can influence the level of empathy elicited, but this has been less discussed. Therefore, this study investigated differences in vagal responses according to the level of each empathy factor as elicited by different emotions and by a co-viewer. Fifty-nine participants were asked to watch four videos, evaluate subjective valence and arousal scores, and complete an empathy questionnaire covering cognitive, affective, and identification empathy. Half of the participants watched the videos alone, and the other half watched them with a co-viewer. Valence and arousal scores were categorized into three levels to determine which emotions participants empathized with. Empathy level (high vs. low) was determined from the self-report scores. A two-way MANOVA revealed an interaction effect between empathy level and emotion. A high affective empathy level was associated with higher vagal responses regardless of which emotions participants empathized with. However, differences in vagal response at the other empathy factor levels showed patterns that depended on which emotions participants empathized with. A high cognitive empathy level was accompanied by lower vagal responses when participants felt negative or positive valence. A high identification level likewise showed increased cognitive burden when participants empathized with negative or neutral valence. These results imply that the emotion and the type of empathy should both be considered when measuring empathic responses with vagal tone. A second two-way MANOVA revealed differences in empathic responses between the co-viewer conditions and emotions: participants with a co-viewer showed higher vagal responses and self-reported empathy scores only when they empathized with arousal. This implies that a co-viewer may affect empathic responses only when participants feel higher emotional intensity.
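
    A minimal sketch of the two-way MANOVA used above, via statsmodels; the file and column names (vagal tone and empathy scores as dependent variables, empathy level and emotion as factors) are hypothetical.

    import pandas as pd
    from statsmodels.multivariate.manova import MANOVA

    df = pd.read_csv("empathy_study.csv")   # hypothetical per-trial data
    model = MANOVA.from_formula(
        "vagal_tone + affective_score + cognitive_score"
        " ~ empathy_level * emotion", data=df)
    print(model.mv_test())  # Wilks' lambda etc. for main effects and interaction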

    Special Issue “Emotion Intelligence Based on Smart Sensing”

    Emotional intelligence is essential to maintaining human relationships in communities, organizations, and societies [...]

    Spatial and Time Domain Feature of ERP Speller System Extracted via Convolutional Neural Network

    The features of event-related potentials (ERPs) are not yet completely understood, and the BCI illiteracy problem remains unsolved. The P300 peak has been used as the ERP feature in most brain–computer interface applications, yet subjects who do not show such a peak are common. Recent developments in convolutional neural networks provide a way to analyze the spatial and temporal features of ERPs. Here, we trained a convolutional neural network with two convolutional layers whose feature maps represent the spatial and temporal features of the ERP. We found that nonilliterate subjects' ERPs show a high correlation between the occipital and parietal lobes, whereas illiterate subjects only show correlation between neural activities in the frontal and central lobes. The nonilliterates showed peaks at P300, P500, and P700, whereas illiterates mostly showed peaks around P700. P700 was strong in both groups. We conclude that the P700 peak may be a key feature of the ERP, as it appears in both illiterate and nonilliterate subjects.
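
    A minimal sketch (PyTorch) in the spirit of the two-layer network described above: the first convolution spans electrodes (spatial feature maps) and the second spans time (temporal feature maps). Channel counts, kernel sizes, and the 32-electrode, 128-sample epoch are assumptions.

    import torch
    import torch.nn as nn

    class ERPNet(nn.Module):
        def __init__(self, n_electrodes=32, n_samples=128, n_classes=2):
            super().__init__()
            # first layer: convolve across electrodes -> spatial feature maps
            self.spatial = nn.Conv2d(1, 8, (n_electrodes, 1))
            # second layer: convolve across time -> temporal feature maps
            self.temporal = nn.Conv2d(8, 16, (1, 25), padding=(0, 12))
            self.classify = nn.Sequential(
                nn.ReLU(), nn.AdaptiveAvgPool2d((1, 1)), nn.Flatten(),
                nn.Linear(16, n_classes))

        def forward(self, x):       # x: (batch, 1, electrodes, samples)
            return self.classify(self.temporal(torch.relu(self.spatial(x))))

    x = torch.randn(4, 1, 32, 128)  # 4 epochs of 32-channel ERP data
    logits = ERPNet()(x)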

    An Enhanced Method to Estimate Heart Rate from Seismocardiography via Ensemble Averaging of Body Movements at Six Degrees of Freedom

    Continuous cardiac monitoring has been developed to evaluate cardiac activity outside of clinical environments, thanks to the advancement of novel instruments. Seismocardiography (SCG) is one of the vital components from which such a monitoring system could be built. Although SCG has shown lower accuracy, this novel cardiac indicator has been steadily proposed as an alternative to traditional methods such as electrocardiography (ECG). It is therefore necessary to develop an enhanced method that combines the significant cardiac indicators. In this study, six-axis accelerometer and gyroscope signals were measured and integrated using L2 normalization and the multi-dimensional kineticardiography (MKCG) approach, respectively. The accelerometer and gyroscope waveforms were standardized and combined via ensemble averaging, and heart rate was calculated from the dominant frequency. Thirty participants (15 females) were asked to stand or sit in relaxed and aroused conditions while their SCG was measured. As a result, the proposed method showed higher accuracy than traditional SCG methods in all measurement conditions. The three main contributions are as follows: (1) ensemble averaging enhanced heart rate estimation with the benefits of the six-axis signals; (2) the proposed method was compared with a previous SCG method that uses fewer axes; and (3) the method was tested in various measurement conditions for a more practical application.
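
    A simplified sketch of the core pipeline above, assuming a 100 Hz sampling rate and omitting the paper's L2-normalization and MKCG integration details: standardize each axis, ensemble-average, and read heart rate off the dominant spectral peak in a cardiac band.

    import numpy as np

    def heart_rate_bpm(motion: np.ndarray, fs: float = 100.0) -> float:
        """motion: (6, n_samples) accelerometer + gyroscope signals."""
        z = (motion - motion.mean(axis=1, keepdims=True)) \
            / motion.std(axis=1, keepdims=True)      # standardize each axis
        ensemble = z.mean(axis=0)                    # ensemble average
        freqs = np.fft.rfftfreq(ensemble.size, d=1.0 / fs)
        power = np.abs(np.fft.rfft(ensemble)) ** 2
        band = (freqs >= 0.8) & (freqs <= 3.0)       # ~48-180 bpm
        return 60.0 * freqs[band][np.argmax(power[band])]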

    Fusion Method to Estimate Heart Rate from Facial Videos Based on RPPG and RBCG

    Remote sensing of vital signs has been developed to improve the measurement environment by using a camera instead of a skin-contact sensor. Camera-based methods rest on two concepts: color and motion. The color-based method, remote photoplethysmography (RPPG), measures the color variation of the face generated by the reflectance of blood, whereas the motion-based method, remote ballistocardiography (RBCG), measures the subtle head motion generated by the heartbeat. The main challenge in remote sensing is overcoming the noise from illumination variance and motion artifacts. Studies on remote sensing have focused on blind source separation (BSS) methods applied to the RGB colors or the motions of multiple facial points to overcome this noise, but they have remained limited in real-world applications. This study hypothesized that BSS-based combination of colors and motions can improve the accuracy and feasibility of remote sensing in daily life. Thus, this study proposed a fusion method that estimates heart rate from RPPG and RBCG using BSS methods such as ensemble averaging (EA), principal component analysis (PCA), and independent component analysis (ICA). The proposed method was verified by comparing it with previous RPPG and RBCG methods on three datasets varying in illumination and motion artifacts. The three main contributions of this study are as follows: (1) the proposed method, based on RPPG and RBCG, improved remote sensing by combining the benefits of each measurement; (2) the proposed method was demonstrated by comparison with previous methods; and (3) the proposed method was tested in various measurement conditions for more practical applications.
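
    A minimal sketch of the BSS-based fusion idea, using FastICA from scikit-learn: stack the RPPG color traces and RBCG motion traces, unmix them, and keep the source with the strongest cardiac-band peak. Trace extraction (face tracking, RGB averaging, head-point trajectories) is assumed to have happened upstream.

    import numpy as np
    from sklearn.decomposition import FastICA

    def fused_heart_rate(color: np.ndarray, motion: np.ndarray,
                         fs: float = 30.0) -> float:
        """color: (3, n) RGB traces; motion: (m, n) facial-point trajectories."""
        mixed = np.vstack([color, motion]).T         # (n, 3 + m) observations
        sources = FastICA(n_components=4,
                          random_state=0).fit_transform(mixed).T
        freqs = np.fft.rfftfreq(mixed.shape[0], d=1.0 / fs)
        band = (freqs >= 0.8) & (freqs <= 3.0)       # ~48-180 bpm
        best_hr, best_peak = 0.0, -np.inf
        for s in sources:                            # pick most periodic source
            power = np.abs(np.fft.rfft(s)) ** 2
            peak = power[band].max()
            if peak > best_peak:
                best_peak = peak
                best_hr = 60.0 * freqs[band][np.argmax(power[band])]
        return best_hr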

    Individual’s Social Perception of Virtual Avatars Embodied with Their Habitual Facial Expressions and Facial Appearance

    With the prevalence of virtual avatars and the recent emergence of metaverse technology, an increasing number of users express their identity through an avatar. The research community has focused on improving the realistic expressions and non-verbal communication channels of virtual characters to create a more customized experience. However, there is a lack of understanding of how avatars can embody a user's signature expressions (i.e., the user's habitual facial expressions and facial appearance) in a way that provides an individualized experience. Our study focused on identifying elements that may affect a user's social perception (similarity, familiarity, attraction, liking, and involvement) of customized virtual avatars engineered from the user's facial characteristics. We evaluated participants' subjective appraisals of avatars that embodied either their habitual facial expressions or their facial appearance. The results indicated that participants felt the avatar that embodied their habitual expressions was more similar to them than the avatar that did not. Furthermore, participants felt the avatar that embodied their appearance was more familiar than the avatar that did not. Designers should be mindful of how people perceive individuated virtual avatars in order to accurately represent the user's identity and help users relate to their avatar.

    The Analysis of Emotion Authenticity Based on Facial Micromovements

    People tend to display fake expressions to conceal their true feelings. Fake expressions can be detected through facial micromovements that occur for less than a second. Systems designed to recognize facial expressions (e.g., social robots, recognition systems for the blind, monitoring systems for drivers) may better understand the user's intent by identifying the authenticity of the expression. The present study investigated the characteristics of real and fake facial expressions of representative emotions (happiness, contentment, anger, and sadness) in a two-dimensional emotion model. Participants viewed a series of visual stimuli designed to induce real or fake emotions and were signaled to produce a facial expression at a set time. From the participants' expression data, feature variables (i.e., the degree and variance of movement, and the vibration level) of the facial micromovements at the onset of the expression were analyzed. The results indicated significant differences in the feature variables between the real and fake expression conditions, and the differences varied across facial regions as a function of emotion. This study provides appraisal criteria for identifying the authenticity of facial expressions that are applicable to future research and to the design of emotion recognition systems.
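
    A minimal sketch of the feature variables named above, computed from one facial landmark's trajectory at expression onset; the concrete definitions (frame differencing for movement, relative high-frequency power for vibration level) are plausible assumptions rather than the authors' exact formulas.

    import numpy as np

    def micromovement_features(xy: np.ndarray, fs: float = 30.0) -> dict:
        """xy: (n_frames, 2) landmark coordinates during expression onset."""
        step = np.linalg.norm(np.diff(xy, axis=0), axis=1)  # frame-to-frame motion
        freqs = np.fft.rfftfreq(step.size, d=1.0 / fs)
        power = np.abs(np.fft.rfft(step - step.mean())) ** 2
        return {
            "movement_degree": step.mean(),          # overall displacement
            "movement_variance": step.var(),
            "vibration_level": power[freqs >= 5.0].sum() / power.sum(),
        }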