29,725 research outputs found

    Emotion and Stress Recognition Related Sensors and Machine Learning Technologies

    This book includes impactful chapters which present scientific concepts, frameworks, architectures and ideas on sensing technologies and machine learning techniques. These are relevant in tackling the following challenges: (i) the field readiness and use of intrusive sensor systems and devices for capturing biosignals, including EEG, ECG and electrodermal activity sensor systems; (ii) the quality assessment and management of sensor data; (iii) data preprocessing, noise filtering and calibration concepts for biosignals; (iv) the field readiness and use of nonintrusive sensor technologies, including visual, acoustic, vibration and piezoelectric sensors; (v) emotion recognition using mobile phones and smartwatches; (vi) body area sensor networks for emotion and stress studies; (vii) the use of experimental datasets in emotion recognition, including dataset generation principles and concepts, quality assurance and emotion elicitation material and concepts; (viii) machine learning techniques for robust emotion recognition, including graphical models, neural network methods, deep learning methods, statistical learning and multivariate empirical mode decomposition; (ix) subject-independent emotion and stress recognition concepts and systems, including facial expression-based, speech-based, EEG-based, ECG-based, electrodermal activity-based and multimodal recognition systems and sensor fusion concepts; and (x) emotion and stress estimation and forecasting from a nonlinear dynamical system perspective.
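
    As a concrete illustration of the preprocessing and noise-filtering topics in (iii), the sketch below band-pass filters a synthetic ECG-like signal. The sampling rate, cutoff frequencies and synthetic noise components are assumptions chosen for the example, not values taken from the book.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250.0                                   # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)                 # 10 s of synthetic signal
heartbeat = np.sin(2 * np.pi * 1.2 * t)      # ~72 bpm fundamental
mains = 0.4 * np.sin(2 * np.pi * 50.0 * t)   # mains-hum interference
drift = 0.3 * np.sin(2 * np.pi * 0.1 * t)    # baseline wander
raw = heartbeat + mains + drift

# A 0.5-40 Hz band-pass removes both the slow drift and the 50 Hz hum
b, a = butter(4, [0.5, 40.0], btype="band", fs=fs)
clean = filtfilt(b, a, raw)                  # zero-phase: no waveform shift
```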

    Emotions in context: examining pervasive affective sensing systems, applications, and analyses

    Pervasive sensing has opened up new opportunities for measuring our feelings and understanding our behavior by monitoring our affective states while mobile. This review paper surveys pervasive affect sensing by examining three major elements of affective pervasive systems, namely "sensing", "analysis", and "application". Sensing investigates the different sensing modalities used in existing real-time affective applications, analysis explores different approaches to emotion recognition and visualization based on different types of collected data, and application investigates the leading areas of affective applications. For each of the three aspects, the paper includes an extensive survey of the literature and finally outlines some of the challenges and future research opportunities of affective sensing in the context of pervasive computing.
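
    The sensing-analysis-application chain the survey describes can be made concrete with a toy pipeline. In the sketch below, the sensor features, class labels and intervention are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# "Sensing": per-window features from hypothetical mobile sensors
# [mean heart rate (bpm), skin conductance level (uS), motion magnitude (g)]
calm = rng.normal([70.0, 2.0, 0.10], [5.0, 0.5, 0.05], size=(200, 3))
stressed = rng.normal([95.0, 6.0, 0.30], [5.0, 0.5, 0.05], size=(200, 3))
X = np.vstack([calm, stressed])
y = np.array(["calm"] * 200 + ["stressed"] * 200)

# "Analysis": learn a mapping from sensor features to affective state
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# "Application": react to the inferred state of a new sensing window
window = np.array([[92.0, 5.5, 0.25]])
if clf.predict(window)[0] == "stressed":
    print("suggest a short breathing exercise")  # hypothetical intervention
```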

    Five days at outdoor education camp without screens improves preteen skills with nonverbal emotion cues

    A field experiment examined whether increasing opportunities for face-to-face interaction while eliminating the use of screen-based media and communication tools improved nonverbal emotion-cue recognition in preteens. Fifty-one preteens spent five days at an overnight nature camp where television, computers and mobile phones were not allowed; this group was compared with school-based matched controls (n=54) who retained their usual media practices. Both groups took pre- and post-tests that required participants to infer emotional states from photographs of facial expressions and from videotaped scenes with verbal cues removed. Change scores for the two groups were compared using gender, ethnicity, media use, and age as covariates. After five days of interacting face-to-face without the use of any screen-based media, preteens' recognition of nonverbal emotion cues improved significantly more than that of the control group for both facial expressions and videotaped scenes. The implication is that the short-term effects of increased opportunities for social interaction, combined with time away from screen-based media and digital communication tools, improve a preteen's understanding of nonverbal emotional cues.
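
    The reported analysis, comparing pre/post change scores between groups while adjusting for covariates, corresponds to a standard covariate-adjusted regression. The sketch below simulates data using the study's group sizes, but the effect sizes, column names and covariate values are made up for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_camp, n_ctrl = 51, 54  # group sizes from the study
df = pd.DataFrame({
    "group": ["camp"] * n_camp + ["control"] * n_ctrl,
    "change": np.concatenate([rng.normal(4.0, 2.0, n_camp),   # simulated gain
                              rng.normal(1.0, 2.0, n_ctrl)]),
    "age": rng.integers(10, 13, n_camp + n_ctrl),
    "media_hours": rng.normal(4.5, 1.5, n_camp + n_ctrl),
})

# Group effect on pre/post change scores, adjusting for covariates
model = smf.ols("change ~ C(group) + age + media_hours", data=df).fit()
print(model.summary().tables[1])
```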

    Understanding face and eye visibility in front-facing cameras of smartphones used in the wild

    Commodity mobile devices are now equipped with high-resolution front-facing cameras, allowing applications in biometrics (e.g., FaceID in the iPhone X), facial expression analysis, or gaze interaction. However, it is unknown how often users hold devices in a way that allows capturing their face or eyes, and how this impacts detection accuracy. We collected 25,726 in-the-wild photos taken from the front-facing cameras of smartphones, together with associated application usage logs. We found that the full face is visible about 29% of the time, and that in most cases the face is only partially visible. Furthermore, we identified an influence of users' current activity; for example, when watching videos, the eyes but not the entire face are visible 75% of the time in our dataset. We found that a state-of-the-art face detection algorithm performs poorly on photos taken from front-facing cameras. We discuss how these findings impact mobile applications that leverage face and eye detection, and derive practical implications for addressing the limitations of the state of the art.
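
    A study like this can be approximated with an off-the-shelf detector. The sketch below tallies how often a stock OpenCV Haar cascade finds a face in a folder of photos; the directory layout is an assumption, and the Haar cascade merely stands in for whatever state-of-the-art detector the authors actually evaluated.

```python
import glob
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

photos = glob.glob("front_camera_photos/*.jpg")  # assumed directory layout
visible = 0
for path in photos:
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:          # at least one detectable (full) face
        visible += 1

if photos:
    print(f"face detected in {visible / len(photos):.0%} of photos")
```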

    Exploring the Affective Loop

    Research in psychology and neurology shows that both body and mind are involved when experiencing emotions (Damasio 1994, Davidson et al. 2003). People are also very physical when they try to communicate their emotions. Somewhere in between being consciously and unconsciously aware of it ourselves, we produce both verbal and physical signs to make other people understand how we feel. Simultaneously, this production of signs involves us in a stronger personal experience of the emotions we express. Emotions are also communicated in the digital world, but there is little focus on users' personal as well as physical experience of emotions in the available digital media. In order to explore whether and how we can expand existing media, we have designed, implemented and evaluated /eMoto/, a mobile service for sending affective messages to others. With eMoto, we explicitly aim to address both cognitive and physical experiences of human emotions. By combining affective gestures for input with affective expressions that make use of colors, shapes and animations for the background of messages, the interaction "pulls" the user into an /affective loop/. In this thesis we define what we mean by affective loop and present a user-centered design approach expressed through four design principles inspired by previous work within Human Computer Interaction (HCI) but adjusted to our purposes: /embodiment/ (Dourish 2001) as a means to address how people communicate emotions in real life; /flow/ (Csikszentmihalyi 1990) to reach a state of involvement that goes further than the current context; /ambiguity/ of the designed expressions (Gaver et al. 2003) to allow for open-ended interpretation by the end-users instead of simplistic, one-emotion-one-expression pairs; and /natural but designed expressions/ to address people's natural couplings between cognitively and physically experienced emotions. We also present results from an end-user study of eMoto indicating that subjects got both physically and emotionally involved in the interaction and that the designed "openness" and ambiguity of the expressions was appreciated and understood by our subjects. Through the user study, we identified four potential design problems that have to be tackled in order to achieve an affective loop effect: the extent to which users /feel in control/ of the interaction; /harmony and coherence/ between cognitive and physical expressions; /timing/ of expressions and feedback in a communicational setting; and effects of users' /personality/ on their emotional expressions and experiences of the interaction.
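
    As a toy illustration of coupling affective gesture input to expressive output, the sketch below maps arousal and valence values (as might be derived from an affective gesture) to a message background color. The mapping rules are illustrative assumptions, not eMoto's actual design.

```python
def background_colour(arousal: float, valence: float) -> tuple[int, int, int]:
    """Map arousal/valence in [0, 1] to an RGB background colour.

    Assumed rules for the sketch: high arousal pushes toward warm, saturated
    tones and low arousal toward cool blues, while valence brightens the hue.
    """
    r = int(255 * arousal)          # energy -> red/warmth
    g = int(255 * valence)         # positivity -> brightness
    b = int(255 * (1.0 - arousal))  # calm -> blue
    return (r, g, b)

# e.g. an energetic, happy shake of the phone -> warm, bright background
print(background_colour(arousal=0.9, valence=0.8))  # (229, 204, 25)
```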

    Keystroke dynamics in the pre-touchscreen era

    Biometric authentication seeks to measure an individual's unique physiological attributes for the purpose of identity verification. Conventionally, this task has been realized via analyses of fingerprints, signatures or iris patterns. However, whilst such methods effectively offer a superior security protocol compared with password-based approaches, for example, their substantial infrastructure costs and intrusive nature make them undesirable and indeed impractical for many scenarios. An alternative approach seeks to develop similarly robust screening protocols through analysis of typing patterns, formally known as keystroke dynamics. Here, keystroke analysis methodologies can utilize multiple variables, and a range of mathematical techniques, in order to extract individuals' typing signatures. Such variables may include measurement of the periods between key presses and/or releases, or even key-strike pressures. Statistical methods, neural networks, and fuzzy logic have often formed the basis for quantitative analysis of the data gathered, typically from conventional computer keyboards. Extension to more recent technologies such as numerical keypads and touch-screen devices is in its infancy, but obviously important as such devices grow in popularity. Here, we review the state of knowledge pertaining to authentication via conventional keyboards with a view toward indicating how this platform of knowledge can be exploited and extended into newly emergent type-based technological contexts.
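
    The timing variables the review mentions translate directly into features: dwell time (how long a key is held) and flight time (the gap between one key's release and the next key's press). The sketch below extracts both from a typed sample and scores them against a stored template; the timings, template values and acceptance threshold are illustrative assumptions.

```python
import numpy as np

# (key, press_time_s, release_time_s) for one typed sample
sample = [("p", 0.00, 0.09), ("a", 0.21, 0.29), ("s", 0.40, 0.47),
          ("s", 0.61, 0.70)]

dwell = np.array([rel - prs for _, prs, rel in sample])
flight = np.array([sample[i + 1][1] - sample[i][2]
                   for i in range(len(sample) - 1)])
features = np.concatenate([dwell, flight])   # 4 dwell + 3 flight times

# Enrolment template: per-feature means from earlier sessions (made up here)
template = np.array([0.08, 0.09, 0.08, 0.08, 0.11, 0.12, 0.13])

distance = np.abs(features - template).mean()  # Manhattan-style score
print("accept" if distance < 0.02 else "reject")
```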