
    Sympathetic Loading in Critical Tasks

    In this dissertation I developed or perfected unobtrusive methods to quantify sympathetic arousals. Furthermore, I used these methods to study the sympathetic system's role in critical activities, arriving at intriguing conclusions. Sympathetic arousals occur during states of mental, emotional, and/or sensorimotor strain resulting from adverse or demanding circumstances. They are key elements of human physiology's coping mechanism, marshaling resources to good effect. When the intensity and duration of these arousals are overwhelming, however, they may block memory and disrupt rational thought or action at the moment they are needed most. Arousals abound in three types of critical activities: high-stakes situations, challenging tasks, and critical multitasking. Accordingly, my research was based on three studies representative of these three activity types: `Subject Screening', `Educational Exam', and `Distracted Driving'. In the first study, I investigated the association of sympathetic arousals with deceptive behavior in interrogations. In the second study, I investigated the relationship between sympathetic arousals and exam performance. In the third study, I investigated the interaction between sympathetic arousals and driving performance under cognitive, emotional, and sensorimotor distractions. In the interrogation study, I used for the first time a contact-free electrodermal activity measurement method to quantify arousals. The method detected deceptive behavior based on differential sympathetic responses in well-structured interviews. In the exam study, I documented that sympathetic arousals positively correlate with students' exam performance, dispelling the myth of `easy going' super achievers. Finally, in the driving study, my results revealed that not only apparent sensorimotor stressors (texting while driving) but also hidden stressors (cognitive or emotional) can have a significant effect on driving performance.
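    The arousal quantification described above rests on detecting phasic responses in an electrodermal activity (EDA) trace. As an illustration only, not the dissertation's actual pipeline, a minimal amplitude-threshold detector on a synthetic signal (sampling rate and threshold values are invented) might look like:

```python
import numpy as np

def detect_arousals(eda, fs=4.0, amp_thresh=0.05):
    """Flag phasic skin-conductance responses in an EDA trace.

    A response is flagged when the signal rises more than `amp_thresh`
    (microsiemens) above the preceding local minimum. The threshold and
    sampling rate here are illustrative, not calibrated values.
    """
    onsets = []
    trough = eda[0]
    for i in range(1, len(eda)):
        if eda[i] < trough:
            trough = eda[i]                 # track the running minimum
        elif eda[i] - trough >= amp_thresh:
            onsets.append(i / fs)           # onset time in seconds
            trough = eda[i]                 # reset after a response
    return onsets

# Synthetic trace: flat baseline with two small phasic rises
t = np.linspace(0, 10, 41)
eda = 2.0 + 0.1 * (t > 3) + 0.1 * (t > 7)
print(len(detect_arousals(eda)))  # 2
```

    Real EDA processing would first separate tonic and phasic components and handle noise; this sketch only shows the counting idea.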

    Facilitating the Child–Robot Interaction by Endowing the Robot with the Capability of Understanding the Child Engagement: The Case of Mio Amico Robot

    Social Robots (SRs) are substantially becoming part of modern society, given their frequent use in many areas of application, including education, communication, assistance, and entertainment. The main challenge in human–robot interaction is achieving human-like, affective interaction between the two parties. This study aims to endow SRs with the capability of assessing the emotional state of the interlocutor by analyzing his/her psychophysiological signals. The methodology focuses on remote evaluation of the subject's peripheral neuro-vegetative activity by means of thermal infrared imaging. The approach was developed and tested for a particularly challenging use case: the interaction between children and a commercial educational robot, Mio Amico Robot, produced by LiscianiGiochi©. The emotional state classified from the thermal signal analysis was compared to the emotional state recognized by a facial action coding system. The proposed approach was reliable and accurate and favored a personalized and improved interaction of children with SRs.
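    The abstract validates thermal-based emotion labels against labels from a facial action coding system. One standard way to quantify agreement between two such label streams is Cohen's kappa; a minimal sketch with invented labels (the study's actual classes and data are not shown here):

```python
import numpy as np

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two label sequences."""
    labels_a, labels_b = np.asarray(labels_a), np.asarray(labels_b)
    classes = np.union1d(labels_a, labels_b)
    po = np.mean(labels_a == labels_b)               # observed agreement
    pe = sum(np.mean(labels_a == c) * np.mean(labels_b == c)
             for c in classes)                       # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical per-interval labels from the two methods
thermal = ["engaged", "engaged", "bored", "engaged", "bored", "bored"]
facs    = ["engaged", "engaged", "bored", "bored",  "bored", "bored"]
print(round(cohens_kappa(thermal, facs), 3))  # 0.667
```

    Kappa above 0.6 is conventionally read as substantial agreement, which is the kind of evidence a claim of reliability would rest on.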

    Exploiting Group Structures to Infer Social Interactions From Videos

    In this thesis, we consider the task of inferring the social interactions between humans by analyzing multi-modal data. Specifically, we attempt to solve several problems in interaction analysis: long-term deception detection, political deception detection, and impression prediction. Throughout this work, we emphasize the importance of using knowledge about the group structure of the analyzed interactions. Previous works on the matter mostly neglected this aspect and analyzed a single subject at a time. Using the new Resistance dataset, collected by our collaborators, we approach the problem of long-term deception detection by designing a class of histogram-based features and a novel class of meta-features we call LiarRank. We develop a LiarOrNot model to identify spies in Resistance videos. We achieve AUCs of over 0.70, outperforming our baselines by 3% and human judges by 12%. For the problem of political deception, we first collect a dataset of videos and transcripts of 76 politicians from 18 countries making truthful and deceptive statements, which we call the Global Political Deception Dataset. We then show how to analyze the statements in a broader context by building a Video-Article-Topic graph. From this graph, we create a novel class of features called Deception Score that captures how controversial each topic is and how it affects the truthfulness of each statement. We show that our approach achieves 0.775 AUC, outperforming competing baselines. Finally, we use the Resistance data to solve the problem of dyadic impression prediction. Our proposed Dyadic Impression Prediction System (DIPS) contains four major innovations: a novel class of features called emotion ranks, sign-imbalance features derived from signed graph theory, a novel method to align the facial expressions of subjects, and a multilayered stochastic network we call the Temporal Delayed Network. Our DIPS architecture beats eight baselines from the literature, yielding statistically significant improvements of 19.9–30.8% in AUC.
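    The histogram-based features mentioned above address a common problem in video-based interaction analysis: clips have different lengths, but classifiers need fixed-length inputs. A minimal sketch of that idea (the bin count and signal are invented, not the thesis's actual feature design):

```python
import numpy as np

def histogram_features(frame_signal, bins=8, lo=0.0, hi=1.0):
    """Summarize a variable-length per-frame signal (e.g. a facial
    expression intensity over a whole game) as a fixed-length,
    normalized histogram, so clips of different durations become
    comparable feature vectors. Bin edges here are illustrative.
    """
    hist, _ = np.histogram(frame_signal, bins=bins, range=(lo, hi))
    return hist / max(hist.sum(), 1)   # normalize to a distribution

rng = np.random.default_rng(0)
clip = rng.uniform(0, 1, size=500)     # stand-in for per-frame intensities
feats = histogram_features(clip)
print(feats.shape)  # (8,)
```

    A meta-feature such as LiarRank would then be computed on top of vectors like these across the group, rather than per subject in isolation.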

    Social Perception of Pedestrians and Virtual Agents Using Movement Features

    In many tasks such as navigation in a shared space, humans explicitly or implicitly estimate social information related to the emotions, dominance, and friendliness of other humans around them. This social perception is critical in predicting others’ motions or actions and deciding how to interact with them. Therefore, modeling social perception is an important problem for robotics, autonomous vehicle navigation, and VR and AR applications. In this thesis, we present novel, data-driven models for the social perception of pedestrians and virtual agents based on their movement cues, including gaits, gestures, gazing, and trajectories. We use deep learning techniques (e.g., LSTMs) along with biomechanics to compute the gait features and combine them with local motion models to compute the trajectory features. Furthermore, we compute the gesture and gaze representations using psychological characteristics. We describe novel mappings between these computed gait, gesture, gaze, and trajectory features and the various components (emotions, dominance, friendliness, approachability, and deception) of social perception. Our resulting data-driven models can identify the dominance, deception, and emotion of pedestrians from videos with an accuracy of more than 80%. We also release new datasets to evaluate these methods. We apply our data-driven models to socially-aware robot navigation and the navigation of autonomous vehicles among pedestrians. Our method generates robot movement based on pedestrians’ dominance levels, resulting in higher rapport and comfort. We also apply our data-driven models to simulate virtual agents with desired emotions, dominance, and friendliness. We perform user studies and show that our data-driven models significantly increase the user’s sense of social presence in VR and AR environments compared to the baseline methods.
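    The trajectory features referenced above can be grounded with a toy example. As an illustrative sketch only (simple hand-computed cues, not the thesis's learned features), two basic motion descriptors from a 2-D pedestrian track might be computed as:

```python
import numpy as np

def trajectory_features(positions, fs=30.0):
    """Compute simple motion cues from a pedestrian's 2-D trajectory:
    mean speed and mean absolute heading change per frame. These are
    illustrative stand-ins for the richer learned features described
    in the abstract.
    """
    positions = np.asarray(positions, dtype=float)
    vel = np.diff(positions, axis=0) * fs          # frame-to-frame velocity
    speed = np.linalg.norm(vel, axis=1)
    heading = np.arctan2(vel[:, 1], vel[:, 0])
    turn = np.abs(np.diff(np.unwrap(heading)))     # heading change, radians
    return {"mean_speed": speed.mean(),
            "mean_turn": turn.mean() if len(turn) else 0.0}

# Straight walk at 1 m/s sampled at 30 Hz
path = [(i / 30.0, 0.0) for i in range(31)]
f = trajectory_features(path)
print(round(f["mean_speed"], 3), round(f["mean_turn"], 3))  # 1.0 0.0
```

    Downstream, such descriptors (or learned equivalents) feed the mappings to dominance, emotion, and deception described in the abstract.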