
    Multimodal Observation and Interpretation of Subjects Engaged in Problem Solving

    In this paper we present the first results of a pilot experiment in the capture and interpretation of multimodal signals of human experts engaged in solving challenging chess problems. Our goal is to investigate the extent to which observations of eye-gaze, posture, emotion and other physiological signals can be used to model the cognitive state of subjects, and to explore the integration of multiple sensor modalities to improve the reliability of detection of human displays of awareness and emotion. We observed chess players engaged in problems of increasing difficulty while recording their behavior. Such recordings can be used to estimate a participant's awareness of the current situation and to predict their ability to respond effectively to challenging situations. Results show that a multimodal approach is more accurate than a unimodal one. By combining body posture, visual attention and emotion, the multimodal approach reaches up to 93% accuracy when determining a player's chess expertise, while the unimodal approach reaches 86%. Finally, this experiment validates the use of our equipment as a general and reproducible tool for the study of participants engaged in screen-based interaction and/or problem solving.
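    The abstract reports the fusion result (93% multimodal vs. 86% unimodal) without implementation detail. A minimal sketch of one way such a comparison could be set up, using simple feature-level ("early") fusion with hypothetical posture, gaze and emotion feature matrices and synthetic expertise labels (none of these names or dimensions come from the paper), might look like this:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-ins for the three modalities; feature names and
# dimensions are illustrative assumptions, not the paper's protocol.
rng = np.random.default_rng(1)
n = 200
posture = rng.standard_normal((n, 8))    # e.g. head/torso pose statistics per window
gaze = rng.standard_normal((n, 6))       # e.g. fixation and saccade statistics
emotion = rng.standard_normal((n, 7))    # e.g. facial valence/arousal scores
expertise = rng.integers(0, 2, n)        # 0 = novice, 1 = expert (synthetic labels)

def cv_accuracy(X, y):
    """5-fold cross-validated accuracy of a plain random-forest classifier."""
    return cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5).mean()

# Unimodal baselines vs. simple feature-level fusion of all modalities.
for name, X in [("posture only", posture),
                ("gaze only", gaze),
                ("emotion only", emotion),
                ("multimodal", np.hstack([posture, gaze, emotion]))]:
    print(f"{name:>12}: {cv_accuracy(X, expertise):.2f}")
```

    Concatenating modality features is only one option; per-modality classifiers combined by voting or stacking ("late fusion") are a common alternative for this kind of comparison.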

    Exploiting the robot kinematic redundancy for emotion conveyance to humans as a lower priority task

    Current approaches do not allow robots to execute a task and simultaneously convey emotions to users through their body motions. This paper explores the capabilities of the Jacobian null space of a humanoid robot to convey emotions. A task-priority formulation has been implemented on a Pepper robot which allows the specification of a primary task (waving gesture, transportation of an object, etc.) and exploits the kinematic redundancy of the robot to convey emotions to humans as a lower-priority task. The emotions, defined by Mehrabian as points in the pleasure–arousal–dominance space, generate intermediate motion features (jerkiness, activity and gaze) that carry the emotional information. A map from these features to the joints of the robot is presented. A user study has been conducted in which emotional motions were shown to 30 participants. The results show that happiness and sadness are very well conveyed to the user, calm is moderately well conveyed, and fear is not well conveyed. An analysis of the dependencies between the motion features and the emotions perceived by the participants shows that activity correlates positively with arousal, jerkiness is not perceived by the user, and gaze conveys dominance when activity is low. The results indicate a strong influence of the most energetic motions of the emotional task and point out new directions for further research. Overall, the results show that the null-space approach can be regarded as a promising means to convey emotions as a lower-priority task.
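    The abstract does not spell out the task-priority control law. A minimal numerical sketch of the classical null-space projection it builds on (a generic formulation, not the paper's Pepper-specific controller, with a made-up 6-joint Jacobian, desired end-effector velocity and secondary "emotion" joint velocity) is:

```python
import numpy as np

def task_priority_velocities(J, x_dot, q_dot_secondary):
    """Classical null-space projection: execute a primary Cartesian task and
    add a secondary joint motion only where it cannot disturb that task.
    Generic sketch, not the paper's exact controller."""
    J_pinv = np.linalg.pinv(J)                   # pseudo-inverse of the primary task Jacobian
    null_proj = np.eye(J.shape[1]) - J_pinv @ J  # projector onto the null space of J
    return J_pinv @ x_dot + null_proj @ q_dot_secondary

# Hypothetical 6-joint arm with a 3-D end-effector velocity as the primary task.
rng = np.random.default_rng(0)
J = rng.standard_normal((3, 6))                  # made-up task Jacobian
x_dot = np.array([0.10, 0.00, 0.05])             # desired end-effector velocity
q_dot_emotion = 0.2 * rng.standard_normal(6)     # lower-priority "emotional" joint motion
q_dot = task_priority_velocities(J, x_dot, q_dot_emotion)
print(np.allclose(J @ q_dot, x_dot))             # primary task still satisfied -> True
```

    Because the secondary motion is projected into the null space of the primary task Jacobian, the emotional motion can only use the redundant degrees of freedom, which is what makes it a strictly lower-priority task.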

    Robust Modeling of Epistemic Mental States

    This work identifies and advances some research challenges in the analysis of facial features and their temporal dynamics in relation to epistemic mental states in dyadic conversations. The epistemic states considered are Agreement, Concentration, Thoughtful, Certain, and Interest. In this paper, we perform a number of statistical analyses and simulations to identify the relationship between facial features and epistemic states. Non-linear relations are found to be more prevalent, while temporal features derived from the original facial features demonstrate a strong correlation with intensity changes. We then propose a novel prediction framework that takes facial features and their non-linear relation scores as input and predicts different epistemic states in videos. The prediction of epistemic states is boosted when the classification of emotion-change regions (rising, falling, or steady-state) is incorporated with the temporal features. The proposed predictive models can predict the epistemic states with significantly improved accuracy: the correlation coefficient (CoERR) for Agreement is 0.827, for Concentration 0.901, for Thoughtful 0.794, for Certain 0.854, and for Interest 0.913.
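    No implementation is given for the prediction framework. A generic regression sketch of the same shape of problem (per-frame facial action-unit intensities plus their frame-to-frame changes as temporal features, predicting one epistemic-state intensity) is shown below; the number of action units, the choice of temporal feature, and the synthetic target are all assumptions, not the paper's method:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_predict

# Synthetic per-frame facial action-unit (AU) intensities.
rng = np.random.default_rng(2)
n_frames, n_aus = 500, 17
au_intensity = rng.random((n_frames, n_aus))           # static facial features
au_delta = np.vstack([np.zeros(n_aus),                 # temporal features: per-frame change
                      np.diff(au_intensity, axis=0)])
X = np.hstack([au_intensity, au_delta])
agreement = rng.random(n_frames)                       # synthetic "Agreement" intensity in [0, 1]

# Cross-validated prediction of one epistemic-state intensity, scored by the
# correlation between predictions and labels. The labels are random here, so
# the printed value only demonstrates the pipeline, not any real performance.
pred = cross_val_predict(GradientBoostingRegressor(random_state=0), X, agreement, cv=5)
print(f"CV correlation for 'Agreement': {np.corrcoef(pred, agreement)[0, 1]:.3f}")
```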

    Seeing Seeing

    I argue that we can visually perceive others as seeing agents. I start by characterizing perceptual processes as those that are causally controlled by proximal stimuli. I then distinguish between various forms of visual perspective-taking, before presenting evidence that most of them come in perceptual varieties. In doing so, I clarify and defend the view that some forms of visual perspective-taking are "automatic", a view that has been marshalled in support of dual-process accounts of mindreading.

    Chimpanzee faces under the magnifying glass: emerging methods reveal cross-species similarities and individuality

    Independently, we created descriptive systems to characterize chimpanzee facial behavior, responding to a common need for an objective, standardized coding system with which to ask questions about primate facial behaviors. Even with slightly different systems, we arrive at similar outcomes, with convergent conclusions about chimpanzee facial mobility. This convergence validates the importance of the approach and provides support for the future use of a facial action coding system for chimpanzees, ChimpFACS. Chimpanzees share many facial behaviors with humans. Therefore, processes and mechanisms that explain individual differences in facial activity can be compared with the use of standardized systems such as ChimpFACS and FACS. In this chapter we describe our independent methodological approaches, comparing how we arrived at our facial coding categories. We present some Action Descriptors (ADs) from Gaspar's initial studies, especially focusing on an ethogram of chimpanzee and bonobo facial behavior, based on studies conducted between 1997 and 2004 at three chimpanzee colonies (The Detroit Zoo, Cleveland Metroparks Zoo, and Burgers' Zoo) and two bonobo colonies (The Columbus Zoo and Aquarium and The Milwaukee County Zoo). We discuss the potential significance of arising issues, the minor qualitative species differences that were found, and the larger quantitative differences in particular facial behaviors observed between species; e.g., bonobos expressed more movements containing particular action units (Brow Lowerer, Lip Raiser, Lip Corner Puller) compared with chimpanzees. The substantial interindividual variation in facial behavior within each species was most striking. Considering individual differences and the impact of development, we highlight the flexibility in the facial activity of chimpanzees. We discuss the meaning of facial behaviors in nonhuman primates, addressing specifically individual attributes of Social Attraction, facial expressivity, and the connection of facial behavior to emotion. We do not rule out the communicative function of facial behavior, in which case an individual's properties of facial behavior are seen as influencing his or her social life, but we provide strong arguments in support of the role of facial behavior in the expression of internal states.

    The perception of emotion in artificial agents

    Given recent technological developments in robotics, artificial intelligence and virtual reality, it is perhaps unsurprising that the arrival of emotionally expressive and reactive artificial agents is imminent. However, if such agents are to become integrated into our social milieu, it is imperative to establish an understanding of whether and how humans perceive emotion in artificial agents. In this review, we incorporate recent findings from social robotics, virtual reality, psychology, and neuroscience to examine how people recognize and respond to emotions displayed by artificial agents. First, we review how people perceive emotions expressed by an artificial agent through channels such as facial and bodily expressions and vocal tone. Second, we evaluate the similarities and differences in the consequences of perceived emotions in artificial compared to human agents. Besides accurately recognizing the emotional state of an artificial agent, it is critical to understand how humans respond to those emotions. Does interacting with an angry robot induce the same responses in people as interacting with an angry person? Similarly, does watching a robot rejoice when it wins a game elicit similar feelings of elation in the human observer? Here we provide an overview of the current state of emotion expression and perception in social robotics, as well as a clear articulation of the challenges and guiding principles to be addressed as we move ever closer to truly emotional artificial agents.