
    The perception of emotion in artificial agents

    Given recent technological developments in robotics, artificial intelligence and virtual reality, it is perhaps unsurprising that the arrival of emotionally expressive and reactive artificial agents is imminent. However, if such agents are to become integrated into our social milieu, it is imperative to establish an understanding of whether and how humans perceive emotion in artificial agents. In this review, we incorporate recent findings from social robotics, virtual reality, psychology, and neuroscience to examine how people recognize and respond to emotions displayed by artificial agents. First, we review how people perceive emotions expressed by an artificial agent through cues such as facial and bodily expressions and vocal tone. Second, we evaluate the similarities and differences in the consequences of perceived emotions in artificial compared to human agents. Beyond accurately recognizing the emotional state of an artificial agent, it is critical to understand how humans respond to those emotions. Does interacting with an angry robot induce the same responses in people as interacting with an angry person? Similarly, does watching a robot rejoice when it wins a game elicit similar feelings of elation in the human observer? Here we provide an overview of the current state of emotion expression and perception in social robotics, as well as a clear articulation of the challenges and guiding principles to be addressed as we move ever closer to truly emotional artificial agents.

    Face

    The face is probably the part of the body that most distinguishes us as individuals. It plays a very important role in many functions, such as speech, mastication, and the expression of emotion. In the face, there is tight coupling between complex structures such as skin, fat, muscle, and bone. Biomechanically driven models of the face provide an opportunity to gain insight into how these different facial components interact. The benefits of this insight are manifold, including improved maxillofacial surgical planning, better understanding of speech mechanics, and more realistic facial animations. This chapter provides an overview of facial anatomy followed by a review of previous computational models of the face. These models include facial tissue constitutive relationships, facial muscle models, and finite element models. We also detail our efforts to develop novel general and subject-specific models. We present key results from simulations that highlight the realism of the face models.
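    Soft-tissue constitutive relationships of the kind reviewed in this chapter are often illustrated with a hyperelastic material law. The sketch below evaluates the textbook incompressible Neo-Hookean response under uniaxial stretch; it is a minimal illustration, not the chapter's model, and the shear modulus is an order-of-magnitude assumption rather than a value from the text.

    ```python
    import numpy as np

    def neo_hookean_uniaxial_stress(stretch, mu):
        """Cauchy stress for an incompressible Neo-Hookean solid under
        uniaxial stretch: sigma = mu * (lambda^2 - 1/lambda)."""
        stretch = np.asarray(stretch, dtype=float)
        return mu * (stretch**2 - 1.0 / stretch)

    # Illustrative shear modulus for facial soft tissue (assumed value).
    mu_tissue = 2.0e3  # Pa
    stretches = np.linspace(0.8, 1.3, 6)
    for lam, sigma in zip(stretches, neo_hookean_uniaxial_stress(stretches, mu_tissue)):
        print(f"stretch {lam:.2f} -> Cauchy stress {sigma:8.1f} Pa")
    ```

    In a finite element model, a law like this would be evaluated at every integration point of the tissue mesh; the closed-form uniaxial case simply makes the stress-stretch behaviour easy to inspect.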

    A neurocognitive investigation of the impact of socializing with a robot on empathy for pain

    To what extent can humans form social relationships with robots? In the present study, we combined functional neuroimaging with a robot socializing intervention to probe the flexibility of empathy, a core component of social relationships, towards robots. Twenty-six individuals underwent identical fMRI sessions before and after being issued a social robot to take home and interact with over the course of a week. While undergoing fMRI, participants observed videos of a human actor or a robot experiencing pain or pleasure in response to electrical stimulation. Repetition suppression of activity in the pain network, a collection of brain regions associated with empathy and emotional responding, was measured to test whether socializing with a social robot leads to greater overlap in neural mechanisms when observing human and robotic agents experiencing pain or pleasure. In contrast to our hypothesis, functional region-of-interest analyses revealed no change in neural overlap between agents after the socializing intervention. Similarly, no increase in activation when observing a robot experiencing pain emerged post-socializing. Whole-brain analysis showed that, before the socializing intervention, superior parietal and early visual regions were sensitive to novel agents, while after socializing, medial temporal regions showed agent sensitivity. A region of the inferior parietal lobule was sensitive to novel emotions, but only during the pre-socializing scan session. Together, these findings suggest that a short socialization intervention with a social robot does not lead to discernible differences in empathy towards the robot, as measured by behavioural or brain responses. We discuss the extent to which long-term socialization with robots might shape social cognitive processes and, ultimately, our relationships with these machines. This article is part of the theme issue ‘From social brains to social robots: applying neurocognitive insights to human–robot interaction’.
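    Repetition suppression designs of this kind are typically quantified as the drop in region-of-interest response from novel to repeated presentations. A minimal sketch of that index, using synthetic beta estimates rather than the study's data:

    ```python
    import numpy as np

    # Hypothetical per-participant ROI beta estimates (arbitrary units);
    # the study's actual design matrix and estimates are not reproduced.
    rng = np.random.default_rng(0)
    beta_novel = rng.normal(1.0, 0.3, size=26)     # response to a novel agent
    beta_repeated = rng.normal(0.7, 0.3, size=26)  # response after repetition

    # Repetition suppression index: reduction in response for repeated
    # relative to novel presentations within the pain-network ROI.
    rs_index = beta_novel - beta_repeated
    print(f"mean suppression: {rs_index.mean():.3f} +/- {rs_index.std(ddof=1):.3f}")

    # A pre/post-intervention comparison would compute this index per
    # session and test the difference, e.g. with scipy.stats.ttest_rel.
    ```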

    4D (3D Dynamic) statistical models of conversational expressions and the synthesis of highly-realistic 4D facial expression sequences

    In this thesis, a novel approach for modelling 4D (3D Dynamic) conversational interactions and synthesising highly realistic expression sequences is described. To achieve these goals, a fully automatic, fast, and robust pre-processing pipeline was developed, along with an approach for tracking and inter-subject registering 3D sequences (4D data). A method for modelling and representing sequences as single entities is also introduced. These sequences can be manipulated and used for synthesising new expression sequences. Classification experiments and perceptual studies were performed to validate the methods and models developed in this work. To achieve the goals described above, a 4D database of natural, synced, dyadic conversations was captured. This database is the first of its kind in the world. Another contribution of this thesis is the development of a novel method for modelling conversational interactions. Our approach takes into account the time-sequential nature of the interactions, and encompasses the characteristics of each expression in an interaction, as well as information about the interaction itself. Classification experiments were performed to evaluate the quality of our tracking, inter-subject registration, and modelling methods. To evaluate our ability to model, manipulate, and synthesise new expression sequences, we conducted perceptual experiments. For these perceptual studies, we manipulated modelled sequences by modifying their amplitudes, and had human observers evaluate the level of expression realism and image quality. To evaluate our coupled modelling approach for conversational facial expression interactions, we performed a classification experiment that differentiated predicted frontchannel and backchannel sequences, using the original sequences in the training set. We also used the predicted backchannel sequences in a perceptual study in which human observers rated the level of similarity of the predicted and original sequences. The results of these experiments support our methods and our claim that we can produce 4D, highly realistic expression sequences that compete with state-of-the-art methods.
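    Statistical models of tracked, inter-subject-registered sequences like these are commonly built by stacking each sequence into a single vector and applying PCA, so that an expression sequence becomes a point in a low-dimensional weight space whose amplitudes can be scaled. A minimal sketch with synthetic data (the thesis's exact model, dimensions, and registration are not reproduced here):

    ```python
    import numpy as np

    # Assume each expression sequence has been tracked and registered into
    # a fixed-length vector (frames x vertices x 3, flattened); synthetic here.
    n_sequences, seq_dim = 40, 30 * 500 * 3
    X = np.random.default_rng(1).normal(size=(n_sequences, seq_dim))

    mean_seq = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean_seq, full_matrices=False)
    components = Vt[:10]                      # leading modes of sequence variation
    weights = (X - mean_seq) @ components.T   # each sequence as 10 weights

    # Synthesis: scale a sequence's weights to exaggerate or attenuate the
    # expression (amplitude manipulation), then reconstruct the sequence.
    amplified = mean_seq + 1.5 * weights[0] @ components
    print(amplified.shape)  # one full synthesised sequence vector
    ```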

    Mental content: consequences of the embodied mind paradigm

    The central difference between objectivist cognitivist semantics and embodied cognition is that the latter, in contrast to the former, binds meaning to context-sensitive mental systems. According to Lakoff and Johnson's experientialism, conceptual structures arise from preconceptual kinesthetic image-schematic and basic-level structures. Gallese and Lakoff introduced the notion of exploiting sensorimotor structures for higher-level cognition. Three different types of X-schemas realise three types of environmentally embedded simulation: areas that control movements in peri-personal space; canonical neurons of the ventral premotor cortex that fire when a graspable object is represented; and the firing of mirror neurons while perceiving certain movements of conspecifics. …

    Improving social and behavioural functioning in children with autism spectrum disorder: a videogame skills based feasibility trial

    This thesis assessed the feasibility of using specifically designed video games to improve functioning in children with autism spectrum disorder. The findings provide preliminary evidence supporting the use of Whiz Kid Games (a freely accessible, online, video-game-based intervention) for improving both social and behavioural functioning in children aged 6-12 who have been diagnosed with autism.

    Elements: the design of an interactive virtual environment for movement rehabilitation of traumatic brain injury patients

    This exegesis details the development of an interactive artwork titled Elements, designed to assist upper-limb movement rehabilitation for patients recovering from traumatic brain injury. Enhancing physical rehabilitative processes in the early stages following a brain injury is one of the great challenges facing therapists. Elements enables physical user interaction that may present new opportunities for treatment. One of the key problems identified in the neuroscientific field is that developers of interactive computer systems for movement rehabilitation are often constrained to conventional desktop interfaces. These interfaces often fall short of fostering natural user interaction that translates into the relearning of body movement for patients, particularly in ways that reinforce the embodied relationship between the sensory world of the human body and the predictable effects of bodily movement in relation to the surrounding environment. Interactive multimedia environments that engage a patient's sense of embodiment may assist in the acquisition of movement skills that transfer to the real world. The central theme of this exegesis addresses these concerns by analysing contemporary theories of embodied interaction as a foundation for the design of Elements. Designing interactive computer environments for patients with traumatic brain injury is, however, a challenging undertaking. Patients frequently exhibit impaired upper-limb function, which severely affects activities of daily living and self-care. Elements responds to this level of disability by providing the patient with an intuitive tabletop computer environment that affords basic gestural control. As part of a multidisciplinary project team, I designed the user interfaces, interactive multimedia environments, and multisensory feedback (visual, haptic, and auditory) used to help patients relearn movement skills. The physical design of the Elements environment consists of a horizontal tabletop graphics display, a stereoscopic computer video tracking system, tangible user interfaces, and a suite of seven interactive software applications. Each application provides the patient with a task geared toward reaching, grasping, lifting, moving, and placing the tangible user interfaces on the display. Audiovisual computer feedback is used by patients to refine their movements online and over time. Patients can manipulate the feedback to create unique aesthetic outcomes in real time. The system design provides tactility, texture, and audiovisual feedback to entice patients to explore their own movement capabilities in externally directed and self-directed ways. This exegesis contributes to the larger research agenda of embodied interaction. My original contribution to knowledge is Elements, an interactive artwork that may enable patients to relearn movement skills and raise their self-esteem, sense of achievement, and behavioural skills.
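    The reach-and-place interaction described above can be pictured as a loop that reads the tracked position of a tangible object and maps placement error onto feedback intensity. The sketch below is a toy illustration under that assumption; get_tangible_position is a hypothetical stand-in, since the actual Elements tracking interface is not public.

    ```python
    import math, time

    def get_tangible_position():
        """Hypothetical stand-in for the stereoscopic tracker; returns an
        (x, y) position in metres on the tabletop surface."""
        t = time.time()
        return 0.3 * math.cos(t), 0.3 * math.sin(t)

    target = (0.25, 0.0)  # placement goal for a reach-and-place task

    for _ in range(5):
        x, y = get_tangible_position()
        error = math.hypot(x - target[0], y - target[1])
        # Map movement error to feedback intensity: closer to the goal ->
        # stronger reward cue (brighter visual, louder tone), i.e.
        # augmented knowledge of results for the patient.
        intensity = max(0.0, 1.0 - error / 0.5)
        print(f"pos=({x:+.2f},{y:+.2f})  error={error:.2f} m  feedback={intensity:.2f}")
        time.sleep(0.2)
    ```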

    Novel methodologies and technologies for the multiscale and multimodal study of Autism Spectrum Disorders (ASDs)

    The aim of this PhD thesis was the development of novel bioengineering tools and methodologies that support the study of ASDs. ASDs are highly heterogeneous disorders whose abnormalities are present at both local and global levels; for this reason, a multimodal and multiscale approach was followed. The analysis of microstructure was performed on single Purkinje neurons in culture and on organotypic slices extracted from the cerebella of GFP wild-type mice and animal models of ASDs. A methodology for the non-invasive imaging of neurons during their growth was set up, and a software tool called NEMO (NEuron MOrphological analysis tool) for the automatic analysis of morphology and connectivity was developed. Microstructural properties can also be inferred in vivo through the relatively recent technique of Diffusion Tensor Imaging (DTI). DTI studies in ASDs are based on the hypothesis that the disorder involves aberrant brain connectivity and disruption of white matter tracts between regions implicated in social functioning. In this study, DTI was used to investigate structural abnormalities in the white matter of young children with ASDs; moreover, the neurostructural bases of echolalia were investigated. Brain function was analyzed through Functional Magnetic Resonance Imaging (fMRI) using a novel task based on the processing of human, android, and robotic faces. A case-control study was performed to examine how the face-processing network is altered in ASDs and how robot faces are processed differently in ASD and control groups. Measurements characterizing the physiology and behavior of children with ASD were also collected using an innovative platform called FACE-T (FACE-Therapy). FACE-T consists of a specially equipped room in which the child, wearing unobtrusive devices for recording physiological and behavioral data as well as gaze information, can interact with an android (FACE, Facial Automaton for Conveying Emotions) and a therapist. The focus was on ECG, since features related to the autonomic nervous system, which is correlated with brain functionality, can be extracted from the power spectral density of the ECG. These studies give new insights into ASDs by exploring aspects not yet addressed, and the methodologies and tools developed could help in the objective characterization of subjects with ASD and in the definition of a personalized therapeutic protocol for each child.
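    The autonomic features mentioned at the end of the abstract are conventionally derived from the power spectral density of the RR-interval series, with the LF/HF ratio serving as a common proxy for sympathovagal balance. A minimal sketch with synthetic RR intervals (the FACE-T acquisition pipeline and band definitions are standard assumptions, not taken from the thesis):

    ```python
    import numpy as np
    from scipy.signal import welch

    # Synthetic RR-interval series in seconds; real data would come from
    # the ECG recorded during the FACE-T sessions.
    rng = np.random.default_rng(2)
    rr = 0.8 + 0.05 * rng.standard_normal(300)
    t_beats = np.cumsum(rr)

    # Resample the irregularly sampled RR series onto a uniform 4 Hz grid,
    # a common preprocessing step before spectral HRV analysis.
    fs = 4.0
    t_uniform = np.arange(t_beats[0], t_beats[-1], 1.0 / fs)
    rr_uniform = np.interp(t_uniform, t_beats, rr)

    # Welch power spectral density of the detrended RR series.
    freqs, psd = welch(rr_uniform - rr_uniform.mean(), fs=fs, nperseg=256)

    # Integrate power in the standard low- and high-frequency HRV bands.
    lf_band = (freqs >= 0.04) & (freqs < 0.15)
    hf_band = (freqs >= 0.15) & (freqs < 0.40)
    lf = np.trapz(psd[lf_band], freqs[lf_band])
    hf = np.trapz(psd[hf_band], freqs[hf_band])
    print(f"LF/HF ratio (sympathovagal balance proxy): {lf / hf:.2f}")
    ```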