
    The influence of dynamics and speech on understanding humanoid facial expressions

    Human communication relies largely on nonverbal signals expressed through body language. Facial expressions, in particular, convey emotional information that allows people engaged in social interaction to judge each other's emotional states and to adjust their behavior appropriately. The first studies investigating the recognition of facial expressions were based on static stimuli. However, facial expressions are rarely static, especially in everyday social interactions. It has therefore been hypothesized that the dynamics inherent in a facial expression could be fundamental to understanding its meaning. In addition, it has been demonstrated that nonlinguistic and linguistic information can reinforce the meaning of a facial expression, making it easier to recognize. Nevertheless, few such studies have been performed on realistic humanoid robots. This experimental work aimed to demonstrate the human-like expressive capability of a humanoid robot by examining whether motion and vocal content influenced the perception of its facial expressions. The first part of the experiment studied the recognition of two kinds of stimuli related to the six basic expressions (i.e. anger, disgust, fear, happiness, sadness, and surprise): static stimuli, that is, photographs, and dynamic stimuli, that is, video recordings. The second and third parts compared the same six basic expressions performed by a virtual avatar and by a physical robot under three different conditions: (1) muted facial expressions, (2) facial expressions with nonlinguistic vocalizations, and (3) facial expressions with an emotionally neutral verbal sentence. The results show that static stimuli performed by a human being and by the robot were more ambiguous than the corresponding dynamic stimuli with which motion and vocalization were associated. This hypothesis was also investigated with a three-dimensional replica of the physical robot, demonstrating that even in the case of a virtual avatar, dynamics and vocalization improve the capability to convey emotion.
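    The abstract reports that static stimuli were "more ambiguous" without detailing the analysis; as a minimal, hypothetical sketch of how ambiguity might be quantified from forced-choice responses, the helper below computes a per-condition recognition rate (the function name and data are illustrative, not from the study):

        def recognition_rate(shown: list, judged: list) -> float:
            """Fraction of trials in which the judged emotion matches
            the one shown; lower values indicate a more ambiguous
            stimulus set."""
            hits = sum(s == j for s, j in zip(shown, judged))
            return hits / len(shown)

        # Illustrative comparison of a static vs. a dynamic condition:
        static_rate = recognition_rate(["anger", "fear", "sadness"],
                                       ["disgust", "fear", "sadness"])
        dynamic_rate = recognition_rate(["anger", "fear", "sadness"],
                                        ["anger", "fear", "sadness"])
        print(static_rate, dynamic_rate)  # e.g. 0.67 vs. 1.0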

    A Few Days of A Robot's Life in the Human's World: Toward Incremental Individual Recognition

    PhD thesis. This thesis presents an integrated framework and implementation for Mertz, an expressive robotic creature for exploring the task of face recognition through natural interaction in an incremental and unsupervised fashion. The goal of this thesis is to advance toward a framework which would allow robots to incrementally "get to know" a set of familiar individuals in a natural and extendable way. This thesis is motivated by the increasingly popular goal of integrating robots into the home. In order to be effective in human-centric tasks, robots must be able not only to recognize each family member, but also to learn about the roles of various people in the household.

    In this thesis, we focus on two particular limitations of the current technology. Firstly, most face recognition research concentrates on the supervised classification problem. Currently, one of the biggest problems in face recognition is how to generalize a system to recognize new test data that vary from the training data. Until this problem is solved completely, existing supervised approaches may require multiple manual introduction and labelling sessions to include training data with enough variation. Secondly, there is typically a large gap between research prototypes and commercial products, largely due to a lack of robustness and scalability across different environmental settings.

    In this thesis, we propose an unsupervised approach which would allow for a more adaptive system that can incrementally update the training set with more recent data or new individuals over time. Moreover, it gives robots a more natural social recognition mechanism: learning not only to recognize each person's appearance, but also to remember relevant contextual information that the robot observed during previous interaction sessions. This thesis therefore focuses on integrating an unsupervised and incremental face recognition system within a physical robot which interfaces directly with humans through natural social interaction. The robot autonomously detects, tracks, and segments face images during these interactions and automatically generates a training set for its face recognition system. Moreover, in order to motivate robust solutions and address scalability issues, we chose to put the robot, Mertz, in unstructured public environments to interact with naive passersby, instead of with only the researchers within the laboratory environment.

    While an unsupervised and incremental face recognition system is a crucial element toward our target goal, it is only part of the story. A face recognition system typically receives either pre-recorded face images or streaming video from a static camera. As illustrated by an ACLU review of a commercial face recognition installation, a security application which interfaces with the latter is already very challenging. In our case, the target goal is a robot that can recognize people in a home setting. The interface between robots and humans is even more dynamic: both the robots and the humans move around.

    We present the robot implementation and its unsupervised incremental face recognition framework. We describe an algorithm for clustering local features extracted from a large set of automatically generated face data. We demonstrate the robot's capabilities and limitations in a series of experiments in a public lobby. In a final experiment, the robot interacted with a few hundred individuals over an eight-day period and generated a training set of over a hundred thousand face images. We evaluate the clustering algorithm's performance across a range of parameters on this automatically generated training data and also on the Honda-UCSD video face database. Lastly, we present some recognition results using the self-labelled clusters.
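    The abstract does not specify the clustering algorithm itself; as a rough, hypothetical sketch of the kind of unsupervised, incremental grouping it describes, the class below assigns each incoming face feature vector to its nearest cluster centroid, or opens a new cluster when none is close enough (the class name, distance threshold, and feature representation are all assumptions, not details from the thesis):

        import numpy as np

        class IncrementalFaceClusterer:
            """Toy online clusterer for face feature vectors."""

            def __init__(self, distance_threshold: float = 0.6):
                self.distance_threshold = distance_threshold
                self.centroids = []   # running cluster means
                self.counts = []      # samples per cluster

            def add(self, feature: np.ndarray) -> int:
                """Assign one feature vector; return its cluster index."""
                if self.centroids:
                    dists = [np.linalg.norm(feature - c) for c in self.centroids]
                    best = int(np.argmin(dists))
                    if dists[best] < self.distance_threshold:
                        # Fold the new sample into the running mean.
                        n = self.counts[best]
                        self.centroids[best] = (self.centroids[best] * n + feature) / (n + 1)
                        self.counts[best] += 1
                        return best
                # No sufficiently close cluster: start a new one.
                self.centroids.append(feature.astype(float))
                self.counts.append(1)
                return len(self.centroids) - 1

    A threshold-based online scheme like this trades accuracy for the ability to run continuously without labels, which matches the incremental, self-labelling setting the thesis targets.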

    Affective Computing

    This book provides an overview of state-of-the-art research in Affective Computing. It presents new ideas, original results and practical experiences in this increasingly important research field. The book consists of 23 chapters categorized into four sections. Since one of the most important means of human communication is facial expression, the first section of this book (Chapters 1 to 7) presents research on the synthesis and recognition of facial expressions. Given that we use not only the face but also body movements to express ourselves, the second section (Chapters 8 to 11) presents research on the perception and generation of emotional expressions using full-body motion. The third section of the book (Chapters 12 to 16) presents computational models of emotion, as well as findings from neuroscience research. The last section of the book (Chapters 17 to 22) presents applications related to affective computing.

    Machine Performers: Agents in a Multiple Ontological State

    In this thesis, the author explores and develops new attributes for machine performers and merges the trans-disciplinary fields of the performing arts and artificial intelligence. The main aim is to redefine the term “embodiment” for robots on the stage and to demonstrate that this term requires broadening in various fields of research. This redefinition has required a multifaceted theoretical analysis of embodiment in the field of artificial intelligence (e.g. the uncanny valley), as well as the construction of new robots for the stage by the author. It is hoped that these practical experimental examples will generate more research by others in similar fields. Even though the historical lineage of robotics is engraved with theatrical strategies and dramaturgy, further application of constructive principles from the performing arts, and of evidence from psychology and neurology, can shift the perception of robotic agents both on stage and in other cultural environments. In this light, the relation between the representation, movement and behaviour of bodies has been further explored to establish links between constructed bodies (as in artificial intelligence) and perceived bodies (as performers on the theatrical stage). In the course of this research, several practical works have been designed and built, and subsequently presented to live audiences and research communities. Audience reactions have been analysed with surveys and discussions. Interviews have also been conducted with choreographers, curators and scientists about the value of machine performers. The main conclusions from this study are that fakery and mystification can be used as persuasive elements to enhance agency. Morphologies can also be applied that tightly couple brain and sensorimotor actions and lead to a stronger stage presence. In fact, when this presence is missing from human replicants, it causes an “uncanny” lack of agency. Furthermore, the addition of stage presence leads to stronger identification from audiences, even for bodies dissimilar to their own. The author demonstrates that audience reactions are enhanced by building these effects into machine body structures: rather than identification through mimicry, this gives them more unambiguously biological associations. Alongside these traits, atmospheres such as those created by a cast of machine performers tend to cause even more intensely visceral responses. In this thesis, “embodiment” has emerged as a paradigm shift – and has been examined within that shift – and morphological computing has been explored as a method to deepen this visceral immersion. This dissertation therefore considers and builds machine performers as “true” performers for the stage, rather than mere objects with an aura. Their singular and customized embodiment can enable the development of non-anthropocentric performances that encompass abstract and conceptual patterns in motion and generate – as from human performers – empathy, identification and experiential reactions in live audiences.

    Investigating Gaze of Children with ASD in Naturalistic Settings.

    BACKGROUND: Visual behavior is known to be atypical in Autism Spectrum Disorders (ASD). Monitor-based eye-tracking studies have measured several of these atypicalities in individuals with autism. While atypical behaviors are known to be accentuated during natural interactions, few studies have examined gaze behavior in natural interactions. In this study we focused on i) whether findings obtained in laboratory settings are also visible in a naturalistic interaction; and ii) whether new atypical elements appear when studying visual behavior across the whole field of view. METHODOLOGY/PRINCIPAL FINDINGS: Ten children with ASD and ten typically developing children participated in a dyadic interaction with an experimenter administering items from the Early Social Communication Scale (ESCS). The children wore a novel head-mounted eye-tracker measuring gaze direction and the presence of faces across the child's field of view. The analysis of gaze episodes to faces revealed that children with ASD looked at the experimenter significantly less often and for shorter periods of time. The analysis of gaze patterns across the child's field of view revealed that children with ASD looked downwards and made more extensive use of their lateral field of view when exploring the environment. CONCLUSIONS/SIGNIFICANCE: The data gathered in naturalistic settings confirm findings previously obtained only in monitor-based studies. Moreover, the study allowed us to observe a generalized strategy of lateral gaze in children with ASD when they were looking at objects in their environment.
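    The paper's exact episode-extraction procedure is not given in the abstract; as a minimal sketch under the assumption that the tracker yields one boolean gaze-on-face sample per frame, the helper below recovers the count and durations of contiguous gaze episodes (the function name and sampling rate are illustrative):

        import numpy as np

        def gaze_episodes(on_face: np.ndarray, dt: float) -> list:
            """Durations (seconds) of contiguous runs of gaze-on-face
            samples, given a boolean per-frame series and the sampling
            interval dt."""
            durations, run = [], 0
            for sample in on_face:
                if sample:
                    run += 1
                elif run:
                    durations.append(run * dt)
                    run = 0
            if run:
                durations.append(run * dt)
            return durations

        # Example: samples from a 30 Hz head-mounted tracker.
        samples = np.array([0, 1, 1, 1, 0, 0, 1, 1, 0], dtype=bool)
        episodes = gaze_episodes(samples, dt=1 / 30)
        print(len(episodes), sum(episodes))  # episode count, total time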

    Robotic assistants in therapy and education of children with autism: Can a small humanoid robot help encourage social interaction skills?

    This article presents a longitudinal study with four children with autism, who were exposed to a humanoid robot over a period of several months. The longitudinal approach allowed the children time to explore the space of robot–human, as well as human–human, interaction. Based on the video material documenting the interactions, a quantitative and qualitative analysis was conducted. The quantitative analysis showed an increase in the duration of pre-defined behaviours towards the later trials. A qualitative analysis of the video data, observing the children's activities in their interactional context, revealed further aspects of social interaction skills (imitation, turn-taking and role-switch) and communicative competence that the children showed. The results clearly demonstrate the need for, and benefits of, long-term studies in order to reveal the full potential of robots in the therapy and education of children with autism.

    A Biosymtic (Biosymbiotic Robotic) Approach to Human Development and Evolution. The Echo of the Universe.

    In the present work we demonstrate that the current Child-Computer Interaction paradigm is not potentiating human development to its fullest: it is associated with several physical and mental health problems and appears not to maximize children's cognitive performance and cognitive development. In order to potentiate children's physical and mental health (including cognitive performance and cognitive development), we have developed a new approach to human development and evolution. This approach proposes a particular synergy between the developing human body, computing machines and natural environments. It emphasizes that children should be encouraged to interact with challenging physical environments offering multiple possibilities for sensory stimulation and increasing physical and mental stress to the organism. To operationalize our approach, we created and tested a new set of computing devices, the Biosymtic (Biosymbiotic Robotic) devices “Albert” and “Cratus”. Two initial studies suggest that the main goal of our approach is being achieved. We observed that interaction with the Biosymtic device “Albert” in a natural environment triggered a different neurophysiological response (increases in sustained attention levels) and tended to optimize episodic memory performance in children, compared to interaction with a sedentary screen-based computing device in an artificially controlled indoor environment, making it a promising way to promote cognitive performance and development. Interaction with the Biosymtic device “Cratus” in a natural environment instilled vigorous physical activity levels in children, making it a promising way to promote physical and mental health.

    Choreographic and Somatic Approaches for the Development of Expressive Robotic Systems

    As robotic systems move out of factory work cells and into human-facing environments, questions of choreography become central to their design, placement, and application. With a human viewer or counterpart present, a system will automatically be interpreted by human beings, through its context, style of movement, and form factor, as an animate element of their environment. The interpretation by this human counterpart is critical to the success of the system's integration: knobs on the system need to make sense to a human counterpart; an artificial agent should have a way of notifying a human counterpart of a change in system state, possibly through motion profiles; and the motion of a human counterpart may carry important contextual clues for task completion. Thus, professional choreographers, dance practitioners, and movement analysts are critical to research in robotics. They have design methods for movement that align with human audience perception, can identify simplified features of movement for human-robot interaction goals, and have detailed knowledge of the capacity of human movement. This article provides approaches employed by one research lab, specific impacts on technical and artistic projects within it, and principles that may guide future such work. The background section reports on choreography, somatic perspectives, improvisation, the Laban/Bartenieff Movement System, and robotics. From this context, methods including embodied exercises, writing prompts, and community-building activities have been developed to facilitate interdisciplinary research. The results of this work are presented as an overview of a selection of projects in areas like high-level motion planning, software development for rapid prototyping of movement, artistic output, and user studies that help understand how people interpret movement. Finally, guiding principles for other groups to adopt are posited.
    Comment: Under review at MDPI Arts Special Issue "The Machine as Artist (for the 21st Century)" http://www.mdpi.com/journal/arts/special_issues/Machine_Artis
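    The article itself includes no code; as a small, hypothetical sketch of the "motion profile as state notification" idea mentioned above, the function below generates a smooth, eased joint-angle trajectory that a robot could play to signal a state change (all names, parameters, and units are assumptions for illustration):

        import numpy as np

        def notify_profile(amplitude_rad: float, period_s: float,
                           rate_hz: int = 50) -> np.ndarray:
            """Joint-angle trajectory for a 'nod'-like gesture: an
            oscillation under a sine-squared envelope, so the motion
            eases in and out and reads as deliberate."""
            t = np.linspace(0.0, period_s, int(period_s * rate_hz))
            envelope = np.sin(np.pi * t / period_s) ** 2  # ease in/out
            return amplitude_rad * envelope * np.sin(4 * np.pi * t / period_s)

        # A 0.2 rad, 1.5 s gesture sampled at 50 Hz:
        profile = notify_profile(amplitude_rad=0.2, period_s=1.5)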