4,842 research outputs found

    Designing and Implementing Embodied Agents: Learning from Experience

    Get PDF
    In this paper, we provide an overview of part of our experience in designing and implementing some of the embodied agents and talking faces that we have used for our research into human-computer interaction. We focus on the techniques that were used and evaluate them with respect to the purpose the agents and faces were to serve and the costs involved in producing and maintaining the software. We also discuss the function of this research and development in relation to the educational programme of our graduate students.

    Laughter and smiling facial expression modelling for the generation of virtual affective behavior

    Get PDF
    Laughter and smiling are significant facial expressions used in human-to-human communication. We present a computational model for the generation of facial expressions associated with laughter and smiling in order to facilitate the synthesis of such facial expressions in virtual characters. In addition, a new method to reproduce these types of laughter is proposed and validated using databases of generic and specific facial smile expressions. A proprietary database of laugh and smile expressions is also presented; it lists the different types of classified and generated laughs covered in this work. The generated expressions are validated through a user study with 71 subjects, which concluded that the virtual character expressions built using the presented model are perceptually acceptable in quality and facial expression fidelity. Finally, for generalization purposes, an additional analysis shows that the results are independent of the virtual character's appearance.
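
    As a rough illustration of how a generation model of this kind might drive a virtual character, the sketch below ramps facial blendshape weights over the course of a laugh. The blendshape names, envelope shape, and intensity values are illustrative assumptions, not the paper's published model.

    import math

    # Hedged sketch: drive smile/laugh blendshape weights over time with a
    # simple attack-sustain-decay envelope. Blendshape names and the
    # envelope are illustrative assumptions, not the published model.

    def laugh_envelope(t, attack=0.3, sustain=1.0, decay=0.7, peak=1.0):
        """Piecewise-linear intensity envelope for a laugh; phases in seconds."""
        if t < attack:
            return peak * t / attack
        if t < attack + sustain:
            return peak
        if t < attack + sustain + decay:
            return peak * (1.0 - (t - attack - sustain) / decay)
        return 0.0

    def laugh_pose(t):
        """Map one envelope value to a set of facial blendshape weights."""
        w = laugh_envelope(t)
        return {
            "mouth_smile": 0.6 * w + 0.2,                # smile underlies the laugh
            "jaw_open": 0.5 * w * abs(math.sin(8 * t)),  # rhythmic jaw pulses
            "eye_squint": 0.4 * w,                       # Duchenne-style eye narrowing
        }

    # Example: sample the pose every 0.4 s across the laugh.
    for frame in range(5):
        t = frame * 0.4
        print(f"t={t:.1f}s", {k: round(v, 2) for k, v in laugh_pose(t).items()})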

    Enhancing Expressiveness of Speech through Animated Avatars for Instant Messaging and Mobile Phones

    Get PDF
    This thesis aims to create a chat program that allows users to communicate via an animated avatar that provides believable lip-synchronization and expressive emotion. Currently, many avatars do not attempt lip-synchronization. Those that do are not well synchronized and have little or no emotional expression. Most avatars with lip-synchronization use realistic-looking 3D models or stylized renderings of complex models. This work utilizes images rendered in a cartoon style and lip-synchronization rules based on traditional animation. The cartoon style, as opposed to a more realistic look, makes the mouth motion more believable and the characters more appealing. The cartoon look and image-based animation (as opposed to a graphic model animated through manipulation of a skeleton or wireframe) also allow for fewer key frames, resulting in faster speed with more room for expressiveness. When text is entered into the program, the Festival Text-to-Speech engine creates a speech file and extracts phoneme and phoneme duration data. Believable and fluid lip-synchronization is then achieved by means of a number of phoneme-to-image rules. Alternatively, phoneme and phoneme duration data can be obtained for speech dictated into a microphone using Microsoft SAPI and the CSLU Toolkit. Once lip-synchronization is complete, rules for non-verbal animation are added. Emotions are appended to the animation of speech in two ways: automatically, by recognition of key words and punctuation, or deliberately, by user-defined tags. Additionally, rules are defined for idle-time animation. Preliminary results indicate that the animated avatar program offers an improvement over currently available software. It aids in the understandability of speech, combines easily recognizable and expressive emotions with speech, and successfully enhances overall enjoyment of the chat experience. Applications for the program include use in cell phones for the deaf or hearing impaired, instant messaging, video conferencing, instructional software, and speech and animation synthesis.
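
    The pipeline described in this abstract (phoneme and duration data driving the choice of mouth images) can be sketched in a few lines. The sketch below is a minimal illustration, assuming phoneme timings have already been extracted by a TTS engine such as Festival; the phoneme groups, image names, and frame rate are hypothetical placeholders, not the thesis's actual rules.

    # Minimal sketch of phoneme-to-image lip-sync scheduling, assuming
    # (phoneme, duration) pairs have already been extracted by a TTS
    # engine such as Festival. Phoneme groups, image names, and frame
    # rate are hypothetical placeholders, not the thesis's actual rules.

    # Map phoneme classes to mouth images (viseme key frames).
    PHONEME_TO_IMAGE = {
        "aa": "mouth_open.png",    # open vowels
        "iy": "mouth_wide.png",    # spread vowels
        "uw": "mouth_round.png",   # rounded vowels
        "m":  "mouth_closed.png",  # bilabials: lips together
        "f":  "mouth_teeth.png",   # labiodentals: teeth on lip
        "sil": "mouth_rest.png",   # silence / rest pose
    }

    FPS = 12  # cartoon-style animation gets by with few key frames

    def schedule_frames(phonemes):
        """Turn (phoneme, duration_sec) pairs into a frame list.

        Each phoneme is held for a number of frames proportional to its
        duration, so the mouth stays synchronized with the audio.
        """
        frames = []
        for phoneme, duration in phonemes:
            image = PHONEME_TO_IMAGE.get(phoneme, "mouth_rest.png")
            frames.extend([image] * max(1, round(duration * FPS)))
        return frames

    # Example: a short utterance with per-phoneme durations in seconds.
    print(schedule_frames([("sil", 0.1), ("m", 0.08), ("aa", 0.2), ("sil", 0.1)]))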

    A survey of comics research in computer science

    Full text link
    Graphic novels such as comics and manga are well known all over the world. The digital transition has begun to change the way people read comics: more and more on smartphones and tablets, and less and less on paper. In recent years, a wide variety of research about comics has been proposed and might change the way comics are created, distributed, and read in the coming years. Early work focused on low-level document image analysis; indeed, comic books are complex documents containing text, drawings, balloons, panels, onomatopoeia, etc. Research on user interaction and content generation has since come from different fields of computer science, such as multimedia, artificial intelligence, and human-computer interaction, each with its own set of values. In this paper, we review previous research about comics in computer science, state what has been done, and give some insights into the main outlooks.
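
    The low-level document image analysis mentioned above typically begins with panel segmentation. Below is a minimal sketch of one common baseline, assuming a scanned page with light gutters between panels and using OpenCV; the choice of library and the threshold values are assumptions, and the survey covers many different methods.

    import cv2  # assumes OpenCV is installed; one common tool for this task

    # Hedged sketch of comic panel extraction: threshold the page so the
    # light gutters separate the panels, then keep large connected regions.
    # This is one simple baseline among the many approaches the survey covers.

    def extract_panels(page_path, min_area_ratio=0.02):
        page = cv2.imread(page_path, cv2.IMREAD_GRAYSCALE)
        # Inverse threshold: panel content becomes white, gutters black.
        _, mask = cv2.threshold(page, 230, 255, cv2.THRESH_BINARY_INV)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        page_area = page.shape[0] * page.shape[1]
        panels = []
        for c in contours:
            x, y, w, h = cv2.boundingRect(c)
            if w * h >= min_area_ratio * page_area:  # drop small speckles
                panels.append((x, y, w, h))
        # Rough reading order: top-to-bottom, then left-to-right.
        return sorted(panels, key=lambda b: (b[1], b[0]))

    # Usage: extract_panels("page_001.png") -> list of (x, y, w, h) boxes.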

    The Interactive Child Distress Screener: development and preliminary feasibility testing

    Get PDF
    Background: Early identification of child emotional and behavioral concerns is essential for the prevention of mental health problems; however, few suitable child-reported screening measures are available. Digital tools offer an exciting opportunity for obtaining clinical information from the child's perspective. Objective: The aim of this study was to describe the initial development and pilot testing of the Interactive Child Distress Screener (ICDS), a Web-based screening instrument for the early identification of emotional and behavioral problems in children aged between 5 and 12 years. Methods: This paper utilized a mixed-methods approach to (1) develop and refine item content using an expert review process (study 1) and (2) develop and refine prototype animations and an app interface using codesign with child users (study 2). Study 1 involved an iterative process comprising four steps: (1) initial development of target constructs, (2) preliminary content validation (face validity, item importance, and suitability for animation) by an expert panel of researchers and psychologists (N=9), (3) item refinement, and (4) a follow-up validation with the same expert panel. Study 2 also comprised four steps: (1) development of prototype animations, (2) development of the app interface and response format, (3) child interviews to determine feasibility and obtain feedback, and (4) refinement of the animations and interface. Cognitive interviews were conducted with 18 children aged between 4 and 12 years who tested 3 prototype animated items. Children were asked to describe the target behavior, rate how well the animations captured the intended behavior, and provide suggestions for improvement. Their ability to understand the wording of instructions was also assessed, as was the general acceptability of the character and sound design. Results: In study 1, a revised list of 15 constructs was generated from the first and second rounds of expert feedback. These were rated highly in terms of importance (mean 6.32, SD 0.42) and perceived compatibility of items (mean 6.41, SD 0.45) on a 7-point scale. In study 2, overall feedback regarding the character design and sounds was positive. Children's ability to understand the intended behaviors varied across target items, and feedback highlighted key objectives for improvement, such as adding contextual cues or improving character detail. These design changes were incorporated through an iterative process, with examples presented. Conclusions: The ICDS has the potential to obtain clinical information from the child's perspective that may otherwise be overlooked. If effective, the ICDS will provide a quick, engaging, and easy-to-use screener that can be utilized in routine care settings. This project highlights the importance of expert review and user codesign in the development of digital assessment tools for children.

    To Affinity and Beyond: Interactive Digital Humans as a Human Computer Interface

    Get PDF
    The field of human-computer interaction is increasingly exploring the use of more natural, human-like user interfaces to build intelligent agents that aid in everyday life. This is coupled with a move towards people using ever more realistic avatars to represent themselves in their digital lives. As the ability to produce emotionally engaging digital human representations is only just now becoming technically possible, there is little research into how to approach such tasks, owing to both the technical complexity and the operational implementation cost. This is now changing as we reach a nexus point, with new approaches, faster graphics processing, and enabling technologies in machine learning and computer vision becoming available. I articulate what is required for such digital humans to be considered as sitting on the far side of the phenomenon known as the Uncanny Valley. My results show that a complex mix of perceived and contextual aspects affects the sense-making around digital humans, and they highlight previously undocumented effects of interactivity on affinity. Users are willing to accept digital humans as a new form of user interface, and they react to them emotionally in previously unanticipated ways. My research shows that it is possible to build an effective interactive digital human that crosses the Uncanny Valley. As my primary research question, I directly explore what is required to build a visually realistic digital human, and whether such a realistic face provides sufficient benefit to justify the challenges involved in building it. I conducted a Delphi study to inform the research approaches and then produced a complex digital human character based on these insights. This interactive and realistic digital human avatar represents a major technical undertaking involving multiple teams around the world. Finally, I explore a framework for examining the ethical implications and signpost future research areas.

    Look me in the eyes: A survey of eye and gaze animation for virtual agents and artificial systems

    Get PDF
    A person's emotions and state of mind are apparent in their face and eyes. As a Latin proverb states: "The face is the portrait of the mind; the eyes, its informers." This presents a huge challenge for computer graphics researchers in the generation of artificial entities that aim to replicate the movement and appearance of the human eye, which is so important in human-human interactions. This State of the Art Report provides an overview of the efforts made to tackle this challenging task. As with many topics in computer graphics, a cross-disciplinary approach is required to fully understand the workings of the eye in the transmission of information to the user. We discuss the movement of the eyeballs, eyelids, and head from a physiological perspective, and how these movements can be modelled, rendered, and animated in computer graphics applications. Further, we present recent research from psychology and sociology that seeks to understand higher-level behaviours, such as attention and eye gaze, during the expression of emotion or during conversation, and how they are synthesised in computer graphics and robotics.
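
    As one example of how eyeball movements can be modelled and animated, the sketch below synthesises a saccade using the well-known "main sequence" regularity, in which saccade duration grows roughly linearly with amplitude. The constants and the smoothstep velocity profile are commonly used approximations, not values taken from this report.

    # Hedged sketch of saccade synthesis using the "main sequence": saccade
    # duration grows roughly linearly with amplitude. The constants below
    # (2.2 ms/deg slope, 21 ms intercept) are commonly cited approximations;
    # treat them as illustrative, not as values from this survey.

    def saccade_duration_ms(amplitude_deg):
        return 21.0 + 2.2 * amplitude_deg

    def saccade_trajectory(start_deg, end_deg, dt_ms=5.0):
        """Sample a smooth eye rotation from start to end gaze angle.

        Uses a smoothstep profile as a stand-in for the bell-shaped
        velocity curve of real saccades.
        """
        amplitude = abs(end_deg - start_deg)
        duration = saccade_duration_ms(amplitude)
        samples = []
        t = 0.0
        while t <= duration:
            s = t / duration
            ease = s * s * (3 - 2 * s)  # smoothstep: zero velocity at both ends
            samples.append((t, start_deg + (end_deg - start_deg) * ease))
            t += dt_ms
        return samples

    # Example: a 15-degree saccade, printed as (time in ms, gaze angle in deg).
    for t, angle in saccade_trajectory(0.0, 15.0, dt_ms=10.0):
        print(f"{t:5.1f} ms  {angle:6.2f} deg")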