    To Affinity and Beyond: Interactive Digital Humans as a Human Computer Interface

    The field of human-computer interaction is increasingly exploring the use of more natural, human-like user interfaces to build intelligent agents that aid in everyday life. This is coupled with a move towards people using ever more realistic avatars to represent themselves in their digital lives. Because the ability to produce emotionally engaging digital human representations is only now becoming technically possible, there is little research into how to approach such tasks, owing to both technical complexity and operational implementation cost. This is changing as we reach a nexus point where new approaches, faster graphics processing, and enabling technologies in machine learning and computer vision become available. I articulate the issues that must be addressed for such digital humans to be considered successfully located on the far side of the phenomenon known as the Uncanny Valley. My results show that a complex mix of perceptual and contextual aspects affects how people make sense of digital humans, and they highlight previously undocumented effects of interactivity on affinity. Users are willing to accept digital humans as a new form of user interface, and they react to them emotionally in previously unanticipated ways. My research shows that it is possible to build an effective interactive digital human that crosses the Uncanny Valley. As a primary research question, I directly explore what is required to build a visually realistic digital human, and I examine whether such a realistic face provides sufficient benefit to justify the challenges involved in building it. I conducted a Delphi study to inform the research approach and then produced a complex digital human character based on these insights. This interactive and realistic digital human avatar represents a major technical undertaking involving multiple teams around the world. Finally, I explored a framework for examining the ethical implications and signposted future research areas.

    Actors, Avatars and Agents: Potentials and Implications of Natural Face Technology for the Creation of Realistic Visual Presence

    We are on the cusp of creating realistic, interactive, fully rendered human faces on computers that transcend the “uncanny valley,” widely known for capturing the phenomenon of “eeriness” in faces that are almost, but not fully, realistic. Because humans are hardwired to respond to faces in uniquely positive ways, artificial realistic faces hold great promise for advancing human interaction with machines. For example, realistic avatars will enable presentation of human actors in virtual collaboration settings with new levels of realism; artificial natural faces will allow the embodiment of cognitive agents, such as Amazon’s Alexa or Apple’s Siri, putting us on a path to create “artificial human” entities in the near future. In this conceptual paper, we introduce natural face technology (NFT) and its potential for creating realistic visual presence (RVP), a sensation of presence in interaction with a digital actor, as if present with another human. We contribute a forward-looking research agenda to information systems (IS) research, comprising terminology, early conceptual work, concrete ideas for research projects, and a broad range of research questions for engaging with this emerging, transformative technology as it becomes available for application. By doing so, we respond to calls for “blue ocean research” that explores uncharted territory and makes a novel technology accessible to IS early in its application. We outline promising areas of application and foreshadow philosophical, ethical, and conceptual questions for IS research pertaining to the more speculative phenomena of “living with artificial humans.”

    Visual Fidelity Effects on Expressive Self-avatar in Virtual Reality: First Impressions Matter

    Owning a virtual body inside Virtual Reality (VR) offers a unique experience where, typically, users are able to control their self-avatar’s body via tracked VR controllers. However, controlling a self-avatar’s facial movements is harder because the HMD obstructs tracking. In this work we present (1) the technical pipeline for creating and rigging self-alike avatars, whose facial expressions can then be controlled by users wearing the VIVE Pro Eye and VIVE Facial Tracker, and (2) based on this setting, two within-group studies on the psychological impact of the appearance realism of self-avatars, covering both the level of photorealism and self-likeness. Participants were told to practise their presentation, in front of a mirror, in the body of a realistic-looking avatar and a cartoon-like one, both animated with body and facial mocap data. In study 1 we made two bespoke self-alike avatars for each participant and found that although participants found the cartoon-like character more attractive, they reported higher Body Ownership with whichever avatar they had in the first trial. In study 2 we used generic avatars with higher-fidelity facial animation, and found a similar “first trial effect” whereby participants reported the avatar from their first trial as less creepy. Our results also suggested participants found the facial expressions easier to control with the cartoon-like character. Further, our eye-tracking data suggested that although participants mainly faced their avatar during their presentation, their eye gaze was focused elsewhere half of the time.
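The capture-to-avatar pipeline this abstract describes - a face tracker driving the expressions of a rigged self-avatar - commonly boils down to retargeting per-frame blendshape weights onto the facial rig. The sketch below is purely illustrative, not the authors' implementation: the function names, blendshape labels, and frame data are all hypothetical stand-ins for what a real tracker SDK (such as the one for the VIVE Facial Tracker) would deliver.

```python
# Minimal sketch of retargeting tracked facial blendshape weights to an
# avatar rig. Trackers report weights in [0, 1] per blendshape each frame;
# a common pattern is to smooth them before applying them to the rig.

def smooth_weights(prev, raw, alpha=0.3):
    """Exponential moving average to suppress per-frame tracking jitter."""
    return {k: (1 - alpha) * prev.get(k, 0.0) + alpha * w for k, w in raw.items()}

def apply_to_rig(rig, weights):
    """Clamp and write weights onto the avatar rig (a dict stands in here)."""
    for name, w in weights.items():
        rig[name] = max(0.0, min(1.0, w))
    return rig

# Simulated raw tracker frames; frame 2 contains a one-frame "jaw_open" spike.
frames = [
    {"jaw_open": 0.0, "mouth_smile": 0.1, "eye_blink": 0.0},
    {"jaw_open": 1.0, "mouth_smile": 0.1, "eye_blink": 0.0},
    {"jaw_open": 0.0, "mouth_smile": 0.5, "eye_blink": 1.0},
]

rig, state = {}, {}
for raw in frames:
    state = smooth_weights(state, raw)
    rig = apply_to_rig(rig, state)

print(rig)  # the jaw spike is damped well below its raw value of 1.0
```

The smoothing step trades a little responsiveness for stability: raw tracker output is noisy frame to frame, and writing it straight onto the rig produces visible facial flicker, which is one reason participants may find low-fidelity facial animation harder to control.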

    The influence of dynamics and speech on understanding humanoid facial expressions

    Human communication relies mostly on nonverbal signals expressed through body language. Facial expressions, in particular, convey emotional information that allows people involved in social interactions to mutually judge each other’s emotional states and to adjust their behavior appropriately. The first studies investigating the recognition of facial expressions were based on static stimuli. However, facial expressions are rarely static, especially in everyday social interactions. Therefore, it has been hypothesized that the dynamics inherent in a facial expression could be fundamental to understanding its meaning. In addition, it has been demonstrated that nonlinguistic and linguistic information can reinforce the meaning of a facial expression, making it easier to recognize. Nevertheless, few studies have been performed on realistic humanoid robots. This experimental work aimed at demonstrating the human-like expressive capability of a humanoid robot by examining whether motion and vocal content influenced the perception of its facial expressions. The first part of the experiment studied the recognition of two kinds of stimuli related to the six basic expressions (i.e. anger, disgust, fear, happiness, sadness, and surprise): static stimuli, that is, photographs, and dynamic stimuli, that is, video recordings. The second and third parts compared the same six basic expressions performed by a virtual avatar and by a physical robot under three different conditions: (1) muted facial expressions, (2) facial expressions with nonlinguistic vocalizations, and (3) facial expressions with an emotionally neutral verbal sentence. The results show that static stimuli performed by a human being and by the robot were more ambiguous than the corresponding dynamic stimuli with associated motion and vocalization.
This hypothesis was also investigated with a 3-dimensional replica of the physical robot, demonstrating that even for a virtual avatar, dynamics and vocalization improve the capability to convey emotion.

    Facial Expression Rendering in Medical Training Simulators: Current Status and Future Directions

    Recent technological advances in robotic sensing and actuation methods have prompted the development of a range of new medical training simulators with multiple feedback modalities. Learning to interpret a patient’s facial expressions during medical examinations or procedures has been one of the key focus areas in medical training. This paper reviews the facial expression rendering systems in medical training simulators that have been reported to date. Facial expression rendering approaches in other domains are also summarized so that knowledge from those works can inform the development of systems for medical training simulators. Classifications and comparisons of medical training simulators with facial expression rendering are presented, and important design features, merits and limitations are outlined. Medical educators, students and developers are identified as the three key stakeholders involved with these systems, and their considerations and needs are presented. Physical-virtual (hybrid) approaches provide multimodal feedback, render facial expressions accurately, and can simulate patients of different age, gender and ethnicity groups, which makes them more versatile than purely virtual or physical systems. The overall findings of this review and the proposed future directions are beneficial to researchers interested in initiating or developing such facial expression rendering systems in medical training simulators. This work was supported by the Robopatient project funded by the EPSRC Grant No EP/T00519X/

    Actor & Avatar: A Scientific and Artistic Catalog

    What kind of relationship do we have with artificial beings (avatars, puppets, robots, etc.)? What does it mean to mirror ourselves in them, to perform them or to play trial identity games with them? Actor & Avatar addresses these questions from artistic and scholarly angles. Contributions on the making of "technical others" and philosophical reflections on artificial alterity are flanked by neuroscientific studies on different ways of perceiving living persons and artificial counterparts. The contributors have achieved a successful artistic-scientific collaboration with extensive visual material.

    Digital Manipulation of Human Faces: Effects on Emotional Perception and Brain Activity

    The study of human face-processing has granted insight into key adaptations across various social and biological functions. However, there is an overall lack of consistency regarding digital alteration styles of human-face stimuli. To investigate this, two independent studies were conducted examining unique effects of image construction and presentation. In the first study, three primary stimulus presentation styles (color, black and white, cutout) were used across iterations of non-thatcherized/thatcherized and non-inverted/inverted presentations. Outcome measures included subjective reactions measured via ratings of perceived “grotesqueness,” and objective outcomes of N170 event-related potentials (ERPs) measured via electroencephalography. Results of the subjective measures indicated that thatcherized images were associated with an increased level of grotesque perception, regardless of overall condition variant and inversion status. A significantly larger N170 component was found in response to cutout-style images of human faces, thatcherized images, and inverted images. The results suggest that cutout image morphology may be a well-suited image presentation style when examining ERPs and facial processing of otherwise unaltered human faces. Moreover, less emphasis can be placed on decisions regarding the main condition morphology of human face stimuli as it relates to negatively valent reactions. The second study explored commonalities between thatcherized and uncanny images, aiming to establish a link between previously disparate areas of human-face processing research. Subjective reactions to stimuli were measured via participant ratings of “off-putting.” ERP data were gathered to explore whether any unique effects emerged in the N170 and N400 components.
Two main “morph continuums” of stimuli with uncanny features, provided by Eduard Zell (see Zell et al., 2015), were utilized. A novel approach of thatcherizing images along these continuums was used. Thatcherized images across both continuums were regarded as more off-putting than non-thatcherized images, indicating a robust subjective effect of thatcherization that was relatively unaffected by additional manipulation of key featural components. Conversely, brain activity results indicated no significant N170 differences between levels of shape stylization and their thatcherized counterparts. Unique effects between continuums and exploratory N400 results are discussed.

    Implications of the uncanny valley of avatars and virtual characters for human-computer interaction

    Technological innovations have made it possible to create more and more realistic figures. Such figures are often modeled on human appearance and behavior, allowing interaction with artificial systems in a natural and familiar way. In 1970, however, the Japanese roboticist Masahiro Mori observed that robots and prostheses with a very - but not perfectly - human-like appearance can elicit eerie, uncomfortable, and even repulsive feelings. While real people or stylized figures do not seem to evoke such negative feelings, human depictions with only minor imperfections fall into the "uncanny valley," as Mori put it. Today, further innovations in computer graphics have led virtual characters into the uncanny valley, and they have become the subject of a number of disciplines. For research, virtual characters created by computer graphics are particularly interesting as they are easy to manipulate and can thus significantly contribute to a better understanding of the uncanny valley and human perception. For designers and developers of virtual characters, such as those in animated movies or games, it is important to understand how appearance, human-likeness, and visual realism influence the experience and interaction of the user, and how believable and acceptable avatars and virtual characters can be created despite the uncanny valley. This work investigates these aspects and is the next step in the exploration of the uncanny valley. This dissertation presents the results of nine studies examining the effects of the uncanny valley on human perception, how it affects interaction with computing systems, which cognitive processes are involved, and which causes may be responsible for the phenomenon. Furthermore, we examine not only methods for avoiding uncanny or unpleasant effects but also the preferred characteristics of virtual faces. We bring the uncanny valley into context with related phenomena causing similar effects.
By exploring the eeriness of virtual animals, we found evidence that the uncanny valley is not related only to the dimension of human-likeness, which significantly changes our view of the phenomenon. Furthermore, using advanced hand tracking and virtual reality technologies, we discovered that the perception of an avatar is connected to other factors related to the uncanny valley that depend on avatar realism. Affinity with the virtual ego and the feeling of presence in the virtual world were also affected by gender and by deviating body structures such as a reduced number of fingers. Considering typing performance on keyboards in virtual reality, we also found that the perception of one's own avatar depends on the user's individual task proficiencies. This thesis concludes with implications that not only extend existing knowledge about virtual characters, avatars and the uncanny valley but also provide new design guidelines for human-computer interaction and virtual reality.

    A study of how the technological advancements in capturing believable facial emotion in Computer Generated (CG) characters in film has facilitated crossing the uncanny valley

    A Research Report submitted in partial fulfilment of the requirements for the Degree of Master of Arts in Digital Animation at the University of the Witwatersrand (School of Digital Arts), Johannesburg, South Africa. In recent years, the quest to capture authentic emotion convincingly in computer generated (CG) characters, in support of the exceedingly complex narrative expressions of modern cinema, has intensified. Conveying human emotion in a digital human-like character is widely accepted to be the most challenging and elusive task for even the most skilled animators. Contemporary filmmakers have increasingly looked to complex digital tools that manipulate the visual design of cinema through innovative techniques to reach levels of undetectable integration of CG characters. In assessing how modern cinema pursues the realistic integration of CG human-like characters in digital film with frenetic interest despite the risk of box office failure associated with the uncanny valley, this report focuses on the progress of advances in the technique of facial motion capture. The uncanny valley hypothesis, based on a theory by Sigmund Freud, was coined in 1970 by the Japanese robotics professor Masahiro Mori. Mori suggested that people are increasingly comfortable with robots the more human-like they appear, but only up to a point. At that turning point, when the robot becomes too human-like, it arouses feelings of repulsion. When movement is added to this equation, viewers’ sense of the uncanny is heightened when the movement is deemed to be unreal. Motion capture is the technique of mimicking and capturing realistic movement by utilising technology that translates a live actor’s performance into a digital performance.
By capturing and transferring the data collected from sensors placed on a body suit, or tracked from high-definition video, computer artists are able to drive the movement of a corresponding CG character in a 3-Dimensional (3D) programme. The attention of this study is narrowed to the progress of the techniques developed during a prolific decade for facial motion capture in particular. Regardless of the conflicting discourse surrounding the use of motion capture technology, these phenomenal improvements have allowed filmmakers to overcome that aspect of the uncanny valley associated with detecting realistic movement and facial expression. The progress of facial motion capture is investigated through the lens of selected films released during the period 2001 to 2012. The two case studies, The Curious Case of Benjamin Button (2008) and Avatar (2009), were chosen for their individual achievements and the innovative techniques with which they introduced new methods of facial capture. Digital images are said to undermine the reality status of cinematic images by challenging the foundation of long-held theories of cinematic realism. These theories, rooted in the indexical basis of photography, have proved to be the origin of contemporary viewers' notion of cinematic realism. However, the relationship between advanced digital effects and modern cinematic realism has created a perceptual complexity that warrants closer scrutiny. In addressing the paradoxical effect that photo-real cinematic realism is having on the basic comprehension of realism in film, the history of seminal claims made by recognized realist film theorists is briefly examined.

    An Investigation into the uncanny: character design, behaviour and context.

    Whilst there has been a substantial amount of research into the uncanny valley, research that contextualises a character as it would normally be viewed remains an unexplored area. Previous research often focused solely on realistic render styles, giving characters an unfair basis that tended towards the realistic and thus facilitating only one mode of animation style: realism. Furthermore, characters were not contextualised because researchers often used footage from previous productions. These characters also differed in quality, as various artists worked on different productions. This research considers characterisation as three key components: the aesthetic, the behaviour and the contextualisation. Attempts were made to develop a greater understanding of how these components contribute to the appeal of a character within the field of 3D computer animation. The research consisted of two experiments, both conducted using an online survey method. The first experiment used five different characters ranging from realistic to abstract. Each character displayed three different behaviours, and the characters were contextualised within a six-panel narrative. Data obtained from the first experiment were used to refine the second experiment, which further examined how combinations of different behaviours and the context containing a character affected the subject's perception. The second experiment used three different character types, and the characters were contextualised within a video stimulus. Findings from the first experiment indicated a strong relationship between character type and context. Interest in the various characters changed depending on adaptations to either the behaviour of the character or the contextualisation. Certain character types, based on appearance, were better suited to some contexts than others.
An abstract character was more likely to be perceived positively by the subject in a surprising context stipulated by the behaviour of the character and the form of the narrative sequence. Other characters, such as one based around an inanimate object, found a more positive reception with subjects under sad contextual constraints than under happy or surprising ones. The first experiment took into account various independent variables obtained from the subjects and aimed to draw parallels, if any existed, between these variables and the subjects' perception of a given character, be it positive or negative. However, these variables - namely gender, nationality and age - had no effect on the subjects' perception. In the second experiment, it was found that for the realistic human character to be perceived more positively, the behaviour needed to match the context. When a mismatch occurred, the subjects began to perceive the character more negatively. The cartoon character was, however, not affected by a mismatch of behaviour and context. The experiment was further expanded by comparing two different character types committing negative actions and having negative actions inflicted upon them, and examining the effect this had on the subjects' perception. It was found that a cartoon character committing a negative action was perceived positively, whilst a human character committing the same act was perceived negatively. However, when a negative action was inflicted on these same characters, subjects were more concerned for the human character than for the cartoon character. Results from both experiments confirm the idea that various characters are perceived very differently by viewers and come with predefined notions, within the viewer, of how they should behave. What is expected of one character type is not acceptable for another. Cartoon characters can get away with bizarre behaviour.
A real human character may exhibit some novel, unusual behaviour, whilst a realistic CG human character is assessed on how realistically (normally) it behaves. This research expands upon previous work in this area by offering a greater understanding of character types and emphasising the importance of contextualisation.