922 research outputs found

    Prominence Driven Character Animation

    Get PDF
    This paper details the development of a fully automated system for character animation implemented in Autodesk Maya. The system uses prioritised speech events to algorithmically generate head, body, arm and leg movements alongside eye blinks, eyebrow movements and lip-synching. In addition, gaze tracking is generated automatically relative to the definition of focus objects (contextually important objects in the character's worldview). The plugin uses an animation profile to store the relevant controllers and movements for a specific character, allowing any character to run with the system. Once a profile has been created, an audio file can be loaded and animated with a single button click. The average time to animate is 2-3 minutes for 1 minute of speech, and the plugin can be used either as a first-pass system for high-quality work or as part of a batch animation workflow for larger amounts of content, as exemplified in television and online dissemination channels.
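    As a rough illustration of the idea (not the plugin's actual code), prioritised speech events could be keyed onto a head controller through Maya's Python API (maya.cmds); the event list, controller name and amplitude scaling below are illustrative assumptions:

        # Hedged sketch: map prominence events to head-nod keyframes in Maya.
        import maya.cmds as cmds

        HEAD_CTRL = "head_ctrl"  # assumed controller name from an animation profile
        events = [(12, 0.9), (40, 0.4), (66, 0.7)]  # hypothetical (frame, priority) pairs

        for frame, priority in events:
            amplitude = 8.0 * priority  # degrees of head pitch, scaled by event priority
            # Key a quick nod around the event: neutral -> down -> neutral.
            cmds.setKeyframe(HEAD_CTRL, attribute="rotateX", time=frame - 3, value=0.0)
            cmds.setKeyframe(HEAD_CTRL, attribute="rotateX", time=frame, value=amplitude)
            cmds.setKeyframe(HEAD_CTRL, attribute="rotateX", time=frame + 4, value=0.0)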

    Robust Modeling of Epistemic Mental States

    Full text link
    This work identifies and advances research challenges in analysing facial features and their temporal dynamics in relation to epistemic mental states in dyadic conversations. The epistemic states considered are Agreement, Concentration, Thoughtful, Certain, and Interest. We perform a number of statistical analyses and simulations to identify the relationship between facial features and epistemic states. Non-linear relations are found to be more prevalent, while temporal features derived from the original facial features demonstrate a strong correlation with intensity changes. We then propose a novel prediction framework that takes facial features and their non-linear relation scores as input and predicts the different epistemic states in videos. Prediction of epistemic states is boosted when the classification of emotion-change regions (rising, falling, or steady-state) is incorporated with the temporal features. The proposed predictive models predict the epistemic states with significantly improved accuracy: the correlation coefficient (CoERR) is 0.827 for Agreement, 0.901 for Concentration, 0.794 for Thoughtful, 0.854 for Certain, and 0.913 for Interest.
    Comment: Accepted for publication in Multimedia Tools and Applications, Special Issue: Socio-Affective Technologies.
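    A minimal sketch of the input construction such a framework might use (not the authors' implementation): concatenate raw features, temporal deltas, and pairwise monotonic-relation scores, then fit a regressor per epistemic state. The feature shapes, data and model choice are assumptions for illustration:

        # Hedged sketch: features + temporal dynamics + relation scores -> regressor.
        import numpy as np
        from scipy.stats import spearmanr
        from sklearn.ensemble import RandomForestRegressor

        def build_features(F):
            """F: (n_frames, n_feats) facial features for one video (assumed layout)."""
            deltas = np.diff(F, axis=0, prepend=F[:1])  # per-frame temporal dynamics
            rho, _ = spearmanr(F)                       # pairwise monotonic relation matrix
            rho = np.atleast_2d(rho)
            iu = np.triu_indices_from(rho, k=1)
            relations = np.tile(rho[iu], (len(F), 1))   # per-video relation scores
            return np.hstack([F, deltas, relations])

        # Hypothetical training data: (features, per-frame state intensity) per video.
        videos = [(np.random.rand(100, 5), np.random.rand(100)) for _ in range(10)]
        X = np.vstack([build_features(F) for F, _ in videos])
        y = np.concatenate([t for _, t in videos])

        model = RandomForestRegressor(n_estimators=100).fit(X, y)
        print("train correlation:", np.corrcoef(model.predict(X), y)[0, 1])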

    Tutor In-sight: Guiding and Visualizing Students Attention with Mixed Reality Avatar Presentation Tools

    Get PDF
    Remote conferencing systems are increasingly used to supplement or even replace in-person teaching. However, prevailing conferencing systems restrict the teacher's representation to a webcam live-stream, hamper the teacher's use of body language, and result in students' decreased sense of co-presence and participation. While Virtual Reality (VR) systems may increase student engagement, the teacher may not have the time or expertise to conduct the lecture in VR. To address this issue and bridge the requirements between students and teachers, we have developed Tutor In-sight, a Mixed Reality (MR) avatar augmented into the student's workspace based on four design requirements derived from the existing literature, namely: integrated virtual with physical space, improved teacher's co-presence through an avatar, directed attention with auto-generated body language, and a usable workflow for teachers. Two user studies were conducted from the perspectives of students and teachers to determine the advantages of Tutor In-sight in comparison to two existing conferencing systems, Zoom (video-based) and Mozilla Hubs (VR-based). The participants of both studies favoured Tutor In-sight. Among other results, this main finding indicates that Tutor In-sight satisfied the needs of both teachers and students. In addition, the participants' feedback was used to empirically determine the four main teacher requirements and the four main student requirements in order to improve the future design of MR educational tools.

    Technology-mediated distortions: a review on the biases and misperceptions in employment interviews via computer, telephone and AI

    Get PDF
    Biases in job interviews threaten the objective evaluation of applicants. Similar and different biases also exist in mediated job interviews, where the communication between applicant and interviewer passes through technological software or hardware. This review synthesises the literature investigating biases in job interviews conducted through telephone, videoconference, asynchronous videos or avatars, and also reports the perceptions applicants and interviewers had of these modalities. Overall, applicants received lower ratings in mediated interviews compared to face-to-face ones: lack of nonverbal cues, poor audio/video quality, lags and non-neutral interview locations hinder interviewers in performing objective assessments of applicants. Moreover, the appearance of avatars is another source of bias, as the characteristics of avatars merge with or override those of applicants. Regarding perceptions, interviewers and applicants expressed mainly negative views. Applicants were particularly concerned about privacy and fairness, with the latter perceived as lower for mediated interviews. Furthermore, avatars accentuate the biases of face-to-face interviews and can appear “creepy” to applicants. Finally, technological mediation presents other downsides, namely increased difficulty in the interviewer-applicant interaction and a rigid and impersonal process.
    Despite these biases, negative perceptions and downsides, technological mediation brings simpler and more accessible interviews for applicants and recruiters, along with the chance of a greater level of interview standardisation. To solve the issues of mediated interviews, researchers suggest conducting fewer interviews, pairing them with other forms of assessment, standardising interviews further, better informing applicants, and making avatars able to transmit more characteristics of their operators, such as nonverbal cues.

    Presence 2005: the eighth annual international workshop on presence, 21-23 September, 2005 University College London (Conference proceedings)

    Get PDF
    OVERVIEW (taken from the CALL FOR PAPERS) Academics and practitioners with an interest in the concept of (tele)presence are invited to submit their work for presentation at PRESENCE 2005 at University College London in London, England, September 21-23, 2005. The eighth in a series of highly successful international workshops, PRESENCE 2005 will provide an open discussion forum to share ideas regarding concepts and theories, measurement techniques, technology, and applications related to presence, the psychological state or subjective perception in which a person fails to accurately and completely acknowledge the role of technology in an experience, including the sense of 'being there' experienced by users of advanced media such as virtual reality. The concept of presence in virtual environments has been around for at least 15 years, and the earlier idea of telepresence at least since Minsky's seminal paper in 1980. Recently there has been a burst of funded research activity in this area for the first time with the European FET Presence Research initiative. What do we really know about presence and its determinants? How can presence be successfully delivered with today's technology? This conference invites papers that are based on empirical results from studies of presence and related issues and/or which contribute to the technology for the delivery of presence. Papers that make substantial advances in the theoretical understanding of presence are also welcome. The interest is not solely in virtual environments but also in mixed reality environments. Submissions will be reviewed more rigorously than in previous conferences; high-quality papers are therefore sought which make substantial contributions to the field. Approximately 20 papers will be selected for two successive special issues of the journal Presence: Teleoperators and Virtual Environments. PRESENCE 2005 takes place in London and is hosted by University College London. The conference is organized by ISPR, the International Society for Presence Research, and is supported by the European Commission's FET Presence Research Initiative through the Presencia and IST OMNIPRES projects and by University College London.

    A Review of Verbal and Non-Verbal Human-Robot Interactive Communication

    Get PDF
    In this paper, an overview of human-robot interactive communication is presented, covering verbal as well as non-verbal aspects of human-robot interaction. Following a historical introduction and motivation towards fluid human-robot communication, ten desiderata are proposed, which provide an organizational axis for both recent and future research on human-robot communication. The ten desiderata are then examined in detail, culminating in a unifying discussion and a forward-looking conclusion.

    Accessible options for deaf people in e-Learning platforms: technology solutions for sign language translation

    Get PDF
    This paper presents a study on potential technology solutions for enhancing the communication process for deaf people on e-learning platforms through translation of Sign Language (SL). Considering SL in its global scope as a spatial-visual language not limited to gestures or hand/forearm movement, but extending to non-manual markers such as facial expressions, it is necessary to ascertain whether existing technology solutions can be effective options for integrating SL into e-learning platforms. Thus, we aim to present a list of potential technology options for the recognition, translation and presentation of SL (and their potential problems) through the analysis of assistive technologies, methods and techniques, and ultimately to contribute to the development of the state of the art and ensure the digital inclusion of deaf people in e-learning platforms. The analysis shows that some promising technology solutions are under research and development for digital platforms in general, but some critical challenges must still be solved, and an effective integration of these technologies into e-learning platforms in particular is still missing.

    Nonverbal communication in virtual reality: Nodding as a social signal in virtual interactions

    Get PDF
    Nonverbal communication is an important part of human communication, including head nodding, eye gaze, proximity and body orientation. Recent research has identified specific patterns of head nodding linked to conversation, namely mimicry of head movements at a 600 ms delay and fast nodding when listening. In this paper, we implemented these head-nodding behaviour rules in virtual humans and tested whether the resulting behaviours lead to increases in trust and liking towards the virtual humans. We used Virtual Reality technology to simulate a face-to-face conversation, as VR provides a high level of immersion and social presence, very similar to face-to-face interaction. We then conducted a study with human participants, who took part in conversations with two virtual humans, rated the virtual characters' social characteristics, and completed an evaluation of their implicit trust in the virtual humans. Results showed more liking for and more trust in the virtual human whose nodding behaviour was driven by the realistic behaviour rules. This supports the psychological models of nodding and advances our ability to build realistic virtual humans.
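    The two behaviour rules reported above can be stated compactly; the sketch below assumes a simple per-frame head-pitch signal and illustrative constants for the fast-nod rate and amplitude (only the 600 ms mimicry delay comes from the text, and this is not the study's code):

        # Hedged sketch of the reported nodding rules.
        import math

        MIMIC_DELAY_S = 0.6     # mimicry delay reported in the paper
        FAST_NOD_HZ = 2.5       # assumed nod frequency while listening
        FAST_NOD_AMP_DEG = 4.0  # assumed nod amplitude while listening

        def avatar_head_pitch(t, partner_pitch, listening):
            """t: seconds; partner_pitch: function time -> partner head pitch (degrees)."""
            if listening:
                # Rule 2: fast rhythmic nodding while listening.
                return FAST_NOD_AMP_DEG * math.sin(2 * math.pi * FAST_NOD_HZ * t)
            # Rule 1: mimic the partner's head movement at a 600 ms delay.
            return partner_pitch(max(0.0, t - MIMIC_DELAY_S))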
