52,893 research outputs found
Teaching Virtual Characters to use Body Language
Non-verbal communication, or "body language", is a critical component in constructing believable virtual characters. Most often, body language is implemented by a set of ad-hoc rules. We propose a new method for authors to specify and refine their character's body-language responses. Using our method, the author watches the character acting in a situation and provides simple feedback online. The character then learns to use its body language to maximize the rewards, based on a reinforcement learning algorithm.
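The learning loop described above can be sketched as a simple bandit-style learner, where the author's online feedback serves as the reward. This is an illustrative sketch only; the gesture names, update rule, and parameters are assumptions, not the paper's actual method.

```python
import random

random.seed(0)  # for reproducibility of this sketch

class BodyLanguageLearner:
    """Epsilon-greedy learner: author feedback (+1/-1) is the reward signal."""
    def __init__(self, gestures, epsilon=0.2, alpha=0.5):
        self.q = {g: 0.0 for g in gestures}  # estimated reward per gesture
        self.epsilon = epsilon               # exploration rate
        self.alpha = alpha                   # learning rate

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(list(self.q))        # explore
        return max(self.q, key=self.q.get)            # exploit best estimate

    def feedback(self, gesture, reward):
        # Incremental update toward the author's online reward
        self.q[gesture] += self.alpha * (reward - self.q[gesture])

learner = BodyLanguageLearner(["nod", "shrug", "lean_forward"])
for _ in range(50):
    g = learner.choose()
    r = 1.0 if g == "nod" else -1.0  # stand-in for the author's live feedback
    learner.feedback(g, r)
```

Over repeated episodes the character's estimate for the rewarded gesture rises, so it is selected more often, mirroring the watch-and-give-feedback workflow the abstract describes.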
Autonomous Secondary Gaze Behaviours
In this paper we describe secondary behaviour: behaviour that is generated autonomously for an avatar. The user controls various aspects of the avatar's behaviour, but a truly expressive avatar must produce more complex behaviour than a user could specify in real time. Secondary behaviour provides some of this expressive behaviour autonomously. However, though it is produced autonomously, it must be appropriate to the actions that the user is controlling (the primary behaviour) and must correspond to what the user wants. We describe an architecture that achieves these two aims by tagging the primary behaviour with messages to be sent to the secondary behaviour, and by allowing the user to design various aspects of the secondary behaviour before starting to use the avatar. We have implemented this general architecture in a system that adds gaze behaviour to user-designed actions.
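The tagging architecture above can be sketched roughly as follows: primary actions carry message tags, attached at design time, that are dispatched to the autonomous secondary (gaze) controller when the action runs. Class names and message fields here are illustrative assumptions, not the paper's actual API.

```python
class GazeController:
    """Secondary behaviour: reacts to messages sent by primary actions."""
    def __init__(self):
        self.target = None

    def handle(self, message):
        if message["type"] == "look_at":
            self.target = message["target"]  # redirect gaze autonomously

class PrimaryAction:
    """User-controlled primary behaviour, tagged with messages at design time."""
    def __init__(self, name, tags):
        self.name = name
        self.tags = tags

    def execute(self, secondary):
        for msg in self.tags:
            secondary.handle(msg)  # notify the secondary behaviour

gaze = GazeController()
wave = PrimaryAction("wave", [{"type": "look_at", "target": "interlocutor"}])
wave.execute(gaze)
# gaze.target is now "interlocutor"
```

The point of the design is that the user only triggers "wave"; the accompanying gaze shift happens autonomously because it was tagged onto the action beforehand.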
Attack on the clones: managing player perceptions of visual variety and believability in video game crowds
Crowds of non-player characters are increasingly common in contemporary video games. It is often the case that individual models are re-used, lowering visual variety in the crowd and potentially affecting realism and believability. This paper explores a number of approaches to increase visual diversity in large game crowds, and discusses a procedural solution for generating diverse non-player character models. This is evaluated using mixed methods, including a "clone spotting" activity and measurement of impact on computational overheads, in order to present a multi-faceted and adjustable solution to increase believability and variety in video game crowds.
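A common form of such procedural variety, sketched below under the assumption of a part-recombination scheme (the paper's actual pipeline is not detailed here), is to assemble each NPC from a small library of interchangeable parts: the number of distinct looks grows multiplicatively, reducing visible clones.

```python
import random

# Illustrative part libraries (names are assumptions for this sketch)
heads = ["head_a", "head_b", "head_c"]
torsos = ["torso_a", "torso_b"]
palettes = ["skin_1", "skin_2", "skin_3", "skin_4"]

def generate_npc(rng):
    """Assemble one NPC model by sampling a part from each library."""
    return (rng.choice(heads), rng.choice(torsos), rng.choice(palettes))

# 3 heads x 2 torsos x 4 palettes = 24 distinct combinations
combinations = len(heads) * len(torsos) * len(palettes)

rng = random.Random(7)
crowd = [generate_npc(rng) for _ in range(100)]
```

Even three small libraries yield 24 distinct appearances; adding one more axis of variation (e.g. a fourth part slot) multiplies the count again, which is why this approach scales well against "clone spotting" at modest memory cost.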
Presenting in Virtual Worlds: An Architecture for a 3D Anthropomorphic Presenter
Multiparty-interaction technology is changing entertainment, education, and training. Deployed examples of such technology include embodied agents and robots that act as a museum guide, a news presenter, a teacher, a receptionist, or someone trying to sell you insurance, homes, or tickets. In all these cases, the embodied agent needs to explain and describe. This article describes the design of a 3D virtual presenter that uses different output channels (including speech and animation of posture, pointing, and involuntary movements) to present and explain. The behavior is scripted and synchronized with a 2D display containing associated text and regions (slides, drawings, and paintings) at which the presenter can point. This article is part of a special issue on interactive entertainment.
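The scripted, synchronized behavior described above can be sketched as a timed script whose entries bind speech, gesture, and pointing targets on the 2D display to a shared timeline. The script format and field names here are assumptions for illustration, not the article's actual scripting language.

```python
# One entry per synchronized event: time, speech, body animation, pointing target
script = [
    {"t": 0.0, "speech": "Welcome.",             "gesture": "idle",  "point_at": None},
    {"t": 2.5, "speech": "Consider this chart.", "gesture": "point", "point_at": "slide:figure1"},
    {"t": 6.0, "speech": "Thank you.",           "gesture": "nod",   "point_at": None},
]

def events_until(script, t):
    """Return all script events scheduled at or before time t, in order."""
    return [e for e in script if e["t"] <= t]

# At t = 3.0 the presenter has spoken twice and is pointing at figure 1
fired = events_until(script, 3.0)
```

Keeping all channels on one timeline is what lets the presenter's pointing gesture land on the correct slide region at the same moment as the matching utterance.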