
    On combining the facial movements of a talking head

    We present work on Obie, an embodied conversational agent framework. An embodied conversational agent, or talking head, consists of three main components. The graphical component consists of a face model and a facial muscle model. Alongside the graphical component, we have implemented an emotion model and a mapping from emotions to facial expressions. The animation component of the framework focuses on the temporal combination of different facial movements. In this paper we propose a scheme for combining facial movements on a 3D talking head.
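
    The abstract above describes combining facial movements that overlap in time. A minimal sketch of one way such a combination could work, assuming a face driven by per-muscle activation values in [0, 1] (the paper's actual face and muscle models are not specified here, and the muscle names and movement tuples below are invented for illustration):

```python
def blend_movements(movements, t):
    """Combine several timed facial movements at time t by taking, for each
    muscle, the maximum weighted activation among the currently active movements.

    Each movement is (start, duration, {muscle: activation}, weight).
    """
    combined = {}
    for start, duration, activations, weight in movements:
        if start <= t < start + duration:
            for muscle, value in activations.items():
                combined[muscle] = max(combined.get(muscle, 0.0), weight * value)
    return combined

# A smile overlapping with a speech viseme: both movements are active at
# t = 0.6, so their per-muscle activations must be merged.
smile  = (0.0, 2.0, {"zygomatic_major": 0.8, "orbicularis_oculi": 0.3}, 1.0)
viseme = (0.5, 0.4, {"orbicularis_oris": 0.9}, 1.0)
print(blend_movements([smile, viseme], 0.6))
```

Taking the weighted maximum is only one possible conflict-resolution rule; additive or prioritized blending are equally plausible choices for a scheme like the one the paper proposes.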

    Real Time Animation of Virtual Humans: A Trade-off Between Naturalness and Control

    Virtual humans are employed in many interactive applications using 3D virtual environments, including (serious) games. The motion of such virtual humans should look realistic (or ‘natural’) and allow interaction with the surroundings and other (virtual) humans. Current animation techniques differ in the trade-off they offer between motion naturalness and the control that can be exerted over the motion. We show mechanisms to parametrize, combine (on different body parts) and concatenate motions generated by different animation techniques. We discuss several aspects of motion naturalness and show how it can be evaluated. We conclude by showing the promise of combining different animation paradigms to enhance both naturalness and control.
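
    The mechanisms named in the abstract above — combining motions on different body parts and concatenating clips — could be sketched as follows. The pose representation (one float per joint) and the blending rules are assumptions for illustration; the actual techniques in the paper are richer:

```python
def combine_by_body_part(poses_by_technique, assignment):
    """Build one pose by taking each joint from the animation technique
    assigned to it, e.g. procedural arms layered over motion-captured legs."""
    return {joint: poses_by_technique[tech][joint]
            for joint, tech in assignment.items()}

def crossfade(pose_a, pose_b, alpha):
    """Concatenate two clips by linearly blending their overlapping poses;
    alpha runs from 0 (all pose_a) to 1 (all pose_b)."""
    return {j: (1 - alpha) * pose_a[j] + alpha * pose_b[j] for j in pose_a}

mocap      = {"hip": 10.0, "arm": 20.0}   # natural but hard to control
procedural = {"hip": 12.0, "arm": 40.0}   # controllable but less natural
pose = combine_by_body_part({"mocap": mocap, "proc": procedural},
                            {"hip": "mocap", "arm": "proc"})
print(pose)
print(crossfade(mocap, procedural, 0.5))
```

This per-joint split is what lets a system trade naturalness against control: each body part can be driven by whichever technique offers the better end of the trade-off for that part.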

    A Practical and Configurable Lip Sync Method for Games

    Multimodal animation control

    Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2003. Includes bibliographical references (leaf 44). In this thesis, we present a multimodal animation control system. Our approach is based on the human-centric computing model proposed by Project Oxygen at the MIT Laboratory for Computer Science. Our system allows the user to create and control animation in real time using a speech interface developed with SpeechBuilder. The user can also fall back to traditional input modes should the speech interface fail. We assume that the user has no prior knowledge of or experience in animation, yet we enable them to create interesting and meaningful animation naturally and fluently. We argue that our system can be used in applications ranging from PowerPoint presentations to simulations to children's storytelling tools. By Hana Kim. M.Eng.

    Criminalizing songs and symbols in Scottish football: how anti-sectarian legislation has created a new ‘sectarian’ divide in Scotland

    Since the 1990s, the regulation of football fans has increasingly shifted from the policing of actions to the policing of words. With this in mind, this article looks at the impact of the anti-sectarian ‘industry’ in Scotland. In particular, it examines the impact that legislation criminalizing football fans’ songs and chants has had on Glasgow Celtic, and especially Glasgow Rangers, supporters. The article is based on participatory action research with football supporters in Glasgow who were opposing the Offensive Behaviour at Football Bill in 2011. Through this work, two issues became necessary to address: first, the impact of the anti-sectarian ‘industry’ in Scotland, which has grown precisely at a time when sectarianism appears to be declining; and second, the emergence of a new tension, divide or form of intolerance amongst fans (particularly Glasgow Rangers fans) that has been created by this anti-sectarian industry.

    commanimation: Creating and managing animations via speech

    A speech-controlled animation system is both a useful application program and a laboratory in which to investigate context-aware applications and error control. The user need not have prior knowledge of or experience in animation, yet is able to create interesting and meaningful animation naturally and fluently. The system can be used in applications ranging from PowerPoint presentations to simulations to children’s storytelling tools. Singapore-MIT Alliance (SMA).
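
    The core of a speech-controlled animation system like the one described above is a mapping from recognized utterances to animation actions. A hypothetical sketch, with an invented command grammar and action names (the actual commanimation grammar is not given in the abstract); returning None on a failed match is one way to support the fallback to traditional input modes mentioned in the related thesis:

```python
import re

# Invented example grammar: each pattern maps to an animation action name.
COMMANDS = [
    (re.compile(r"move (?P<obj>\w+) (?P<dir>left|right|up|down)"), "translate"),
    (re.compile(r"spin (?P<obj>\w+)"), "rotate"),
]

def interpret(utterance):
    """Return (action, slots) for a recognized utterance, or None so the
    caller can fall back to mouse/keyboard input on recognition failure."""
    for pattern, action in COMMANDS:
        m = pattern.fullmatch(utterance.lower())
        if m:
            return action, m.groupdict()
    return None

print(interpret("move ball left"))
print(interpret("gibberish"))
```

A real system would sit behind a speech recognizer (the thesis uses SpeechBuilder) rather than matching raw strings, but the command-to-action dispatch has the same shape.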

    A study on virtual reality and developing the experience in a gaming simulation

    A thesis submitted to the University of Bedfordshire in partial fulfilment of the requirements for the degree of Masters by Research. Virtual Reality (VR) is an experience in which a person is given the freedom to view and move within a virtual world [1]. The experience is not constrained to limited controls; here, it is triggered interactively by the user’s physical movement [1] [2]. The user thus feels as if they are seeing the real world; in addition, 3D technologies allow the viewer to experience the volume of an object and its projection in the virtual world [1]. The human brain generates depth when each eye receives an image from its own point of view. Using the university’s facilities to learn and develop the project, some of the core parts of the research have been accomplished, such as designing the VR motion controller and the VR HMD (Head-Mounted Display) using an open-source microcontroller. The VR HMD together with the VR controller gives an immersive feel and a complete VR system [2]. The motive was to demonstrate a working model that creates a VR experience on a mobile platform. In particular, the VR system uses a micro-electro-mechanical system (MEMS) to track motion without a tracking camera. The VR experience has also been developed into a gaming simulation. To produce this, Maya, Unity, a Motion Analysis System, MotionBuilder, Arduino and programming have been used. The lessons and code taken or adapted from [33] [44] [25] and [45] have been studied and implemented.