    A framework for realistic 3D tele-immersion

    Meeting, socializing and conversing online with a group of people using teleconferencing systems is still quite different from the experience of meeting face to face. We are abruptly aware that we are online and that the people we are engaging with are not in close proximity, analogous to how talking on the telephone does not replicate the experience of talking in person. Several causes for these differences have been identified, and we propose innovative solutions to these hurdles in an attempt to provide a more realistic, believable and engaging online conversational experience. We present REVERIE, a distributed and scalable framework that provides a balanced mix of these solutions. Applications built on top of the REVERIE framework will be able to provide interactive, immersive, photo-realistic experiences to a multitude of users, experiences that will feel much closer to face-to-face meetings than what conventional teleconferencing systems offer.

    Natural User Interfaces for Virtual Character Full Body and Facial Animation in Immersive Virtual Worlds

    In recent years, networked virtual environments have steadily grown to become a frontier in social computing. Such virtual cyberspaces are usually accessed by multiple users through their 3D avatars. Recent scientific activity has resulted in the release of both hardware and software components that enable users at home to interact with their virtual persona through natural body and facial activity performance. Based on 3D computer graphics methods and vision-based motion tracking algorithms, these techniques aspire to reinforce the sense of autonomy and telepresence within the virtual world. In this paper we present two distinct frameworks for avatar animation driven by the user's natural motion: full-body avatar control using a Kinect sensor via a simple, networked skeletal joint retargeting pipeline, and an intuitive 3D facial reconstruction pipeline for rendering highly realistic facial puppets of the user. Furthermore, we present a common networked architecture that enables multiple remote clients to capture and render any number of 3D animated characters within a shared virtual environment.
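    A minimal sketch may help illustrate the skeletal joint retargeting idea named above. This is a toy under assumed joint names and a hypothetical parent table, not the paper's actual pipeline: it keeps each captured bone's direction but rescales it to the avatar's own bone length, which is the essence of transferring a Kinect pose onto an avatar with different proportions.

```python
# Toy skeletal retargeting sketch (hypothetical joint layout, not the paper's API).
import numpy as np

# Hypothetical parent table for a 5-joint toy skeleton.
PARENTS = {"spine": "hip", "neck": "spine", "head": "neck", "shoulder": "spine"}

def retarget(captured_pos, avatar_bone_lengths, root="hip"):
    """Transfer a captured pose onto an avatar with different proportions.

    captured_pos: dict joint -> 3D position from the sensor (np.ndarray).
    avatar_bone_lengths: dict joint -> length of the bone ending at that joint.
    Returns dict joint -> retargeted 3D position for the avatar.
    """
    out = {root: captured_pos[root]}
    for joint, parent in PARENTS.items():  # parents listed before children
        direction = captured_pos[joint] - captured_pos[parent]
        norm = np.linalg.norm(direction)
        if norm < 1e-8:  # degenerate capture frame: fall back to an upright bone
            direction, norm = np.array([0.0, 1.0, 0.0]), 1.0
        # Keep the captured bone direction, but use the avatar's own bone
        # length so the avatar's proportions are preserved.
        out[joint] = out[parent] + (direction / norm) * avatar_bone_lengths[joint]
    return out
```

    In a networked setting like the one described, each client would only need to transmit these per-joint values per frame, which keeps bandwidth requirements modest.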

    Autonomous agents and avatars in REVERIE’s virtual environment

    In this paper, we describe the realization of autonomous agents and avatars in REVERIE's web-based social collaborative virtual environment, which supports natural, human-like behavior, physical interaction and engagement. Represented by avatars, users feel immersed in this virtual world, in which they can meet and share experiences as in real life. Like the avatars, the autonomous agents that act in this world are capable of demonstrating human-like non-verbal behavior and of facilitating social interaction. We describe how the reasoning components of the REVERIE system connect and cooperatively control autonomous agents and avatars representing a user.
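    As a rough illustration of the cooperative control the abstract describes, the sketch below drives agents and user-controlled avatars through a single controller interface, so a reasoning component and live user input are interchangeable at animation time. All class and method names here are hypothetical, not the REVERIE API.

```python
# Hypothetical shared-control sketch: agents and avatars behind one interface.
from abc import ABC, abstractmethod

class Controller(ABC):
    @abstractmethod
    def next_action(self, world_state: dict) -> str: ...

class ReasoningController(Controller):
    """Autonomous agent: derives a non-verbal behavior from world state."""
    def next_action(self, world_state):
        return "nod" if world_state.get("user_speaking") else "idle_gaze"

class UserController(Controller):
    """Avatar: relays the tracked behavior of a real user."""
    def __init__(self, input_queue):
        self.input_queue = input_queue
    def next_action(self, world_state):
        return self.input_queue.pop(0) if self.input_queue else "idle"

def tick(characters, world_state):
    """One animation step: agents and avatars are driven the same way."""
    return {name: c.next_action(world_state) for name, c in characters.items()}

print(tick({"agent_1": ReasoningController(),
            "avatar_alice": UserController(["wave"])},
           {"user_speaking": True}))   # {'agent_1': 'nod', 'avatar_alice': 'wave'}
```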

    SPA: Sparse Photorealistic Animation using a single RGB-D camera

    Photorealistic animation is a desirable technique for computer games and movie production. We propose a new method to synthesize plausible videos of human actors performing new motions using a single cheap RGB-D camera. A small database is captured in an ordinary office environment, and this capture happens only once, even when synthesizing many different motions. We propose a markerless performance capture method using sparse deformation to obtain the geometry and pose of the actor at each time instant in the database. We then synthesize an animation video of the actor performing the new, user-defined motion. To be less sensitive to noise and outliers, we propose an adaptive model-guided texture synthesis method based on weighted low-rank matrix completion, which enables us to easily create photorealistic animation videos with motions that differ from those in the database. Experimental results on a public dataset and our own captured dataset verify the effectiveness of the proposed method.
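    The weighted low-rank matrix completion at the heart of that texture synthesis step can be sketched with a standard singular-value soft-thresholding iteration. This is a generic illustration under assumed inputs, not the SPA implementation: per-entry weights let noisy or outlier pixels be down-weighted, which is where the robustness comes from.

```python
# Generic weighted low-rank completion sketch (not the SPA implementation).
import numpy as np

def weighted_lowrank_complete(M, W, tau=1.0, n_iters=100):
    """Fill in a partially observed matrix M.

    M: observed matrix (unobserved entries may hold anything).
    W: per-entry weights in [0, 1]; 0 marks a missing entry, small values
       down-weight noisy or outlier observations.
    tau: singular-value soft threshold controlling the rank of the result.
    """
    M = np.where(W > 0, M, 0.0)   # zero-fill unobserved entries
    X = M.copy()
    for _ in range(n_iters):
        # Soft-threshold the singular values to push X toward low rank.
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X_low = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
        # Blend: trusted observations pull X back toward M, per their weight.
        X = W * M + (1.0 - W) * X_low
    return X
```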

    MetaSpace II: Object and full-body tracking for interaction and navigation in social VR

    MetaSpace II (MS2) is a social Virtual Reality (VR) system in which multiple users can not only see and hear but also interact with each other, grasp and manipulate objects, walk around in space, and get tactile feedback. MS2 allows walking in physical space by tracking each user's skeleton in real time, and lets users feel objects through passive haptics: when users touch or manipulate an object in the virtual world, they simultaneously touch or manipulate a corresponding object in the physical world. To enable these elements in VR, MS2 creates a correspondence in spatial layout and object placement by building the virtual world on top of a 3D scan of the real world. Through this association between the real and virtual worlds, users are able to walk freely while wearing a head-mounted device, avoid obstacles like walls and furniture, and interact with people and objects. Most current VR environments are designed for a single-user experience in which interactions with virtual objects are mediated by hand-held input devices or hand gestures, and users are only shown a representation of their hands floating in front of the camera as seen from a first-person perspective. We believe that representing each user as a full-body avatar controlled by the natural movements of the person in the real world (see Figure 1d) can greatly enhance believability and a user's sense of immersion in VR.
    10 pages, 9 figures. Video: http://living.media.mit.edu/projects/metaspace-ii
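    At its simplest, the real-to-virtual correspondence MS2 builds on reduces to one rigid calibration transform from tracker space into the coordinate frame of the scanned room. The sketch below assumes such a transform has already been recovered; the matrix values are hypothetical.

```python
# Hypothetical tracker-to-virtual-world calibration sketch.
import numpy as np

# Assumed 4x4 rigid transform, e.g. from aligning the tracker's floor plane
# with the 3D scan of the room; the numbers here are made up.
TRACKER_TO_WORLD = np.array([
    [1.0, 0.0, 0.0,  2.5],   # translate 2.5 m along x
    [0.0, 1.0, 0.0,  0.0],
    [0.0, 0.0, 1.0, -1.2],   # translate -1.2 m along z
    [0.0, 0.0, 0.0,  1.0],
])

def to_virtual(joints):
    """Map tracked joints (N x 3, tracker space) into virtual-world space."""
    homogeneous = np.hstack([joints, np.ones((joints.shape[0], 1))])
    return (homogeneous @ TRACKER_TO_WORLD.T)[:, :3]

print(to_virtual(np.array([[0.0, 1.0, 0.5]])))   # -> [[ 2.5  1.  -0.7]]
```

    Applying the same transform to every tracked skeleton keeps avatars, walls and furniture consistently aligned, which is what lets users walk around and avoid obstacles while wearing a head-mounted display.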

    Developing a Workflow for Cross-platform 3D Apps using Game Engines

    Cross-platform development is not a new approach. However, considering how common it is for developers to release an application exclusively for a single platform, the use of cross-platform development is noticeably low. In this master's thesis, aspects of developing a cross-platform 3D application are examined and discussed. A comparison between different motion capture systems used for character animation is presented, along with a pipeline for creating a character to be used in a mobile application. The thesis also provides guidelines and recommendations for independent game developers.