
    A Real-Time System for Full Body Interaction with Virtual Worlds

    Real-time video acquisition is becoming a reality with the most recent camera technology. Three-dimensional models can be reconstructed from multiple views using visual hull carving techniques. However, combining these approaches to obtain a moving 3D model from simultaneous video captures remains a technological challenge. In this paper we demonstrate a complete system architecture allowing the real-time (≥ 30 fps) acquisition and full-body reconstruction of one or several actors, which can then be integrated into a virtual environment. A volume of approximately 2 m³ is observed with (at least) four video cameras, and the video streams are processed to obtain a volumetric model of the moving actors. The reconstruction process uses a mixture of pipelined and parallel processing, with N individual PCs for N cameras and a central computer for integration, reconstruction and display. A surface description is obtained using a marching cubes algorithm. We discuss the overall architecture choices, with particular emphasis on the real-time constraint and latency issues, and demonstrate that a software synchronization of the video streams is both sufficient and efficient. The ability to reconstruct a full-body model of the actors and any additional props or elements opens the way for very natural interaction techniques using the entire body and real elements manipulated by the user, whose avatar is immersed in a virtual world.
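    The visual hull carving the abstract refers to can be illustrated with a minimal sketch. The cameras and silhouettes below are our own simplified assumptions (orthographic projections on a toy grid, not the paper's calibrated setup); the core test is the same: a voxel survives only if every camera's silhouette contains its projection.

```python
def carve(voxels, cameras, silhouettes):
    """Keep the voxels consistent with all silhouettes.

    voxels: iterable of (x, y, z) grid points.
    cameras: list of functions projecting (x, y, z) -> (u, v) pixel coords.
    silhouettes: list of sets of (u, v) pixels inside the actor's outline,
    one per camera, in the same order as `cameras`.
    """
    return [v for v in voxels
            if all(cam(*v) in sil for cam, sil in zip(cameras, silhouettes))]

# Toy example: a 3x3x3 grid seen by two orthographic cameras.
voxels = [(x, y, z) for x in range(3) for y in range(3) for z in range(3)]
cams = [lambda x, y, z: (x, y),   # front view: drop z
        lambda x, y, z: (y, z)]   # side view: drop x
square = {(0, 0), (0, 1), (1, 0), (1, 1)}
hull = carve(voxels, cams, [square, square])
```

    In the paper's architecture, each of the N PCs would compute one silhouette per frame, and the central computer would run the carving and the marching cubes surface extraction.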

    MetaSpace II: Object and full-body tracking for interaction and navigation in social VR

    MetaSpace II (MS2) is a social Virtual Reality (VR) system where multiple users can not only see and hear but also interact with each other, grasp and manipulate objects, walk around in space, and get tactile feedback. MS2 allows walking in physical space by tracking each user's skeleton in real-time, and allows users to feel by employing passive haptics, i.e., when users touch or manipulate an object in the virtual world, they simultaneously also touch or manipulate a corresponding object in the physical world. To enable these elements in VR, MS2 creates a correspondence in spatial layout and object placement by building the virtual world on top of a 3D scan of the real world. Through the association between the real and virtual world, users are able to walk freely while wearing a head-mounted device, avoid obstacles like walls and furniture, and interact with people and objects. Most current VR environments are designed for a single-user experience where interactions with virtual objects are mediated by hand-held input devices or hand gestures. Additionally, users are only shown a representation of their hands in VR, floating in front of the camera as seen from a first-person perspective. We believe that representing each user as a full-body avatar that is controlled by natural movements of the person in the real world (see Figure 1d) can greatly enhance believability and a user's sense of immersion in VR.
    Comment: 10 pages, 9 figures. Video: http://living.media.mit.edu/projects/metaspace-ii
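    Because the virtual scene is built on a 3D scan of the room, the real-to-virtual correspondence MS2 relies on reduces to one fixed rigid transform, calibrated once. A hedged sketch (the yaw-plus-translation parameterization is our simplifying assumption, not the paper's calibration procedure):

```python
import math

def make_real_to_virtual(yaw_rad, offset):
    """Return a function mapping tracked real-world positions into the
    virtual scene: a rotation about the vertical (y) axis followed by a
    translation, fixed after a one-time calibration."""
    c, s = math.cos(yaw_rad), math.sin(yaw_rad)
    def to_virtual(p):
        x, y, z = p  # y is up; yaw rotates in the horizontal plane
        return (c * x - s * z + offset[0],
                y + offset[1],
                s * x + c * z + offset[2])
    return to_virtual
```

    With such a mapping in place, touching a scanned physical chair and touching its virtual counterpart coincide, which is what makes the passive haptics work.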

    Refining personal and social presence in virtual meetings

    Virtual worlds show promise for conducting meetings and conferences without the need for physical travel. Current experience suggests the major limitation to the more widespread adoption and acceptance of virtual conferences is the failure of existing environments to provide a sense of immersion and engagement, or of ‘being there’. These limitations are largely related to the appearance and control of avatars, and to the absence of means to convey non-verbal cues of facial expression and body language. This paper reports on a study involving the use of a mass-market motion sensor (Kinect™) and the mapping of participant action in the real world to avatar behaviour in the virtual world. This is coupled with full-motion video representation of participants’ faces on their avatars to resolve both identity and facial expression issues. The outcomes of a small-group trial meeting based on this technology show a very positive reaction from participants, and the potential for further exploration of these concepts.
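    The mapping of participant action to avatar behaviour can be sketched as a simple joint retargeting step. The joint names, the uniform scale factor, and the pelvis root are our illustrative assumptions, not the study's implementation:

```python
def retarget(joints, avatar_root, scale=1.0):
    """Map sensor-space joints onto an avatar: express each tracked joint
    relative to the tracked pelvis, scale to the avatar's proportions, and
    re-root the result at the avatar's pelvis.

    joints: dict name -> (x, y, z) positions from the motion sensor.
    avatar_root: the avatar's pelvis position in the virtual world.
    """
    rx, ry, rz = joints["pelvis"]
    return {name: (avatar_root[0] + scale * (x - rx),
                   avatar_root[1] + scale * (y - ry),
                   avatar_root[2] + scale * (z - rz))
            for name, (x, y, z) in joints.items()}
```

    Each frame of Kinect skeleton data would pass through such a step before being applied to the avatar, with the face region handled separately by the full-motion video texture the abstract describes.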

    Exploring the Use of Virtual Worlds as a Scientific Research Platform: The Meta-Institute for Computational Astrophysics (MICA)

    We describe the Meta-Institute for Computational Astrophysics (MICA), the first professional scientific organization based exclusively in virtual worlds (VWs). The goals of MICA are to explore the utility of the emerging VR and VW technologies for scientific and scholarly work in general, and to facilitate and accelerate their adoption by the scientific research community. MICA itself is an experiment in academic and scientific practices enabled by immersive VR technologies. We describe the current and planned activities and research directions of MICA, and offer some thoughts as to what the future developments in this arena may be.
    Comment: 15 pages, to appear in the refereed proceedings of "Facets of Virtual Environments" (FaVE 2009), eds. F. Lehmann-Grube, J. Sablating, et al., ICST Lecture Notes Ser., Berlin: Springer Verlag (2009); version with full resolution color figures is available at http://www.mica-vw.org/wiki/index.php/Publication

    Synthetic worlds, synthetic strategies: attaining creativity in the metaverse

    This text attempts to delineate the underlying theoretical premises, and to define the output, of an immersive learning approach to the visual arts, to be implemented in online, three-dimensional synthetic worlds. Deviating from the prevalent practice of replicating physical art-studio teaching strategies within a virtual environment, the author proposes instead to apply the fundamental tenets of Roy Ascott’s “Groundcourse”, in combination with recent educational approaches such as “Transformative Learning” and “Constructionism”. Amalgamating these educational approaches with findings drawn from the fields of Metanomics, Ludology, Cyberpsychology and Presence Studies, as well as an examination of creative practices manifest in the metaverse today, this chapter proposes a learning strategy for creative enablement unique to online, three-dimensional synthetic worlds; one which focuses on “Play” as well as Role Play, virtual Assemblage, and the visual identity of the avatar.

    The benefits of using a walking interface to navigate virtual environments

    Navigation is the most common interactive task performed in three-dimensional virtual environments (VEs), but it is also a task that users often find difficult. We investigated how body-based information about the translational and rotational components of movement helped participants to perform a navigational search task (finding targets hidden inside boxes in a room-sized space). When participants physically walked around the VE while viewing it on a head-mounted display (HMD), they performed 90% of trials perfectly, comparable to participants who had performed an equivalent task in the real world during a previous study. By contrast, participants performed less than 50% of trials perfectly if they used a tethered HMD (moving by physically turning but pressing a button to translate) or a desktop display (no body-based information). This is the most complex navigational task in which a real-world level of performance has been achieved in a VE. Behavioral data indicate that both translational and rotational body-based information are required to accurately update one's position during navigation, and participants who walked tended to avoid obstacles even though collision detection was not implemented and no feedback was provided. A walking interface would bring immediate benefits to a number of VE applications.

    The design-by-adaptation approach to universal access: learning from videogame technology

    This paper proposes an alternative approach to the design of universally accessible interfaces to that provided by formal design frameworks applied ab initio to the development of new software. This approach, design-by-adaptation, involves the transfer of interface technology and/or design principles from one application domain to another, in situations where the recipient domain is similar to the host domain in terms of modelled systems, tasks and users. Using the example of interaction in 3D virtual environments, the paper explores how principles underlying the design of videogame interfaces may be applied to a broad family of visualization and analysis software which handles geographical data (virtual geographic environments, or VGEs). One of the motivations behind the current study is that VGE technology lags some way behind videogame technology in the modelling of 3D environments, and has a less-developed track record in providing the variety of interaction methods needed to undertake varied tasks in 3D virtual worlds by users with varied levels of experience. The current analysis extracted a set of interaction principles from videogames, which were used to devise a set of 3D task interfaces that have been implemented in a prototype VGE for formal evaluation.

    Emerging technologies for learning report (volume 3)
