Natural Walking in Virtual Reality: A Review
Recent technological developments have finally brought virtual reality (VR) out of the laboratory and into the hands of developers and consumers. However, a number of challenges remain. Virtual travel is one of the most common and universal tasks performed inside virtual environments, yet enabling users to navigate virtual environments is not a trivial challenge—especially if the user is walking. In this article, we initially provide an overview of the numerous virtual travel techniques that were proposed prior to the commercialization of VR. Then we turn to the mode of travel that is the most difficult to facilitate, that is, walking. The challenge of providing users with natural walking experiences in VR can be divided into two separate, albeit related, challenges: (1) enabling unconstrained walking in virtual worlds that are larger than the tracked physical space and (2) providing users with appropriate multisensory stimuli in response to their interaction with the virtual environment. In regard to the first challenge, we present walking techniques falling into three general categories: repositioning systems, locomotion based on proxy gestures, and redirected walking. With respect to multimodal stimuli, we focus on how to provide three types of information: external sensory information (visual, auditory, and cutaneous), internal sensory information (vestibular and kinesthetic/proprioceptive), and efferent information. Finally, we discuss how the different categories of walking techniques compare and discuss the challenges still facing the research community.
Presenting in Virtual Worlds: An Architecture for a 3D Anthropomorphic Presenter
Multiparty-interaction technology is changing entertainment, education, and training. Deployed examples of such technology include embodied agents and robots that act as a museum guide, a news presenter, a teacher, a receptionist, or someone trying to sell you insurance, homes, or tickets. In all these cases, the embodied agent needs to explain and describe. This article describes the design of a 3D virtual presenter that uses different output channels (including speech and animation of posture, pointing, and involuntary movements) to present and explain. The behavior is scripted and synchronized with a 2D display containing associated text and regions (slides, drawings, and paintings) at which the presenter can point. This article is part of a special issue on interactive entertainment.