
    Object Manipulation in Virtual Reality Under Increasing Levels of Translational Gain

    Room-scale Virtual Reality (VR) has become an affordable consumer reality, with applications ranging from entertainment to productivity. However, the limited physical space available for room-scale VR in the typical home or office environment poses a significant problem. To solve this, physical spaces can be extended by amplifying the mapping of physical to virtual movement (translational gain). Although amplified movement has been used since the earliest days of VR, little is known about how it influences reach-based interactions with virtual objects, now a standard feature of consumer VR. Consequently, this paper explores the picking and placing of virtual objects in VR for the first time, with translational gains between 1x (a one-to-one mapping of a 3.5m*3.5m virtual space to the same sized physical space) and 3x (10.5m*10.5m virtual mapped to 3.5m*3.5m physical). Results show that reaching accuracy is maintained for gains of up to 2x; however, going beyond this diminishes accuracy and increases simulator sickness and perceived workload. We suggest that gain levels of 1.5x to 1.75x can be used without compromising the usability of a VR task, significantly expanding the bounds of interactive room-scale VR.
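
    The gain mapping described above can be pictured as a per-frame scaling of the user's tracked offset from a calibration origin. The Python sketch below is a minimal illustration under that reading; the names apply_translational_gain, origin and gain are ours rather than the paper's, and only horizontal motion is amplified.

        # Minimal sketch of uniform translational gain: the horizontal
        # displacement from a calibration origin is scaled by a constant
        # factor before driving the virtual camera. Illustrative only.

        def apply_translational_gain(tracked_pos, origin, gain):
            """Map a tracked physical position to a virtual position.

            tracked_pos, origin: (x, y, z) tuples in metres, y is height.
            gain: scale factor, e.g. 1.0 (one-to-one) up to 3.0.
            Height is passed through unscaled so that reaching for
            objects still feels natural.
            """
            dx = tracked_pos[0] - origin[0]
            dz = tracked_pos[2] - origin[2]
            return (origin[0] + gain * dx,   # amplified x
                    tracked_pos[1],          # unscaled height
                    origin[2] + gain * dz)   # amplified z

        # A 2x gain turns a 3.5 m walk into 7 m of virtual travel:
        print(apply_translational_gain((3.5, 1.7, 0.0), (0.0, 1.7, 0.0), 2.0))
        # -> (7.0, 1.7, 0.0)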

    MetaSpace II: Object and full-body tracking for interaction and navigation in social VR

    MetaSpace II (MS2) is a social Virtual Reality (VR) system where multiple users can not only see and hear but also interact with each other, grasp and manipulate objects, walk around in space, and get tactile feedback. MS2 allows walking in physical space by tracking each user's skeleton in real time, and allows users to feel by employing passive haptics, i.e., when users touch or manipulate an object in the virtual world, they simultaneously touch or manipulate a corresponding object in the physical world. To enable these elements in VR, MS2 creates a correspondence in spatial layout and object placement by building the virtual world on top of a 3D scan of the real world. Through this association between the real and virtual worlds, users are able to walk freely while wearing a head-mounted device, avoid obstacles like walls and furniture, and interact with people and objects. Most current virtual reality (VR) environments are designed for a single-user experience where interactions with virtual objects are mediated by hand-held input devices or hand gestures. Additionally, users are only shown a representation of their hands in VR, floating in front of the camera as seen from a first-person perspective. We believe that representing each user as a full-body avatar controlled by the natural movements of the person in the real world (see Figure 1d) can greatly enhance believability and a user's sense of immersion in VR.
    Comment: 10 pages, 9 figures. Video: http://living.media.mit.edu/projects/metaspace-ii
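
    As a rough, assumed illustration of the passive-haptics correspondence MS2 relies on, each interactable virtual object should sit, to within some tolerance, at the pose of its scanned physical counterpart. The tolerance value and the function name is_touchable below are hypothetical, not details taken from the paper.

        # Illustrative check that a virtual object is aligned with the
        # physical object captured in the room scan, so touching one
        # means touching the other. Tolerance is an assumed value.

        TOLERANCE_M = 0.02  # assumed alignment tolerance in metres

        def is_touchable(virtual_pos, scanned_pos, tol=TOLERANCE_M):
            """True if the virtual object overlaps its physical counterpart."""
            return all(abs(v - s) <= tol for v, s in zip(virtual_pos, scanned_pos))

        # A virtual table placed on top of the scanned table:
        print(is_touchable((1.20, 0.75, -0.40), (1.21, 0.75, -0.41)))  # True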

    Human motion modeling and simulation by anatomical approach

    Instantly generating a desired, unlimited range of realistic human motion is still a great challenge in virtual human simulation. In this paper, a novel emotion-affected motion classification and an anatomical motion classification are presented, together with motion capture and parameterization methods. The framework for a novel anatomical approach to modelling human motion in an HTR (Hierarchical Translations and Rotations) file format is also described. This anatomical approach to human motion modelling has the potential to generate an unlimited range of desired human motion from a compact motion database. An architecture for the real-time generation of new motions is also proposed.
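
    To make the HTR idea concrete, the sketch below shows a minimal hierarchical joint structure in which each joint stores a local translation and rotation relative to its parent. The field names, the flat Euler-angle representation, and the example skeleton are illustrative assumptions, not the paper's file-format specification.

        # Minimal hierarchical translations-and-rotations structure: a pose
        # is composed of per-joint local transforms down a skeleton tree.
        # Field names and example values are illustrative.

        from dataclasses import dataclass, field
        from typing import List, Tuple

        @dataclass
        class Joint:
            name: str
            translation: Tuple[float, float, float]  # offset from the parent joint
            rotation: Tuple[float, float, float]     # Euler angles in degrees
            children: List["Joint"] = field(default_factory=list)

        def print_hierarchy(joint, depth=0):
            """Walk the skeleton and list each joint's local transform."""
            print("  " * depth + f"{joint.name} t={joint.translation} r={joint.rotation}")
            for child in joint.children:
                print_hierarchy(child, depth + 1)

        hips = Joint("Hips", (0.0, 1.0, 0.0), (0.0, 0.0, 0.0), [
            Joint("Spine", (0.0, 0.12, 0.0), (5.0, 0.0, 0.0), [
                Joint("LeftShoulder", (-0.18, 0.35, 0.0), (0.0, 0.0, -10.0)),
            ]),
        ])
        print_hierarchy(hips)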

    Agent Street: An Environment for Exploring Agent-Based Models in Second Life

    Urban models can be seen on a continuum between iconic and symbolic. Generally speaking, iconic models are physical versions of the real world at some scaled-down representation, while symbolic models represent the system in terms of the way it functions, replacing the physical or material system with logical and/or mathematical formulae. Traditionally, iconic and symbolic models were distinct classes of model, but with the rise of digital computing the distinction between the two is becoming blurred, with symbolic models being embedded into iconic models. However, such models tend to be single-user. This paper demonstrates how 3D symbolic models in the form of agent-based simulations can be embedded into iconic models using the multi-user virtual world of Second Life. Furthermore, the paper demonstrates Second Life's potential for social science simulation. To demonstrate this, we first introduce Second Life and provide two exemplar models, Conway's Game of Life and Schelling's Segregation Model, which highlight how symbolic models can be viewed in an iconic environment. We then present a simple pedestrian evacuation model which merges the iconic and the symbolic, and extend it to directly incorporate avatars and agents in the same environment, illustrating how 'real' participants can influence simulation outcomes. Such examples demonstrate the potential for creating highly visual, immersive, interactive agent-based models for social scientists in multi-user, real-time virtual worlds. The paper concludes with some final comments on problems with representing models in current virtual worlds and future avenues of research.
    Keywords: Agent-Based Modelling, Pedestrian Evacuation, Segregation, Virtual Worlds, Second Life
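
    Since the paper names Schelling's Segregation Model as one of its exemplar symbolic models, a toy, self-contained version of its update step is sketched below to show what such a model looks like in code. The grid, the 0.5 satisfaction threshold, and the function names are illustrative assumptions; the Second Life implementation itself is not reproduced here.

        # Toy Schelling-style segregation step on a small grid.
        # 0 = empty cell, 1 and 2 = the two agent groups.

        import random

        def unhappy(grid, r, c, threshold=0.5):
            """An agent is unhappy if too few neighbours share its group."""
            group = grid[r][c]
            same = total = 0
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    if dr == 0 and dc == 0:
                        continue
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < len(grid) and 0 <= cc < len(grid[0]) and grid[rr][cc]:
                        total += 1
                        same += grid[rr][cc] == group
            return total > 0 and same / total < threshold

        def step(grid):
            """Move every unhappy agent to a randomly chosen empty cell."""
            empties = [(r, c) for r, row in enumerate(grid)
                       for c, v in enumerate(row) if v == 0]
            for r, row in enumerate(grid):
                for c, v in enumerate(row):
                    if v and empties and unhappy(grid, r, c):
                        nr, nc = empties.pop(random.randrange(len(empties)))
                        grid[nr][nc], grid[r][c] = v, 0
                        empties.append((r, c))
            return grid

        grid = [[1, 2, 0],
                [2, 1, 1],
                [0, 2, 2]]
        print(step(grid))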

    Encoding natural movement as an agent-based system: an investigation into human pedestrian behaviour in the built environment

    Gibson's ecological theory of perception has received considerable attention within the psychology literature, as well as in computer vision and robotics. However, few have applied Gibson's approach to agent-based models of human movement, because the ecological theory requires that individuals have a vision-based mental model of the world, and for large numbers of agents this becomes extremely expensive computationally. Thus, within current pedestrian models, path evaluation is based on calibration from observed data or on sophisticated but deterministic route-choice mechanisms; there is little open-ended behavioural modelling of human-movement patterns. One solution which allows individuals rapid concurrent access to the visual information within an environment is an 'exosomatic visual architecture', where the connections between mutually visible locations within a configuration are prestored in a lookup table. Here we demonstrate that, with the aid of an exosomatic visual architecture, it is possible to develop behavioural models in which movement rules originating from Gibson's principle of affordance are utilised. We apply large numbers of agents programmed with these rules to a built-environment example and show that, by varying parameters such as destination selection, field of view, and steps taken between decision points, it is possible to generate aggregate movement levels very similar to those found in an actual building context.
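
    The lookup-table idea above lends itself to a very small sketch: visibility between locations is precomputed once, and each agent then only performs a cheap table read per decision. The tiny visibility graph, the field_of_view stand-in, and the function name next_location below are all made up for illustration; they are not the authors' implementation.

        # Precomputed 'exosomatic' visibility lookup: location -> visible locations.
        import random

        VISIBILITY = {
            "entrance": ["corridor", "atrium"],
            "corridor": ["entrance", "atrium", "stairs"],
            "atrium":   ["entrance", "corridor", "cafe"],
            "stairs":   ["corridor"],
            "cafe":     ["atrium"],
        }

        def next_location(current, field_of_view=3):
            """Choose the next destination from currently visible locations.

            Rather than planning a full route, the agent samples from what
            it can see (an affordance-style rule); slicing to field_of_view
            entries is a crude stand-in for a limited viewing cone.
            """
            visible = VISIBILITY[current][:field_of_view]
            return random.choice(visible) if visible else current

        # Walk ten decision points from the entrance and print the trace.
        loc, trace = "entrance", ["entrance"]
        for _ in range(10):
            loc = next_location(loc)
            trace.append(loc)
        print(" -> ".join(trace))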

    Natural Walking in Virtual Reality: A Review


    From presence to consciousness through virtual reality

    Immersive virtual environments can break the deep, everyday connection between where our senses tell us we are and where we are actually located and whom we are with. The concept of 'presence' refers to the phenomenon of behaving and feeling as if we are in the virtual world created by computer displays. In this article, we argue that presence is worthy of study by neuroscientists, and that it might aid the study of perception and consciousness.

    Appearance-based localization for mobile robots using digital zoom and visual compass

    This paper describes a localization system for mobile robots moving in dynamic indoor environments, which uses probabilistic integration of visual appearance and odometry information. The approach is based on a novel image matching algorithm for appearance-based place recognition that integrates digital zooming, to extend the area of application, and a visual compass. Ambiguous information used for recognizing places is resolved with multiple hypothesis tracking and a selection procedure inspired by Markov localization. This enables the system to deal with perceptual aliasing or the absence of reliable sensor data. The system has been implemented on a robot operating in an office scenario, and the robustness of the approach has been demonstrated experimentally.
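
    A hedged sketch of the probabilistic place-recognition step described above: a belief over known places is blurred by a simple motion model and then re-weighted by an appearance-matching likelihood, which is how perceptual aliasing and unreliable sensor data can be tolerated. The stay_prob constant, the place list, and the likelihood values are illustrative, not the paper's parameters.

        def update_belief(belief, likelihood, stay_prob=0.7):
            """One Markov-localization-style update over a list of places.

            belief:     prior probability of being at each place (sums to 1).
            likelihood: appearance-matching score of the current image
                        against each place's stored appearance model.
            stay_prob:  probability the robot stayed at the same place; the
                        remainder is spread uniformly (a crude motion model).
            """
            n = len(belief)
            predicted = [stay_prob * b + (1 - stay_prob) / n for b in belief]
            posterior = [p * l for p, l in zip(predicted, likelihood)]
            total = sum(posterior)
            # If every match is poor (aliasing, no reliable sensor data),
            # fall back to the predicted belief instead of dividing by ~0.
            return predicted if total < 1e-9 else [p / total for p in posterior]

        belief = [0.25, 0.25, 0.25, 0.25]                  # four office places
        belief = update_belief(belief, [0.1, 0.8, 0.3, 0.1])
        print(belief)  # probability mass shifts toward the second place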

    Locomotion in virtual reality in full space environments

    Virtual Reality is a technology that allows the user to explore and interact with a virtual environment in real time as if they were there. It is used in various fields, such as entertainment, education, and medicine, due to its immersion and ability to represent reality. Still, there are problems, such as virtual simulation sickness and a lack of realism, that make this technology less appealing. Locomotion in virtual environments is one of the main factors responsible for an immersive and enjoyable virtual reality experience. Several methods of locomotion have been proposed; however, these have flaws that end up negatively influencing the experience. This study compares natural locomotion in complete spaces with joystick locomotion and with natural locomotion in impossible spaces through three tests, in order to identify the best locomotion method in terms of immersion, realism, usability, spatial knowledge acquisition, and level of virtual simulation sickness. The results show that natural locomotion is the method that most positively influences the experience when compared to the other locomotion methods.