634 research outputs found
Empirical Comparisons of Virtual Environment Displays
There are many different visual display devices used in virtual environment (VE) systems. These displays vary along many dimensions, such as resolution, field of view, level of immersion, quality of stereo, and so on. In general, no guidelines exist to choose an appropriate display for a particular VE application. Our goal in this work is to develop such guidelines on the basis of empirical results. We present two initial experiments comparing head-mounted displays with a workbench display and a four-sided spatially immersive display. The results indicate that the physical characteristics of the displays, users' prior experiences, and even the order in which the displays are presented can have significant effects on performance.
Effects of Active Exploration and Passive Observation on Spatial Learning in a CAVE
This experiment was a modification of Paul N. Wilson's 1999 study entitled "Active Exploration of a Virtual Environment Does Not Promote Orientation or Memory for Objects." It was hoped that changing the immersion level from a standard desktop monitor to a more immersive CAVE environment would change the results of this experiment. All subjects explored a three-dimensional virtual environment in a CAVE. Active subjects were given controls to choose their own path and explore the environment. Passive subjects watched a playback tour through the virtual environment; a unique active subject determined the tour for each passive subject. Each subject was asked to remember the objects they saw, their locations, and the floor plan of the environment. Afterward, subjects were asked to indicate the direction to another location that was not visible from the current location. Other object memory tests required recalling the location of each object and indicating it on a plan view of the environment. Similar to Wilson's experiment, this experiment yielded no significant indication that active exploration or passive observation changes the level of spatial learning.
SARSCEST (human factors)
People interact with the processes and products of contemporary technology. Individuals are affected by these in various ways, and individuals shape them. Such interactions come under the label 'human factors'. To expand the understanding of those to whom the term is relatively unfamiliar, its domain includes both an applied science and applications of knowledge. It means both research and development, with implications of research both for basic science and for development. It encompasses not only design and testing but also training and personnel requirements, even though some unwisely try to split these apart both by name and institutionally. The territory includes more than performance at work, though concentration on that aspect, epitomized in the derivation of the term ergonomics, has overshadowed human factors interest in interactions between technology and the home, health, safety, consumers, children and later life, the handicapped, sports and recreation, education, and travel. Two aspects of technology considered most significant for work performance, systems and automation, and several approaches to these, are discussed.
The Role of Contextual Info‐Marks in Navigating a Virtual Rural Environment
Navigation is a task performed in both large- and small-scale environments. Landmarks within an environment are of great benefit to these navigational tasks, but in large rural environments such landmarks may be sparse. It has been shown that landmarks need not be purely visual and that a change in context for a feature can make it become a landmark against its surroundings (such as being provided with significant meaning). Such meaning could be added through personal experience or by informing the observer via some form of communication. To investigate the effects of providing such contextual information on navigational performance, experiments were conducted in a large rural virtual environment where the delivery method of the information was varied between onscreen and PDA display. Users were instructed to perform a route-tracing navigation task. In some instances users were presented with textual information about specific locations within the environment, which appeared when they were in the vicinity of the location. Both quantitative and qualitative data were collected and analyzed, with results indicating that although actual performance in the task was not significantly improved, users felt that their performance was better and the task easier when they were presented with the contextual information.
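The vicinity-triggered delivery of info-marks described above can be sketched as a simple range query against the user's position. The function name, data layout, and 50 m trigger radius below are illustrative assumptions, not details taken from the study:

```python
import math

def infomarks_in_range(user_pos, infomarks, radius=50.0):
    """Return the text of every info-mark whose location lies within
    `radius` metres of the user's position on the 2D ground plane."""
    visible = []
    for text, (x, y) in infomarks:
        # Euclidean distance from the user to the info-mark's anchor point
        if math.hypot(x - user_pos[0], y - user_pos[1]) <= radius:
            visible.append(text)
    return visible

marks = [("Old mill: built 1862", (10.0, 0.0)),
         ("Farmhouse: site of the 1920 fire", (200.0, 200.0))]
nearby = infomarks_in_range((0.0, 0.0), marks)
```

In a real system this check would run once per frame (or on a timer), pushing newly in-range texts to the onscreen or PDA display and hiding them again once the user moves away.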
Augmenting the Spatial Perception Capabilities of Users Who Are Blind
People who are blind face a series of challenges and limitations resulting from their inability to see, forcing them either to seek the assistance of a sighted individual or to work around the challenge by way of an inefficient adaptation (e.g. following the walls in a room in order to reach a door rather than walking in a straight line to the door). These challenges are directly related to blind users' lack of the spatial perception capabilities normally provided by the human vision system. In order to overcome these spatial perception related challenges, modern technologies can be used to convey spatial perception data through sensory substitution interfaces. This work is the culmination of several projects which address varying spatial perception problems for blind users. First we consider the development of non-visual natural user interfaces for interacting with large displays. This work explores the haptic interaction space in order to find useful and efficient haptic encodings for the spatial layout of items on large displays. Multiple interaction techniques are presented which build on prior research (Folmer et al. 2012), and the efficiency and usability of the most efficient of these encodings is evaluated with blind children. Next we evaluate the use of wearable technology in aiding navigation of blind individuals through large open spaces lacking the tactile landmarks used during traditional white cane navigation. We explore the design of a computer vision application with an unobtrusive aural interface to minimize veering of the user while crossing a large open space. Together, these projects represent an exploration into the use of modern technology in augmenting the spatial perception capabilities of blind users.
Automatic Speed Control For Navigation in 3D Virtual Environment
As technology progresses, the scale and complexity of 3D virtual environments can also increase proportionally. This leads to multi-scale virtual environments, which are environments that contain groups of objects with extremely unequal levels of scale. Ideally, the user should be able to navigate such environments efficiently and robustly. Yet most previous methods to automatically control the speed of navigation do not generalize well to environments with widely varying scales. I present an improved method to automatically control the navigation speed of the user in 3D virtual environments. The main benefit of my approach is that it automatically adapts the navigation speed in multi-scale environments in a manner that enables efficient navigation with maximum freedom, while still avoiding collisions. The results of a usability test show a significant reduction in the completion time for a multi-scale navigation task.
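The core idea of scale-adaptive speed control can be sketched as scaling travel speed with the distance to the nearest surrounding geometry, so the user covers open space quickly but slows automatically near detail. This is a minimal illustration of the general technique, not the author's actual method; the function, parameter names, and clamping bounds are assumptions:

```python
def navigation_speed(distance_to_nearest_surface, base_speed=1.0,
                     min_speed=0.01, max_speed=100.0):
    """Scale navigation speed with the distance (in scene units) to the
    nearest geometry, so speed adapts across widely varying scales.

    Clamping keeps the user from stalling completely inside tight
    detail (min_speed) or overshooting in vast open space (max_speed).
    """
    speed = base_speed * distance_to_nearest_surface
    return max(min_speed, min(max_speed, speed))
```

Each frame, the distance would typically come from a ray cast or nearest-point query against the scene; multiplying the clamped speed by the frame time then yields the per-frame displacement, which naturally shrinks as the user approaches objects and so also helps avoid collisions.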
The Effectiveness of Augmented Reality as a Facilitator of Information Acquisition in Aviation Maintenance Applications
Until recently, in the field of Augmented Reality (AR), little research attention had been paid to the cognitive benefits of this emerging technology. AR, the synthesis of computer images and text in the real world, affords a supplement to normal information acquisition that has yet to be fully explored and exploited. AR achieves a smoother and more seamless interface by complementing human cognitive networks and aiding information integration through multimodal sensory elaboration (visual, verbal, proprioceptive, and tactile memory) while the user is performing real-world tasks. AR also incorporates visuo-spatial ability, which involves the representation of spatial information in memory. The use of this type of information is an extremely powerful form of elaboration. This study examined four learning paradigms: print (printed material) mode, observe (video tape) mode, interact (text annotations activated by mouse interaction) mode, and select (AR) mode. The results of the experiment indicated that the select (AR) mode resulted in better learning and recall when compared to the other three conventional learning modes.
Testing Navigation in Real Space: Contributions to Understanding the Physiology and Pathology of Human Navigation Control
Successful navigation relies on the flexible and appropriate use of metric representations of space or topological knowledge of the environment. Spatial dimensions (2D vs. 3D), spatial scales (vista-scale vs. large-scale environments) and the abundance of visual landmarks critically affect navigation performance and behavior in healthy human subjects. Virtual reality (VR)-based navigation paradigms in stationary position have given insight into the major navigational strategies, namely egocentric (body-centered) and allocentric (world-centered), and the cerebral control of navigation. However, VR approaches are biased towards optic flow and visual landmark processing. This major limitation can be overcome to some extent by increasingly immersive and realistic VR set-ups (including large-screen projections, eye tracking and use of head-mounted camera systems). However, highly immersive VR settings are difficult to apply, particularly to older subjects and patients with neurological disorders, because of cybersickness and difficulties with learning and conducting the tasks. Therefore, a need exists for the development of novel spatial tasks in real space, which allow a synchronous analysis of navigational behavior, strategy, visual exploration and navigation-induced brain activation patterns. This review summarizes recent findings from real space navigation studies in healthy subjects and patients with different cognitive and sensory neurological disorders. Advantages and limitations of real space navigation testing and different VR-based navigation paradigms are discussed in view of potential future applications in clinical neurology.
EXPLORING THE ABILITY TO EMPLOY VIRTUAL 3D ENTITIES OUTDOORS AT RANGES BEYOND 20 METERS
The Army is procuring the Integrated Visual Augmentation System (IVAS) to enable enhanced night vision, planning, and training capability. One known limitation of IVAS is its limited ability to portray virtual entities at far ranges outdoors, owing to light wash-out, positioning accuracy, and dynamic occlusion. The primary goal of this research was to evaluate fixed three-dimensional (3D) visualizations to support outdoor training for fire teams through squads, requiring target visualizations for 3D non-player characters or vehicles at ranges up to 300 m. Tools employed to achieve outdoor visualizations included GPS locational data for virtual entity placement, and sensors to adjust device light levels. This study was conducted with 20 military test subjects in three scenarios at the Naval Postgraduate School using a HoloLens 2. Outdoor location considerations included shadows, background clutter, cars blocking the field of view, and the sun's positioning. Users provided feedback on identifying the type of object and on the difficulty of finding the object. The results indicate that GPS placement alone aided identification only for objects up to 100 m. Animation had a statistically insignificant effect on identification of objects. Employment of software to adjust the light levels of the virtual objects aided in identification of objects at 200 m. This research develops a clearer understanding of the requirements to enable the employment of mixed reality in outdoor training. Lieutenant Colonel, United States Army. Approved for public release. Distribution is unlimited.
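The sensor-driven light-level adjustment mentioned above can be sketched as a mapping from an ambient-light reading to the rendered brightness of a virtual object, counteracting wash-out on an optical see-through display in bright sunlight. The linear model, function name, and constants below are illustrative assumptions, not the study's implementation:

```python
def object_brightness(ambient_lux, min_brightness=0.2, max_lux=10000.0):
    """Map an ambient-light sensor reading (lux) to a rendered
    brightness factor in [min_brightness, 1.0]: the brighter the
    outdoor scene, the brighter the hologram must be drawn to
    remain visible against wash-out."""
    frac = min(max(ambient_lux, 0.0) / max_lux, 1.0)
    return min_brightness + (1.0 - min_brightness) * frac
```

A real system would sample the device's ambient-light sensor each frame, smooth the readings, and feed the resulting factor into the material or shader brightness of the virtual entities.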
“Low road” to rehabilitation: A perspective on subliminal sensory neuroprosthetics
Fear can propagate in parallel through both cortical and subcortical pathways. It can habitually instigate memory consolidation and might allow internal simulation of movements independent of the cortical structures. This perspective suggests delivery of subliminal, aversive and kinematic audiovisual stimuli via neuroprosthetics in patients with neocortical dysfunctions. We suggest possible scenarios by which these stimuli might bypass damaged neocortical structures and possibly assist in motor relearning. Anticipated neurophysiological mechanisms and methodological scenarios are discussed in this perspective. This approach introduces novel perspectives into neuropsychology as to how subcortical pathways might be used to induce motor relearning. © 2018 Ghai et al.
- …