Cue combination for 3D location judgements
Cue combination rules have often been applied to the perception of surface shape but not to judgements of object location. Here, we used immersive virtual reality to explore the relationship between different cues to distance. Participants viewed a virtual scene and judged the change in distance of an object presented in two intervals, where the scene changed in size between intervals (by a factor of between 0.25 and 4). We measured thresholds for detecting a change in object distance when there were only 'physical' cues (stereo and motion parallax) or only 'texture-based' cues (independent of the scale of the scene) and used these to predict biases in a distance matching task. Under a range of conditions, in which the viewing distance and the position of the target relative to other objects were varied, the ratio of 'physical' to 'texture-based' thresholds was a good predictor of biases in the distance matching task. The cue combination approach, which successfully accounts for our data, relies on quite different principles from those underlying geometric reconstruction.
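The threshold-ratio prediction follows naturally from standard maximum-likelihood cue combination, in which each cue is weighted by its reliability (the inverse of its squared threshold). A minimal sketch, assuming this textbook weighting rule rather than the authors' exact implementation:

```python
def combine_cues(est_physical, thr_physical, est_texture, thr_texture):
    """Standard inverse-variance cue combination (a sketch, not the
    paper's code): each cue's distance estimate is weighted by its
    reliability, 1 / threshold**2."""
    w_phys = 1.0 / thr_physical ** 2
    w_tex = 1.0 / thr_texture ** 2
    return (w_phys * est_physical + w_tex * est_texture) / (w_phys + w_tex)

# When the scene is rescaled, texture-based cues signal an unchanged
# distance while physical cues signal the true change, so the combined
# (matched) distance is pulled toward the texture estimate in proportion
# to the threshold ratio.
print(combine_cues(est_physical=2.0, thr_physical=0.1,
                   est_texture=1.0, thr_texture=0.1))  # equal reliability -> 1.5
```

With equal thresholds the combined estimate sits midway between the two cues; as one cue's threshold grows relative to the other, the estimate is biased toward the more reliable cue, which is the pattern the matching-task biases are predicted from.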
Measuring Pedestrians’ Gap Acceptance When Interacting with Vehicles - A Human Gait Oriented Approach
A significant variable describing pedestrians' behavior when interacting with vehicles is gap acceptance: the temporal and spatial gaps pedestrians choose when crossing in front of vehicles. After reviewing approaches to measuring gap acceptance used in previous studies, this paper presents a novel approach that is suitable for use in field experiments and allows natural crossing behavior by subjects. In particular, following a detailed analysis of the forces exerted during human gait, an algorithm was developed that identifies the precise moment at which subjects start crossing, which serves as the basis for calculating gap acceptance. Pretest results show the system's stability and reliability as well as the gait algorithm's robustness in determining the correct gap acceptance value. The human-gait-oriented approach can serve as a basis for designing interaction processes between pedestrians and automated vehicles, a focus of current research efforts.
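The core idea of deriving a crossing onset from gait forces can be sketched as a threshold detector over a force signal; the function names and the simple threshold logic below are hypothetical illustrations, not the paper's actual algorithm:

```python
def crossing_onset(times, forces, threshold):
    """Hypothetical sketch: return the first time at which the measured
    gait force exceeds a threshold, taken as the moment the subject
    starts crossing. The paper's algorithm analyses the gait force
    profile in more detail."""
    for t, f in zip(times, forces):
        if f >= threshold:
            return t
    return None  # subject never started crossing

def temporal_gap(onset_time, vehicle_arrival_time):
    # Accepted temporal gap: time between crossing onset and the
    # vehicle reaching the crossing point.
    return vehicle_arrival_time - onset_time

# Example with force samples at 10 Hz and a hypothetical threshold:
t0 = crossing_onset([0.0, 0.1, 0.2, 0.3], [2.0, 3.0, 9.0, 12.0], threshold=8.0)
print(t0, temporal_gap(t0, vehicle_arrival_time=3.2))
```

The accuracy of the detected onset matters because any error in it propagates directly into the computed gap acceptance value.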
Assessing Distance Perception In Virtual And Augmented Realities With Electroencephalography
A common finding in spatial perception research is that subjects tend to underestimate distances in virtual reality compared to the real world. The degree of underestimation and the methods used to measure it vary between studies, but the trend is consistent. This study uses electroencephalography as a neuroimaging technique to examine patterns of brain activity when fixating objects in near space and far space in the real world, in virtual reality, and in augmented reality. For the augmented reality condition, a custom optical see-through augmented reality head-mounted display (HMD) was built and calibrated. A calibration method was developed to correct the geometric distortion introduced by the HMD's optical combiners. This method also calibrates a motion tracker mounted on the HMD to allow for tracking of head movements.
Taking Immersive VR Leap in Training of Landing Signal Officers
The article of record as published may be found at http://dx.doi.org/10.1109/TVCG.2016.2518098
A major training device used to train all Landing Signal Officers (LSOs) for several decades has been the Landing Signal
Officer Trainer, Device 2H111. This simulator, located in Oceana, VA, is contained within a two story tall room; it consists of several
large screens and a physical rendition of the actual instruments used by LSOs in their operational environment. The young officers
who serve in this specialty will typically encounter this system for only a short period of formal instruction (six one-hour long
sessions), leaving multiple gaps in training. While experience with 2H111 is extremely valuable for all LSO officers, the amount of
time they can spend using this training device is undeniably too short. The need to provide LSOs with an unlimited number of
training opportunities unrestricted by location and time, married with recent advancements in commercial off the shelf (COTS)
immersive technologies, provided an ideal platform to create a lightweight training solution that would fill those gaps and extend
beyond the capabilities currently offered in the 2H111 simulator. This paper details our efforts on task analysis, surveying of user
domain, mapping of 2H111 training capabilities to new prototype system to ensure its support of major training objectives of 2H111,
design and development of prototype training system, and a feasibility study that included tests of technical system performance
and informal testing with trainees at the LSO Schoolhouse. The results achieved in this effort indicate that the time for LSO training
to make the leap to immersive VR has decidedly come.
Quantifying effects of exposure to the third and first-person perspectives in virtual-reality-based training
In recent years, use of the third-person perspective (3PP) in virtual training methods has become increasingly viable. Despite growing interest in virtual reality and in the graphics underlying third-person perspective use, few studies have systematically examined the dynamics of, and differences between, the third- and first-person perspectives (1PPs). The current study was designed to quantify the differences between the effects induced by training participants in the third-person and first-person perspectives in a ball-catching task. Our results show that, for a certain trajectory of the stimulus, participants' performance after 3PP training is similar to their performance after normal-perspective training. Performance after 1PP training differs significantly from both the 3PP and the normal perspective.
Simulation Of Virtual Reality Display Characteristics: A Method For The Evaluation Of Motion Perception
Visual perception in virtual reality devices is a widely researched topic. Many newer experiments compare their results to those of older studies that may have used equipment that is now outdated; such differences in hardware can cause perceptual differences. These differences can be simulated to a degree in software, provided the capabilities of the current hardware meet or exceed those of the older hardware. I present the HMD Simulation Framework, a software package for the Unity3D engine that allows quick modification of many commonly researched HMD characteristics through the Inspector GUI built into Unity. I also describe a human-subjects experiment aimed at identifying perceptual equivalence classes between different sets of headset characteristics. Unfortunately, due to the COVID-19 pandemic, all human-subjects research was suspended for safety reasons, and I was unable to collect any data.
Perception of perspective in augmented reality head-up displays
Augmented Reality (AR) is emerging fast with a wide range of applications, including automotive AR Head-Up Displays (AR HUD). As a result, there is a growing need to understand human perception of depth in AR. Here, we discuss two user studies on depth perception, in particular the perspective cue. The first experiment compares perception of the perspective depth cue (1) in the physical world, (2) on a flat screen, and (3) on an AR HUD. Our AR HUD setup provided a two-dimensional, vertically oriented virtual image projected at a fixed distance. In each setting, participants were asked to estimate the size of a perspective angle. We found that the perception of angle sizes on the AR HUD differs from perception in the physical world, but not from a flat screen. The underestimation of the physical world's angle size compared to the AR HUD and screen setups might explain the egocentric depth underestimation phenomenon in virtual environments. In the second experiment, we compared perception of different graphical representations of angles that are relevant for practical applications. Graphical alterations of angles displayed on a screen resulted in more variation between individuals' angle size estimations. Furthermore, the majority of participants tended to underestimate the observed angle size in most conditions. Our results suggest that perspective angles on a vertically oriented, fixed-depth AR HUD display more closely mimic the perception of a screen than the perception of the 3D environment. On-screen graphical alteration does not improve the underestimation in the majority of cases.