503 research outputs found

    Visual discomfort whilst viewing 3D stereoscopic stimuli

    3D stereoscopic technology intensifies and heightens the viewer's experience by adding an extra dimension to the viewing of visual content. However, with the expansion of this technology into the commercial market, concerns have been expressed about potential negative effects on the visual system, producing viewer discomfort. The visual stimulus provided by a 3D stereoscopic display differs from that of the real world, so it is important to understand whether these differences may pose a health hazard. The aim of this thesis is to investigate the effect of 3D stereoscopic stimulation on visual discomfort. To that end, four experimental studies were conducted. In the first study two hypotheses were tested. The first hypothesis was that viewing 3D stereoscopic stimuli located geometrically beyond the screen on which the images are displayed would induce adaptation changes in the resting position of the eyes (exophoric heterophoria changes). The second hypothesis was that participants whose heterophoria changed as a consequence of adaptation during the viewing of the stereoscopic stimuli would experience less visual discomfort than those whose heterophoria did not adapt. The experiment found a greater increase in visual discomfort in the 3D condition than in the 2D condition, as well as statistically significant changes in heterophoria under 3D conditions compared with 2D conditions. However, there was appreciable variability in the magnitude of this adaptation among individuals, and no correlation between the amount of heterophoria change and the change in visual discomfort was observed. In the second experiment the two hypotheses tested were based on the vergence-accommodation mismatch theory and the visual-vestibular mismatch theory. The vergence-accommodation mismatch theory predicts that a greater mismatch between the stimuli to accommodation and to vergence produces greater visual discomfort when viewing in 3D conditions than when viewing in 2D conditions. An increase in visual discomfort in the 3D condition compared with the 2D condition was indeed found; however, the magnitude of visual discomfort reported did not correlate with the mismatch present while watching the 3D stereoscopic stimuli. The visual-vestibular mismatch theory predicts that viewing a stimulus stereoscopically produces a greater sense of vection than viewing it in 2D, which increases the conflict between the signals from the visual and vestibular systems, producing greater Visually-Induced Motion Sickness (VIMS) symptoms. Participants did indeed report an increase in motion sickness symptoms in the 3D condition. Furthermore, participants seated closer to the screen reported more VIMS than participants sitting farther away whilst viewing 3D stimuli. This suggests that the amount of visual field stimulated during 3D presentation affects VIMS and is an important factor in terms of viewing comfort. In the study, a greater proportion of younger viewers (21 to 39 years old) than of older viewers (40 years old and older) reported an increase in visual discomfort in the 3D condition relative to the 2D condition. This suggests that the visual system's response to a stimulus, rather than the stimulus itself, is a cause of discomfort. No influence of gender on viewing comfort was found.
In the next experiment, participants' fusion capability, as measured by their fusional reserves, was examined to determine whether this component has an impact on the discomfort reported while watching movies in the 3D condition versus the 2D condition. It was hypothesised that participants with a limited fusional range would experience more visual discomfort than participants with a wide fusional range. The hypothesis was confirmed, but only for convergent and not divergent eye movements. This observation illustrates that participants' convergence capability has a significant impact on visual comfort. The aim of the last experiment was to examine the responses of the accommodation system to changes in 3D stimulus position and to determine whether discrepancies in these responses (i.e. accommodation overshoot or undershoot) could account for the visual discomfort experienced during 3D stereoscopic viewing. It was found that the accommodation discrepancy was larger for perceived forwards movement than for perceived backwards movement. The discrepancy was slightly higher in the group susceptible to visual discomfort than in the group not susceptible, but this difference was not statistically significant. When considering the research findings as a whole, it was apparent that not all participants experienced more discomfort whilst watching 3D stereoscopic stimuli than whilst watching 2D stimuli. More visual discomfort in the 3D condition than in the 2D condition was reported by 35% of the participants, whilst 24% reported more headaches and 17% reported more VIMS. The research indicates that multiple causative factors have an impact on the reported symptoms. The analysis of the data suggests that the discomfort experienced by people during 3D stereoscopic stimulation may reveal binocular vision problems. This observation suggests that 3D technology could be used as a screening method to diagnose untreated binocular vision disorders. Additionally, this work shows that 3D stereoscopic technology can easily be adapted to binocular vision measurement. The conclusion of this thesis is that many people do not suffer adverse symptoms when viewing 3D stereoscopic displays, but that when adverse symptoms are present they can be caused either by the conflict in the stimulus or by a heightened experience of self-motion which leads to Visually-Induced Motion Sickness (VIMS).
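The vergence-accommodation mismatch discussed above can be quantified with simple geometry: accommodation is driven by the fixed screen distance, while vergence follows the simulated distance of the stereoscopic object. The following minimal sketch is illustrative only and is not taken from the thesis; the interpupillary distance and the example viewing distances are assumed values.

# Illustrative sketch (not from the thesis): the vergence-accommodation mismatch
# for a stereoscopic target rendered at a different distance than the screen.
import math

def vergence_deg(distance_m: float, ipd_m: float = 0.063) -> float:
    """Vergence demand in degrees for a target at distance_m (assumed 63 mm IPD)."""
    return math.degrees(2.0 * math.atan(ipd_m / (2.0 * distance_m)))

def va_mismatch_diopters(screen_m: float, virtual_m: float) -> float:
    """Accommodation stays at the screen (1/screen_m D) while vergence follows the
    virtual object (approx. 1/virtual_m D); the conflict is their difference."""
    return abs(1.0 / screen_m - 1.0 / virtual_m)

# Example: screen at 2 m, virtual object rendered 4 m away (behind the screen).
print(vergence_deg(2.0), vergence_deg(4.0))   # about 1.8 deg vs 0.9 deg
print(va_mismatch_diopters(2.0, 4.0))         # 0.25 D mismatch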

    Optometric Measurements Predict Performance but Not Comfort on a Virtual Object Placement Task With a Stereoscopic 3D Display

    Twelve participants were tested on a simple virtual object precision placement task while viewing a stereoscopic 3D (S3D) display. Inclusion criteria were uncorrected or best-corrected vision of 20/20 or better in each eye and stereopsis of at least 40 arc sec on the Titmus stereo test. Additionally, binocular function was assessed, including measurements of distant and near phoria (horizontal and vertical) and distant and near horizontal fusion ranges, using standard optometric clinical techniques. Before each of six 30-minute experimental sessions, measurements of phoria and fusion ranges were repeated using a Keystone View Telebinocular and an S3D display, respectively. All participants completed experimental sessions in which the task required the precision placement of a virtual object in depth at the same location as a target object. Subjective discomfort was assessed using the Simulator Sickness Questionnaire (SSQ). Individual placement accuracy in S3D trials was significantly correlated with several of the binocular screening outcomes: viewers with larger convergent fusion ranges (measured at near distance), larger total fusion ranges (convergent plus divergent ranges, measured at near distance), and/or lower (better) stereoscopic acuity thresholds were more accurate on the placement task. No screening measures were predictive of subjective discomfort, perhaps due to the low levels of discomfort induced.
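As a minimal sketch of the kind of analysis reported above, the snippet below correlates a binocular screening measure with placement accuracy across participants. The data values are hypothetical stand-ins, not the study's measurements; only the analysis pattern is illustrated.

# Hypothetical data: near convergent fusion range (prism diopters) and mean
# placement error (mm) for 12 participants, mirroring the study's sample size.
import numpy as np

fusion_range = np.array([14, 18, 22, 25, 16, 30, 20, 12, 28, 24, 19, 26], dtype=float)
placement_error = np.array([9.1, 8.0, 6.5, 5.9, 8.4, 4.8, 7.2, 9.8, 5.1, 6.0, 7.5, 5.6])

r = np.corrcoef(fusion_range, placement_error)[0, 1]
print(f"Pearson r = {r:.2f}")  # a strongly negative r would mirror the reported finding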

    GazeStereo3D: seamless disparity manipulations

    Producing a high-quality stereoscopic impression on current displays is a challenging task. The content has to be carefully prepared in order to maintain visual comfort, which typically reduces the quality of depth reproduction. In this work, we show that this problem can be significantly alleviated when the eye fixation regions can be roughly estimated. We propose a new method for stereoscopic depth adjustment that utilizes eye tracking or other gaze-prediction information. The key idea that distinguishes our approach from previous work is to apply gradual depth adjustments at the eye fixation stage, so that they remain unnoticeable. To this end, we measure the limits imposed on the speed of disparity changes in various depth adjustment scenarios, and formulate a new model that can guide such seamless stereoscopic content processing. Based on this model, we propose a real-time controller that applies local manipulations to stereoscopic content to find the optimal trade-off between depth reproduction and visual comfort. We show that the controller is mostly immune to the limitations of low-cost eye tracking solutions. We also demonstrate the benefits of our model in off-line applications, such as stereoscopic movie production, where skillful directors can reliably guide and predict viewers' attention or where attended image regions are identified during eye tracking sessions. We validate both our model and the controller in a series of user experiments. They show significant improvements in depth perception without sacrificing visual quality when our techniques are applied.
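The core mechanism described above, gradual disparity adjustments kept below a visibility threshold, can be sketched as a simple rate-limited controller. The code below is a hedged illustration of that idea, not the paper's implementation; the threshold value and the function name are assumptions.

# Sketch of a rate-limited disparity controller: drift the disparity of the attended
# region toward a comfort target, capping the change per second so the adjustment
# stays below an assumed visibility threshold.
def step_disparity(current_px: float, target_px: float,
                   dt_s: float, max_rate_px_per_s: float = 8.0) -> float:
    """Move the current disparity toward the target, limited to max_rate_px_per_s."""
    max_step = max_rate_px_per_s * dt_s
    delta = target_px - current_px
    if abs(delta) <= max_step:
        return target_px
    return current_px + max_step if delta > 0 else current_px - max_step

# Per-frame usage at 60 Hz: disparity = step_disparity(disparity, comfort_target, 1/60)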

    A Neurophysiologic Study Of Visual Fatigue In Stereoscopic Related Displays

    Two studies were conducted in this work. The first investigated the effects of display alignment errors on visual fatigue. The experiment revealed the following results. First, the EEG data suggested the possibility of cognitively induced time-compensation changes, reflected in real-time brain activity, as the eyes tried to compensate for the alignment errors. The magnification difference error showed more significant effects across all EEG band waves, an indication of likely visual fatigue consistent with the increases in simulator sickness questionnaire (SSQ) scores across all task levels. Vertical shift errors were most prevalent in the theta and beta bands of the EEG, which probably reflected increased alertness (in the theta band) as a result of possible stress. Rotation errors were significant in the gamma band, implying a likelihood of cognitive decline influenced by the theta band. Second, the hemodynamic responses revealed significant differences between the left and right dorsolateral prefrontal cortices due to alignment errors. There was also a significant difference between the main effect of power band hemisphere and the ATC task sessions. The analyses revealed significant differences between the dorsal frontal lobes in task processing, as well as interaction effects between the processing lobes and the tasks. The second study investigated the effects of cognitive response variables on visual fatigue. Third, the physiologic indicator of pupil dilation reached 0.95 mm at a mean time of 38.1 min, after which pupil dilation began to decrease. After an average saccade rest time of 33.71 min, saccade speeds tended to decrease, a possible sign of fatigue onset. Fourth, a neural network classifier identified visual response data from eye movements as the best predictor of visual fatigue, with a classification accuracy of 90.42%. The experimental data confirmed that 11.43% of the participants actually experienced visual fatigue symptoms after the prolonged task.
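To make the classification step concrete, the sketch below trains a small neural network on eye-movement features to predict visual fatigue. It is illustrative only: the features, the synthetic data, and the network size are assumptions, not the study's pipeline or its 90.42% result.

# Illustrative sketch with synthetic data: a small neural network classifying
# visual fatigue from eye-movement features.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Columns: pupil dilation (mm), saccade speed (deg/s), blink rate (per min)
X = rng.normal(loc=[0.6, 300.0, 15.0], scale=[0.2, 60.0, 5.0], size=(120, 3))
y = (X[:, 0] > 0.7).astype(int)  # stand-in label: fatigued vs not fatigued

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())  # cross-validated accuracy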

    Dynamic horizontal image translation in stereo 3D

    Dynamic horizontal image translation (DHIT) denotes the act of dynamically shifting the stereo 3D (S3D) views of a scene horizontally in opposite directions so that the portrayed scene is moved along the depth axis. This technique is predominantly used in the context of active depth cuts, where the shifting occurs just before and after a shot cut in order to mitigate depth discontinuities that would otherwise induce visual fatigue. The perception of the DHIT was investigated experimentally. An important finding was that there are strong individual differences in sensitivity towards the DHIT. It is therefore recommended to keep the shift speed applied to each S3D view in the range of 0.10 °/s to 0.12 °/s, so that viewers are not disturbed by the shift. When a DHIT is performed, the presented scene depth is distorted, i.e., compressed or stretched. A distortion-free dynamic horizontal image translation (DHIT+) is proposed that mitigates these distortions by adjusting the distance between the S3D cameras through depth-image-based rendering techniques. This approach proved to be significantly less annoying than the DHIT.
The views could be shifted about 50% faster without perceptual side effects. Another proposed approach is called gaze adaptive convergence in stereo 3D applications (GACS3D). An eye tracker is used to estimate the visual focus, whose disparity is then slowly reduced using the DHIT. This is intended to lessen visual fatigue, since the well-known accommodation-vergence discrepancy is reduced. GACS3D with emulated eye tracking proved to be significantly less annoying than a regular DHIT. However, in a comparison between the complete prototype and a static horizontal image translation, no significant effect on subjective visual discomfort was observed. A long-term evaluation of visual fatigue is necessary, which is beyond the scope of this work. GACS3D requires a highly accurate estimate of the visual focus disparity. Therefore, the probabilistic visual focus disparity estimation (PVFDE) was developed, which uses a real-time estimation of the 3D scene structure to improve the accuracy of the disparity estimate by orders of magnitude compared to commonly used approaches.
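The recommended shift speed of about 0.10 °/s to 0.12 °/s per view translates directly into a per-frame pixel shift once the display geometry is known. The conversion below is a small illustrative sketch; the viewing distance, pixel pitch, and frame rate are assumed example values, not parameters from the thesis.

# Converting an angular DHIT shift rate into a per-frame pixel shift for one view.
import math

def shift_px_per_frame(deg_per_s: float, viewing_distance_m: float,
                       pixel_pitch_mm: float, fps: float = 60.0) -> float:
    """Pixels each S3D view moves per frame for a given angular shift rate."""
    metres_per_s = viewing_distance_m * math.tan(math.radians(deg_per_s))
    return (metres_per_s * 1000.0 / pixel_pitch_mm) / fps

# Example: 3 m viewing distance, 0.5 mm pixel pitch, 60 fps display.
print(shift_px_per_frame(0.10, 3.0, 0.5))  # roughly 0.17 px per frame per view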

    Remote Visual Observation of Real Places Through Virtual Reality Headsets

    Virtual Reality has always represented a fascinating and powerful opportunity that has attracted studies and technology development, especially since the recent market release of powerful high-resolution, wide field-of-view VR headsets. While the great potential of such VR systems is widely accepted, open issues remain concerning how to design systems and setups capable of fully exploiting the latest hardware advances. The aim of the proposed research is to study and understand how to increase the perceived level of realism and the sense of presence when remotely observing real places through VR headset displays, and hence to produce a set of guidelines that direct system designers in optimizing the display-camera setup for remote visual observation of real places. The outcome of this investigation represents knowledge that is believed to be very beneficial for better VR headset designs and improved remote observation systems. To achieve this goal, the thesis presents a thorough, systematic investigation of the existing literature and previous research, carried out to identify the most important factors governing realism, depth perception, comfort, and sense of presence in VR headset observation. Once identified, these factors are further discussed and assessed through a series of experiments and usability studies based on a predefined set of research questions. More specifically, the roles of familiarity with the observed place, of the characteristics of the environment shown to the viewer, and of the display used for remote observation of the virtual environment are investigated. To gain more insight, two usability studies are proposed with the aim of defining guidelines and best practices. The main outcomes from the two studies demonstrate that test users experience a more realistic observation when natural features, higher-resolution displays, natural illumination, and high image contrast are used in Mobile VR. In terms of comfort, simple scene layouts and relaxing environments are considered ideal for reducing visual fatigue and eye strain. Furthermore, the sense of presence increases when the observed environments induce strong emotions, and depth perception improves in VR when several monocular cues, such as lights and shadows, are combined with binocular depth cues. Based on these results, the investigation presents a focused evaluation of the outcomes and introduces an innovative eye-adapted High Dynamic Range (HDR) approach, which the author believes to be a substantial improvement for remote observation when combined with eye-tracked VR headsets. To this end, a third user study is proposed comparing static HDR and eye-adapted HDR observation in VR, to assess whether the latter can improve realism, depth perception, sense of presence, and in certain cases even comfort. Results from this last study confirmed the author's expectations, showing that eye-adapted HDR and eye tracking should be used to achieve the best visual performance for remote observation in modern VR systems.
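The eye-adapted HDR idea described above can be sketched as a gaze-driven exposure: the tone-mapping level follows the luminance around the tracked gaze point and is smoothed over time to mimic visual adaptation. The code below is a rough illustration under those assumptions, not the thesis implementation; the window size, adaptation rate, and mid-grey target are assumed values.

# Gaze-driven exposure for an HDR luminance image (illustrative sketch).
import numpy as np

def gaze_adapted_exposure(hdr_luminance: np.ndarray, gaze_xy: tuple,
                          prev_adapt: float, window: int = 64,
                          adapt_rate: float = 0.05):
    """Scale an HDR luminance image so the region around the gaze maps to mid-grey."""
    x, y = gaze_xy
    h, w = hdr_luminance.shape
    patch = hdr_luminance[max(0, y - window):min(h, y + window),
                          max(0, x - window):min(w, x + window)]
    target = float(np.exp(np.mean(np.log(patch + 1e-6))))    # log-average luminance
    adapt = prev_adapt + adapt_rate * (target - prev_adapt)  # temporal smoothing
    tone_mapped = hdr_luminance * (0.18 / max(adapt, 1e-6))  # adapted level -> 0.18
    return np.clip(tone_mapped, 0.0, 1.0), adapt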

    Engineering data compendium. Human perception and performance. User's guide

    The concept underlying the Engineering Data Compendium was the product of a research and development program (the Integrated Perceptual Information for Designers project) aimed at facilitating the application of basic research findings in human performance to the design of military crew systems. The principal objective was to develop a workable strategy for: (1) identifying and distilling information of potential value to system design from the existing research literature, and (2) presenting this technical information in a way that would aid its accessibility, interpretability, and applicability by system designers. The present four volumes of the Engineering Data Compendium represent the first implementation of this strategy. This is the first volume, the User's Guide, which contains a description of the program and instructions for its use.