
    Eye Pointing in Stereoscopic Displays

    This study investigated eye pointing in stereoscopic displays. Ten participants performed 18 tapping tasks in stereoscopic displays with three different levels of parallax (at the screen, and 20 cm and 50 cm in front of the screen). The results showed that parallax had significant effects on hand movement time, eye movement time, and the index of performance for both hand clicks and eye gaze. Movement time was shorter and performance was better when the target was at the screen than when targets were seen 20 cm or 50 cm in front of it. Furthermore, the findings of this study support that eye movement in stereoscopic displays follows Fitts' law. The proposed algorithm was effective for eye-gaze selection and improved the fit of the eye movement model in stereoscopic displays.
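The Fitts' law quantities mentioned above can be sketched numerically. This is a minimal illustration assuming the Shannon formulation of the index of difficulty (the abstract does not state which formulation the study used):

```python
import math

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return math.log2(distance / width + 1)

def index_of_performance(distance, width, movement_time_s):
    """Index of performance (throughput) in bits/s: ID divided by
    the observed movement time."""
    return index_of_difficulty(distance, width) / movement_time_s

# Example: a 7 cm movement to a 1 cm target completed in 1.5 s
# gives ID = log2(8) = 3 bits and IP = 2.0 bits/s.
ip = index_of_performance(7.0, 1.0, 1.5)
```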

    Evaluating 3D pointing techniques

    This dissertation investigates various issues related to the empirical evaluation of 3D pointing interfaces. In this context, the term "3D pointing" is appropriated from the analogous 2D pointing literature to refer to 3D point selection tasks, i.e., specifying a target in three-dimensional space. Such pointing interfaces are required for interaction with virtual 3D environments, e.g., in computer games and virtual reality. Researchers have developed and empirically evaluated many such techniques. Yet, several technical issues and human factors complicate evaluation. Moreover, results tend not to be directly comparable between experiments, as these experiments usually use different methodologies and measures. Building on well-established methods for comparing 2D pointing interfaces, this dissertation investigates different aspects of 3D pointing. The main objective of this work is to establish methods for direct and fair comparison of 2D and 3D pointing interfaces. This dissertation proposes and then validates an experimental paradigm for evaluating 3D interaction techniques that rely on pointing. It also investigates technical considerations such as latency and device noise. Results show that the mouse outperforms the other tested 3D input techniques by between 10% and 60% in all conditions. Moreover, a monoscopic cursor tends to perform better than a stereo cursor on a stereo display, by as much as 30% for deep targets. Results suggest that common 3D pointing techniques are best modelled by first projecting target parameters (i.e., distance and size) to the screen plane.
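The closing result, modelling 3D pointing by first projecting target distance and size to the screen plane, can be illustrated with a simple pinhole projection. This is a hypothetical sketch; the dissertation's actual projection model is not given here:

```python
def project_target_to_screen(distance_3d, size_3d, target_depth, eye_to_screen):
    """Scale a target's movement distance and size onto the screen plane.

    Assumes a pinhole model with the eye `eye_to_screen` units from the
    screen and the target `target_depth` units behind the screen
    (negative for targets floating in front of it).
    """
    scale = eye_to_screen / (eye_to_screen + target_depth)
    return distance_3d * scale, size_3d * scale

# A target at screen depth projects unchanged; a deeper target shrinks.
on_screen = project_target_to_screen(100.0, 10.0, 0.0, 600.0)   # (100.0, 10.0)
deep = project_target_to_screen(100.0, 10.0, 600.0, 600.0)      # (50.0, 5.0)
```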

    Effects of Stereoscopic 3D Digital Radar Displays on Air Traffic Controller Performance

    Air traffic controllers are responsible for directing air traffic based upon decisions made from traffic activity depicted on two-dimensional (2D) radar displays. Controllers must identify aircraft and detect potential conflicts while simultaneously developing and executing plans of action to ensure safe separation is maintained. With a nearly 100% increase in traffic expected within the next decade (FAA, 2012a), controllers' abilities to rapidly interpret spacing and maintain awareness for longer durations under increased workload will become increasingly imperative to safety. The current display design spatially depicts an aircraft's position relative to the controller's airspace, while speed, altitude, and direction appear in textual form, which requires deciphering and arithmetic to determine vertical separation. Since vertical separation is as imperative to flight safety as lateral separation, affording the controller an intuitive means of determining spacing without mental model creation is critical to reducing controller workload and increasing awareness and efficiency. To examine this potential, a stereoscopic radar workstation simulator was developed and field-tested with 35 USAF controllers. It presented a view similar to traditional radar displays (i.e., top-down); however, it depicted altitude through stereoscopic disparity, permitting vertical separation to be represented visually.

    Stereoscopic 3D Technologies for Accurate Depth Tasks: A Theoretical and Empirical Study

    In the last decade an increasing number of application fields, including medicine, geoscience and bio-chemistry, have expressed a need to visualise and interact with data that are inherently three-dimensional. Stereoscopic 3D technologies can offer valid support for these operations thanks to the enhanced depth representation they provide. However, there is still little understanding of how such technologies can be used effectively to support the performance of visual tasks based on accurate depth judgements. Existing studies do not provide a sound and complete explanation of the impact of different visual and technical factors on depth perception in stereoscopic 3D environments. This thesis presents a new interpretative and contextualised analysis of the vision science literature to clarify the role of different visual cues in human depth perception in such environments. The analysis identifies luminance contrast, spatial frequency, colour, blur, transparency and depth constancies as influential visual factors for depth perception and provides the theoretical foundation for guidelines to support the performance of accurate stereoscopic depth tasks. A novel assessment framework is proposed and used to conduct an empirical study evaluating the performance of four distinct classes of 3D display technologies. The results suggest that 3D displays are not interchangeable and that the depth representation provided can vary even between displays belonging to the same class. The study also shows that interleaved displays may suffer from a number of aliasing artifacts, which in turn may affect the amount of perceived depth. The outcomes of the analysis of the influential visual factors and of the empirical comparative study are used to propose a novel universal 3D cursor prototype suitable for supporting depth-based tasks in stereoscopic 3D environments. The contribution includes a number of qualitative and quantitative guidelines that aim to guarantee a correct perception of depth in stereoscopic 3D environments and that should be observed when designing a stereoscopic 3D cursor.

    Stereoscopic bimanual interaction for 3D visualization

    Virtual environments (VEs) have been widely used for several decades in research fields such as 3D visualization, education, training and games. VEs have the potential to enhance visualization and act as a general medium for human-computer interaction (HCI). However, limited research has evaluated virtual reality (VR) display technologies, and the monocular and binocular depth cues they provide, for human depth perception of volumetric (non-polygonal) datasets. In addition, a lack of standardization of three-dimensional (3D) user interfaces (UIs) makes it challenging to interact with many VE systems. To address these issues, this dissertation focuses on evaluating the effects of stereoscopic and head-coupled displays on depth judgment of volumetric datasets. It also evaluates a two-handed view manipulation technique that supports simultaneous seven-degree-of-freedom (7-DOF) navigation (x, y, z + yaw, pitch, roll + scale) in a multi-scale virtual environment (MSVE). Furthermore, this dissertation evaluates techniques for auto-adjusting stereo view parameters to address stereoscopic fusion problems in an MSVE. Next, it presents a bimanual, hybrid user interface which combines traditional tracking devices with computer-vision based "natural" 3D inputs for multi-dimensional visualization in a semi-immersive desktop VR system. In conclusion, this dissertation provides guidelines for research design when evaluating UIs and interaction techniques.

    Recognizing AR-guided manual tasks through autonomic nervous system correlates: A preliminary study

    Optical see-through head-mounted displays (HMDs) enable optical superposition of computer-generated virtual data onto the user's natural view of the real environment. This makes them the most suitable candidate to guide manual tasks, as in augmented reality (AR) guided surgery. However, most commercial systems have a single focal plane at around 2-3 m, inducing a 'vergence-accommodation conflict' and 'focal rivalry' when used to guide manual tasks. These phenomena can often cause visual fatigue and low performance. In this preliminary study, ten subjects performed a precision manual task in two conditions: with and without the AR HMD. We demonstrated a significant deterioration of performance in the AR-guided condition. Moreover, we investigated the autonomic nervous system response through analysis of heart rate variability (HRV) and electrodermal activity (EDA) signals. We developed a pattern recognition system that was able to automatically recognize the two experimental conditions using only EDA and HRV data, with an accuracy of 75%. Our learning algorithm highlighted two different physiological patterns combining parasympathetic and sympathetic information.
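The two-condition recognition step can be sketched with a toy nearest-centroid classifier. The feature values and classifier choice below are illustrative assumptions, not the study's actual pipeline:

```python
def fit_centroids(features, labels):
    """Compute per-class mean feature vectors (nearest-centroid model)."""
    centroids = {}
    for label in set(labels):
        rows = [f for f, l in zip(features, labels) if l == label]
        centroids[label] = [sum(col) / len(rows) for col in zip(*rows)]
    return centroids

def classify(centroids, sample):
    """Assign the class whose centroid is nearest in squared Euclidean distance."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(centroids[label], sample))

# Hypothetical [mean HRV, mean EDA] feature vectors for the two conditions
X = [[0.9, 0.2], [0.8, 0.3], [0.4, 0.7], [0.3, 0.8]]
y = ["control", "control", "ar_guided", "ar_guided"]
model = fit_centroids(X, y)
```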

    GiAnt: stereoscopic-compliant multi-scale navigation in VEs

    Navigation in multi-scale virtual environments (MSVEs) requires the adjustment of navigation parameters to ensure an optimal navigation experience at each level of scale. In particular, in immersive stereoscopic systems, e.g. when performing zoom-in and zoom-out operations, the navigation speed and the stereoscopic rendering parameters have to be adjusted accordingly. Although this adjustment can be done manually by the user, it can be complex and tedious, and it strongly depends on the virtual environment. In this work we propose a new multi-scale navigation technique named GiAnt (GIant/ANT) which automatically and seamlessly adjusts the navigation speed and the scale factor of the virtual environment based on the user's perceived navigation speed. The adjustment ensures an almost-constant perceived navigation speed while avoiding diplopia effects or diminished depth perception due to improper stereoscopic rendering configurations. The results of the user evaluation show that GiAnt is an efficient multi-scale navigation technique which minimizes changes to the scale factor of the virtual environment compared to state-of-the-art multi-scale navigation techniques.
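The coupling of navigation speed and stereo parameters to the environment scale can be sketched as follows. The linear relations are hypothetical assumptions for illustration; GiAnt's actual mappings may differ:

```python
def adjusted_navigation_speed(base_speed, scale_factor):
    """Keep perceived navigation speed roughly constant by scaling the
    metric speed with the environment's current scale factor
    (hypothetical linear coupling)."""
    return base_speed * scale_factor

def adjusted_interocular_distance(base_iod, scale_factor):
    """Scale the stereo camera separation with the environment scale so
    that disparities stay fusible and diplopia is avoided at small
    scales (again an illustrative assumption)."""
    return base_iod * scale_factor

# Zooming out to half scale halves the metric speed and the camera separation.
speed = adjusted_navigation_speed(2.0, 0.5)       # 1.0
iod = adjusted_interocular_distance(0.065, 0.5)   # 0.0325
```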

    An Arm-Mounted Accelerometer and Gyro-Based 3D Control System

    This thesis examines the performance of a wearable accelerometer/gyroscope-based system for capturing arm motions in 3D. Two experiments conforming to ISO 9241-9 specifications for non-keyboard input devices were performed. The first, modeled after the Fitts' law paradigm described in ISO 9241-9, used the wearable system to control a telemanipulator and compared it with joystick control and the user's arm. The throughputs were 5.54 bits/s, 0.74 bits/s and 0.80 bits/s, respectively. The second experiment used the wearable system to control a cursor in a 3D fish-tank virtual reality setup. The participants performed a 3D Fitts' law task with three selection methods: button clicks, dwell, and a twist gesture. Error rates were 6.82%, 0.00% and 3.59%, respectively, and throughput ranged from 0.8 to 1.0 bits/s. The thesis includes detailed analyses of lag and other issues that present user interface challenges for systems that employ human-mounted sensor inputs to control a telemanipulator apparatus.
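ISO 9241-9 throughput figures like those above are commonly computed with the effective-width adjustment, where the target width is replaced by 4.133 times the standard deviation of the selection endpoints. A minimal sketch (whether this thesis used the effective-width variant is an assumption):

```python
import math
import statistics

def effective_throughput(amplitude, endpoint_deviations, movement_times_s):
    """ISO 9241-9 style throughput (bits/s) using the effective target
    width We = 4.133 * SD of selection endpoints along the task axis."""
    we = 4.133 * statistics.stdev(endpoint_deviations)
    ide = math.log2(amplitude / we + 1)
    return ide / statistics.mean(movement_times_s)

# Example with hypothetical endpoint deviations around the target centre.
tp = effective_throughput(10.0, [-1.0, 1.0], [1.0])
```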

    Visual Attention in Virtual Reality (Alternative Format Thesis)
