
    Stereo Viewing and Virtual Reality Technologies in Mobile Robot Teleguide

    DOI: 10.1109/TRO.2009.2028765. The use of 3-D stereoscopic visualization may provide a user with higher comprehension of remote environments in teleoperation when compared with 2-D viewing, in particular a higher perception of environment depth characteristics, spatial localization, and remote ambient layout, as well as faster system learning and decision performance. Works in the literature have demonstrated how stereo vision contributes to improving the perception of some depth cues, often for abstract tasks, while it is hard to find works addressing stereoscopic visualization in mobile robot teleguide applications. This paper contributes to this aspect by investigating stereoscopic robot teleguide under different conditions, including typical navigation scenarios and the use of synthetic and real images. The paper also investigates how user performance may vary when employing different display technologies. Results from a set of test trials run on seven virtual reality systems, from laptop to large panorama and from head-mounted display to Cave Automatic Virtual Environment (CAVE), emphasize a few aspects that represent a base for further investigations as well as a guide when designing specific systems for telepresence. Peer reviewed.

    Visual Perception and Cognition in Image-Guided Intervention

    Surgical image visualization and interaction systems can dramatically affect the efficacy and efficiency of surgical training, planning, and interventions. This is even more profound in the case of minimally invasive surgery, where restricted access to the operative field in conjunction with a limited field of view necessitates a visualization medium that provides patient-specific information at any given moment. Unfortunately, little research has been devoted to studying the human factors associated with medical image displays, and the need for robust, intuitive visualization and interaction interfaces has remained largely unfulfilled to this day. Failure to engineer efficient medical solutions and design intuitive visualization interfaces is argued to be one of the major barriers to the meaningful transfer of innovative technology to the operating room. This thesis was, therefore, motivated by the need to study various cognitive and perceptual aspects of human factors in surgical image visualization systems, to increase the efficiency and effectiveness of medical interfaces, and ultimately to improve patient outcomes. To this end, we chose four different minimally invasive interventions in the realm of surgical training, planning, training for planning, and navigation. The first chapter involves the use of stereoendoscopes to reduce morbidity in endoscopic third ventriculostomy. The results of this study suggest that, compared with conventional endoscopes, the detection of the basilar artery on the surface of the third ventricle can be facilitated with the use of stereoendoscopes, increasing the safety of targeting in third ventriculostomy procedures. In the second chapter, a contour enhancement technique is described to improve preoperative planning of arteriovenous malformation interventions. The proposed method, particularly when combined with stereopsis, is shown to increase the speed and accuracy of understanding the spatial relationship between vascular structures. In the third chapter, an augmented-reality system is proposed to facilitate training in the planning of brain tumour resection. The results of our user study indicate that the proposed system improves subjects' performance, particularly novices', in formulating the optimal point of entry and surgical path, independent of the sensorimotor tasks performed. In the last chapter, the role of fully immersive simulation environments in surgeons' non-technical skills for performing the vertebroplasty procedure is investigated. Our results suggest that while training may increase surgeons' technical skills, the introduction of crisis scenarios significantly disturbs performance, emphasizing the need for realistic simulation environments as part of the training curriculum.

    Stereoscopic bimanual interaction for 3D visualization

    Virtual Environments (VEs) have been widely used in various research fields for several decades, including 3D visualization, education, training, and games. VEs have the potential to enhance visualization and act as a general medium for human-computer interaction (HCI). However, limited research has evaluated virtual reality (VR) display technologies, and monocular and binocular depth cues, for human depth perception of volumetric (non-polygonal) datasets. In addition, the lack of standardization of three-dimensional (3D) user interfaces (UIs) makes it challenging to interact with many VE systems. To address these issues, this dissertation focuses on evaluating the effects of stereoscopic and head-coupled displays on depth judgment of volumetric datasets. It also evaluates a two-handed view manipulation technique that supports simultaneous 7-degree-of-freedom (DOF) navigation (x, y, z + yaw, pitch, roll + scale) in a multi-scale virtual environment (MSVE). Furthermore, this dissertation evaluates techniques for auto-adjustment of stereo view parameters to address stereoscopic fusion problems in an MSVE. Next, it presents a bimanual, hybrid user interface which combines traditional tracking devices with computer-vision-based "natural" 3D inputs for multi-dimensional visualization in a semi-immersive desktop VR system. In conclusion, this dissertation provides guidelines for research design when evaluating UIs and interaction techniques.
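    The 7-DOF navigation summarized above (x, y, z + yaw, pitch, roll + scale) is commonly realized as a two-handed "grab the world" mapping. The dissertation's own implementation is not reproduced here; the Python sketch below is only an assumed minimal version of such a mapping, deriving a translation from the displacement of the midpoint between the two tracked hands, a rotation from the change of the inter-hand direction, and a uniform scale from the change of the inter-hand distance. All names are illustrative.

        import numpy as np

        def two_handed_delta(prev_l, prev_r, cur_l, cur_r):
            # Illustrative 7-DOF view update (translation + rotation + uniform scale)
            # derived from two tracked hand positions (3-vectors); a sketch,
            # not the dissertation's implementation.
            prev_vec, cur_vec = prev_r - prev_l, cur_r - cur_l
            prev_len = max(np.linalg.norm(prev_vec), 1e-6)
            cur_len = max(np.linalg.norm(cur_vec), 1e-6)

            # Uniform scale: ratio of inter-hand distances.
            scale = cur_len / prev_len

            # Rotation: shortest arc taking the previous inter-hand direction
            # to the current one (Rodrigues' formula).
            a, b = prev_vec / prev_len, cur_vec / cur_len
            axis = np.cross(a, b)
            s, c = np.linalg.norm(axis), float(np.dot(a, b))
            if s < 1e-8:
                rot = np.eye(3)
            else:
                k = axis / s
                K = np.array([[0.0, -k[2], k[1]],
                              [k[2], 0.0, -k[0]],
                              [-k[1], k[0], 0.0]])
                rot = np.eye(3) + s * K + (1.0 - c) * (K @ K)

            # Translation: displacement of the midpoint between the hands.
            trans = (cur_l + cur_r) / 2.0 - (prev_l + prev_r) / 2.0
            return trans, rot, scale

    Applying the resulting transform about the hands' midpoint keeps the point between the hands fixed, which is what makes simultaneous translate-rotate-scale navigation feel like grabbing the world; a twist about the inter-hand axis, if needed, is typically taken from the hands' tracked orientations.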

    Remote Visual Observation of Real Places Through Virtual Reality Headsets

    Virtual Reality has always represented a fascinating and powerful opportunity that has attracted studies and technology development, especially since the latest release on the market of powerful high-resolution, wide field-of-view VR headsets. While the great potential of such VR systems is common and accepted knowledge, open issues remain concerning how to design systems and setups capable of fully exploiting the latest hardware advances. The aim of the proposed research is to study and understand how to increase the perceived level of realism and the sense of presence when remotely observing real places through VR headset displays, and hence to produce a set of guidelines that give system designers directions on how to optimize the display-camera setup for remote visual observation of real places. The outcome of this investigation represents knowledge that is believed to be very beneficial for better VR headset designs and improved remote observation systems. To achieve this goal, the thesis presents a thorough, systematic investigation of the existing literature and previous research to identify the most important factors governing realism, depth perception, comfort, and sense of presence in VR headset observation. Once identified, these factors are further discussed and assessed through a series of experiments and usability studies, based on a predefined set of research questions. More specifically, the role of familiarity with the observed place, the role of the environment characteristics shown to the viewer, and the role of the display used for remote observation of the virtual environment are investigated. To gain more insight, two usability studies are proposed with the aim of defining guidelines and best practices. The main outcomes from the two studies demonstrate that test users experience a more realistic observation when natural features, higher-resolution displays, natural illumination, and high image contrast are used in Mobile VR. In terms of comfort, simple scene layouts and relaxing environments are considered ideal to reduce visual fatigue and eye strain. Furthermore, the sense of presence increases when the observed environments induce strong emotions, and depth perception improves in VR when several monocular cues, such as lights and shadows, are combined with binocular depth cues. Based on these results, the investigation then presents a focused evaluation of the outcomes and introduces an innovative eye-adapted High Dynamic Range (HDR) approach, which the author believes to be a substantial improvement for remote observation when combined with eye-tracked VR headsets. To this end, a third user study is proposed to compare static HDR and eye-adapted HDR observation in VR, assessing whether the latter can improve realism, depth perception, sense of presence, and in certain cases even comfort. Results from this last study confirmed the author's expectations, indicating that eye-adapted HDR and eye tracking should be used to achieve the best visual performance for remote observation in modern VR systems.
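    The eye-adapted HDR observation is described above only in terms of its effect on realism and presence. As a rough illustration (not the thesis implementation), an eye-adapted tone mapper can meter the HDR frame in a small window around the tracked gaze point and let the exposure drift toward that value over time, mimicking the eye's light adaptation. The window size, adaptation time constant, and key value in the Python sketch below are assumptions.

        import numpy as np

        def eye_adapted_exposure(hdr, gaze_uv, prev_adapt, dt,
                                 window=64, key=0.18, tau=0.75):
            # Illustrative eye-adapted exposure step (a sketch, not the thesis code).
            # hdr:        float HxWx3 linear-radiance frame
            # gaze_uv:    (u, v) gaze point in [0, 1]^2 from the eye tracker
            # prev_adapt: adaptation luminance carried over from the previous frame
            # dt:         frame time in seconds
            h, w, _ = hdr.shape
            cx, cy = int(gaze_uv[0] * w), int(gaze_uv[1] * h)
            x0, x1 = max(cx - window, 0), min(cx + window, w)
            y0, y1 = max(cy - window, 0), min(cy + window, h)

            # Log-average luminance of the patch the viewer is looking at.
            lum = hdr[y0:y1, x0:x1] @ np.array([0.2126, 0.7152, 0.0722])
            patch_lum = float(np.exp(np.mean(np.log(lum + 1e-6))))

            # Exponential adaptation toward the gazed luminance (temporal smoothing).
            adapt = prev_adapt + (patch_lum - prev_adapt) * (1.0 - np.exp(-dt / tau))

            # Simple Reinhard-style global operator scaled by the adapted luminance.
            scaled = hdr * (key / max(adapt, 1e-6))
            return np.clip(scaled / (1.0 + scaled), 0.0, 1.0), adapt

    In these terms, a static HDR rendering corresponds to holding the adaptation value fixed rather than driving it from the gaze point.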

    A Review and Selective Analysis of 3D Display Technologies for Anatomical Education

    The study of anatomy is complex and difficult for students in both graduate and undergraduate education. Researchers have attempted to improve anatomical education with the inclusion of three-dimensional visualization, with the prevailing finding that 3D is beneficial to students. However, there is limited research on the relative efficacy of different 3D modalities, including monoscopic, stereoscopic, and autostereoscopic displays. This study analyzes educational performance, confidence, cognitive load, visual-spatial ability, and technology acceptance in participants using autostereoscopic 3D visualization (holograms), monoscopic 3D visualization (3DPDFs), and a control visualization (2D printed images). Participants were randomized into three treatment groups: holograms (n=60), 3DPDFs (n=60), and printed images (n=59). Participants completed a pre-test followed by a self-study period using the treatment visualization. Immediately following the study period, participants completed the NASA TLX cognitive load instrument, a technology acceptance instrument, visual-spatial ability instruments, a confidence instrument, and a post-test. Post-test results showed the hologram treatment group (Mdn=80.0) performed significantly better than both 3DPDF (Mdn=66.7, p=.008) and printed images (Mdn=66.7, p=.007). Participants in the hologram and 3DPDF treatment groups reported lower cognitive load compared to the printed image treatment (p < .01). Participants also responded more positively towards the holograms than printed images (p < .001). Overall, the holograms demonstrated significant learning improvement over printed images and monoscopic 3DPDF models. This finding suggests that the additional depth cues from holographic visualization, notably head-motion parallax and stereopsis, provide a substantial benefit for understanding spatial anatomy. The reduction in cognitive load suggests that monoscopic and autostereoscopic 3D may use the visual system more efficiently than printed images, thereby reducing mental effort during the learning process. Finally, participants reported positive perceptions of holograms, suggesting that implementation of holographic displays would be met with enthusiasm from student populations. These findings highlight the need for additional studies regarding the effect of novel 3D technologies on learning performance.

    A new head-mounted display-based augmented reality system in neurosurgical oncology: a study on phantom

    Purpose: The benefits of minimally invasive neurosurgery mandate the development of ergonomic paradigms for neuronavigation. Augmented Reality (AR) systems can overcome the shortcomings of commercial neuronavigators. The aim of this work is to apply a novel AR system, based on a head-mounted stereoscopic video see-through display, as an aid in complex neurological lesion targeting. Effectiveness was investigated on a newly designed patient-specific head mannequin featuring an anatomically realistic brain phantom with embedded synthetically created tumors and eloquent areas. Materials and methods: A two-phase evaluation process was adopted in a simulated small-tumor resection adjacent to Broca's area. Phase I involved nine subjects without neurosurgical training performing spatial judgment tasks. In Phase II, three surgeons assessed the effectiveness of the AR neuronavigator in performing brain tumor targeting on a patient-specific head phantom. Results: Phase I revealed the ability of the AR scene to evoke depth perception under different visualization modalities. Phase II confirmed the potential of the AR neuronavigator to aid the determination of the optimal surgical access to the surgical target. Conclusions: The AR neuronavigator is intuitive, easy to use, and provides three-dimensional augmented information in a perceptually correct way. The system proved effective in guiding skin incision, craniotomy, and lesion targeting. The preliminary results encourage a structured study to prove clinical effectiveness. Moreover, our testing platform might be used to facilitate training in brain tumor resection procedures.

    VR systems for memory assessment and depth perception

    The evolution of Virtual Reality (VR) technology has contributed to all fields, including psychology. This evolution involves improvements in hardware and software that allow more immersive experiences. In a VR environment, users can perceive the sensation of "presence" and feel "immersed". These sensations are possible using VR devices such as HMDs. Nowadays, the development of HMDs has focused on improving their technical features to offer full immersion. In psychology, VR environments are research tools because they allow the use of new paradigms that are not possible to employ in a real environment. There are some applications for assessing spatial memory, but they use basic methods of human-computer interaction; VR systems that incorporate stereoscopy and physical movement have not yet been exploited in psychology. In this thesis, a novel VR system combining immersive, interactive, and motion features was developed and used to assess spatial memory and evaluate depth perception. For this system, a virtual maze task was designed and implemented, with two types of interaction: a locomotion-based interaction pedaling a fixed bicycle (condition 1) and a stationary interaction using a gamepad (condition 2). The system integrated two types of display: 1) the Oculus Rift; 2) a large stereo screen. Two studies were designed to determine the efficacy of the VR system using physical movement and immersion. The first study (N=89) assessed spatial short-term memory using the Oculus Rift and the two types of interaction. The results showed statistically significant differences between the two conditions: participants who performed condition 2 achieved better performance than participants who performed condition 1. However, there were no statistically significant differences in satisfaction and interaction scores between the two conditions. Performance on the task correlated with performance on classical neuropsychological tests, revealing a verisimilitude between them. The second study (N=59) involved participants with and without stereopsis and assessed depth perception by comparing the two display systems; participants performed the task using condition 2. The results showed that the different features of the display systems did not influence task performance between participants with and without stereopsis. Statistically significant differences in favor of the HMD were found between the two conditions and between the two groups of participants with regard to depth perception. Participants who did not have stereopsis and could not perceive depth when using other display systems (e.g., a CAVE) nevertheless had the illusion of depth perception when they used the Oculus Rift. The study suggests that, for people without stereopsis, head tracking largely shapes the 3D experience.
    The statistical results of both studies show that the VR system developed for this research is an appropriate tool to assess spatial short-term memory and depth perception. Therefore, VR systems that combine full immersion, interaction, and movement can be a helpful tool for the assessment of human cognitive processes such as memory. The general conclusions of these studies are: 1) VR technology and the immersion provided by current HMDs are appropriate tools for psychological applications, in particular the assessment of spatial short-term memory; 2) a VR system like the one presented in this thesis could be used as a tool to assess or train adults in skills related to spatial short-term memory; 3) the two types of interaction (condition 1 and condition 2) used for navigation within the virtual maze could be helpful with different user groups; 4) the Oculus Rift allows users without stereopsis to perceive the depth of 3D objects and have rich 3D experiences.
    Cárdenas Delgado, SE. (2017). VR systems for memory assessment and depth perception [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/94629

    X-ray vision at action space distances: depth perception in context

    Accurate and usable x-ray vision has long been a goal in augmented reality (AR) research and development. X-ray vision, or the ability to comprehend location and object information when it is viewed through an opaque barrier, would be eminently useful in a variety of contexts, including industrial, disaster reconnaissance, and tactical applications. For x-ray vision to be a useful tool in many of these applications, it would need to extend operators' perceptual awareness of the task or environment. The effectiveness with which x-ray vision can do this is of significant research interest and is a determinant of its usefulness in an application context. It is therefore crucial to evaluate the effectiveness of x-ray vision: how does information presented through x-ray vision compare to real-world information? This question requires narrowing, as x-ray vision suffers from inherent limitations, analogous to viewing an object through a window. In both cases, information is presented beyond the local context, exists past an apparently solid object, and is limited by certain conditions. Further, in both cases, the naturally suggestive use cases occur over action space distances. These distances range from 1.5 to 30 meters and represent the area in which observers might contemplate immediate visually directed actions. Such actions, simple tasks with a visual antecedent, represent action potentials for x-ray vision; in effect, x-ray vision extends an operator's awareness and ability to visualize these actions into a new context. Thus, this work seeks to answer the question "Can a real window be replaced with an AR window?" The evaluation focuses on perceived object location, investigated through a series of experiments using visually directed actions as experimental measures. This approach leverages established methodology by experimentally analyzing each of several distinct variables on a continuum between real-world depth perception and fully realized x-ray vision. It was found that a real window could not be replaced with an AR window without some loss of depth-perception acuity and accuracy. However, no significant difference was found between a target viewed through an opaque wall and a target viewed through a real window.

    Direct Manipulation Of Virtual Objects

    Interacting with a Virtual Environment (VE) generally requires the user to correctly perceive the relative position and orientation of virtual objects. For applications requiring interaction in personal space, the user may also need to accurately judge the position of the virtual object relative to that of a real object, for example, a virtual button and the user's real hand. This is difficult since VEs generally only provide a subset of the cues experienced in the real world. Complicating matters further, VEs presented by currently available visual displays may be inaccurate or distorted due to technological limitations. Fundamental physiological and psychological aspects of vision as they pertain to the task of object manipulation were thoroughly reviewed. Other sensory modalities--proprioception, haptics, and audition--and their cross-interactions with each other and with vision are briefly discussed. Visual display technologies, the primary component of any VE, were canvassed and compared. Current applications and research were gathered and categorized by different VE types and object interaction techniques. While object interaction research abounds in the literature, pockets of research gaps remain. Direct, dexterous, manual interaction with virtual objects in Mixed Reality (MR), where the real, seen hand accurately and effectively interacts with virtual objects, has not yet been fully quantified. An experimental test bed was designed to provide the highest accuracy attainable for salient visual cues in personal space. Optical alignment and user calibration were carefully performed. The test bed accommodated the full continuum of VE types and sensory modalities for comprehensive comparison studies. Experimental designs included two sets, each measuring depth perception and object interaction. The first set addressed the extreme end points of the Reality-Virtuality (R-V) continuum--Immersive Virtual Environment (IVE) and Reality Environment (RE). This validated, linked, and extended several previous research findings, using one common test bed and participant pool. The results provided a proven method and solid reference points for further research. The second set of experiments leveraged the first to explore the full R-V spectrum and included additional, relevant sensory modalities. It consisted of two full-factorial experiments providing for rich data and key insights into the effect of each type of environment and each modality on accuracy and timeliness of virtual object interaction. The empirical results clearly showed that mean depth perception error in personal space was less than four millimeters whether the stimuli presented were real, virtual, or mixed. Likewise, mean error for the simple task of pushing a button was less than four millimeters whether the button was real or virtual. Mean task completion time was less than one second. Key to the high accuracy and quick task performance time observed was the correct presentation of the visual cues, including occlusion, stereoscopy, accommodation, and convergence. With performance results already near optimal level with accurate visual cues presented, adding proprioception, audio, and haptic cues did not significantly improve performance. Recommendations for future research include enhancement of the visual display and further experiments with more complex tasks and additional control variables.

    The worse eye revisited: Evaluating the impact of asymmetric peripheral vision loss on everyday function

    In instances of asymmetric peripheral vision loss (e.g., glaucoma), binocular performance on simple psychophysical tasks (e.g., static threshold perimetry) is well-predicted by the better seeing eye alone. This suggests that peripheral vision is largely ‘better-eye limited’. In the present study, we examine whether this also holds true for real-world tasks, or whether even a degraded fellow eye contributes important information for tasks of daily living. Twelve normally-sighted adults performed an everyday visually-guided action (finding a mobile phone) in a virtual-reality domestic environment, while levels of peripheral vision loss were independently manipulated in each eye (gaze-contingent blur). The results showed that even when vision in the better eye was held constant, participants were significantly slower to locate the target, and made significantly more head- and eye-movements, as peripheral vision loss in the worse eye increased. A purely unilateral peripheral impairment increased response times by up to 25%, although the effect of bilateral vision loss was much greater (>200%). These findings indicate that even a degraded visual field still contributes important information for performing everyday visually-guided actions. This may have clinical implications for how patients with visual field loss are managed or prioritized, and for our understanding of how binocular information in the periphery is integrated.
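    The gaze-contingent blur used to simulate asymmetric peripheral field loss is not specified here in implementation detail. The Python sketch below is only an assumed minimal version: each eye's image is low-pass filtered, and the sharp original is blended back inside a soft-edged radius around that eye's tracked gaze point, so the preserved central field and the blur strength can be set independently per eye.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def gaze_contingent_blur(frame, gaze_px, clear_radius_px, blur_sigma):
            # Illustrative simulation of peripheral vision loss for ONE eye
            # (a sketch, not the study's implementation).
            # frame:   float HxWx3 image rendered for this eye
            # gaze_px: (x, y) gaze position in pixels from the eye tracker
            h, w, _ = frame.shape
            blurred = np.stack([gaussian_filter(frame[..., c], blur_sigma)
                                for c in range(3)], axis=-1)

            # Radial mask: 1 inside the preserved central region, 0 in the
            # periphery, with a soft edge to avoid a visible hard ring.
            ys, xs = np.mgrid[0:h, 0:w]
            dist = np.hypot(xs - gaze_px[0], ys - gaze_px[1])
            edge = max(0.2 * clear_radius_px, 1.0)
            mask = np.clip((clear_radius_px - dist) / edge, 0.0, 1.0)[..., None]

            return mask * frame + (1.0 - mask) * blurred

    Rendering the left- and right-eye buffers through such a function with different clear_radius_px values reproduces the better-eye/worse-eye asymmetry manipulated in the study, while equal values give a bilateral-loss condition.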