
    A Systematic Review of Extended Reality (XR) for Understanding and Augmenting Vision Loss

    Over the past decade, extended reality (XR) has emerged as an assistive technology not only to augment the residual vision of people losing their sight but also to study the rudimentary vision restored to blind people by a visual neuroprosthesis. To make the best use of these emerging technologies, it is valuable and timely to understand the state of this research and identify any shortcomings that are present. Here we present a systematic literature review of 227 publications from 106 different venues assessing the potential of XR technology to further visual accessibility. In contrast to other reviews, we sample studies from multiple scientific disciplines, focus on augmentation of a person's residual vision, and require studies to feature a quantitative evaluation with appropriate end users. We summarize prominent findings from different XR research areas, show how the landscape has changed over the last decade, and identify scientific gaps in the literature. Specifically, we highlight the need for real-world validation, the broadening of end-user participation, and a more nuanced understanding of the suitability and usability of different XR-based accessibility aids. By broadening end-user participation to early stages of the design process and shifting the focus from behavioral performance to qualitative assessments of usability, future research has the potential to develop XR technologies that may not only allow for studying vision loss, but also enable novel visual accessibility aids with the potential to impact the lives of millions of people living with vision loss.

    Reducing Grip Uncertainty During Initial Prosthetic Hand Use Improves Eye-Hand Coordination and Lowers Mental Workload

    The reliance on vision to control a myoelectric prosthesis is cognitively burdensome and contributes to device abandonment. The feeling of uncertainty when gripping an object is thought to be the cause of this overreliance on vision in hand-related actions. We explored whether experimentally reducing grip uncertainty alters the visuomotor control and mental workload experienced during initial prosthesis use. In a repeated measures design, twenty-one able-bodied participants took part in a pouring task across three conditions: (a) using their anatomical hand, (b) using a myoelectric prosthetic hand simulator, and (c) using a myoelectric prosthetic hand simulator with Velcro attached to reduce grip uncertainty. Performance, gaze behaviour (using mobile eye-tracking) and self-reported mental workload were measured. Results showed that using a prosthesis (with or without Velcro) slowed task performance, impaired typical eye-hand coordination and increased mental workload compared to anatomical hand control. However, when using the prosthesis with Velcro, participants displayed better prosthesis control, more effective eye-hand coordination and reduced mental workload compared to when using the prosthesis without Velcro. These positive results indicate that reducing grip uncertainty could be a useful tool for encouraging more effective prosthesis control strategies in the early stages of prosthetic hand learning.

    Egocentric Computer Vision and Machine Learning for Simulated Prosthetic Vision

    Current visual prostheses can provide visual perception to people with certain forms of blindness. By bypassing the damaged part of the visual pathway, electrical stimulation of the retina or the nervous system elicits punctate percepts known as "phosphenes". Owing to physiological and technological limitations, the information patients receive has very low resolution and a reduced field of view and dynamic range, severely affecting the person's ability to recognize and navigate unfamiliar environments. In this context, the incorporation of new computer vision techniques is a key, active and open topic. In this thesis we focus especially on the problem of developing techniques to enhance the visual information received by the implanted patient, and we propose different simulated prosthetic vision systems for experimentation. First, we combined the output of two convolutional neural networks to detect informative structural edges and object silhouettes. We show how different scenes and objects can be recognized quickly even under the constrained conditions of prosthetic vision. Our method is well suited to indoor scene understanding compared with the traditional image-processing methods used in visual prostheses. Second, we present a new virtual reality system for more realistic simulated prosthetic vision environments using panoramic scenes, which allows us to systematically study object search and recognition performance. Panoramic scenes let subjects feel immersed in the scene by perceiving the entire scene (360 degrees). In the third contribution, we show how an augmented reality navigation system for prosthetic vision improves navigation performance by reducing the time and distance needed to reach targets, while also significantly reducing the number of obstacle collisions. Using a path-planning algorithm, the system routes the subject along a shorter, obstacle-free path. This work is currently under review. In the fourth contribution, we evaluate visual acuity by measuring the influence of the field of view relative to spatial resolution in visual prostheses, using a head-mounted display. To this end, we use simulated prosthetic vision in a virtual reality environment to simulate the real-life experience of using a retinal prosthesis. This work is currently under review. Finally, we propose a Spiking Neural Network (SNN) model that relies on biologically plausible mechanisms and uses an unsupervised learning scheme to obtain better computational algorithms and improve the performance of current visual prostheses. The proposed SNN model can use the downsampled signal from the information-processing unit of retinal prostheses without requiring retinal image analysis, providing useful information to blind people. This work is currently in preparation.
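The phosphene rendering that simulated prosthetic vision systems like the ones above rely on can be sketched minimally: downsample the input image to one intensity per electrode site and draw each site as a Gaussian blob. All names and parameters below are illustrative, not taken from the thesis.

```python
import numpy as np

def simulate_phosphenes(image, grid=(20, 20), sigma=1.5, out_size=(100, 100)):
    """Render a grayscale image as a coarse grid of Gaussian 'phosphenes'.

    A crude sketch of simulated prosthetic vision: the image is
    average-pooled down to `grid` electrode sites, and each site is drawn
    as a Gaussian blob whose brightness follows the local image intensity.
    """
    h, w = image.shape
    gy, gx = grid
    # One mean intensity per phosphene site.
    ys = np.linspace(0, h, gy + 1, dtype=int)
    xs = np.linspace(0, w, gx + 1, dtype=int)
    levels = np.array([[image[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].mean()
                        for j in range(gx)] for i in range(gy)])

    # Draw each site as a Gaussian blob on the output canvas.
    out = np.zeros(out_size)
    yy, xx = np.mgrid[0:out_size[0], 0:out_size[1]]
    cy = (np.arange(gy) + 0.5) * out_size[0] / gy
    cx = (np.arange(gx) + 0.5) * out_size[1] / gx
    for i in range(gy):
        for j in range(gx):
            blob = levels[i, j] * np.exp(
                -((yy - cy[i]) ** 2 + (xx - cx[j]) ** 2) / (2 * sigma ** 2))
            out = np.maximum(out, blob)  # brightest phosphene wins per pixel
    return out
```

Shrinking `grid` reproduces the low-resolution, low-dynamic-range conditions the abstract describes.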

    Sense of agency primes manual motor responses

    Perceiving the body influences how we perceive and respond to stimuli in the world. We investigated the respective effects of different components of bodily representation - the senses of ownership and agency - on responses to simple visual stimuli. Participants viewed a video image of their hand on a computer monitor presented either in real time, or with a systematic delay. Blocks began with an induction period in which the index finger was (i) brushed, (ii) passively moved, or (iii) actively moved by the participant. Subjective reports showed that the sense of ownership over the seen hand emerged with synchronous video, regardless of the type of induction, whereas the sense of agency over the hand emerged only following synchronous video with active movement. Following induction, participants responded as quickly as possible to the onset of visual stimuli near the hand by pressing a button with their other hand. Reaction time was significantly speeded when participants had a sense of agency over their seen hand. This effect was eliminated when participants responded vocally, suggesting that it reflects priming of manual responses, rather than enhanced stimulus detection. These results suggest that vision of one's own hand and, specifically, the sense of agency over that hand primes manual motor responses.

    Image Content Enhancement Through Salient Regions Segmentation for People With Color Vision Deficiencies

    Color vision deficiencies affect visual perception of colors and, more generally, color images. Several sciences such as genetics, biology, medicine, and computer vision are involved in studying and analyzing vision deficiencies. As we know from visual saliency findings, the human visual system tends to fixate on specific points and regions of an image in the first seconds of observation, summing up the most important and meaningful parts of the scene. In this article, we study the behavioral differences between normal and color-vision-deficient visual systems. We eye-tracked human fixations in the first 3 seconds of observation of color images to build real fixation point maps. One of our contributions is to detect the main differences between these visual systems by analyzing real fixation maps from people with and without color vision deficiencies. Another contribution is a method to enhance color regions of the image by using a detailed color mapping of the segmented salient regions of the given image. The segmentation is performed by using the difference between the original input image and the corresponding color-blind altered image. A second eye-tracking of color-blind people viewing the images enhanced by recoloring the segmented salient regions reveals that the real fixation points are then more coherent (up to 10%) with the normal visual system. The eye-tracking data collected during our experiments are in a publicly available dataset called Eye-Tracking of Color Vision Deficiencies.
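The segmentation signal described above - the difference between the original image and its color-blind-altered version - can be sketched in a few lines. The protanopia matrix below uses approximate Machado-style coefficients; treat both the values and the threshold as illustrative assumptions, not the paper's calibrated method.

```python
import numpy as np

# Approximate linear protanopia simulation matrix (Machado et al.-style
# coefficients; illustrative values, not calibrated to the paper).
PROTAN = np.array([[0.152, 1.053, -0.205],
                   [0.115, 0.786, 0.099],
                   [-0.004, -0.048, 1.052]])

def cvd_difference_mask(rgb, threshold=0.15):
    """Mask pixels whose appearance changes most under simulated protanopia.

    Following the idea in the abstract, the segmentation signal is the
    per-pixel difference between the original image and its
    color-blind-altered version: regions that change strongly are
    candidates for salient-region recoloring.
    """
    sim = np.clip(rgb @ PROTAN.T, 0.0, 1.0)    # simulate protanopia
    diff = np.linalg.norm(rgb - sim, axis=-1)  # per-pixel color shift
    return diff > threshold, sim
```

Saturated reds shift strongly under the simulation and are masked, while achromatic grays map onto themselves and are left alone.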

    The Speed, Precision and Accuracy of Human Multisensory Perception following Changes to the Visual Sense

    Human adults can combine information from multiple senses to improve their perceptual judgments. Visual and multisensory experience plays an important role in the development of multisensory integration; however, it is unclear to what extent changes in vision impact multisensory processing later in life. In particular, it is not known whether adults account for changes to the relative reliability of their senses following sensory loss, treatment or training. Using psychophysical methods, this thesis studied the multisensory processing of individuals experiencing changes to the visual sense. Chapters 2 and 3 assessed whether patients implanted with a retinal prosthesis (having been blinded by a retinal degenerative disease) could use this new visual signal with non-visual information to improve their speed or precision on multisensory tasks. Due to large differences between the reliabilities of the visual and non-visual cues, patients were not always able to benefit from the new visual signal. Chapter 4 assessed whether patients with degenerative visual loss adjust the weight given to visual and non-visual cues during audio-visual localization as their relative reliabilities change. Although some patients adjusted their reliance on vision across the visual field in line with predictions based on cue relative reliability, others - patients with visual loss limited to their central visual field only - did not. Chapter 5 assessed whether training with either more reliable or less reliable visual feedback could enable normally sighted adults to overcome an auditory localization bias. Findings suggest that visual information, irrespective of reliability, can be used to overcome at least some non-visual biases. In summary, this thesis documents multisensory changes following changes to the visual sense. The results improve our understanding of adult multisensory plasticity and have implications for successful treatments and rehabilitation following sensory loss.
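The reliability-based predictions the thesis tests come from the standard maximum-likelihood model of cue combination: each cue is weighted by its inverse variance, and the fused estimate is more precise than either cue alone. This is the textbook model, sketched here for orientation, not necessarily the exact analysis used in the thesis.

```python
def fuse_cues(x_v, sigma_v, x_a, sigma_a):
    """Maximum-likelihood fusion of a visual and an auditory estimate.

    Each cue's weight is its relative reliability (inverse variance):
        w_v = r_v / (r_v + r_a),  r = 1 / sigma^2
    The fused standard deviation is always below that of either cue,
    which is the behavioral benefit the thesis looks for.
    """
    r_v, r_a = 1.0 / sigma_v ** 2, 1.0 / sigma_a ** 2
    w_v = r_v / (r_v + r_a)
    x_hat = w_v * x_v + (1.0 - w_v) * x_a
    sigma_hat = (1.0 / (r_v + r_a)) ** 0.5
    return x_hat, sigma_hat
```

When one cue is far less reliable - as with the coarse signal from a retinal prosthesis - its optimal weight shrinks toward zero, which is why patients did not always benefit from the new visual input.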

    Photovoltaic Restoration of Central Vision in Atrophic Age-Related Macular Degeneration

    PURPOSE: Loss of photoreceptors in atrophic age-related macular degeneration results in severe visual impairment, although some peripheral vision is retained. To restore central vision without compromising the residual peripheral field, we developed a wireless photovoltaic retinal implant (PRIMA; Pixium Vision, Paris, France) in which pixels convert images projected from video glasses using near-infrared light into electric current to stimulate the nearby inner retinal neurons. DESIGN: We carried out a first-in-human clinical trial to test the safety and efficacy of the prosthesis in patients with geographic atrophy (ClinicalTrials.gov identifier, NCT03333954). PARTICIPANTS: Five patients with a zone of geographic atrophy of at least 3 optic disc diameters, no foveal light perception, and best-corrected visual acuity of 20/400 to 20/1000 in the worse-seeing study eye. METHODS: The 2-mm wide, 30-μm thick chip, containing 378 pixels (each 100 μm in diameter), was implanted subretinally in the area of atrophy (absolute scotoma). MAIN OUTCOME MEASURES: Anatomic outcomes were assessed with fundus photography and OCT for up to 12 months of follow-up. Prosthetic vision was assessed by mapping light perception, bar orientation, letter recognition, and Landolt C acuity. RESULTS: In all patients, the prosthesis was implanted successfully under the macula, although in 2 patients it was implanted in unintended locations: within the choroid and off center by 2 mm. All 5 patients could perceive white-yellow prosthetic visual patterns with adjustable brightness in the previous scotomata. The 3 with optimal placement of the implant demonstrated prosthetic acuity of 20/460 to 20/550, and the patient with the off-center implant demonstrated 20/800 acuity. Residual natural acuity did not decrease after implantation in any patient.
    CONCLUSIONS: Implantation of the PRIMA did not decrease the residual natural acuity, and it restored visual sensitivity in the former scotoma in each of the 5 patients. In the 3 patients with proper placement of the chip, prosthetic visual acuity was only 10% to 30% less than the level expected from the pixel pitch (20/420). Therefore, the use of optical or electronic magnification in the glasses, as well as smaller pixels in future implants, may improve visual acuity even further.
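The 20/420 pixel-pitch limit quoted above follows from a simple visual-angle calculation. The sketch below assumes the common approximation of ~288 μm of retina per degree of visual angle and treats one pixel pitch as the smallest resolvable detail; it reproduces the abstract's figure but is my back-of-envelope reconstruction, not the authors' calculation.

```python
def snellen_from_pitch(pitch_um, um_per_deg=288.0):
    """Expected Snellen denominator for a given electrode pixel pitch.

    20/20 acuity corresponds to resolving a 1-arcmin detail, so a detail
    of N arcmin corresponds to Snellen 20/(20 * N). A 100-um pitch
    subtends ~100/288 deg = ~20.8 arcmin, i.e. roughly 20/420.
    """
    arcmin = pitch_um / um_per_deg * 60.0  # pitch in minutes of arc
    return 20.0 * arcmin                   # Snellen denominator
```

The measured acuities of 20/460 to 20/550 then sit 10% to 30% below this geometric limit, as the conclusions state.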