64 research outputs found

    Egocentric Computer Vision and Machine Learning for Simulated Prosthetic Vision

    Current visual prostheses can provide visual perception to people with certain forms of blindness. By bypassing the damaged part of the visual pathway, electrical stimulation of the retina or the nervous system elicits punctate percepts known as “phosphenes”. Owing to physiological and technological limitations, the information patients receive has a very low resolution and a reduced field of view and dynamic range, seriously affecting the person's ability to recognize and navigate unknown environments. In this context, the inclusion of new computer vision techniques is a key, active, and open topic. In this thesis we focus especially on the problem of developing techniques to enhance the visual information received by the implanted patient, and we propose different simulated prosthetic vision systems for experimentation.
    First, we combined the output of two convolutional neural networks to detect informative structural edges and object silhouettes. We demonstrate how different scenes and objects can be quickly recognized even under the restricted conditions of prosthetic vision. Our method is well suited to indoor scene understanding compared with the traditional image-processing methods used in visual prostheses.
    Second, we present a new virtual reality system for more realistic simulated prosthetic vision environments using panoramic scenes, which allows us to systematically study object search and recognition performance. Panoramic scenes let subjects feel immersed in the scene by perceiving the entire scene (360 degrees).
    In the third contribution, we show how an augmented reality navigation system for prosthetic vision aids navigation performance by reducing the time and distance needed to reach targets, while also significantly reducing the number of obstacle collisions. By using a path-planning algorithm, the system routes the subject along a shorter, obstacle-free path. This work is currently under review.
    In the fourth contribution, we assess visual acuity by measuring the influence of the field of view with respect to spatial resolution in visual prostheses through a head-mounted display. To this end, we use simulated prosthetic vision in a virtual reality environment to simulate the real-life experience of using a retinal prosthesis. This work is currently under review.
    Finally, we propose a Spiking Neural Network (SNN) model that builds on biologically plausible mechanisms and uses an unsupervised learning scheme to obtain better computational algorithms and improve the performance of current visual prostheses. The proposed SNN model can make use of the downsampled signal from the information-processing unit of retinal prostheses, bypassing retinal image analysis and providing useful information to blind users. This work is currently in preparation.
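    The simulated prosthetic vision setups described above can be illustrated with a minimal phosphene-rendering sketch: downsample a grayscale image to the electrode grid and render each active electrode as a Gaussian dot of light. This is a generic illustration, not the thesis's actual pipeline; all names and parameter values are hypothetical.

    ```python
    import numpy as np

    def phosphene_render(image, grid=(32, 32), out_size=256, sigma=3.0):
        """Map a grayscale image (values in [0, 1]) onto a coarse phosphene grid.

        Each cell of the electrode grid is averaged to one brightness value,
        then rendered as a Gaussian 'dot of light' -- a crude stand-in for the
        punctate percepts evoked by retinal or cortical stimulation.
        """
        h, w = image.shape
        gy, gx = grid
        # Average brightness per electrode cell (block pooling).
        levels = image[: h - h % gy, : w - w % gx]
        levels = levels.reshape(gy, h // gy, gx, w // gx).mean(axis=(1, 3))

        # Render each electrode as a Gaussian spot on the output canvas.
        canvas = np.zeros((out_size, out_size))
        ys = np.linspace(out_size / (2 * gy), out_size - out_size / (2 * gy), gy)
        xs = np.linspace(out_size / (2 * gx), out_size - out_size / (2 * gx), gx)
        yy, xx = np.mgrid[0:out_size, 0:out_size]
        for i, cy in enumerate(ys):
            for j, cx in enumerate(xs):
                if levels[i, j] > 0:
                    canvas += levels[i, j] * np.exp(
                        -((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma**2)
                    )
        return np.clip(canvas, 0.0, 1.0)
    ```

    The grid size, dot profile, and dynamic range are exactly the parameters that such simulations vary to model the physiological limits discussed above.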

    Real-world indoor mobility with simulated prosthetic vision: The benefits and feasibility of contour-based scene simplification at different phosphene resolutions

    Neuroprosthetic implants are a promising technology for restoring some form of vision in people with visual impairments via electrical neurostimulation in the visual pathway. Although an artificially generated prosthetic percept is relatively limited compared with normal vision, it may provide some elementary perception of the surroundings, re-enabling daily living functionality. For mobility in particular, various studies have investigated the benefits of visual neuroprosthetics in a simulated prosthetic vision paradigm with varying outcomes. The previous literature suggests that scene simplification via image processing, and particularly contour extraction, may potentially improve the mobility performance in a virtual environment. In the current simulation study with sighted participants, we explore both the theoretically attainable benefits of strict scene simplification in an indoor environment by controlling the environmental complexity, as well as the practically achieved improvement with a deep learning-based surface boundary detection implementation compared with traditional edge detection. A simulated electrode resolution of 26 x 26 was found to provide sufficient information for mobility in a simple environment. Our results suggest that, for a lower number of implanted electrodes, the removal of background textures and within-surface gradients may be beneficial in theory. However, the deep learning-based implementation for surface boundary detection did not improve mobility performance in the current study. Furthermore, our findings indicate that, for a greater number of electrodes, the removal of within-surface gradients and background textures may deteriorate, rather than improve, mobility. Therefore, finding a balanced amount of scene simplification requires a careful tradeoff between informativity and interpretability that may depend on the number of implanted electrodes.
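    The "traditional edge detection" baseline referred to above can be sketched in a few lines: compute a Sobel gradient-magnitude edge map, threshold it, and downsample to the simulated electrode resolution. This is a generic illustration of contour-based scene simplification, not the study's actual implementation; the threshold and grid values are placeholders.

    ```python
    import numpy as np

    def sobel_edges(image):
        """Gradient-magnitude edge map via 3x3 Sobel filters (plain NumPy)."""
        kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
        ky = kx.T
        pad = np.pad(image, 1, mode="edge")
        gx = np.zeros(image.shape, dtype=float)
        gy = np.zeros(image.shape, dtype=float)
        for dy in range(3):
            for dx in range(3):
                window = pad[dy : dy + image.shape[0], dx : dx + image.shape[1]]
                gx += kx[dy, dx] * window
                gy += ky[dy, dx] * window
        return np.hypot(gx, gy)

    def simplify_scene(image, grid=(26, 26), threshold=0.5):
        """Contour-based simplification: keep only strong edges, then
        downsample to the simulated electrode resolution (26 x 26 being the
        resolution found sufficient for a simple environment)."""
        edges = sobel_edges(image)
        if edges.max() > 0:
            edges = edges / edges.max()
        h, w = edges.shape
        gy, gx = grid
        blocks = edges[: h - h % gy, : w - w % gx]
        blocks = blocks.reshape(gy, h // gy, gx, w // gx).max(axis=(1, 3))
        return (blocks > threshold).astype(float)
    ```

    The study's point about the informativity/interpretability tradeoff maps directly onto these knobs: a lower threshold keeps more scene detail but clutters the coarse phosphene grid, while a higher one yields a cleaner but less informative percept.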

    Chronic stability of a neuroprosthesis comprising multiple adjacent Utah arrays in monkeys

    Get PDF
    Objective. Electrical stimulation of visual cortex via a neuroprosthesis induces the perception of dots of light ('phosphenes'), potentially allowing recognition of simple shapes even after decades of blindness. However, restoration of functional vision requires large numbers of electrodes, and chronic, clinical implantation of intracortical electrodes in the visual cortex has only been achieved using devices of up to 96 channels. We evaluated the efficacy and stability of a 1024-channel neuroprosthesis system in non-human primates (NHPs) over more than 3 years to assess its suitability for long-term vision restoration. Approach. We implanted 16 microelectrode arrays (Utah arrays) consisting of 8 x 8 electrodes with iridium oxide tips in the primary visual cortex (V1) and visual area 4 (V4) of two sighted macaques. We monitored the animals' health and measured electrode impedances and neuronal signal quality by calculating signal-to-noise ratios of visually driven neuronal activity, peak-to-peak voltages of the waveforms of action potentials, and the number of channels with high-amplitude signals. We delivered cortical microstimulation and determined the minimum current that could be perceived, monitoring the number of channels that successfully yielded phosphenes. We also examined the influence of the implant on a visual task after 2-3 years of implantation and determined the integrity of the brain tissue with a histological analysis 3-3.5 years post-implantation. Main results. The monkeys remained healthy throughout the implantation period and the device retained its mechanical integrity and electrical conductivity. However, we observed decreasing signal quality with time, declining numbers of phosphene-evoking electrodes, decreases in electrode impedances, and impaired performance on a visual task at visual field locations corresponding to implanted cortical regions. Current thresholds increased with time in one of the two animals.
The histological analysis revealed encapsulation of arrays and cortical degeneration. Scanning electron microscopy on one array revealed degradation of the IrOx coating and higher impedances for electrodes with broken tips. Significance. Long-term implantation of a high-channel-count device in NHP visual cortex was accompanied by deformation of cortical tissue and decreased stimulation efficacy and signal quality over time. We conclude that improvements in device biocompatibility and/or refinement of implantation techniques are needed before future clinical use is feasible.
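    The signal-quality metrics tracked in this study (signal-to-noise ratio and peak-to-peak waveform voltage) can be sketched with illustrative definitions; the paper's exact formulas may differ, and the function name and SNR convention here are assumptions.

    ```python
    import numpy as np

    def waveform_quality(waveforms):
        """Illustrative spike-waveform quality metrics.

        waveforms: (n_spikes, n_samples) array of spike snippets (microvolts).
        Returns the peak-to-peak voltage of the mean waveform and an SNR
        defined here as that peak-to-peak amplitude divided by the standard
        deviation of the residuals around the mean waveform.
        """
        mean_wf = waveforms.mean(axis=0)
        p2p = mean_wf.max() - mean_wf.min()       # peak-to-peak voltage
        noise_sd = (waveforms - mean_wf).std()    # residual 'noise' spread
        snr = p2p / noise_sd if noise_sd > 0 else np.inf
        return p2p, snr
    ```

    Tracking such metrics per channel over months is what reveals the gradual decline in signal quality and the drop in the number of high-amplitude channels reported above.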

    A Systematic Review of Extended Reality (XR) for Understanding and Augmenting Vision Loss

    Over the past decade, extended reality (XR) has emerged as an assistive technology not only to augment residual vision of people losing their sight but also to study the rudimentary vision restored to blind people by a visual neuroprosthesis. To make the best use of these emerging technologies, it is valuable and timely to understand the state of this research and identify any shortcomings that are present. Here we present a systematic literature review of 227 publications from 106 different venues assessing the potential of XR technology to further visual accessibility. In contrast to other reviews, we sample studies from multiple scientific disciplines, focus on augmentation of a person's residual vision, and require studies to feature a quantitative evaluation with appropriate end users. We summarize prominent findings from different XR research areas, show how the landscape has changed over the last decade, and identify scientific gaps in the literature. Specifically, we highlight the need for real-world validation, the broadening of end-user participation, and a more nuanced understanding of the suitability and usability of different XR-based accessibility aids. By broadening end-user participation to early stages of the design process and shifting the focus from behavioral performance to qualitative assessments of usability, future research has the potential to develop XR technologies that may not only allow for studying vision loss, but also enable novel visual accessibility aids with the potential to impact the lives of millions of people living with vision loss.