
    Egocentric Computer Vision and Machine Learning for Simulated Prosthetic Vision

    Current visual prostheses can provide visual perception to people with certain forms of blindness. Bypassing the damaged part of the visual pathway, electrical stimulation of the retina or the nervous system elicits punctate percepts known as "phosphenes". Owing to physiological and technological limitations, the information patients receive has very low resolution and a reduced field of view and dynamic range, severely affecting the person's ability to recognize objects and navigate unfamiliar environments. In this context, the inclusion of new computer vision techniques is an active and open key topic. In this thesis we focus especially on the problem of developing techniques to enhance the visual information the implanted patient receives, and we propose different simulated prosthetic vision systems for experimentation.

    First, we combined the output of two convolutional neural networks to detect informative structural edges and object silhouettes. We show how different scenes and objects can be quickly recognized even under the restricted conditions of prosthetic vision. Compared with the traditional image-processing methods used in visual prostheses, our method is well suited to indoor scene understanding.

    Second, we present a new virtual reality system for more realistic simulated prosthetic vision environments using panoramic scenes, which allows us to systematically study object search and recognition performance. Panoramic scenes let subjects feel immersed in the scene by perceiving it in full (360 degrees).

    In the third contribution we show how an augmented reality navigation system for prosthetic vision improves navigation performance, reducing the time and distance needed to reach targets and significantly reducing the number of obstacle collisions. Using a path-planning algorithm, the system routes the subject along a shorter, obstacle-free path. This work is currently under review.

    In the fourth contribution, we evaluate visual acuity by measuring the influence of the field of view with respect to spatial resolution in visual prostheses through a head-mounted display. To this end, we use simulated prosthetic vision in a virtual reality environment to simulate the real-life experience of using a retinal prosthesis. This work is currently under review.

    Finally, we propose a Spiking Neural Network (SNN) model that relies on biologically plausible mechanisms and uses an unsupervised learning scheme to obtain better computational algorithms and improve the performance of current visual prostheses. The proposed SNN model can make use of the downsampled signal from the information-processing unit of retinal prostheses, bypassing retinal image analysis and providing useful information to the blind. This work is currently in preparation.
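The low resolution and reduced dynamic range described in this abstract can be illustrated with a minimal simulated-phosphene renderer. This is an illustrative sketch, not the thesis's actual pipeline; the grid size and number of brightness levels are assumed values:

```python
import numpy as np

def simulate_phosphenes(image, grid=(32, 32), levels=4):
    """Render a grayscale image as a coarse phosphene map:
    average-pool to a low-resolution grid, then quantize the
    brightness to a handful of levels (reduced dynamic range)."""
    h, w = image.shape
    gh, gw = grid
    ys = np.linspace(0, h, gh + 1, dtype=int)
    xs = np.linspace(0, w, gw + 1, dtype=int)
    out = np.zeros(grid)
    for i in range(gh):
        for j in range(gw):
            out[i, j] = image[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].mean()
    # Quantize to `levels` brightness values in [0, 1].
    return np.round(out / 255 * (levels - 1)) / (levels - 1)

img = np.tile(np.linspace(0, 255, 64), (64, 1))  # horizontal gradient
ph = simulate_phosphenes(img)
```

Each cell of `ph` would drive one phosphene; real renderers typically add Gaussian dot profiles and dropout, omitted here for brevity.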

    RGB-D Scene Representations for Prosthetic Vision

    This thesis presents a new approach to scene representation for prosthetic vision. Structurally salient information from the scene is conveyed through the prosthetic vision display. Given the low resolution and dynamic range of the display, this enables robust identification and reliable interpretation of key structural features that are missed when using standard appearance-based scene representations. Specifically, two different types of salient structure are investigated: salient edge structure, for depiction of scene shape to the user; and salient object structure, for emulation of biological attention deployment when viewing a scene. This thesis proposes and evaluates novel computer vision algorithms for extracting salient edge and salient object structure from RGB-D input. Extraction of salient edge structure from the scene is first investigated through low-level analysis of surface shape. Our approach is based on the observation that regions of irregular surface shape, such as the boundary between the wall and the floor, tend to be more informative of scene structure than uniformly shaped regions. We detect these surface irregularities through multi-scale analysis of iso-disparity contour orientations, providing a real time method that robustly identifies important scene structure. This approach is then extended by using a deep CNN to learn high level information for distinguishing salient edges from structural texture. A novel depth input encoding called the depth surface descriptor (DSD) is presented, which better captures scene geometry that corresponds to salient edges, improving the learned model. These methods provide robust detection of salient edge structure in the scene. The detection of salient object structure is first achieved by noting that salient objects often have contrasting shape from their surroundings. Contrasting shape in the depth image is captured through the proposed histogram of surface orientations (HOSO) feature. 
This feature is used to modulate depth and colour contrast in a saliency detection framework, improving the precision of saliency seed regions and through this the accuracy of the final detection. After this, a novel formulation of structural saliency is introduced based on the angular measure of local background enclosure (LBE). This formulation addresses fundamental limitations of depth contrast methods and is not reliant on foreground depth contrast in the scene. Saliency is instead measured through the degree to which a candidate patch exhibits foreground structure. The effectiveness of the proposed approach is evaluated through both standard datasets as well as user studies that measure the contribution of structure-based representations. Our methods are found to more effectively measure salient structure in the scene than existing methods. Our approach results in improved performance compared to standard methods during practical use of an implant display.
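The angular intuition behind local background enclosure can be sketched with a simplified pixel-wise approximation: a point is salient to the degree that background (deeper pixels) surrounds it in many angular directions. This is a toy version, not the published LBE algorithm, and the radius, direction count, and depth margin are assumed parameters:

```python
import numpy as np

def local_background_enclosure(depth, y, x, radius=5, n_dirs=16, t=0.1):
    """Fraction of angular directions around (y, x) that contain
    background, i.e. at least one pixel deeper than the centre
    by a margin t within the search radius."""
    h, w = depth.shape
    enclosed = 0
    for k in range(n_dirs):
        angle = 2 * np.pi * k / n_dirs
        dy, dx = np.sin(angle), np.cos(angle)
        for r in range(1, radius + 1):
            yy = int(round(y + dy * r))
            xx = int(round(x + dx * r))
            if 0 <= yy < h and 0 <= xx < w and depth[yy, xx] > depth[y, x] + t:
                enclosed += 1
                break
    return enclosed / n_dirs

# A foreground bump (smaller depth) centred in a deeper background plane.
d = np.full((21, 21), 2.0)
d[8:13, 8:13] = 1.0
score = local_background_enclosure(d, 10, 10)  # fully enclosed by background
```

A pixel on the flat background plane has no deeper surroundings and scores zero, whereas the centre of the bump is enclosed in every direction.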

    Patient-Specific Implants in Musculoskeletal (Orthopedic) Surgery

    Most of the treatments in medicine are patient specific, aren't they? So why should we bother with individualizing implants if we adapt our therapy to patients anyway? Looking at the neighboring field of oncologic treatment, you would not question the fact that individualization of tumor therapy with personalized antibodies has led to the thriving of this field in terms of success in patient survival and positive responses to alternatives for conventional treatments. Regarding the latest cutting-edge developments in orthopedic surgery and biotechnology, including new imaging techniques and 3D-printing of bone substitutes as well as implants, we do have an armamentarium available to stimulate the race for innovation in medicine. This Special Issue of the Journal of Personalized Medicine will gather all relevant new and developed techniques already in clinical practice. Examples include the developments in revision arthroplasty and tumor (pelvic replacement) surgery to recreate individual defects, individualized implants for primary arthroplasty to establish physiological joint kinematics, and personalized implants in fracture treatment, to name but a few.

    Wayfinding with Simulated Prosthetic Vision: Performance comparison with regular and structure-enhanced renderings

    In this study, we used a simulation of upcoming low-resolution visual neuroprostheses to evaluate the benefit of embedded computer vision techniques in a wayfinding task. We showed that augmenting the classical phosphene rendering with the basic structure of the environment, displaying the ground plane with a different level of brightness, increased both wayfinding performance and cognitive mapping. In spite of the low resolution of current and upcoming visual implants, the improvement of these cognitive functions may already be possible with embedded artificial vision algorithms.
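The structure-enhanced rendering this abstract describes, showing the ground plane at a distinct brightness level, can be sketched minimally. The grid size and brightness value below are assumptions for illustration, not the study's parameters:

```python
import numpy as np

def structure_enhanced_rendering(phosphenes, ground_mask, ground_level=0.4):
    """Overlay the segmented ground plane at a distinct brightness
    so the walkable area stands out in the phosphene frame."""
    out = phosphenes.copy()
    out[ground_mask] = ground_level
    return out

frame = np.zeros((6, 6))              # empty low-resolution phosphene frame
ground = np.zeros((6, 6), dtype=bool)
ground[4:, :] = True                  # bottom rows segmented as ground plane
out = structure_enhanced_rendering(frame, ground)
```

In a full system the mask would come from a plane-fitting or segmentation step on camera input; here it is supplied by hand.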

    The brain as image processor and generator: towards function-restoring brain-computer-interfaces

    As neuroscientists are slowly unraveling the mysteries of the brain, neurotechnology like brain-computer-interfaces (BCIs) might become a new standard for medical applications in those with brain injuries. BCIs allow for direct communication between the brain and a device, and could potentially restore links that are broken due to brain damage. In addition, a better understanding of the human mind and its mechanisms could greatly boost the success of these devices. This dissertation features (high-field) functional magnetic resonance imaging (fMRI) to study human cognitive functioning, as fMRI allows for studying the brain of living humans in great spatial detail. Firstly, the dissertation describes how well brain regions that are important for visual perception can be located between individuals. Some of these regions are in part responsible for recognizing objects like faces, bodies, places and motion. Secondly, differences in functional organization of the brain were explored between individuals by simulating the placement of a visual cortical prosthesis. Such a prosthesis can bypass the (broken) connections between the eye and brain in blind people, and potentially restore a rudimentary form of vision. Finally, new techniques were presented that show that visual perception and mental imagery are closely related, and allow for reading letter shapes directly from the mind. Together, this dissertation adds new foundations for the development of neurotechnological applications.

    A Systematic Review of Extended Reality (XR) for Understanding and Augmenting Vision Loss

    Over the past decade, extended reality (XR) has emerged as an assistive technology not only to augment residual vision of people losing their sight but also to study the rudimentary vision restored to blind people by a visual neuroprosthesis. To make the best use of these emerging technologies, it is valuable and timely to understand the state of this research and identify any shortcomings that are present. Here we present a systematic literature review of 227 publications from 106 different venues assessing the potential of XR technology to further visual accessibility. In contrast to other reviews, we sample studies from multiple scientific disciplines, focus on augmentation of a person's residual vision, and require studies to feature a quantitative evaluation with appropriate end users. We summarize prominent findings from different XR research areas, show how the landscape has changed over the last decade, and identify scientific gaps in the literature. Specifically, we highlight the need for real-world validation, the broadening of end-user participation, and a more nuanced understanding of the suitability and usability of different XR-based accessibility aids. By broadening end-user participation to early stages of the design process and shifting the focus from behavioral performance to qualitative assessments of usability, future research has the potential to develop XR technologies that may not only allow for studying vision loss, but also enable novel visual accessibility aids with the potential to impact the lives of millions of people living with vision loss.

    Integration of Assistive Technologies into 3D Simulations: Exploratory Studies

    Virtual worlds and environments have many purposes, ranging from games to scientific research. However, universal accessibility features in such virtual environments are limited. As the impairment prevalence rate increases yearly, so does the research interest in the field of assistive technologies. This work introduces research in assistive technologies and presents three software developments that explore the integration of assistive technologies within virtual environments, with a strong focus on Brain-Computer Interfaces. An accessible gaming system, a hands-free navigation software system, and a Brain-Computer Interaction plugin have been developed to study the capabilities of accessibility features within virtual 3D environments. Details of the specification, design, and implementation of these software applications are presented in the thesis. Observations and preliminary results as well as directions of future work are also included.

    FeetBack – Redirecting touch sensation from a prosthetic hand to the human foot

    Introduction: Adding sensory feedback to myoelectric prosthetic hands was shown to enhance the user experience in terms of controllability and device embodiment. Often this is realized non-invasively by adding devices, such as actuators or electrodes, within the prosthetic shaft to deliver the desired feedback. However, adding a feedback system in the socket adds more weight, steals valuable space, and may interfere with myoelectric signals. To circumvent said drawbacks we tested for the first time whether force feedback from a prosthetic hand could be redirected to another similarly sensitive part of the body: the foot. Methods: We developed a vibrotactile insole that vibrates depending on the sensed force on the prosthetic fingers. This self-controlled clinical pilot trial included four experienced users of myoelectric prostheses. The participants solved two types of tasks with the artificial hands: 1) sorting objects depending on their plasticity with the feedback insole but without audio-visual feedback, and 2) manipulating fragile, heavy, and delicate objects with and without the feedback insole. The sorting task was evaluated with Goodman-Kruskal's gamma for ranked correlation. The manipulation tasks were assessed by the success rate. Results: The results from the sorting task with vibrotactile feedback showed a substantial positive effect. The success rates for manipulation tasks with fragile and heavy objects were high under both conditions (feedback on or off, respectively). The manipulation task with delicate objects revealed inferior success with feedback in three of four participants. Conclusion: We introduced a novel approach to touch sensation in myoelectric prostheses. The results for the sorting task and the manipulation tasks diverged. This is likely linked to the availability of various feedback sources. Our results for redirected feedback to the feet fall in line with previous similar studies that applied feedback to the residual arm.
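The sorting outcome above was scored with Goodman-Kruskal's gamma, a rank-correlation statistic based on concordant and discordant pairs. A minimal sketch of that statistic follows; the example rankings are hypothetical, not the trial's data:

```python
from itertools import combinations

def goodman_kruskal_gamma(x, y):
    """Gamma = (C - D) / (C + D) over concordant (C) and
    discordant (D) pairs; tied pairs are ignored."""
    concordant = discordant = 0
    for (x1, y1), (x2, y2) in combinations(zip(x, y), 2):
        s = (x1 - x2) * (y1 - y2)
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    return (concordant - discordant) / (concordant + discordant)

true_order = [1, 2, 3, 4]  # true ordering of the objects
reported   = [1, 2, 4, 3]  # ordering produced by a participant
g = goodman_kruskal_gamma(true_order, reported)  # one swapped pair
```

With one of the six item pairs discordant, gamma is (5 - 1) / 6 = 2/3; perfect agreement yields 1.0.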

    Doctor of Philosophy

    This dissertation provides an in-depth evaluation of microstimulation of the primary visual cortex (V1) using chronically implanted Utah Electrode Arrays (UEAs) in macaque monkeys for use as a visual prosthesis. Within the scope of this dissertation are several significant contributions. First, a minimally invasive and robust device for head fixation was developed. In comparison to other available designs, this device improved long-term outcomes by providing a stronger, less invasive interface that reduced the risk of infection. This device made it possible to acquire chronic microstimulation data in macaque monkeys. It has been tested on three animals and has provided a stable interface for over two years. Second, this dissertation is the first to describe the factors influencing the performance and safety of microstimulation of V1 with the UEA. Two UEAs were implanted in V1 of two macaque monkeys, and experiments were performed several months following implantation. The electrical and recording properties of the electrodes and the high-resolution visuotopic organization of V1 were measured. In addition, threshold stimulation levels that evoked behavioural responses using single electrodes were determined. Periodic microstimulation at currents up to 96 μA did not impair the ability to record neural signals and did not affect the animal's vision where the UEAs were implanted. It was discovered, however, that microstimulation at these levels evoked behavioural responses on only 8 of 82 systematically stimulated electrodes. It was suggested that the ability to evoke behavioural responses may depend on the location of the electrode tips within the cortical layers of V1, the distance of the electrode tips to neuronal somata, and the inability of nonhuman primates to recognize and respond to a generalized set of evoked percepts.
Finally, this dissertation is the first to describe the spatial and temporal characteristics of microstimulation of V1 with the UEA over chronic time periods. Two years after implantation, it was found that consistent behavioural responses could be evoked during simultaneous stimulation of multiple contiguous electrodes. Saccades to electrically-evoked targets using groups of nine electrodes showed that the animal could discriminate spatially distinct percepts with a resolution comparable to current epiretinal prostheses. These results demonstrate chronic perceptual functionality and provide evidence for the feasibility of a UEA-based visual prosthesis for the blind.

    Successful tactile based visual sensory substitution use functions independently of visual pathway integrity

    Purpose: Neuronal reorganization after blindness is of critical interest because it has implications for the rational prescription of artificial vision devices. The purpose of this study was to distinguish the microstructural differences between perinatally blind (PB), acquired blind (AB), and normally sighted controls (SCs) and relate these differences to performance on functional tasks using a sensory substitution device (BrainPort). Methods: We enrolled 52 subjects (PB n = 11; AB n = 35; SC n = 6). All subjects spent 15 h undergoing BrainPort device training. Outcomes of light perception, motion, direction, temporal resolution, grating, and acuity were tested at baseline and after training. Twenty-six of the subjects were scanned with a three Tesla MRI scanner for diffusion tensor imaging (DTI), and with a positron emission tomography (PET) scanner for mapping regional brain glucose consumption during sensory substitution function. Non-parametric models were used to analyze fractional anisotropy (FA; a DTI measure of microstructural integrity) of the brain via region-of-interest (ROI) analysis and tract-based spatial statistics (TBSS). Results: At baseline, all subjects performed all tasks at chance level. After training, light perception, temporal resolution, location and grating acuity tasks improved significantly for all subject groups. ROI and TBSS analyses of FA maps show areas of statistically significant differences (p ≤ 0.025) in the bilateral optic radiations and some visual association connections between all three groups. No relationship was found between FA and functional performance with the BrainPort. Discussion: All subjects showed performance improvements using the BrainPort irrespective of nature and duration of blindness. Definite brain areas with significant microstructural integrity changes exist among PB, AB, and SC, and these variations are most pronounced in the visual pathways.
However, the use of sensory substitution devices is feasible irrespective of microstructural integrity of the primary visual pathways between the eye and the brain. Therefore, tongue-based devices may be usable for a broad array of non-sighted patients. © 2014 Lee, Nau, Laymon, Chan, Rosario and Fisher
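Fractional anisotropy, the DTI measure analyzed in this study, is computed from the three eigenvalues of the diffusion tensor with a standard formula; a short sketch with illustrative (not study) eigenvalues:

```python
import numpy as np

def fractional_anisotropy(l1, l2, l3):
    """FA = sqrt(3/2 * sum((lam_i - mean)^2) / sum(lam_i^2)),
    ranging from 0 (isotropic) to 1 (fully anisotropic)."""
    lam = np.array([l1, l2, l3], dtype=float)
    md = lam.mean()  # mean diffusivity
    return np.sqrt(1.5 * np.sum((lam - md) ** 2) / np.sum(lam ** 2))

fa_iso = fractional_anisotropy(1.0, 1.0, 1.0)    # isotropic diffusion
fa_fiber = fractional_anisotropy(1.7, 0.3, 0.3)  # cigar-shaped tensor
```

Equal eigenvalues give FA = 0; a tensor elongated along one axis, as in a coherent white-matter tract such as the optic radiations, gives FA approaching 1.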