25 research outputs found

    Human-in-the-Loop Optimization for Deep Stimulus Encoding in Visual Prostheses

    Full text link
    Neuroprostheses show potential in restoring lost sensory function and enhancing human capabilities, but the sensations produced by current devices often seem unnatural or distorted. Exact implant placement and differences in individual perception lead to significant variations in stimulus response, making personalized stimulus optimization a key challenge. Bayesian optimization could be used to optimize patient-specific stimulation parameters from a limited number of noisy observations, but it is not feasible for high-dimensional stimuli. Alternatively, deep learning models can optimize stimulus encoding strategies, but they typically assume perfect knowledge of patient-specific variations. Here we propose a novel, practically feasible approach that overcomes both of these fundamental limitations. First, a deep encoder network is trained to produce optimal stimuli for any individual patient by inverting a forward model that maps electrical stimuli to visual percepts. Second, a preferential Bayesian optimization strategy uses this encoder to optimize patient-specific parameters for a new patient, requiring only a minimal number of pairwise comparisons between candidate stimuli. We demonstrate the viability of this approach on a novel, state-of-the-art visual prosthesis model. We show that our approach quickly learns a personalized stimulus encoder, leads to dramatic improvements in the quality of restored vision, and is robust to noisy patient feedback and misspecifications in the underlying forward model. Overall, our results suggest that combining the strengths of deep learning and Bayesian optimization could significantly improve the perceptual experience of patients fitted with visual prostheses and may prove a viable solution for a range of neuroprosthetic technologies.
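The core idea of preference-based optimization from pairwise comparisons can be sketched in a few lines: the optimizer never observes a utility value, only noisy "A looks better than B" judgments, yet it can still locate a good parameter setting. This is a toy stand-in under invented assumptions (a 1-D parameter, a win-rate tournament in `pairwise_search`, Gaussian comparison noise), not the paper's deep-encoder or Gaussian-process machinery:

```python
import random

def preferred(u_a, u_b, noise, rng):
    """Noisy pairwise comparison: True if candidate A is judged better than B."""
    return (u_a - u_b) + rng.gauss(0.0, noise) > 0.0

def pairwise_search(candidates, utility, rounds=400, noise=0.05, seed=0):
    """Return the candidate with the best win rate under noisy preferences.

    The optimizer only ever sees comparison outcomes, never `utility` itself.
    """
    rng = random.Random(seed)
    wins = {c: 0 for c in candidates}
    plays = {c: 0 for c in candidates}
    for _ in range(rounds):
        a, b = rng.sample(candidates, 2)       # pick a random pair to compare
        plays[a] += 1
        plays[b] += 1
        if preferred(utility(a), utility(b), noise, rng):
            wins[a] += 1
        else:
            wins[b] += 1
    return max(candidates, key=lambda c: wins[c] / max(plays[c], 1))

# Toy 1-D "patient parameter" with a hidden optimum at 0.3.
cands = [round(i / 10, 1) for i in range(11)]
best = pairwise_search(cands, lambda t: -abs(t - 0.3))
print(best)
```

A real preferential Bayesian optimizer replaces the exhaustive tournament with a preference-learning surrogate model and an acquisition rule, so far fewer comparisons are needed; the sketch only illustrates why pairwise feedback alone suffices.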

    Egocentric Computer Vision and Machine Learning for Simulated Prosthetic Vision

    Get PDF
    Current visual prostheses can provide visual perception to people with certain forms of blindness. By bypassing the damaged part of the visual pathway, electrical stimulation in the retina or in the nervous system elicits punctate percepts known as "phosphenes". Owing to physiological and technological limitations, the information patients receive has a very low resolution and a reduced field of view and dynamic range, severely affecting a person's ability to recognize and navigate unfamiliar environments. In this context, the inclusion of new computer vision techniques is a key, active, and open topic. In this thesis we focus especially on the problem of developing techniques to enhance the visual information received by the implanted patient, and we propose different simulated prosthetic vision systems for experimentation.
    First, we combined the output of two convolutional neural networks to detect structurally informative edges and object silhouettes. We demonstrate how different scenes and objects can be recognized quickly even under the restricted conditions of prosthetic vision. Our method is well suited to indoor scene understanding compared with the traditional image-processing methods used in visual prostheses.
    Second, we present a new virtual reality system for more realistic simulated prosthetic vision environments using panoramic scenes, which allows us to systematically study object search and recognition performance. Panoramic scenes let subjects feel immersed in the scene by perceiving the entire scene (360 degrees).
    In the third contribution, we demonstrate how an augmented reality navigation system for prosthetic vision improves navigation performance by reducing the time and distance required to reach targets, while also significantly reducing the number of obstacle collisions. Using a path-planning algorithm, the system routes the subject along a shorter, obstacle-free path. This work is currently under review.
    In the fourth contribution, we evaluate visual acuity by measuring the influence of the field of view with respect to spatial resolution in visual prostheses through a head-mounted display. To this end, we use simulated prosthetic vision in a virtual reality environment to simulate the real-life experience of using a retinal prosthesis. This work is currently under review.
    Finally, we propose a Spiking Neural Network (SNN) model that builds on biologically plausible mechanisms and uses an unsupervised learning scheme to obtain better computational algorithms and improve the performance of current visual prostheses. The proposed SNN model can make use of the downsampled signal from the information-processing unit of retinal prostheses without performing retinal image analysis, providing useful information to the blind. This work is currently in preparation.
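The spiking building block that SNN models of this kind rest on can be illustrated with a leaky integrate-and-fire neuron, which converts a pixel intensity (input current) into a spike rate. This is a generic textbook sketch, not the thesis's actual network; the parameter values and the name `lif_spike_count` are invented for the example:

```python
def lif_spike_count(current, steps=100, dt=1.0, tau=10.0, v_th=1.0):
    """Leaky integrate-and-fire neuron: count spikes for a constant input.

    The membrane potential v leaks toward zero with time constant tau while
    being charged by the input; crossing the threshold v_th emits a spike
    and resets the potential.
    """
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += dt * (-v / tau + current)   # leaky integration
        if v >= v_th:                    # threshold crossing -> spike
            spikes += 1
            v = 0.0                      # reset after the spike
    return spikes

# Brighter pixels (larger input current) yield higher spike rates.
print(lif_spike_count(0.11), lif_spike_count(0.3))
```

An input too weak to ever reach the threshold (here, anything with `current * tau < v_th`) produces no spikes at all, which is why such encoders naturally suppress low-contrast background.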

    Machine Learning Methods for Image Analysis in Medical Applications, from Alzheimer's Disease, Brain Tumors, to Assisted Living

    Get PDF
    Healthcare has progressed greatly nowadays owing to technological advances, where machine learning plays an important role in processing and analyzing large amounts of medical data. This thesis investigates four healthcare-related problems (Alzheimer's disease detection, glioma classification, human fall detection, and obstacle avoidance in prosthetic vision), whose underlying methodologies are associated with machine learning and computer vision. For Alzheimer's disease (AD) diagnosis, apart from patients' symptoms, Magnetic Resonance Images (MRIs) also play an important role. Inspired by the success of deep learning, a new multi-stream multi-scale Convolutional Neural Network (CNN) architecture is proposed for AD detection from MRIs, where AD features are characterized at both the tissue level and the scale level for improved feature learning. Good classification performance is obtained for AD/NC (normal control) classification, with a test accuracy of 94.74%. In glioma subtype classification, biopsies are usually needed to determine the different molecular-based glioma subtypes. We investigate non-invasive glioma subtype prediction from MRIs using deep learning. A 2D multi-stream CNN architecture is used to learn glioma features from multi-modal MRIs, where the training dataset is enlarged with synthetic brain MRIs generated by pairwise Generative Adversarial Networks (GANs). A test accuracy of 88.82% has been achieved for IDH mutation (a molecular-based subtype) prediction. A new deep semi-supervised learning method is also proposed to tackle the problem of missing molecular-related labels in training datasets, improving the performance of glioma classification. In the other two applications, we address video-based human fall detection using co-saliency-enhanced Recurrent Convolutional Networks (RCNs), as well as obstacle avoidance in prosthetic vision by characterizing obstacle-related video features using a Spiking Neural Network (SNN).
These investigations can benefit future research, where artificial intelligence and deep learning may open new paths toward real medical applications.
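The multi-scale feature idea behind the proposed AD architecture can be illustrated with a toy example: the same edge filter is applied at two image scales, and the peak response at each scale is kept as a feature. This is a pure-Python sketch of the general technique, not the thesis's CNN; the names `conv2d_valid` and `multiscale_features` are hypothetical:

```python
def conv2d_valid(img, k):
    """'Valid' 2-D filtering (cross-correlation) with a 3x3 kernel."""
    h, w = len(img), len(img[0])
    out = []
    for i in range(h - 2):
        row = []
        for j in range(w - 2):
            row.append(sum(img[i + a][j + b] * k[a][b]
                           for a in range(3) for b in range(3)))
        out.append(row)
    return out

def downsample(img):
    """2x downsampling by keeping every other row and column."""
    return [row[::2] for row in img[::2]]

def multiscale_features(img, k):
    """Concatenate the peak filter response at two scales into one vector."""
    feats = []
    for scale_img in (img, downsample(img)):
        resp = conv2d_valid(scale_img, k)
        feats.append(max(max(r) for r in resp))
    return feats

# 8x8 image: left half dark (0), right half bright (1) -> one vertical edge.
img = [[0] * 4 + [1] * 4 for _ in range(8)]
sobel_x = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
print(multiscale_features(img, sobel_x))   # -> [4, 4]
```

In a real multi-stream CNN the kernels are learned rather than fixed, and each scale (or modality) feeds its own stream before fusion; the sketch only shows why responses computed at several scales carry complementary information.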

    Multisensory Approaches to Restore Visual Functions

    Get PDF

    Computational Approaches to Explainable Artificial Intelligence: Advances in Theory, Applications and Trends

    Get PDF
    Deep Learning (DL), a groundbreaking branch of Machine Learning (ML), has emerged as a driving force in both theoretical and applied Artificial Intelligence (AI). DL algorithms, rooted in complex and non-linear artificial neural systems, excel at extracting high-level features from data. DL has demonstrated human-level performance in real-world tasks, including clinical diagnostics, and has unlocked solutions to previously intractable problems in virtual agent design, robotics, genomics, neuroimaging, computer vision, and industrial automation. In this paper, the most relevant advances from the last few years in Artificial Intelligence (AI) and several applications to neuroscience, neuroimaging, computer vision, and robotics are presented, reviewed and discussed. In this way, we summarize the state of the art in AI methods, models and applications within a collection of works presented at the 9th International Conference on the Interplay between Natural and Artificial Computation (IWINAC). The works presented in this paper are excellent examples of new scientific discoveries made in laboratories that have successfully transitioned to real-life applications.

    FPGA design and implementation of a framework for optogenetic retinal prosthesis

    Get PDF
    PhD Thesis. There are 285 million people worldwide with a visual impairment, 39 million of whom are completely blind and 246 million of whom are partially blind, known as low-vision patients. In the UK and other developed Western countries, retinal dystrophy diseases are the primary cause of blindness, especially Age-Related Macular Degeneration (AMD), diabetic retinopathy and Retinitis Pigmentosa (RP). Various treatments and aids can help with these visual disorders, such as low-vision aids, gene therapy and retinal prostheses. Retinal prostheses consist of four main stages: the input stage (image acquisition), the high-level processing stage (image preparation and retinal encoding), the low-level processing stage (stimulation controller) and the output stage (image display on the opto-electronic micro-LED array). To date, only a limited number of full hardware implementations have been available for retinal prostheses. In this work, a photonic stimulation controller was designed and implemented. The main role of this controller is to improve the framework's power and timing results. It comprises, first, an even power distributor, used to distribute power evenly across image sub-frames to avoid a large power surge, especially with large arrays; this improves the framework's overall power results. Second, a pulse encoder was used to select different modes of operation for the opto-electronic micro-LED array, improving the framework's overall timing. The implementation was completed using reconfigurable hardware devices, i.e. Field Programmable Gate Arrays (FPGAs), to achieve high performance at an economical price.
    Moreover, this FPGA-based framework for an optogenetic retinal prosthesis aims to control the opto-electronic micro-LED array efficiently, and to interface and link the opto-electronic micro-LED array hardware architecture with the previously developed high-level retinal prosthesis image-processing algorithms. University of Jordan
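The even power distribution step can be sketched as a simple round-robin scheduler: the active LEDs of a frame are spread across sub-frames so that each sub-frame carries a near-equal share of the load, flattening the peak power draw. This is an illustrative software stand-in for the idea, not the thesis's FPGA implementation; the function name `split_subframes` is invented:

```python
def split_subframes(active_leds, k):
    """Round-robin the active LEDs of one frame into k sub-frames.

    Each sub-frame then drives at most ceil(len(active_leds) / k) LEDs at
    once, so peak power is roughly total power divided by k.
    """
    subframes = [[] for _ in range(k)]
    for i, led in enumerate(active_leds):
        subframes[i % k].append(led)
    return subframes

# 10 active LEDs split over 4 sub-frames: loads of 3, 3, 2, 2
# instead of a single surge driving all 10 simultaneously.
frames = split_subframes([(r, c) for r in range(2) for c in range(5)], 4)
loads = [len(f) for f in frames]
print(loads)   # -> [3, 3, 2, 2]
```

The trade-off is temporal: lowering peak power by a factor of k multiplies the number of sub-frames per image by k, which is why the thesis pairs the distributor with a pulse encoder to recover timing.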


    Refined electrophysiological recording and processing of neural signals from the retina and ascending visual pathways

    Get PDF
    The purpose of this thesis was the development of refined methods for recording and processing neural signals from the retina and ascending visual pathways. The first chapter briefly describes the fundamentals of the human visual system and the basics of functional testing of the retina and the visual pathways. The second and third chapters are dedicated to the processing of visual electrophysiological data using the newly developed software ERG Explorer, and present a proposal for an open, standardized data format, ElVisML, for future-proof storage of visual electrophysiological data. The fourth chapter describes the development and application of two novel electrodes: first, a contact lens electrode for recording electrical potentials of the ciliary muscle during accommodation, and second, the marble electrode, which is made of a super-absorbent polymer and allows preparation-free recording of visual evoked potentials. Results obtained in studies using both electrodes are presented. The fifth and last chapter presents the results of four studies within the field of visual electrophysiology. The first study examines the ophthalmological assessment of cannabis-induced perception disorder using electrophysiological methods. The second study presents a refined method for the objective assessment of visual acuity using visual evoked potentials, and therefore introduces a refined stimulus paradigm and a novel method for the analysis of the sweep VEP. The third study presents the results of a newly developed stimulus design for full-field electrophysiology, which makes it possible to record previously non-recordable electroretinograms. The last study relates the spatial frequency of a visual stimulus to the amplitudes of visual evoked potentials, in comparison with the BOLD response obtained using functional near-infrared spectroscopy and functional magnetic resonance imaging.
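A common first step in analyzing periodic evoked responses such as the sweep VEP is estimating the response amplitude at the stimulation frequency. The following is a generic single-bin DFT sketch of that step on synthetic data, not the thesis's refined analysis method; the signal parameters and the name `amplitude_at` are invented for the example:

```python
import math
import random

def amplitude_at(signal, freq, fs):
    """Amplitude of the sinusoidal component of `signal` at `freq` Hz.

    Projects the signal onto cos/sin at the target frequency (a single
    DFT bin) and converts the magnitude back to a peak amplitude.
    """
    n = len(signal)
    re = sum(s * math.cos(2 * math.pi * freq * i / fs) for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * i / fs) for i, s in enumerate(signal))
    return 2.0 * math.hypot(re, im) / n

# Synthetic "recording": a 7.5 Hz response of amplitude 4 buried in noise,
# sampled at 500 Hz for 4 s (an exact whole number of stimulus cycles).
fs, f_stim, amp = 500.0, 7.5, 4.0
rng = random.Random(1)
sig = [amp * math.sin(2 * math.pi * f_stim * i / fs) + rng.gauss(0.0, 1.0)
       for i in range(2000)]
print(amplitude_at(sig, f_stim, fs))
```

Because the analysis window spans a whole number of stimulus cycles, the estimate lands almost exactly on the true amplitude despite the added noise; real sweep-VEP pipelines add windowing, artifact rejection, and statistical significance testing on top of this projection.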