71 research outputs found

    Computational modelling of salamander retinal ganglion cells using machine learning approaches

    Artificial vision using computational models that can mimic biological vision is an area of ongoing research. One of the main themes within this research is the study of the retina and, in particular, the retinal ganglion cells responsible for encoding visual stimuli. A common approach to modelling the internal processes of retinal ganglion cells is the linear–nonlinear cascade model, which describes the cell's response using a linear filter followed by a static nonlinearity. However, the resulting model is generally restrictive, as it is often a poor estimator of the neuron's response. In this paper we present an alternative to the linear–nonlinear model by modelling retinal ganglion cells using a number of machine learning techniques with a proven track record for learning complex nonlinearities in many different domains. A comparison of the model-predicted spike rates shows that the machine learning models outperform the standard linear–nonlinear approach in the case of temporal white-noise stimuli.
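    As a concrete illustration, the linear–nonlinear cascade described above can be sketched in a few lines. The filter shape, gain, and threshold below are hypothetical placeholders, not values fitted to recorded cells.

```python
import numpy as np

# Minimal sketch of a linear-nonlinear (LN) cascade model driven by a
# 1D temporal white-noise stimulus. All parameters are illustrative.

rng = np.random.default_rng(0)

# Temporal stimulus: white noise, one value per time bin
stimulus = rng.standard_normal(1000)

# Linear stage: a biphasic temporal filter (hypothetical shape)
t = np.arange(20)
temporal_filter = np.exp(-t / 4.0) * np.sin(t / 2.0)

# Convolve stimulus with the filter ("valid" keeps fully overlapped bins)
drive = np.convolve(stimulus, temporal_filter, mode="valid")

# Static nonlinearity: rectifying function mapping drive to firing rate
def nonlinearity(x, gain=10.0, threshold=0.5):
    return gain * np.maximum(x - threshold, 0.0)

rate = nonlinearity(drive)  # one predicted firing rate per time bin
```

    The restrictiveness the abstract points to lives in this structure: a single fixed filter and a memoryless output function, which the machine learning alternatives relax.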

    Bio-Inspired Approach to Modelling Retinal Ganglion Cells using System Identification Techniques

    The processing capabilities of biological vision systems are still vastly superior to those of artificial vision, even though the latter has been an active area of research for over half a century. Current artificial vision techniques integrate many insights from biology, yet they remain far off the capabilities of animals and humans in terms of speed, power, and performance. A key aspect of modeling the human visual system is the ability to accurately model the behavior and computation within the retina. In particular, we focus on modeling retinal ganglion cells (RGCs), as they convey the accumulated data of real-world images as action potentials to the visual cortex via the optic nerve. Computational models that approximate the processing within RGCs can be derived by quantitatively fitting sets of physiological data using an input–output analysis, where the input is a known stimulus and the output is neuronal recordings. Currently, these input–output responses are modeled using computational combinations of linear and nonlinear models that are generally complex and lack any relevance to the underlying biophysics. In this paper, we illustrate how system identification techniques, which take inspiration from biological systems, can accurately model retinal ganglion cell behavior and are a viable alternative to traditional linear–nonlinear approaches.
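    One standard step in the input–output analysis mentioned above is the spike-triggered average (STA), which recovers a cell's linear temporal filter from a white-noise stimulus and a recorded spike train. The stimulus, spike train, and window length below are hypothetical stand-ins, not the paper's data.

```python
import numpy as np

# Minimal sketch of spike-triggered averaging: estimate a temporal
# filter by averaging the stimulus segment that preceded each spike.

rng = np.random.default_rng(1)
window = 15                          # filter length in time bins
stimulus = rng.standard_normal(5000)
spikes = rng.random(5000) < 0.05     # stand-in for recorded spikes

sta = np.zeros(window)
count = 0
for t in np.flatnonzero(spikes):
    if t >= window:                  # need a full window before the spike
        sta += stimulus[t - window:t]
        count += 1
sta /= count                         # average over all usable spikes
```

    With real recordings, the same loop runs over measured spike times, and the resulting filter seeds whichever model family (linear–nonlinear or system-identification based) is being fitted.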

    Temporal Coding Model of Spiking Output for Retinal Ganglion Cells


    Advancing models of the visual system using biologically plausible unsupervised spiking neural networks

    Spikes are thought to provide a fundamental unit of computation in the nervous system. The retina is known to use the relative timing of spikes to encode visual input, whereas primary visual cortex (V1) exhibits sparse and irregular spiking activity – but what do these different spiking patterns represent about sensory stimuli? To address this question, I set out to model the retina and V1 using a biologically realistic spiking neural network (SNN), exploring the idea that temporal prediction underlies the sensory transformation of natural inputs. Firstly, I trained a recurrently connected SNN of excitatory and inhibitory units to predict the sensory future in natural movies under metabolic-like constraints. This network exhibited V1-like spike statistics, simple- and complex-cell-like tuning, and – advancing prior studies – key physiological and tuning differences between excitatory and inhibitory neurons. Secondly, I modified this spiking network to model the retina and explore its role in visual processing. I found that the model optimised for efficient prediction captures retina-like receptive fields and – in contrast to previous studies – various retinal phenomena, such as latency coding, response omissions, and motion-tuning properties. Notably, the temporal prediction model also more accurately predicts retinal ganglion cell responses to natural images and movies across various animal species. Lastly, I developed a new method to accelerate the simulation and training of SNNs, obtaining a 10–50 times speedup, with performance on a par with the standard training approach on supervised classification benchmarks and for fitting electrophysiological recordings of cortical neurons. The retina and V1 models lay the foundation for developing normative models of increasing biological realism and link sensory processing to spiking activity, suggesting that temporal prediction is an underlying function of visual processing. This is complemented by a new approach that drastically accelerates computational research using SNNs.
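    Spiking networks of the kind described above are typically built from simple units such as the leaky integrate-and-fire (LIF) neuron. The sketch below uses hypothetical parameters and is not the thesis's actual model; it only illustrates the basic spiking dynamics.

```python
# Minimal sketch of a leaky integrate-and-fire (LIF) neuron driven by a
# constant input current. Time constant, threshold, and current are
# illustrative placeholders.

dt = 1.0          # time step (ms)
tau = 20.0        # membrane time constant (ms)
v_thresh = 1.0    # spike threshold
v_reset = 0.0     # reset potential after a spike
current = 0.08    # constant input current

v = 0.0
spike_times = []
for step in range(200):
    v += dt / tau * (-v + current * tau)   # leaky integration toward I*tau
    if v >= v_thresh:                      # threshold crossing emits a spike
        spike_times.append(step)
        v = v_reset                        # membrane resets after the spike
```

    Because the steady-state voltage (current × tau = 1.6) sits above threshold, the neuron fires regularly; in a trained SNN these spike times, rather than continuous activations, carry the network's predictions.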

    Deep learning models of biological visual information processing

    Improved computational models of biological vision can shed light on key processes contributing to the high accuracy of the human visual system. Deep learning models, which extract multiple layers of increasingly complex features from data, have achieved recent breakthroughs on visual tasks. This thesis proposes such flexible data-driven models of biological vision and also shows how insights into biological visual processing can lead to advances within deep learning. To harness the potential of deep learning for modelling the retina and early vision, this work introduces a new dataset and a task simulating an early visual processing function, and evaluates deep belief networks (DBNs) and deep neural networks (DNNs) on this input. The models are shown to learn feature detectors similar to retinal ganglion and V1 simple cells and to execute early vision tasks. To model high-level visual information processing, this thesis proposes novel deep learning architectures and training methods. Biologically inspired Gaussian receptive field constraints are imposed on restricted Boltzmann machines (RBMs) to improve the fidelity of the data representation to encodings extracted by visual processing neurons. Moreover, concurrently with learning local features, the proposed local receptive field constrained RBMs (LRF-RBMs) automatically discover advantageous non-uniform feature detector placements from data. Following the hierarchical organisation of the visual cortex, novel LRF-DBN and LRF-DNN models are constructed using LRF-RBMs with gradually increasing receptive field sizes to extract consecutive layers of features. On a challenging face dataset, unlike DBNs, LRF-DBNs learn a feature hierarchy exhibiting hierarchical part-based composition. The proposed deep models also outperform DBNs and DNNs on face completion and dimensionality reduction, demonstrating the strength of methods inspired by biological visual processing.
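    A Gaussian receptive-field constraint of the kind the abstract describes can be realised as a spatial mask on the weight matrix. The sketch below is an assumption about one plausible construction (centre, width, and image size are hypothetical), not the thesis's implementation.

```python
import numpy as np

# Minimal sketch of a Gaussian receptive-field mask that restricts a
# hidden unit's weights to a local image region. All parameters are
# illustrative; this is not the LRF-RBM code itself.

def gaussian_rf_mask(image_side, centre, sigma):
    """Return an (image_side, image_side) mask peaking at `centre`."""
    ys, xs = np.mgrid[:image_side, :image_side]
    d2 = (ys - centre[0]) ** 2 + (xs - centre[1]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

# Elementwise-multiplying a unit's weights by its mask suppresses
# connections to pixels far from the unit's centre.
mask = gaussian_rf_mask(28, centre=(14, 14), sigma=4.0)
rng = np.random.default_rng(2)
weights = rng.standard_normal((28, 28)) * mask
```

    Giving each hidden unit its own centre, and letting those centres move during training, is what would allow the non-uniform detector placements the abstract reports.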
