
    High accuracy decoding of dynamical motion from a large retinal population

    Motion tracking is a challenge the visual system must solve by reading out the retinal population. Here we recorded a large population of ganglion cells in a dense patch of salamander and guinea pig retina while displaying a bar moving diffusively. We show that the bar's position can be reconstructed from retinal activity with a precision in the hyperacuity regime, using a linear decoder acting on 100+ cells. The classical view would suggest that the firing rates of the cells form a moving hill of activity tracking the bar's position. Instead, we found that ganglion cells fired sparsely over an area much larger than predicted by their receptive fields, so that the neural image did not track the bar. This highly redundant organization allows diverse collections of ganglion cells to represent high-accuracy motion information in a form easily read out by downstream neural circuits. Comment: 23 pages, 7 figures

    Medical image enhancement

    Each image acquired from a medical imaging system is often part of a two-dimensional (2-D) image set which, taken together, presents a three-dimensional (3-D) object for diagnosis. Unfortunately, these images are sometimes of poor quality. Such distortions cause an inadequate presentation of the object of interest, which can result in inaccurate image analysis. Blurring is considered a serious problem. Therefore, "deblurring" an image to obtain better quality is an important issue in medical image processing. In our research, the image is first decomposed. Contrast improvement is achieved by modifying the coefficients obtained from the decomposed image. Small coefficient values represent subtle details and are amplified to improve the visibility of the corresponding details. The stronger image density variations make a major contribution to the overall dynamic range and have large coefficient values; these values can be reduced without much loss of information
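The coefficient-modification scheme can be sketched with a one-level 2-D Haar decomposition, used here as a stand-in for whatever decomposition the study actually employs: small detail coefficients are amplified, large ones attenuated, and the image reconstructed. The threshold and gains are illustrative assumptions.

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar decomposition: approximation plus three detail bands."""
    p00, p10 = img[0::2, 0::2], img[1::2, 0::2]
    p01, p11 = img[0::2, 1::2], img[1::2, 1::2]
    a = (p00 + p10 + p01 + p11) / 4      # coarse approximation
    h = (p00 - p10 + p01 - p11) / 4      # detail along rows
    v = (p00 + p10 - p01 - p11) / 4      # detail along columns
    d = (p00 - p10 - p01 + p11) / 4      # diagonal detail
    return a, h, v, d

def ihaar2d(a, h, v, d):
    """Exact inverse of haar2d."""
    out = np.empty((a.shape[0] * 2, a.shape[1] * 2))
    out[0::2, 0::2] = a + h + v + d
    out[1::2, 0::2] = a - h + v - d
    out[0::2, 1::2] = a + h - v - d
    out[1::2, 1::2] = a - h - v + d
    return out

def enhance(c, thresh, gain_small=2.0, gain_large=0.5):
    """Amplify small (subtle-detail) coefficients, attenuate large ones."""
    return np.where(np.abs(c) < thresh, c * gain_small, c * gain_large)

rng = np.random.default_rng(0)
img = rng.normal(size=(64, 64))          # stand-in for a medical image slice
a, h, v, d = haar2d(img)
enhanced = ihaar2d(a, *(enhance(c, np.abs(c).mean()) for c in (h, v, d)))
print(np.allclose(ihaar2d(a, h, v, d), img))   # unmodified round trip is exact
```

Leaving the approximation band untouched while rescaling only the detail bands is what lets the method boost subtle structure without disturbing the overall intensity range.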

    Rapid mapping of visual receptive fields by filtered back-projection: application to multi-neuronal electrophysiology and imaging

    Neurons in the visual system vary widely in the spatiotemporal properties of their receptive fields (RFs), and understanding these variations is key to elucidating how visual information is processed. We present a new approach for mapping RFs based on the filtered back projection (FBP), an algorithm used for tomographic reconstructions. To estimate RFs, a series of bars were flashed across the retina at pseudo-random positions and at a minimum of five orientations. We apply this method to retinal neurons and show that it can accurately recover the spatial RF and impulse response of ganglion cells recorded on a multi-electrode array. We also demonstrate its utility for in vivo imaging by mapping the RFs of an array of bipolar cell synapses expressing a genetically encoded Ca2+ indicator. We find that FBP offers several advantages over the commonly used spike-triggered average (STA): (i) ON and OFF components of an RF can be separated; (ii) the impulse response can be reconstructed at sample rates of 125 Hz, rather than the refresh rate of a monitor; (iii) FBP reveals the response properties of neurons that are not evident using STA, including those that display orientation selectivity, or fire at low mean spike rates; and (iv) the FBP method is fast, allowing the RFs of all the bipolar cell synaptic terminals in a field of view to be reconstructed in under 4 min. Use of the FBP will benefit investigations of the visual system that employ electrophysiology or optical reporters to measure activity across populations of neurons
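The core of the FBP idea, reconstructing a 2-D map from 1-D projections taken at several orientations, can be sketched as follows. This is a generic tomography demo, not the paper's pipeline: the "bar responses" are simulated line integrals of a synthetic Gaussian RF, each projection is ramp-filtered in the Fourier domain, and the filtered projections are smeared back across the image.

```python
import numpy as np
from scipy import ndimage

n = 64
y, x = np.mgrid[:n, :n] - n / 2
rf = np.exp(-((x - 6) ** 2 + (y + 4) ** 2) / (2 * 4.0 ** 2))  # ground-truth RF

angles = np.linspace(0, 180, 12, endpoint=False)  # >= 5 orientations, as in the paper

# Forward model: responses to bars at each position/orientation = line integrals.
sino = np.stack([ndimage.rotate(rf, a, reshape=False, order=1).sum(axis=0)
                 for a in angles])

# Ramp-filter each projection in the Fourier domain.
ramp = np.abs(np.fft.fftfreq(n))
filtered = np.real(np.fft.ifft(np.fft.fft(sino, axis=1) * ramp, axis=1))

# Back-project: smear each filtered projection across the image, rotated back.
recon = np.zeros((n, n))
for a, p in zip(angles, filtered):
    recon += ndimage.rotate(np.tile(p, (n, 1)), -a, reshape=False, order=1)

peak = np.unravel_index(np.argmax(recon), recon.shape)
true_peak = np.unravel_index(np.argmax(rf), rf.shape)
print(f"reconstructed peak {peak}, true peak {true_peak}")
```

Even with only a dozen orientations, the reconstructed peak lands on the true RF centre; the streak artifacts from sparse angular sampling average out near the blob.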

    Neural computation of visual imaging based on Kronecker product in the primary visual cortex

    Background: What kind of neural computation is actually performed by the primary visual cortex, and how can it be represented mathematically at the system level? This is an important problem in visual information processing that has not been well answered. In this paper, based on our understanding of retinal organization and of the parallel, multi-channel topographic mapping between the retina and primary visual cortex (V1), we divide an image into an orthogonal, ordered array of image primitives (or patches), each of which evokes activity in simple cells of V1. From the viewpoint of information processing, this activation essentially involves optimal detection and optimal matching of the receptive fields of simple cells with the features contained in the image patches. To reconstruct the visual image in V1 under the minimum mean-square-error criterion, it is natural to use the inner product in the neural computation, which can then be written in matrix form. Results: The inner product is carried out as a Kronecker product between patches and the functional architecture (functional columns) in localized, oriented neural computing. Compared with the Fourier transform, the mathematical description of the Kronecker product is simple and intuitive, making the algorithm better suited to the neural computation of V1. Computer simulations based on two-dimensional Gabor pyramid wavelets show that the theoretical analysis and the proposed model are reasonable. Conclusions: 1. The neural computation on the retinal image in V1 can be expressed as a Kronecker product operation in matrix form, implemented by inner products between retinal image primitives and the columns of primary visual cortex. The algorithm is simple, efficient, and robust, and is therefore one that biological vision could plausibly implement. 2. The functional column in V1 is better regarded as the basic unit of visual image processing (a unit that can implement basic multiplication of visual primitives such as contours, lines, and edges) than as a set of tiled array filters. Replacing the Fourier transform with the Kronecker product greatly reduces the computational complexity. The neurobiological basis of this idea is that a visual image can be represented as a linear combination of ordered, orthogonal primitive images, each containing some local feature. In the visual pathway, image patches are topographically mapped onto V1 through parallel channels and then processed independently by functional columns. This new perspective is of some reference value for exploring the neural mechanisms of human visual information processing
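The algebraic fact underlying this formulation, that the inner product between a patch and a separable (oriented) filter can be written as a Kronecker-product row acting on the vectorized patch, is easy to verify numerically. The filter bank below is random and purely illustrative, standing in for a functional column's oriented filters.

```python
import numpy as np

rng = np.random.default_rng(1)

p = 8   # patch size (a hypothetical image primitive)
k = 4   # number of oriented filters in one "functional column"
patch = rng.normal(size=(p, p))

# A bank of separable filters: response_i = u_i^T * patch * v_i.
U = rng.normal(size=(k, p))
V = rng.normal(size=(k, p))

# Direct inner products, one filter at a time.
direct = np.array([U[i] @ patch @ V[i] for i in range(k)])

# Same computation via the Kronecker identity  u^T X v = kron(u, v) . vec(X):
# stack the rank-one filters kron(u_i, v_i) as rows and apply one matvec.
W = np.stack([np.kron(U[i], V[i]) for i in range(k)])
via_kron = W @ patch.ravel()

print(np.allclose(direct, via_kron))
```

All per-filter inner products thus collapse into one matrix-vector product per patch, which is the sense in which the paper's Kronecker formulation replaces a transform-based pipeline with a single localized matching operation.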

    Information recovery from rank-order encoded images

    The time to detection of a visual stimulus by the primate eye is recorded at 100–150 ms. This near-instantaneous recognition occurs in spite of the considerable processing required by the several stages of the visual pathway to recognise and react to a visual scene. How this is achieved is still a matter of speculation. Rank-order codes have been proposed as a means of encoding by the primate eye in the rapid transmission of the initial burst of information from the sensory neurons to the brain. We study the efficiency of rank-order codes in encoding perceptually important information in an image. VanRullen and Thorpe built a model of the ganglion cell layers of the retina to simulate and study the viability of rank-order as a means of encoding by retinal neurons. We validate their model and quantify the information retrieved from rank-order encoded images in terms of the visually important information recovered. Towards this goal, we apply the 'perceptual information preservation algorithm' proposed by Petrovic and Xydeas, after slight modification. We observe low information recovery due to losses suffered during the rank-order encoding and decoding processes, and we propose to minimise these losses so as to recover maximum information in minimum time from rank-order encoded images. We first maximise information recovery by using the pseudo-inverse of the filter-bank matrix to minimise losses during rank-order decoding. We then apply the biological principle of lateral inhibition to minimise losses during rank-order encoding, proposing the Filter-overlap Correction algorithm. To test the performance of rank-order codes in a biologically realistic model, we design and simulate a model of the foveal-pit ganglion cells of the retina, keeping close to biological parameters. We use this as a rank-order encoder and analyse its performance relative to VanRullen and Thorpe's retinal model
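A minimal sketch of rank-order encoding and decoding in the spirit of VanRullen and Thorpe's scheme (the thesis model uses retinal filter banks and the corrections described above; everything below is a synthetic toy): only the order in which filters fire is retained, and decoding weights each filter by a fixed, decreasing function of its rank.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical filter bank: random unit-norm filters over a short 1-D signal.
n, m = 32, 64
filters = rng.normal(size=(m, n))
filters /= np.linalg.norm(filters, axis=1, keepdims=True)
signal = rng.normal(size=n)

# Encode: keep only the ORDER in which filters fire (largest response first).
responses = filters @ signal
order = np.argsort(-np.abs(responses))

# Decode: each filter contributes a fixed, rank-dependent weight (here
# geometric), signed by its response polarity; amplitudes are discarded.
rank_weights = 0.9 ** np.arange(m)
recon = np.zeros(n)
for rank, idx in enumerate(order):
    recon += np.sign(responses[idx]) * rank_weights[rank] * filters[idx]

corr = np.corrcoef(signal, recon)[0, 1]
print(f"correlation with original: {corr:.2f}")
```

The reconstruction is only correlated with, not equal to, the input; the gap illustrates exactly the encoding/decoding losses that the pseudo-inverse decoding and Filter-overlap Correction steps are designed to reduce.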

    Models of learning in the visual system: dependence on retinal eccentricity

    In the primary visual cortex of primates, relatively more space is devoted to the representation of the central visual field in comparison to the representation of the peripheral visual field. Experimentally testable theories about the factors and mechanisms which may have determined this inhomogeneous mapping may provide valuable insights into general processing principles in the visual system. Therefore, I investigated which visual situations this inhomogeneous representation of the visual field is well adapted to, and which mechanisms could support its refinement and stabilization during individual development. Furthermore, I studied possible functional consequences of the inhomogeneous representation for visual processing at central and peripheral locations of the visual field. Vision plays an important role during navigation. Thus, visual processing should be well adapted to self-motion. Therefore, I assumed that spatially inhomogeneous retinal velocity distributions, caused by static objects during self-motion along the direction of gaze, are transformed on average into spatially homogeneous cortical velocity distributions. This would have the advantage that the cortical mechanisms concerned with the processing of self-motion can be identical in their spatial and temporal properties across the representation of the whole visual field. This is the case if the arrangement of objects relative to the observer corresponds to an ellipsoid with the observer in its center. I used the resulting flow field to train a network model of pulse-coding neurons with a Hebbian learning rule. The distribution of the learned receptive fields is in agreement with the inhomogeneous cortical representation of the visual field. 
    These results suggest that self-motion may have played an important role in the evolution of the visual system and that the inhomogeneous cortical representation of the visual field can be refined and stabilized by Hebbian learning mechanisms during ontogenesis under natural viewing conditions. In addition to the processing of self-motion, an important task of the visual system is the grouping and segregation of local features within a visual scene into coherent objects. Therefore, I asked how the corresponding mechanisms depend on the represented position of the visual field. It is assumed that neuronal connections within the primary visual cortex subserve this grouping process. These connections develop after eye-opening in dependence on the visual input. How does the lateral connectivity depend on the represented position of the visual field? With increasing eccentricity, primary cortical receptive fields become larger and the cortical magnification of the visual field declines. Therefore, I investigated the spatial statistics of real-world scenes with respect to the spatial filter properties of cortical neurons at different locations of the visual field. I show that correlations between collinearly arranged filters of the same size and orientation increase with increasing filter size. However, at distances relative to the size of the filters, collinear correlations decline more steeply with increasing distance for larger filters. This provides evidence against a homogeneous cortical connectivity across the whole visual field with respect to the coding of spatial object properties. Two major retino-cortical pathways are the magnocellular (M) and the parvocellular (P) pathways. While neurons along the M-pathway display temporal bandpass characteristics, neurons along the P-pathway show temporal lowpass characteristics. The ratio of P- to M-cells is not constant across the whole visual field, but declines with increasing retinal eccentricity. 
    Therefore, I investigated how the different temporal response properties of neurons of the M- and the P-pathways influence self-organization in the visual cortex, and discussed possible consequences for the coding of visual objects at different locations of the visual field. Specifically, I studied the influence of stimulus motion on the self-organization of lateral connections in a network model of spiking neurons with Hebbian learning. Low stimulus velocities lead to horizontal connections well adapted to the coding of the spatial structure within the visual input, while higher stimulus velocities lead to connections which subserve the coding of the stimulus movement direction. This suggests that the temporal lowpass properties of P-neurons subserve the coding of spatial stimulus attributes (form) in the visual cortex, while the temporal bandpass properties of M-neurons support the coding of spatio-temporal stimulus attributes (movement direction). Hence, the central representation of the visual field may be well adapted to the encoding of spatial object properties due to the strong contribution of P-neurons. The peripheral representation may be better adapted to the processing of motion
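The Hebbian learning used throughout these simulations can be illustrated with a minimal single-neuron sketch. Oja's rule (a Hebbian update with implicit weight decay) is used here as a generic stand-in for the pulse-coded network models of the thesis; it converges to the dominant correlation direction of its (here synthetic) input.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "retinal" input patches with one dominant correlation direction.
n = 16
principal = rng.normal(size=n)
principal /= np.linalg.norm(principal)
X = np.outer(rng.normal(size=5000, scale=3.0), principal) + rng.normal(size=(5000, n))

# Oja's rule: Hebbian term y*x with a decay term y^2*w that keeps |w| bounded.
w = rng.normal(size=n)
w /= np.linalg.norm(w)
eta = 0.001
for x in X:
    y = w @ x
    w += eta * y * (x - y * w)

alignment = abs(w @ principal) / np.linalg.norm(w)
print(f"alignment with dominant input direction: {alignment:.3f}")
```

The learned weight vector ends up aligned with the direction of largest input variance, which is the basic mechanism by which input statistics (here, flow fields or scene correlations) shape receptive fields and lateral connections in such models.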

    Deep Cellular Recurrent Neural Architecture for Efficient Multidimensional Time-Series Data Processing

    Efficient processing of time series data is a fundamental yet challenging problem in pattern recognition. Though recent developments in machine learning and deep learning have enabled remarkable improvements in processing large scale datasets in many application domains, most are designed and regulated to handle inputs that are static in time. Many real-world data, such as in biomedical, surveillance and security, financial, manufacturing and engineering applications, are rarely static in time, and demand models able to recognize patterns in both space and time. Current machine learning (ML) and deep learning (DL) models adapted for time series processing tend to grow in complexity and size to accommodate the additional dimensionality of time. Specifically, the biologically inspired learning based models known as artificial neural networks that have shown extraordinary success in pattern recognition, tend to grow prohibitively large and cumbersome in the presence of large scale multi-dimensional time series biomedical data such as EEG. Consequently, this work aims to develop representative ML and DL models for robust and efficient large scale time series processing. First, we design a novel ML pipeline with efficient feature engineering to process a large scale multi-channel scalp EEG dataset for automated detection of epileptic seizures. With the use of a sophisticated yet computationally efficient time-frequency analysis technique known as harmonic wavelet packet transform and an efficient self-similarity computation based on fractal dimension, we achieve state-of-the-art performance for automated seizure detection in EEG data. Subsequently, we investigate the development of a novel efficient deep recurrent learning model for large scale time series processing. For this, we first study the functionality and training of a biologically inspired neural network architecture known as cellular simultaneous recurrent neural network (CSRN). 
    We obtain a generalization of this network for multiple topological image processing tasks and investigate the learning efficacy of the complex cellular architecture using several state-of-the-art training methods. Finally, we develop a novel deep cellular recurrent neural network (DCRNN) architecture based on the biologically inspired distributed processing used in CSRN for processing time series data. The proposed DCRNN leverages the cellular recurrent architecture to promote extensive weight sharing and efficient, individualized, synchronous processing of multi-source time series data. Experiments on a large scale multi-channel scalp EEG dataset and a machine fault detection dataset show that the proposed DCRNN offers state-of-the-art recognition performance while using substantially fewer trainable recurrent units
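Of the features mentioned in this pipeline, the fractal-dimension computation is easy to sketch in isolation. Higuchi's method is used below as one common fractal-dimension estimator (the dissertation may compute self-similarity differently), and the signals are synthetic: white noise should measure near dimension 2, a smoother random walk near 1.5.

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    """Higuchi's fractal dimension of a 1-D series (a self-similarity measure)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    lengths = []
    for k in range(1, kmax + 1):
        lk = []
        for m in range(k):
            idx = np.arange(m, n, k)          # subsampled series starting at m
            # Normalized curve length of the subsampled series.
            lm = np.sum(np.abs(np.diff(x[idx]))) * (n - 1) / ((len(idx) - 1) * k * k)
            lk.append(lm)
        lengths.append(np.mean(lk))
    k_vals = np.arange(1, kmax + 1)
    # FD is the slope of log(length) against log(1/k).
    slope, _ = np.polyfit(np.log(1.0 / k_vals), np.log(lengths), 1)
    return slope

rng = np.random.default_rng(4)
white = rng.normal(size=4000)   # rough signal: FD near 2
walk = np.cumsum(white)         # smoother random walk: FD near 1.5
print(higuchi_fd(white), higuchi_fd(walk))
```

Features like this are cheap to compute per channel and per window, which is what makes them attractive for large-scale multi-channel EEG processing ahead of a classifier.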

    Neural activity classification with machine learning models trained on interspike interval series data

    The flow of information through the brain is reflected by the activity patterns of neural cells. Indeed, these firing patterns are widely used as input data to predictive models that relate stimuli and animal behavior to the activity of a population of neurons. However, relatively little attention has been paid to single neuron spike trains as predictors of cell or network properties in the brain. In this work, we introduce an approach to neuronal spike train data mining which enables effective classification and clustering of neuron types and network activity states based on single-cell spiking patterns. This approach is centered around applying state-of-the-art time series classification/clustering methods to sequences of interspike intervals recorded from single neurons. We demonstrate good performance of these methods in tasks involving classification of neuron type (e.g. excitatory vs. inhibitory cells) and/or neural circuit activity state (e.g. awake vs. REM sleep vs. nonREM sleep states) on an open-access cortical spiking activity dataset
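The approach can be sketched end to end on synthetic data: extract the interspike-interval (ISI) series per neuron, summarize it (mean ISI and coefficient of variation below, illustrative stand-ins for the paper's time-series methods), and classify. A nearest-centroid classifier is used to keep the sketch dependency-free; the two "cell classes" are simulated regular vs. Poisson spike trains.

```python
import numpy as np

rng = np.random.default_rng(5)

def isi_features(spike_times):
    """Summary features of one neuron's interspike-interval series."""
    isi = np.diff(np.sort(spike_times))
    return np.array([isi.mean(), isi.std() / isi.mean()])  # mean ISI and CV

# Synthetic stand-ins for two cell classes: regular vs. irregular firing.
def regular_train():
    return np.cumsum(rng.normal(0.05, 0.005, 200).clip(1e-4))

def poisson_train():
    return np.cumsum(rng.exponential(0.05, 200))

X = np.array([isi_features(regular_train()) for _ in range(50)] +
             [isi_features(poisson_train()) for _ in range(50)])
y = np.array([0] * 50 + [1] * 50)

# Minimal nearest-centroid classifier, trained and tested on disjoint halves.
train, test = np.r_[0:25, 50:75], np.r_[25:50, 75:100]
centroids = np.array([X[train][y[train] == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(np.linalg.norm(X[test][:, None] - centroids[None], axis=2), axis=1)
accuracy = (pred == y[test]).mean()
print(f"held-out accuracy: {accuracy:.2f}")
```

Here the CV of the ISI series alone nearly separates the classes; the paper's point is that richer time-series classifiers applied to the full ISI sequence can distinguish subtler categories, such as cell type or sleep state.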