
    Sparse Coding Predicts Optic Flow Specificities of Zebrafish Pretectal Neurons

    Zebrafish pretectal neurons exhibit specificities for large-field optic flow patterns associated with rotatory or translatory body motion. We investigate the hypothesis that these specificities reflect the input statistics of natural optic flow. Realistic motion sequences were generated using computer graphics simulating self-motion in an underwater scene. Local retinal motion was estimated with a motion detector and encoded in four populations of directionally tuned retinal ganglion cells, represented as two signed input variables. This activity was then used as input to one of two learning networks: a sparse coding network (competitive learning) and a backpropagation network (supervised learning). Both simulations develop specificities for optic flow that are comparable to those found in a neurophysiological study (Kubo et al. 2014), and the relative frequencies of the various neuronal responses are best modeled by the sparse coding approach. We conclude that the optic flow neurons in the zebrafish pretectum do reflect the statistics of natural optic flow. The predicted vectorial receptive fields show typical optic flow fields, but also "Gabor"- and dipole-shaped patterns that likely reflect the difference fields needed for reconstruction by linear superposition. Comment: Published conference paper from ICANN 2018, Rhodes.
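
    As a rough illustration of the sparse coding side of this setup (a minimal sketch, not the authors' implementation; the dimensions, input statistics, and ISTA-based inference below are assumptions), a dictionary-learning loop of this kind can be written as follows:

```python
import numpy as np

# Minimal sparse coding sketch (hypothetical setup, not the paper's code):
# learn a dictionary D whose columns act as "receptive fields" for the input.
rng = np.random.default_rng(0)
n_inputs, n_units, n_samples = 64, 32, 1000      # assumed dimensions
X = rng.standard_normal((n_inputs, n_samples))   # stand-in for encoded retinal motion

D = rng.standard_normal((n_inputs, n_units))
D /= np.linalg.norm(D, axis=0)                   # unit-norm dictionary columns

lam, lr = 0.1, 0.01
for epoch in range(50):
    # Sparse inference: a few ISTA (soft-thresholding) steps.
    A = np.zeros((n_units, n_samples))
    step = 1.0 / np.linalg.norm(D.T @ D, 2)      # 1 / Lipschitz constant
    for _ in range(20):
        A = A - step * D.T @ (D @ A - X)
        A = np.sign(A) * np.maximum(np.abs(A) - step * lam, 0.0)
    # Dictionary update: gradient step on reconstruction error, renormalize.
    D -= lr * (D @ A - X) @ A.T / n_samples
    D /= np.linalg.norm(D, axis=0)

# The columns of D are the learned (vectorial) receptive fields.
```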

    Nonstimulated early visual areas carry information about surrounding context

    Even within the early sensory areas, the majority of the input to any given cortical neuron comes from other cortical neurons. To extend our knowledge of the contextual information that is transmitted by such lateral and feedback connections, we investigated how visually nonstimulated regions in primary visual cortex (V1) and visual area V2 are influenced by the surrounding context. We used functional magnetic resonance imaging (fMRI) and pattern-classification methods to show that the cortical representation of a nonstimulated quarter-field carries information that can discriminate the surrounding visual context. We show further that the activity patterns in these regions are significantly related to those observed with feed-forward stimulation and that these effects are driven primarily by V1. These results thus demonstrate that visual context strongly influences early visual areas even in the absence of differential feed-forward thalamic stimulation.
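
    The pattern-classification logic can be illustrated with a small hypothetical decoding sketch (not the study's pipeline; the trial counts, voxel counts, and classifier choice are assumptions): train a linear classifier on voxel patterns from the nonstimulated region and test whether it discriminates the surrounding context.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

# Hypothetical MVPA-style sketch: can voxel patterns from a nonstimulated
# region decode the surrounding visual context?
rng = np.random.default_rng(0)
n_trials, n_voxels = 120, 200                   # assumed sizes
X = rng.standard_normal((n_trials, n_voxels))   # stand-in for fMRI patterns
y = rng.integers(0, 2, n_trials)                # context label per trial

# Cross-validated accuracy reliably above chance (~0.5) would indicate that
# the nonstimulated region carries contextual information. (With this random
# stand-in data, accuracy stays at chance.)
acc = cross_val_score(LinearSVC(dual=False), X, y, cv=5).mean()
print(f"decoding accuracy: {acc:.2f}")
```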

    Reconciling Predictive Coding and Biased Competition Models of Cortical Function

    A simple variation of the standard biased competition model is shown, via some trivial mathematical manipulations, to be identical to predictive coding. Specifically, it is shown that a particular implementation of the biased competition model, in which nodes compete via inhibition that targets the inputs to a cortical region, is mathematically equivalent to the linear predictive coding model. This observation demonstrates that these two important and influential rival theories of cortical function are minor variations on the same underlying mathematical model.
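
    To make the claimed equivalence concrete, here is a hedged sketch in my own notation (the paper's exact variables and derivation may differ): with input x, prediction/competition units y, feedback weights V, and feed-forward weights W, the two update rules coincide.

```latex
% Hedged sketch of the equivalence (notation is mine, not the paper's).
% Linear predictive coding: error units compute the residual between the
% input x and the top-down prediction Vy; prediction units integrate it:
\[
  e = x - Vy, \qquad \Delta y \propto W e = W\,(x - Vy).
\]
% Biased competition with inhibition targeting the inputs: each node's
% drive is the input x minus the feedback inhibition Vy generated by the
% other active nodes, giving the identical update:
\[
  \Delta y \propto W\,(x - Vy).
\]
% The two models thus share one equation and differ only in whether the
% term (x - Vy) is read as a prediction error (predictive coding) or as
% an inhibited input (biased competition).
```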

    Sparse Coding and Autoencoders

    In "Dictionary Learning" one tries to recover incoherent matrices ARn×hA^* \in \mathbb{R}^{n \times h} (typically overcomplete and whose columns are assumed to be normalized) and sparse vectors xRhx^* \in \mathbb{R}^h with a small support of size hph^p for some 0<p<10 <p < 1 while having access to observations yRny \in \mathbb{R}^n where y=Axy = A^*x^*. In this work we undertake a rigorous analysis of whether gradient descent on the squared loss of an autoencoder can solve the dictionary learning problem. The "Autoencoder" architecture we consider is a RnRn\mathbb{R}^n \rightarrow \mathbb{R}^n mapping with a single ReLU activation layer of size hh. Under very mild distributional assumptions on xx^*, we prove that the norm of the expected gradient of the standard squared loss function is asymptotically (in sparse code dimension) negligible for all points in a small neighborhood of AA^*. This is supported with experimental evidence using synthetic data. We also conduct experiments to suggest that AA^* is a local minimum. Along the way we prove that a layer of ReLU gates can be set up to automatically recover the support of the sparse codes. This property holds independent of the loss function. We believe that it could be of independent interest.Comment: In this new version of the paper with a small change in the distributional assumptions we are actually able to prove the asymptotic criticality of a neighbourhood of the ground truth dictionary for even just the standard squared loss of the ReLU autoencoder (unlike the regularized loss in the older version