
    MNIST main params calculated values

    A data file containing calculated values measuring quantities such as reconstruction performance and average neuronal activity for different points in the training sequence. These values were used directly to generate the figures in the manuscript.

    MNIST non-sparse params network over time

    A data file recording the network state throughout the simulation training period, for the MNIST parameters used for Figure 10c in the paper.

    MNIST very sparse params network over time

    A data file recording the network state throughout the simulation training period, for the very sparse MNIST parameters used for Figure 10a in the paper.

    MNIST very sparse params calculated values

    A data file containing calculated values measuring quantities such as reconstruction performance and average neuronal activity for different points in the training sequence. These values were used directly to generate Figure 10a in the manuscript.

    Natural images very sparse params network over time

    A data file recording the network state throughout the simulation training period, for the very sparse natural image parameters used for Figure 10b in the paper.

    Mirrored STDP Implements Autoencoder Learning in a Network of Spiking Neurons

    The autoencoder algorithm is a simple but powerful unsupervised method for training neural networks. Autoencoder networks can learn sparse distributed codes similar to those seen in cortical sensory areas such as visual area V1, but they can also be stacked to learn increasingly abstract representations. Several computational neuroscience models of sensory areas, including Olshausen & Field’s Sparse Coding algorithm, can be seen as autoencoder variants, and autoencoders have seen extensive use in the machine learning community. Despite their power and versatility, autoencoders have been difficult to implement in a biologically realistic fashion. The challenges include their need to calculate differences between two neuronal activities and their requirement for learning rules which lead to identical changes at feedforward and feedback connections. Here, we study a biologically realistic network of integrate-and-fire neurons with anatomical connectivity and synaptic plasticity that closely matches that observed in cortical sensory areas. Our choice of synaptic plasticity rules is inspired by recent experimental and theoretical results suggesting that learning at feedback connections may have a different form from learning at feedforward connections, and our results depend critically on this novel choice of plasticity rules. Specifically, we propose that plasticity rules at feedforward versus feedback connections are temporally opposed versions of spike-timing dependent plasticity (STDP), leading to a symmetric combined rule we call Mirrored STDP (mSTDP). We show that with mSTDP, our network follows a learning rule that approximately minimizes an autoencoder loss function. When trained with whitened natural image patches, the learned synaptic weights resemble the receptive fields seen in V1. Our results use realistic synaptic plasticity rules to show that the powerful autoencoder learning algorithm could be within the reach of real biological networks.
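
    The mirrored-STDP idea in the abstract can be illustrated with a short sketch: a standard exponential STDP window applied at feedforward synapses, and its time-reversed copy applied at feedback synapses, so the combined drive on a visible-hidden weight pair is symmetric in the spike-time difference. This is a minimal sketch only; the window shape, amplitudes, and time constant below are generic illustrative choices, not the parameters used in the paper.

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative constants -- placeholders, not the paper's fitted values.
A_PLUS, A_MINUS = 1.0, 1.0   # potentiation / depression amplitudes
TAU = 20.0                   # STDP time constant (ms)

def stdp_ff(dt):
    """Classic feedforward STDP window.
    dt = t_post - t_pre: positive dt (pre before post) potentiates,
    negative dt depresses."""
    return np.where(dt >= 0,
                    A_PLUS * np.exp(-dt / TAU),
                    -A_MINUS * np.exp(dt / TAU))

def stdp_fb(dt):
    """Feedback window: the temporally opposed (time-reversed) copy
    of the feedforward window."""
    return stdp_ff(-dt)

dt = np.linspace(-100, 100, 401)
# A given visible-hidden spike pair is "pre before post" at the
# feedforward synapse and "post before pre" at the feedback synapse,
# so the combined contribution is symmetric in dt:
combined = stdp_ff(dt) + stdp_fb(dt)

plt.plot(dt, stdp_ff(dt), label="feedforward STDP")
plt.plot(dt, stdp_fb(dt), label="feedback (mirrored) STDP")
plt.plot(dt, combined, label="combined (symmetric)")
plt.xlabel("spike-time difference dt (ms)")
plt.ylabel("weight change (a.u.)")
plt.legend()
plt.show()
```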

    Natural images main params network over time

    A data file recording the network state throughout the simulation training period, for the natural image parameters used for the main results in the paper.

    Architecture of the model network and stimulus preprocessing.

    Architecture of the model network and network activity. a: Architecture of the model network and stimulus preprocessing. The final preprocessing step of separating the stimulus into two non-negative “ON” and “OFF” populations allows the visible layer activities to remain positive. b: Example activity of two neurons in the spiking network. In response to external stimulus onset (gray bar), the visible neuron i fires several spikes in the “initial bout” of activity. After a delay, feedforward excitation causes the hidden neuron j to fire spikes in the “intermediate bout”. After another delay, feedback causes the visible neuron to spike in the “final bout”, the network’s attempted reconstruction. The average times between spikes in the initial and intermediate bouts and in the intermediate and final bouts are given by Δt₁ and Δt₂, respectively. Every pair of visible and hidden spikes contributes to plasticity, dependent on their relative times. Learning from two example dotted spikes is described in Fig 2 (http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1004566#pcbi.1004566.g002). c: Biological feedforward and feedback connections are physically distinct. For the feedforward connection, the visible neuron is pre-synaptic, the hidden neuron is post-synaptic, and the synapse lies close to the hidden neuron’s cell body. For the feedback connection, the hidden neuron is pre-synaptic, the visible neuron post-synaptic, and the synapse is far out on the visible neuron’s dendritic tree.
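
    The final preprocessing step in panel (a), splitting a signed stimulus into non-negative “ON” and “OFF” channels, amounts to rectifying the positive and negative parts separately. A minimal sketch of that split, assuming the usual rectification convention; the function name and patch size are illustrative, not taken from the code archive:

```python
import numpy as np

def split_on_off(stimulus):
    """Split a signed stimulus (e.g. a whitened image patch) into two
    non-negative channels so visible-layer activities stay positive:
    ON carries the positive part, OFF the negated negative part."""
    on = np.maximum(stimulus, 0.0)
    off = np.maximum(-stimulus, 0.0)
    return on, off

# Example: a signed, whitened patch is recovered as ON - OFF.
patch = np.random.randn(8, 8)
on, off = split_on_off(patch)
assert np.allclose(on - off, patch)
```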

    Archive of code and parameter files

    A zip file containing the code used to run the simulations, as well as the specific parameter files used to generate the data shown in the paper.

    Feedforward weights after training for the MNIST and natural image patch datasets.

    a: Weights learned from the MNIST dataset. Each square in the grid represents the incoming weights to a single hidden unit; weights to the first 100 hidden units are shown. Weights from visible neurons which receive OFF inputs are subtracted from the weights from visible neurons which receive ON inputs. Then, the weights to each neuron are normalized by dividing by the largest absolute value. b: Same as (a), but for the natural image patch dataset.
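
    The caption's description translates directly into a plotting routine: take ON-minus-OFF feedforward weights per hidden unit, normalize each unit's field by its largest absolute value, and tile the results in a grid. A minimal sketch, assuming the weights are stored as (n_visible, n_hidden) matrices for each population; the function name, shapes, and data layout here are placeholders, not the archive's actual format:

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_receptive_fields(w_on, w_off, n_units=100, patch_side=8):
    """Tile per-hidden-unit receptive fields in a grid.
    w_on, w_off: (n_visible, n_hidden) feedforward weight matrices
    from the ON and OFF visible populations (assumed layout)."""
    fields = w_on - w_off                 # ON minus OFF weights
    side = int(np.ceil(np.sqrt(n_units)))
    fig, axes = plt.subplots(side, side, figsize=(side, side))
    for k, ax in enumerate(axes.flat):
        ax.axis("off")
        if k >= n_units:
            continue
        rf = fields[:, k].reshape(patch_side, patch_side)
        rf = rf / np.max(np.abs(rf))      # normalize per unit
        ax.imshow(rf, cmap="gray", vmin=-1, vmax=1)
    plt.show()

# Example with random weights standing in for trained ones
# (64 visible neurons per population = 8x8 patches, 100 hidden units):
plot_receptive_fields(np.random.randn(64, 100), np.random.randn(64, 100))
```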