
    Irrelevance by inhibition: Learning, computation, and implications for schizophrenia

    Symptoms of schizophrenia may arise from a failure of cortical circuits to filter out irrelevant inputs. Schizophrenia has also been linked to disruptions in cortical inhibitory interneurons, consistent with the possibility that in the normally functioning brain, these cells are in some part responsible for determining which sensory inputs are relevant versus irrelevant. Here, we develop a neural network model that demonstrates how the cortex may learn to ignore irrelevant inputs through plasticity processes affecting inhibition. The model is based on the proposal that the amount of excitatory output from a cortical circuit encodes the expected magnitude of reward or punishment (“relevance”), which can be trained using a temporal difference learning mechanism acting on feedforward inputs to inhibitory interneurons. In the model, irrelevant and blocked stimuli drive lower levels of excitatory activity compared with novel and relevant stimuli, and this difference in activity levels is lost following disruptions to inhibitory units. When excitatory units are connected to a competitive-learning output layer with a threshold, the relevance code can be shown to “gate” both learning and behavioral responses to irrelevant stimuli. Accordingly, the combined network is capable of recapitulating published experimental data linking inhibition in frontal cortex with fear learning and expression. Finally, the model demonstrates how relevance learning can take place in parallel with other types of learning, through plasticity rules involving inhibitory and excitatory components, respectively. Altogether, this work offers a theory of how the cortex learns to selectively inhibit inputs, providing insight into how relevance-assignment problems may emerge in schizophrenia.
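    The temporal-difference-style relevance mechanism described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the network sizes, learning rate, ReLU rates, and the use of mean excitatory activity as the salience readout are assumptions made for the example.

```python
import numpy as np

# Minimal sketch of relevance learning on feedforward inputs to inhibitory
# units, assuming a simple rate model. All sizes and constants are illustrative.
rng = np.random.default_rng(0)

n_inputs, n_exc, n_inh = 20, 50, 10
W_xE = rng.uniform(0.0, 1.0, (n_exc, n_inputs))   # sensory -> excitatory (fixed here)
W_xI = rng.uniform(0.0, 0.1, (n_inh, n_inputs))   # sensory -> inhibitory (plastic)
W_IE = rng.uniform(0.0, 1.0, (n_exc, n_inh))      # inhibitory -> excitatory (fixed)
lr = 0.05                                          # learning rate (assumed)

def forward(x):
    """One feedforward pass with rectified (ReLU) rates."""
    I = np.maximum(W_xI @ x, 0.0)                  # inhibitory unit activity
    E = np.maximum(W_xE @ x - W_IE @ I, 0.0)       # excitatory activity, shaped by inhibition
    return E, I

def relevance_update(x, u):
    """Prediction-error-driven plasticity on sensory->inhibitory weights.

    Salience S is read out as mean excitatory activity; the error (S - u)
    strengthens inhibition when a stimulus predicts less reinforcement than
    the current excitatory output implies, so unreinforced stimuli are
    progressively 'ignored'.
    """
    global W_xI
    E, _ = forward(x)
    S = E.mean()                                   # salience / relevance estimate
    delta = S - u                                  # prediction error (u: reward/punishment)
    W_xI += lr * delta * np.outer(np.ones(n_inh), x)
    W_xI = np.clip(W_xI, 0.0, None)                # keep weights non-negative

# Repeated unreinforced presentations drive the excitatory response down:
x_irrelevant = rng.uniform(0.0, 1.0, n_inputs)
for _ in range(200):
    relevance_update(x_irrelevant, u=0.0)
print("salience after habituation:", forward(x_irrelevant)[0].mean())
```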

    Simulation of experimental data on role of feedforward inhibition in freezing behavior.

    (A) Experimental data illustrating the effects of optogenetic inhibition (“ArchT”) or excitation (“ChR2”) of medial prefrontal cortex PV+ inhibitory neurons (reproduced by hand from [44]). Inhibitory neuron inhibition was performed both before conditioning (“Base”) and following conditioning and extinction (CS+ & light). In addition, inhibitory neuron excitation was performed following conditioning (right side, CS+ & light). (B) Replication of the general patterns of inhibitory neuron manipulations in the model, substituting -20% inhibition for “ArchT” and +10% excitation (i.e., W^{I→E} weights increased by 10%) for “ChR2”.
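    The caption's substitutions amount to simple weight scalings. A minimal sketch, assuming the same rate-model weight matrix as above; the function name is illustrative, and only the -20%/+10% factors come from the caption.

```python
import numpy as np

def apply_pv_manipulation(W_IE, condition):
    """Scale inhibitory->excitatory weights to mimic the optogenetic conditions:
    'ArchT' as a 20% reduction in effective inhibition, 'ChR2' as a 10% increase."""
    if condition == "ArchT":
        return W_IE * 0.8
    if condition == "ChR2":
        return W_IE * 1.1
    return np.asarray(W_IE)   # control: unchanged
```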

    Simulation of experimental data on rodent latent inhibition.

    (A) Fear expression in rats in a latent inhibition paradigm in which animals were either pre-exposed (black bars) or not pre-exposed (white bars) to the conditioned stimulus, and treated with either saline, a GABA-A antagonist during conditioning, or a GABA-A antagonist during testing (reproduced by hand from [43]). (B) Data from the model simulation of the same latent inhibition paradigm. Bars show the median across 30 model runs (error bars are 90% CIs generated by bootstrapping 5-sample medians) of the average Amygdala layer activity during the final (test) stimulus presentations. (C) Cortex excitatory unit activity in response to stimuli across trials in an example run of the model. The downward curve during the first 30 presentations shows that the network learned to ignore the CS in all simulations with CS pre-exposures. The activity during conditioning and test periods shows the combined impact of relevance learning and impaired inhibition. (D) Amygdala activity levels in an example run of the model over trials with Conditioning and Testing epochs (as in the right-side panel of part C). Test-period activity shows a pre-exposure effect in the control condition (solid versus dashed gray lines). This is amplified when inhibition is disrupted during conditioning (solid versus dashed red lines) but is lost when inhibition is disrupted during test (solid versus dashed blue lines).
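    The error-bar procedure in panel B (a 90% CI from bootstrapped 5-sample medians) can be reproduced roughly as below; the function and parameter names are illustrative, and the authors' exact resampling scheme may differ.

```python
import numpy as np

def bootstrap_median_ci(values, n_boot=10_000, sample_size=5, ci=90, seed=0):
    """90% CI (by default) of the median, from medians of small bootstrap draws."""
    rng = np.random.default_rng(seed)
    meds = [np.median(rng.choice(values, size=sample_size, replace=True))
            for _ in range(n_boot)]
    lo, hi = np.percentile(meds, [(100 - ci) / 2, 100 - (100 - ci) / 2])
    return lo, hi

# Example usage on 30 simulated run averages:
runs = np.random.default_rng(1).normal(0.4, 0.1, size=30)
print(bootstrap_median_ci(runs))
```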

    Impaired learning to ignore following disruption to inhibition.

    (A) Illustration of the “learning to ignore” training paradigm. CS+ inputs (green bars) were paired with the US (grey bars), while CS0 inputs (orange bars) were random with respect to the US. (B) Average Cortex excitatory unit activity (lower plots) and inhibitory unit activity (upper plots) at simulated 20 ms time steps in response to unlearned stimuli (left side) compared with the end of a series of repeated presentations (right side). Excitatory responses were initially high to both stimuli, but after learning they increased only in response to the CS+, demonstrating that the network has learned to ignore the CS0. (C) Averaged excitatory unit (left) and inhibitory unit (right) responses to the CS+ (green) and CS0 (orange) across presentations, compared with non-stimulus periods (black line). Learning took place over the first 20 trials, after which excitatory responses to the CS0 plateaued at the same level as was observed with no inputs. This was due to increased inhibitory responses to the CS0. (D) Salience responses (S(t)) to the CS+ relative to the CS0 during final presentations are plotted for both control conditions and simulations of inhibitory dysfunction (means ± SD across 30 model runs). Learning to ignore was impaired by inhibitory neuron disruption only in the inhibitory neuron plasticity model (W^{x→I}).
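    A rough sketch of the panel-A schedule, assuming each trial is discretized into 20 ms time steps; the stimulus and US durations are placeholders, not values from the paper.

```python
import numpy as np

def make_trial(rng, steps=100, cs_len=15, us_len=5):
    """One 'learning to ignore' trial: CS+ co-terminates with the US,
    while CS0 occurs at a time uncorrelated with the US."""
    cs_plus, cs_zero, us = np.zeros(steps), np.zeros(steps), np.zeros(steps)
    t_plus = rng.integers(0, steps - cs_len)
    cs_plus[t_plus:t_plus + cs_len] = 1.0
    us[t_plus + cs_len - us_len:t_plus + cs_len] = 1.0   # US overlaps end of CS+
    t_zero = rng.integers(0, steps - cs_len)
    cs_zero[t_zero:t_zero + cs_len] = 1.0                # CS0 timing is random w.r.t. US
    return cs_plus, cs_zero, us

rng = np.random.default_rng(1)
cs_plus, cs_zero, us = make_trial(rng)
```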

    Illustration of interaction between relevance learning and competitive learning.

    (A) Abstract depiction of a network after it is familiarized with stimuli but before it has been reinforced. The left plot depicts a state space with two sets of vectors: red arrows represent a set of activity patterns in the excitatory units, and the blue arrow represents the synaptic weight vector between these units and an “output” unit i in the Amygdala. The dark red arrow represents a prototypical or average state vector, E′. The right bar plot shows the input level of unit i, computed as the dot product of the weight vector and the activity vector of the input units. In this case the stimuli are not novel, so all of the associated state vectors in the input units have norms close to the homeostatic constant H (lengths of red arrows are approximately H). Also, since the weights are not yet trained, they are poorly aligned with E′, resulting in the activity of i being lower than threshold θ. (B) The same plots as in A during learning. When E′ is paired with reinforcement (u(t) = 1), both relevance learning and competitive learning occur. Competitive learning pushes the weight vector in the direction of the mean of the input vectors (blue dotted arrow). Meanwhile, relevance learning increases the norm ∥E′∥₂ towards H + A (red dotted arrow). Although not shown here, the strength of competitive learning depends on the length of the activity state vector; i.e., learning will be stronger for novel or already-salient stimuli. (C) As previous, following combined competitive learning and relevance learning. The dot product now exceeds threshold θ, thanks both to the alignment of the vectors from competitive learning and to the increase in the length of E′ by relevance learning. Now i will become active in response to E′ even without a US. (D) The same state space is plotted with a vector depicting a different activity state (green arrow) evoked by a stimulus that has been familiarized but not reinforced. The poorer alignment between this new state and the weight vector, coupled with the shorter length of the input vector, yields a lower input to unit i that does not exceed threshold, and thus fails to evoke a response.
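    A small numerical illustration of the geometry described above: the output unit's input is the dot product of its weight vector with the activity vector, and it crosses threshold only when competitive learning aligns the weights with E′ and relevance learning lengthens E′ from H toward H + A. The values of H, A, θ, and the example vectors are arbitrary choices for the demonstration, not parameters from the paper.

```python
import numpy as np

H, A, theta = 1.0, 0.5, 1.1                       # illustrative constants

direction = np.array([1.0, 0.0])                  # direction of the reinforced pattern E'
E_familiar = H * direction                        # familiarized but not reinforced (norm ~ H)
E_relevant = (H + A) * direction                  # after relevance learning (norm ~ H + A)

w_untrained = 0.9 * np.array([0.6, 0.8])          # poorly aligned weight vector
w_trained   = 0.9 * direction                     # aligned after competitive learning

for name, w, E in [("before learning", w_untrained, E_familiar),
                   ("after learning",  w_trained,   E_relevant)]:
    drive = float(w @ E)                          # input to output unit i
    print(f"{name}: input = {drive:.2f}, responds = {drive > theta}")
```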

    Demonstration of blocking and its impairment following inhibitory disruptions.

    (A) Illustration of the blocking paradigm: the model was first habituated to two stimuli (CS-A, CS-B; Pre-exposure), CS-A and a US were then repeatedly presented at partially overlapping times (Conditioning), both CS-A and CS-B were then presented with the US (Blocking), followed by independent presentations of CS-A and CS-B (Testing). (B) Excitatory (lower plots) and inhibitory (upper plots) unit activity over 20 ms bins show the network's response to CS-A (left) and CS-B (right) at the end of the blocking paradigm. Despite CS-B having been paired with the US, the “blocked” stimulus did not elicit increased activity among excitatory units. (C) Excitatory (left) and inhibitory (right) unit responses to CS-A and CS-B over trials. Test epochs are expanded in insets. (D) Excitatory responses to CS-A relative to CS-B at the end of the test epoch are plotted for control simulations and simulations with dysfunctional inhibition (means ± SD across 30 model runs). The inhibitory neuron plasticity model (W^{x→I}) showed a loss of the blocking effect when inhibition was disrupted; unexpectedly, the excitatory neuron plasticity model (W^{x→E}) exhibited a reversal of the blocking effect, i.e., CS-B was learned more strongly than CS-A.
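    The four phases in panel A can be summarized as an ordered schedule. This sketch only records which stimuli and US are present in each epoch; trial counts and timing are not specified in the caption and are omitted.

```python
# (epoch name, stimuli presented, US present?)
blocking_schedule = [
    ("Pre-exposure", ["CS-A", "CS-B"], False),
    ("Conditioning", ["CS-A"],         True),
    ("Blocking",     ["CS-A", "CS-B"], True),
    ("Testing",      ["CS-A"],         False),
    ("Testing",      ["CS-B"],         False),
]
```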

    Multiplexed stimulus category and relevance codes via simultaneous excitatory and inhibitory learning.

    (A) Diagram illustrating the modified model, which included both the relevance-learning mechanism described above (on W^{x→I} synapses) and a mechanism for learning an output vector that matches the categories presented as input (the backpropagation algorithm applied to the W^{E→y} and W^{x→E} synapses). As illustrated by the bottom boxes, one of ten stimuli presented to the network was rewarded. (B) Average excitatory unit responses to the one rewarded stimulus (green) and the nine unrewarded stimuli (orange) over time. The network quickly learns to respond more strongly to the rewarded stimulus. (C) Performance of the model on input classification. Over the same time that the network learns to discriminate rewarded from unrewarded stimuli, it also becomes capable of matching the output vector to the input. The gray trace shows the percentage of presentations on which stimuli are correctly classified, which increases quickly before reaching a plateau. The blue trace shows the cross-entropy, an information measure (in natural units of information) based on the output activity distribution that is inversely related to the success of input classification.
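    The two panel-C metrics can be computed roughly as below, assuming a softmax readout over the ten output units; the function name and the softmax assumption are illustrative, since the output nonlinearity is not specified in this caption.

```python
import numpy as np

def classification_metrics(y_logits, target_index):
    """Return (correct classification?, cross-entropy in nats) for one presentation."""
    p = np.exp(y_logits - y_logits.max())
    p /= p.sum()                                   # softmax over output units
    correct = int(np.argmax(p) == target_index)    # 1 if the input category wins
    xent = -np.log(p[target_index])                # cross-entropy, natural units
    return correct, xent

# Example usage with arbitrary logits for ten categories:
logits = np.array([0.1, 2.0, -1.0, 0.0, 0.3, 0.2, -0.5, 0.1, 0.4, 0.0])
print(classification_metrics(logits, target_index=1))
```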

    Overview of the proposed relevance code and network model.

    (A) Schematic illustrating the hypothesis that relevance (prediction of reward or punishment) is coded by levels of excitatory neuron output from a network, which is controlled by feedforward inhibition. (B) Basic structure of the network model. Left side shows feedforward connections from “Sensory” inputs, through inhibitory (I(t)) and excitatory (E(t)) “Cortex” units, with E(t) units feeding onto an output layer. Right side shows how the salience signal (S(t)), computed from the overall level of excitatory unit activity, is combined with signals about environmental unconditioned stimuli (u(t)) to generate a prediction error that supervises the plasticity of connection weights between Sensory and Cortex layers.
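    One plausible formalization of the figure's salience and prediction-error signals is given below. The caption does not reproduce the paper's equations, so the averaging over excitatory units, the sign convention, and the proportionality are assumptions consistent with the description above rather than the authors' exact formulation.

```latex
% Assumed formalization (not the paper's exact equations): salience as average
% excitatory activity, a prediction error against the unconditioned stimulus,
% and error-driven plasticity on Sensory -> Inhibitory weights.
\begin{align}
  S(t) &= \frac{1}{N_E} \sum_{j=1}^{N_E} E_j(t) \\
  \delta(t) &= u(t) - S(t) \\
  \Delta W^{x \to I}_{kj} &\propto -\,\delta(t)\, x_j(t)
\end{align}
```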