10 research outputs found

    Capacity of networks with MPDP.

<p><b>A:</b> Fraction of patterns for which the network generates an output spike within 2 ms of the target time, and no spurious spikes. Network size is <i>N</i> = 1000. The desired spikes are learned within ≈ 600 steps. <b>B</b>: Average distance of output spikes to target for the same network size. Training continues even after the desired spikes are generated, pushing them closer to the desired time. <b>C</b>: Average fraction of recalled spikes after 10000 learning blocks for all network sizes as a function of the load. Networks with <i>N</i> = 200 have a high probability of failing to recall all spikes even at low loads. Otherwise, recall improves with network size. The thin black line lies at a recall fraction of 90%. The critical load <i>α</i><sub>90</sub> is the point where the graph crosses this line. <b>D</b>: Average distance of recalled spikes as a function of the load. The lower the load, the closer the output spikes are to their desired locations. <b>E</b>: Critical load as a function of network size for all four learning rules. MPDP reaches approximately half of the maximal capacity.</p>

    Hebbian learning with homeostatic MPDP.

<p>A postsynaptic neuron is presented with the same input pattern multiple times, alternating between teaching trials with a teacher spike (blue trace) and recall trials (green trace) to test the output. Initially, all weights are zero (left). The green area between the voltage and the threshold for potentiation <i>ϑ</i><sub><i>P</i></sub> signifies the total amount of potentiation; similarly, the red area between the voltage and <i>ϑ</i><sub><i>D</i></sub> signifies depression; the latter is only visible in the second-to-right panel. Learning is initially Hebbian until strong depolarization occurs (second to left). When the spike first appears during recall, it is not yet at the exact location of the teacher spike (second to right). Continued learning moves it closer to the desired location. Also, the time windows of the voltage being above <i>ϑ</i><sub><i>D</i></sub> and below <i>ϑ</i><sub><i>P</i></sub> shrink and move closer together in time (right). Synaptic plasticity almost stops. The number of learning trials before each state is 1, 16, 53, and 1600 from left to right.</p>
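The green and red areas in this caption are, in effect, integrals of the voltage's excursions past the two plasticity thresholds. The following is a minimal sketch of that quantity; the threshold values, time step, and units are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def plasticity_areas(v, theta_p=-72.0, theta_d=-58.0, dt=0.1):
    """Integrate the voltage excursions past the two MPDP thresholds.

    Returns (potentiation_area, depression_area): the 'green' area where
    the trace lies below theta_P and the 'red' area where it lies above
    theta_D. All numbers (mV, ms) are hypothetical.
    """
    v = np.asarray(v, dtype=float)
    potentiation = np.maximum(theta_p - v, 0.0).sum() * dt
    depression = np.maximum(v - theta_d, 0.0).sum() * dt
    return potentiation, depression

# A trace confined between the two thresholds drives no plasticity.
p0, d0 = plasticity_areas(np.full(100, -65.0))
# A strongly depolarized trace drives depression only.
p1, d1 = plasticity_areas(np.full(100, -50.0))
```

As in the figure, plasticity vanishes once the trace stays inside the band between <i>ϑ</i><sub><i>P</i></sub> and <i>ϑ</i><sub><i>D</i></sub>, which is why learning "almost stops" in the rightmost panel.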

    Learning of Precise Spike Times with Homeostatic Membrane Potential Dependent Synaptic Plasticity - Fig 1

<p><b>A:</b> The model network has a simple feed-forward structure. The top picture shows three presynaptic and one postsynaptic neuron, connected by synapses. Line width in this example corresponds to synaptic strength. The bottom picture shows the postsynaptic membrane potential in response to the input. <b>B</b>: Illustration of Anti-Hebbian Membrane Potential Dependent Plasticity (MPDP). A LIF neuron is presented twice with the same presynaptic input pattern. Excitation never exceeds <i>V</i><sub><i>thr</i></sub>. MPDP changes synapses to counteract the hyperpolarization and depolarization occurring in the first presentation (blue trace), reducing (arrows) them on the second presentation (green trace). <b>C</b>: Homeostatic MPDP on inhibitory synapses is compatible with STDP as found in experiments. Weight change is tested for different temporal distances between pre- and postsynaptic spiking, with the presynaptic neuron being inhibitory. Δ<i>w</i> here denotes the change in the conductance increase that an inhibitory synapse produces upon a presynaptic spike. The resulting spike timing characteristic is in agreement with experimental data on STDP of inhibitory synapses [<a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0148948#pone.0148948.ref016" target="_blank">16</a>]. Note that an increase of the weight leads to a suppressive effect on the membrane potential.</p>

    Learning of Precise Spike Times with Homeostatic Membrane Potential Dependent Synaptic Plasticity

<div><p>Precise spatio-temporal patterns of neuronal action potentials underlie, e.g., sensory representations and the control of muscle activities. However, it is not known how the synaptic efficacies in the neuronal networks of the brain adapt such that they can reliably generate spikes at specific points in time. Existing activity-dependent plasticity rules like Spike-Timing-Dependent Plasticity are agnostic to the goal of learning spike times. On the other hand, the existing formal and supervised learning algorithms perform a temporally precise comparison of projected activity with the target, but there is no known biologically plausible implementation of this comparison. Here, we propose a simple and local unsupervised synaptic plasticity mechanism that is derived from the requirement of a balanced membrane potential. Since the relevant signal for synaptic change is the postsynaptic voltage rather than spike times, we call the plasticity rule Membrane Potential Dependent Plasticity (MPDP). Combining our plasticity mechanism with spike after-hyperpolarization causes a sensitivity of synaptic change to pre- and postsynaptic spike times which can reproduce Hebbian spike-timing-dependent plasticity for inhibitory synapses, as was found in experiments. In addition, the sensitivity of MPDP to the time course of the voltage when generating a spike allows MPDP to distinguish between weak (spurious) and strong (teacher) spikes, which therefore provides a neuronal basis for the comparison of actual and target activity. For spatio-temporal input spike patterns, our conceptually simple plasticity rule achieves a surprisingly high storage capacity for spike associations. The sensitivity of MPDP to the subthreshold membrane potential during training allows robust memory retrieval after learning, even in the presence of activity corrupted by noise. We propose that MPDP represents a biophysically plausible mechanism to learn temporal target activity patterns.</p></div>
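As a rough illustration of the mechanism described in the abstract, the sketch below combines a subthreshold leaky integrate-and-fire trace with an anti-Hebbian, homeostatic weight update: synapses active while the voltage is depolarized above a threshold <i>ϑ</i><sub><i>D</i></sub> are depressed, and synapses active while it is hyperpolarized below <i>ϑ</i><sub><i>P</i></sub> are potentiated. All parameter values, the instantaneous spike-voltage coincidence (in place of a proper PSP kernel), and the clamp to non-negative weights are simplifying assumptions for illustration, not the paper's model.

```python
import numpy as np

def lif_trial(w, spikes, dt=0.1, tau=10.0, v_rest=-70.0):
    """Subthreshold trace of a current-based LIF neuron (spiking and
    reset omitted). spikes is an (n_syn, n_steps) 0/1 array; each weight
    is the instantaneous voltage kick of one synapse (nominal mV, ms)."""
    n_steps = spikes.shape[1]
    v = np.empty(n_steps)
    v_now = v_rest
    for t in range(n_steps):
        v_now += dt / tau * (v_rest - v_now) + w @ spikes[:, t]
        v[t] = v_now
    return v

def mpdp_step(w, spikes, v, eta=0.005, theta_p=-72.0, theta_d=-58.0):
    """One homeostatic MPDP update: synapses active while the voltage is
    below theta_P are potentiated, synapses active while it is above
    theta_D are depressed. Thresholds, learning rate, and the clamp to
    non-negative (excitatory) weights are illustrative assumptions."""
    hyper = np.maximum(theta_p - v, 0.0)   # hyperpolarization drive
    depol = np.maximum(v - theta_d, 0.0)   # depolarization drive
    return np.maximum(w + eta * spikes @ (hyper - depol), 0.0)

# Toy run: random presynaptic spikes and initially strong weights push
# the voltage far above theta_D; repeated MPDP pulls it back toward the
# balanced band, after which plasticity largely ceases.
rng = np.random.default_rng(0)
spikes = (rng.random((40, 200)) < 0.02).astype(float)
w = np.full(40, 1.0)
v_before = lif_trial(w, spikes)
for _ in range(300):
    w = mpdp_step(w, spikes, lif_trial(w, spikes))
v_after = lif_trial(w, spikes)
```

The fixed point of this toy update is a "balanced membrane potential" in the abstract's sense: weights stop changing exactly when the trace no longer crosses either threshold at presynaptic spike times.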

    Hebbian learning with homeostatic MPDP on inhibitory synapses.

<p>A conductance-based integrate-and-fire neuron is repeatedly presented with a fixed input pattern of activity in presynaptic inhibitory and excitatory neuron populations (top row; blue dots are excitatory, red dots are inhibitory spike times). The number of input neurons is <i>N</i><sub><i>i</i></sub> = 142 for the inhibitory population and <i>N</i><sub><i>e</i></sub> = 571 for the excitatory population. The second row shows the membrane potential before learning. The upper red line is the threshold for potentiation of inhibitory synapses, the lower red line is the resting potential and the threshold for depression. The third row shows the voltage as before with added teacher input from an additional population of excitatory neurons; this input induces a spike at <i>t</i> = 100<i>ms</i>. The fourth row shows the voltage after 100 learning steps with MPDP on inhibitory synapses only, with teacher input; the next row shows the recall without the teacher. The spike is at almost the same position in the recall case. The last row shows the voltage after 1000 recall trials during which the inhibitory synapses were allowed to change under the MPDP rule. Despite this, the output spike is still close to the desired time, which shows that the output is approximately stable.</p>

    Recall and capacity with input jitter.

<p><b>A:</b> Recall of networks trained noise-free with MPDP when the input patterns are jittered during recall (<i>N</i> = 1000). The black line lies on top of the blue and red ones (same in B). Up to <i>σ</i><sub><i>jitter</i></sub> = 0.5<i>ms</i>, recall is unhindered. A curious feature is a “slump” in the recall for strong input jitter and intermediate loads. This slump is even more visible for the larger network with <i>N</i> = 2000 (<b>B</b>). The slump strongly correlates with the variance of the weights as a function of network load (<b>C</b> for <i>N</i> = 1000, <b>D</b> for <i>N</i> = 2000). The mean of the weights stays almost constant. <b>E:</b> Critical load as a function of input jitter during recall. The networks are trained noise-free with different learning rules. Solid lines show <i>N</i> = 2000, dashed lines <i>N</i> = 1000. Crosses show sampling points. A discontinued line means that for this input jitter the networks no longer reach 90% recall. Recall for MPDP stays almost constant up to <i>σ</i><sub><i>jitter</i></sub> = 0.5, while for the other learning rules a considerable drop-off in recall is visible. <b>F:</b> Noise-free recall of networks trained with noisy input. For MPDP, E-Learning, and FP-Learning alike, the capacity drops with increasing training noise. The exception is ReSuMe: here, the capacity strongly <i>increases</i> if the noise is small.</p>

    Capacity of networks under input noise.

<p>All networks are of size <i>N</i> = 1000. <b>A</b>: Recall as a function of the load for different levels of noise during recall. Noise is imposed as an additional stochastic external current. Networks were trained with MPDP. Up to a noise level of <i>σ</i><sub><i>input</i></sub> = 1<i>mV</i> during recall, there is almost no degradation of capacity. <b>B</b>: Same as A, but with stochastic input noise of width 0.5<i>mV</i> during network training. The capacity is slightly reduced, but resistance against noise is slightly better. <b>C</b> and <b>D</b>: Same as A and B, but the network was trained with FP-Learning. The capacity is doubled. However, the network trained without noise shows an immediate degradation of recall with noise. If the network is trained with noisy examples (D, <i>σ</i><sub><i>input</i></sub> = 0.5<i>mV</i>), recall with noise of the same magnitude is also perfect. <b>E</b>: Comparison of the capacity of networks trained with MPDP and FP-Learning depending on input noise during training and recall. Solid lines: MPDP, dashed lines: FP-Learning. Lines that are cut off indicate that the network failed to reach 90% recall for higher noise. The x-axis is the noise level during recall. Different colors indicate the noise level during training. Curiously, although FP-Learning suffers more from higher noise during recall than during training, its capacity drops less than with MPDP. <b>F</b>: Comparison of weight statistics of MPDP (solid lines) and FP-Learning (dashed lines) after learning. The left plot is the mean, the right plot is the standard deviation. With MPDP, the weights stay within a bounded regime; the mean is independent of noise or load during training; the cyan line for <i>ι</i> = 0.1 occludes the others. FP-Learning rescales the weights during training with noise: the mean becomes negative, and the standard deviation grows approximately linearly with the noise level. This effectively scales down the noise from the stochastic input.</p>

    Low Abundant <i>N</i>‑linked Glycosylation in Hen Egg White Lysozyme Is Localized at Nonconsensus Sites

Although wild-type hen egg white lysozyme (HEL) lacks the consensus sequence motif NX(S/T), in 1995 Trudel et al. (<i>Biochem. Cell Biol.</i> <b>1995</b>, <i>73</i>, 307–309) proposed the existence of a low-abundance <i>N</i>-glycosylated form of HEL; however, the identity of active glycosylation sites in HEL remained a matter of speculation. For the first time since Trudel’s initial work, we report here a comprehensive characterization by mass spectrometry of <i>N</i>-glycosylation in wild-type HEL. Our analytical approach comprised ZIC-HILIC enrichment of <i>N</i>-glycopeptides from a HEL trypsin digest, deglycosylation by <sup>18</sup>O/PNGase F as well as by various endoglycosidases, and LC–MS/MS analysis of both intact and deglycosylated <i>N</i>-glycopeptides employing multiple techniques of ionization and fragmentation. A novel data interpretation workflow based on MS/MS spectra classification and glycan database searching enabled the straightforward identification of the asparagine-rich <i>N</i>-glycopeptide [34–45] FESNFNTQATNR and allowed for compositional profiling of its modifying <i>N</i>-glycans. The overall heterogeneity profile of <i>N</i>-glycans in HEL comprised at least 26 different compositions. Results obtained from deglycosylation experiments provided clear evidence that the asparagine residues N44 and N39 represent active glycosylation sites in HEL. Neither of these sites falls into any known <i>N</i>-glycosylation-specific sequence motif; both are localized in rarely observed nonconsensus sequons (NXN, NXQ).

    Biogenic volatile release from permafrost thaw determined by microbial soil sink

Supplementary data for the article:<br>Kramshøj et al. (2018) Biogenic volatile release from permafrost thaw determined by microbial soil sink. Nature Communications.

    Adsorption Behavior of Lysozyme at Titanium Oxide–Water Interfaces

We present an in situ X-ray reflectivity study of the adsorption behavior of the protein lysozyme on titanium oxide layers under variation of different thermodynamic parameters, such as temperature, hydrostatic pressure, and pH value. Moreover, by varying the thickness of the titanium oxide layer on a silicon wafer, changes in the adsorption behavior of lysozyme were studied. Overall, we determined less adsorption on titanium oxide compared with silicon dioxide, while increasing the titanium oxide layer thickness causes stronger adsorption. Furthermore, varying the temperature from 20 to 80 °C yields an increase in the amount of adsorbed lysozyme at the interface. Additional measurements varying the pH value of the system between pH 2 and 12 show that the surface charge of both the protein and the titanium oxide plays a crucial role in the adsorption process. Further pressure-dependent experiments between 50 and 5000 bar show a reduction in the amount of adsorbed lysozyme with increasing pressure.