
    Automatic Adaptation to Fast Input Changes in a Time-Invariant Neural Circuit

    Neurons must faithfully encode signals that can vary over many orders of magnitude despite having only limited dynamic ranges. For a correlated signal, this dynamic range constraint can be relieved by subtracting away components of the signal that can be predicted from the past, a strategy known as predictive coding that relies on learning the input statistics. However, the statistics of natural input signals can also vary over very short time scales, e.g., following saccades across a visual scene. To maintain a reduced transmission cost for signals with rapidly varying statistics, neuronal circuits implementing predictive coding must also rapidly adapt their properties. Experimentally, sensory neurons in several modalities have shown such adaptations within 100 ms of an input change. Here, we show first that linear neurons connected in a feedback inhibitory circuit can implement predictive coding. We then show that adding a rectification nonlinearity to such a feedback inhibitory circuit allows it to adapt automatically, approximating the performance of an optimal linear predictive coding network over a wide range of inputs while keeping its underlying temporal and synaptic properties unchanged. We demonstrate that the resulting changes to the linearized temporal filters of this nonlinear network match the fast adaptations observed experimentally in different sensory modalities and in different vertebrate species. The nonlinear feedback inhibitory network can therefore provide automatic adaptation to rapidly varying signals, maintaining the dynamic range necessary for accurate neuronal transmission of natural inputs.
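
    The circuit described in the abstract can be sketched in a few lines. Below is a minimal discrete-time toy, not the paper's actual equations: a principal cell whose output is inhibited by a leaky integrator of its own transmitted signal, with an optional rectification on the feedback drive. The time constant tau, feedback gain lam, and the demo signal are illustrative assumptions.

```python
import numpy as np

def feedback_inhibition(s, tau=20.0, lam=4.0, rectify=False, dt=1.0):
    # Principal cell transmits r(t) = s(t) - lam * i(t); the inhibitory
    # unit i(t) leakily integrates the transmitted signal, forming a
    # running prediction of the correlated part of s. With rectify=True,
    # the inhibitory unit sees only the positive part of r, the
    # nonlinearity studied in the paper. All parameter values here are
    # illustrative, not fitted.
    r = np.zeros_like(s, dtype=float)
    i = 0.0
    for t in range(len(s)):
        r[t] = s[t] - lam * i
        drive = max(r[t], 0.0) if rectify else r[t]
        i += (dt / tau) * (drive - i)  # leaky integration, time constant tau
    return r

# A slowly varying (predictable) signal is largely cancelled by the
# feedback, so the transmitted output occupies a much smaller dynamic
# range than the raw input.
rng = np.random.default_rng(0)
t = np.arange(4000)
s = 5.0 * np.sin(2 * np.pi * t / 1000) + 0.1 * rng.standard_normal(len(t))
print(s.std(), feedback_inhibition(s).std())
```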

    Parameters of the optimized linear predictive coding algorithm.

    (a) β = β(τ_s). (b) Λ* = Λ*(β, σ): Eq (6), plotted for two values of β.

    Response filters of zebra finch auditory neurons (a; adapted from [28]) compared with simulated response filters of a nonlinear feedback inhibitory circuit (d).

    (a-c) Adapted from [28]. (a) Linear response filters for a single neuron stimulated with inputs of different mean sound amplitudes (colored to match (d)). (b) The ratio of the total positive to the total negative component of the neuronal filters, computed for an input with high mean amplitude and for an input with low mean amplitude. Each circle shows these values for a different recorded neuron (one example being (a)). The colored circle is derived from the simulated response filters of the principal cell of the nonlinear circuit (d): it uses the ratio of the total positive to the total negative component of the simulated filters (plotted in (e)) at the highest input amplitude (red) and the lowest input amplitude (cyan). (c) The BMF (the frequency of the peak of the Fourier transform of the linear response filters) for a high-mean input and a low-mean input (for different neurons, one example being (a)). The colored circle is derived from the simulated responses of the principal cell of the nonlinear network (d): it uses the BMFs (shown in (f)) of the simulated filters at the highest input amplitude (red) and the lowest input amplitude (cyan). (d) Linear filters estimated to best approximate the response of the principal cell of the nonlinear circuit to white-noise stimuli of different amplitudes (Methods). Curves are colored to match (a). (e) The ratio of the total positive component of the curves in (d) to the total negative component. Points are colored as in (a,d). The ratios derived from the inputs with the highest (red) and lowest (cyan) amplitudes define the colored circle in (b). (f) The BMF of the response filters from (d). The BMF values derived from the inputs with the highest (red) and lowest (cyan) amplitudes define the colored circle in (c).
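
    Both statistics compared across panels (b,c,e,f) can be computed directly from a sampled filter: the positive/negative ratio, and the BMF as the frequency of the spectral peak. A small sketch, assuming the filter arrives as a discrete array; the example band-pass filter is made up for illustration.

```python
import numpy as np

def filter_stats(h, dt):
    # Total positive vs. total negative component of the filter.
    pos = h[h > 0].sum()
    neg = -h[h < 0].sum()
    # BMF: frequency of the peak of the filter's Fourier transform.
    spectrum = np.abs(np.fft.rfft(h))
    freqs = np.fft.rfftfreq(len(h), d=dt)
    return pos / neg, freqs[np.argmax(spectrum)]

# Example: a damped-oscillation (band-pass) filter sampled at 1 kHz.
dt = 0.001
t = np.arange(0, 0.1, dt)
h = np.sin(2 * np.pi * 40 * t) * np.exp(-t / 0.02)
ratio, bmf = filter_stats(h, dt)
print(f"pos/neg ratio = {ratio:.2f}, BMF = {bmf:.0f} Hz")
```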

    Comparing the performance of nonlinear and linear feedback inhibitory networks, measured through network gain (lower is better; details in Methods).

    (a) Two input mixtures, modeling a rapid transition from predictable to unpredictable input components. (b) Description of the simulations. Inputs are constructed as in (a): at each time point the input is either pure predictable signal or pure unpredictable noise, with an instantaneous transition from one type to the other in the middle of the simulation period. (c-f) Simulation outputs (inputs shown in inset). The amplitude of the unpredictable component of the mixture varies along the x axis; error bars are 1 std. dev. (c,e) The network gain of the type 1 linear network, optimized to the mixture (blue, non-adapted linear response), is significantly higher than that of the nonlinear network (red). In contrast, the nonlinear network gain is close to that of the optimal type 2 linear network, which is allowed to adapt to each component of the mixture (dotted black, adapted linear response). (e) Green shading indicates the region where the nonlinear response is more than one std. dev. lower than the non-adapted linear response; diagonal hashing indicates the region where the nonlinear response is within one std. dev. of the adapted linear response. (d,f) Percent improvement of the nonlinear network over the type 1 linear network at different amplitudes of the unpredictable component, taken from (c) and (e) respectively. (f) The green box indicates the region where the improvement differs from 0 by more than one std. dev.
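
    The following is a hedged sketch of how inputs like those in (a,b) can be constructed, together with one plausible reading of "network gain" as transmitted power relative to input power; the paper's Methods give the exact definitions, and all parameters here are illustrative.

```python
import numpy as np

def make_mixture(n, noise_amp, corr_len=50, seed=0):
    # First half: a predictable component (white noise smoothed over
    # corr_len samples, hence strongly correlated in time). Second
    # half: unpredictable white noise of amplitude noise_amp. The
    # switch between the two is instantaneous, as in panel (b).
    rng = np.random.default_rng(seed)
    half = n // 2
    kernel = np.ones(corr_len) / corr_len
    predictable = np.convolve(rng.standard_normal(half), kernel, mode="same")
    unpredictable = noise_amp * rng.standard_normal(n - half)
    return np.concatenate([predictable, unpredictable])

def network_gain(inp, out):
    # One plausible reading of "network gain" (lower is better):
    # transmitted power relative to input power.
    return out.var() / inp.var()
```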

    Example segmentations on natural images.

    Top row: despite having a very noisy boundary map, using additional cues allows us to segment the objects successfully. Middle row: although there are many weak edges, region-based texture information helps give a correct segmentation. Bottom row: a failure case, where the similar texture of the elephants causes them to be merged even though a faint boundary exists between them. For all rows, the VI ODS threshold was used. The rows correspond, top to bottom, to the points identified in Figure 7.

    Evaluation on BSDS500. Higher is better for all measures except VI, for which lower is better.

    ODS uses the optimal scale for the entire dataset, while OIS uses the optimal scale for each image.
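
    The ODS/OIS distinction reduces to where the maximum is taken over a per-image, per-scale score table. A minimal sketch with made-up scores (for a lower-is-better measure such as VI, the max calls would become min):

```python
import numpy as np

def ods_ois(scores):
    # scores[i, k]: quality of image i segmented at scale k (higher is
    # better). ODS commits to one scale for the whole dataset; OIS may
    # pick the best scale per image, so OIS >= ODS always.
    ods = scores.mean(axis=0).max()
    ois = scores.max(axis=1).mean()
    return ods, ois

# Example: 3 images evaluated at 4 scales.
scores = np.array([[0.60, 0.72, 0.68, 0.55],
                   [0.58, 0.65, 0.70, 0.62],
                   [0.40, 0.55, 0.52, 0.50]])
print(ods_ois(scores))  # OIS is at least as large as ODS
```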

    Split VI plot for different learning or agglomeration methods.

    Shaded areas correspond to the mean ± the standard error of the mean. The "best" segmentation is given by optimal agglomeration of the superpixels by comparison to the gold standard segmentation. This point is not at the origin because the superpixel boundaries do not exactly correspond to those used to generate the gold standard. The standard deviation of this point is smaller than the marker denoting it. Stars mark the minimum VI (the sum of false splits and false merges), and circles mark the VI at threshold 0.5.
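
    The split VI plotted here decomposes the variation of information into its two conditional-entropy terms, VI(S, G) = H(S|G) + H(G|S). Below is a from-scratch sketch of that computation; the label arrays and toy example are illustrative, and none of the paper's agglomeration machinery is reproduced.

```python
import numpy as np
from scipy import sparse

def split_vi(seg, gt):
    # Joint label distribution of (segmentation, gold standard) as a
    # contingency table; duplicate (i, j) pairs are summed by COO.
    seg, gt = np.ravel(seg), np.ravel(gt)
    p = sparse.coo_matrix((np.ones(seg.size), (seg, gt))).toarray()
    p /= p.sum()
    pseg, pgt = p.sum(axis=1), p.sum(axis=0)

    def H(q):
        q = q[q > 0]
        return -(q * np.log2(q)).sum()

    h_joint = H(p.ravel())
    # VI(seg, gt) = H(seg|gt) + H(gt|seg): the first term measures
    # false splits (oversegmentation), the second false merges.
    return h_joint - H(pgt), h_joint - H(pseg)

# Toy example: one gold-standard object split into two segments.
gt = np.array([0, 0, 0, 0, 1, 1, 1, 1])
seg = np.array([0, 0, 1, 1, 2, 2, 2, 2])
print(split_vi(seg, gt))  # (false splits > 0, false merges = 0)
```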

    Agglomerative learning improves merge probability estimates during agglomeration.

    (Flat learning is equivalent to 0 agglomerative training epochs.) (a) VI as a function of threshold for mean agglomeration, flat learning, and agglomerative learning (5 epochs). Stars indicate the minimum VI; circles indicate the VI at threshold 0.5. (b) VI as a function of the number of training epochs. The improvement in minimum VI afforded by agglomerative learning is minor (though significant), but the improvement at threshold 0.5 is much greater, and the minimum VI and the VI at threshold 0.5 are very close for 4 or more epochs.
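
    To make the flat-versus-agglomerative distinction concrete, here is a deliberately toy reconstruction of the training scheme on a 1D chain of "superpixels": flat learning fits the merge classifier once on the initial boundaries, while agglomerative learning replays the agglomeration and harvests labelled examples from the larger merged regions that flat learning never sees. Every name, feature, and parameter below is a hypothetical stand-in, not the paper's implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def make_chain(n=80, seed=0):
    # Toy 1D oversegmentation: a gold-standard object id and a noisy
    # intensity for each of n superpixels along a line (illustrative).
    rng = np.random.default_rng(seed)
    gt = np.cumsum(rng.random(n) < 0.15)      # object id per superpixel
    val = gt.astype(float) + rng.normal(0.0, 0.25, n)
    return gt, val

def edge_feat(val, a, b):
    # Boundary feature: intensity difference and combined region size.
    # Sizes grow during agglomeration, which flat learning never sees.
    return [abs(val[a].mean() - val[b].mean()), len(a) + len(b)]

def true_merge(gt, a, b):
    # A merge is correct iff both regions lie in one gold-standard object.
    return int(len(set(gt[a + b])) == 1)

def training_epoch(gt, val, clf, examples):
    # Replay the agglomeration: label the classifier's preferred merge
    # at each step, but only execute merges that are truly correct.
    regions = [[i] for i in range(len(gt))]
    while len(regions) > 1:
        feats = [edge_feat(val, regions[i], regions[i + 1])
                 for i in range(len(regions) - 1)]
        pick = int(np.argmax(clf.predict_proba(feats)[:, 1]))
        examples.append((feats[pick],
                         true_merge(gt, regions[pick], regions[pick + 1])))
        good = [i for i in range(len(regions) - 1)
                if true_merge(gt, regions[i], regions[i + 1])]
        if not good:
            break
        i = good[0]
        regions[i:i + 2] = [regions[i] + regions[i + 1]]

gt, val = make_chain()
edges = [([i], [i + 1]) for i in range(len(gt) - 1)]
X = [edge_feat(val, a, b) for a, b in edges]
y = [true_merge(gt, a, b) for a, b in edges]
clf = LogisticRegression().fit(X, y)          # flat learning (0 epochs)
examples = list(zip(X, y))
for _ in range(5):                            # agglomerative learning
    training_epoch(gt, val, clf, examples)
    Xs, ys = zip(*examples)
    clf = LogisticRegression().fit(list(Xs), list(ys))
```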

    Evaluation of segmentation algorithms on BSDS500.

    Left: split-VI plot. Stars represent the optimal VI (the minimum of the sum of the x- and y-axis values); circles represent the VI at threshold 0.5. Right: boundary precision-recall plot.

    Representative 3D EM data and sample reconstructions.

    Note that the data is isotropic, meaning it has the same resolution along every axis. The goal of segmentation here is to partition the volume into individual neurons, two of which are shown in orange and blue. The volume is densely packed with these thin neuronal processes, which take long, tortuous paths.