
    The Interplay between Branching and Pruning on Neuronal Target Search during Developmental Growth: Functional Role and Implications

    Regenerative strategies that facilitate the regrowth and reconnection of neurons are among the most promising methods in spinal cord injury research. An essential part of these strategies is an increased understanding of the mechanisms by which growing neurites seek out and synapse with viable targets. In this paper, we use computational and theoretical tools to examine the targeting efficiency of growing neurites subject to limited resources, such as a maximum total neural tree length. We find that in order to efficiently reach a particular target, growing neurites must strike a balance between pruning and branching: rapidly growing neurites that do not prune will exhaust their resources, and frequently pruning neurites will fail to explore space effectively. We also find that the optimal branching/pruning balance must shift as the target distance changes: different strategies are called for to reach nearby vs. distant targets. This suggests the existence of a currently unidentified higher-level regulatory factor that controls arborization dynamics. We propose that these findings may be useful in future therapies seeking to improve targeting rates through manipulation of arborization behaviors.

    Differential Consolidation and Pattern Reverberations within Episodic Cell Assemblies in the Mouse Hippocampus

    One hallmark feature of the consolidation of episodic memory is that only a fraction of the original information, usually in a more abstract form, is selected for long-term memory storage. How does the brain perform this differential memory consolidation? To investigate the neural network mechanism that governs this selective consolidation process, we use a set of distinct fearful events to study if and how hippocampal CA1 cells engage in selective memory encoding and consolidation. We show that these distinct episodes activate a unique assembly of CA1 episodic cells, or neural cliques, whose response-selectivity ranges from general to specific features. A series of parametric analyses further reveals that post-learning CA1 episodic pattern replays, or reverberations, are mostly mediated by cells exhibiting event intensity-invariant responses, not by the intensity-sensitive cells. More importantly, reactivation cross-correlations displayed by intensity-invariant cells encoding general episodic features during the immediate post-learning period tend to be stronger than those displayed by invariant cells encoding specific features. These differential reactivations within the CA1 episodic cell populations can thus provide the hippocampus with a selection mechanism to preferentially consolidate more generalized knowledge for long-term memory storage.

    Subspace Projection Approaches to Classification and Visualization of Neural Network-Level Encoding Patterns

    Recent advances in large-scale ensemble recordings allow monitoring of the activity patterns of several hundred neurons in freely behaving animals. The emergence of such high-dimensional datasets poses challenges for the identification and analysis of dynamical network patterns. While several types of multivariate statistical methods have been used for integrating responses from multiple neurons, their effectiveness in pattern classification and predictive power has not been compared in a direct and systematic manner. Here we systematically employed a series of projection methods, such as Multiple Discriminant Analysis (MDA), Principal Component Analysis (PCA), and Artificial Neural Networks (ANN), and compared them with non-projection multivariate statistical methods such as Multivariate Gaussian Distributions (MGD). Our analyses of hippocampal data recorded during episodic memory events and cortical data simulated during face perception or arm movements illustrate how low-dimensional encoding subspaces can reveal the existence of network-level ensemble representations. We show how the use of regularization methods can prevent these statistical methods from over-fitting the training data sets when the trial numbers are much smaller than the number of recorded units. Moreover, we investigated the extent to which the computations implemented by the projection methods reflect the underlying hierarchical properties of the neural populations. Based on their ability to extract the essential features for pattern classification, we conclude that the typical performance ranking of these methods on under-sampled neural data of large dimension is MDA > PCA > ANN > MGD.
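The subspace-projection idea in this abstract can be sketched with a minimal NumPy example. This is an illustrative toy, not the paper's pipeline: it uses synthetic two-class "trial × unit" data, PCA via SVD, and a simple nearest-centroid readout in place of the full MDA/ANN comparison; all dimensions and parameters here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "ensemble recording": 40 trials x 200 units, two event classes
# whose mean firing patterns differ along an assumed low-dimensional direction.
n_trials, n_units = 40, 200
labels = np.repeat([0, 1], n_trials // 2)
signal = rng.normal(size=n_units)          # assumed class-difference pattern
X = rng.normal(size=(n_trials, n_units)) + np.outer(labels, signal)

# PCA via SVD of the mean-centred data; keep a 3-D encoding subspace.
# Note: trials (40) << units (200), the under-sampled regime the abstract discusses.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:3].T                          # project trials into the subspace

# Nearest-centroid readout in the low-dimensional subspace
c0, c1 = Z[labels == 0].mean(axis=0), Z[labels == 1].mean(axis=0)
pred = (np.linalg.norm(Z - c1, axis=1) < np.linalg.norm(Z - c0, axis=1)).astype(int)
accuracy = float((pred == labels).mean())
```

Because the class difference dominates the variance, the top principal components recover a subspace in which the two event classes separate cleanly, which is the sense in which low-dimensional projections can reveal network-level representations.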

    Statistics of branch dynamics observed in evolving neural trees.

    <p>(a) Computing probabilities through single-branch evolution. After extending for one time step, an evolving neurite faces a branching decision: it either branches with probability p, or extends without branching with probability q. Spawning of new daughter branches terminates the parent branch. Thus, at the second time step, if the neurite terminates and remains at length 2, it does so with total probability p*q; if it does not branch and continues to grow, it does so with total probability q*q. As a rule, the only way a neurite can achieve a length of n is to extend continuously for n−1 time steps and then branch; the entire sequence therefore occurs with total probability p*q<sup>n−1</sup>. (b) Computing probabilities through population analysis of evolved trees. In contrast to computing probabilities of single branches as they evolve through time, a statistical analysis can be performed on instantiated trees: a population distribution can be generated by examining all possible configurations that mature (non-evolving) branches can adopt after each time step. The same probability assignments for branching with termination and for extension without branching apply here as in (a). Note that after a few time steps the trees start adopting non-trivial structures. For example, at t = 3 the simplest tree is a single evolving branch of length 3, obtained with probability q*q; at the opposite end of the spectrum, the most complex tree contains 4 active branches of length 1, obtained with probability p*p. Trees with a combination of extending and branching arbors can occur as statistically identical configurations; gray boxes demarcate these “isomeric” trees within all possible permutations of arbor geometries. Six types of trees are obtained after three time steps, while 46 types are obtained at the next time step. The associated probabilities can be determined by computing the products of the individual probabilities along the arrows. (c) A comparison of the computational results obtained from (a) and (b) at timestep t = 4. The expected value for a single branch, L<sub>average</sub> = (p+2*p*q) (blue), is compared against the average branch value obtained from tree statistics, L<sub>average tree</sub> (red), for different values of branching probability p. (d) At timestep t = 4, the relative difference (L<sub>average tree</sub>−L<sub>average</sub>)/L<sub>average</sub> is plotted for different values of p. (e) Average branch values of trees obtained in numerical simulations at t = 200 (red curve) are consistently smaller than the expected values obtained from single-branch evolution. As the branching probability increases to 1, the difference between the two estimates vanishes. (f) After t = 200 timesteps, the relative difference (L<sub>average tree</sub>−L<sub>average</sub>)/L<sub>average</sub> is plotted as a function of branching probability.</p>
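The single-branch calculation in (a) and the tree-population statistics in (b) can be compared numerically. The sketch below is a hedged toy model, not the authors' code: it assumes each active branch extends by one unit per timestep and then, with probability p, terminates and spawns two zero-length daughters, so a terminal branch of length n occurs with probability p*q^(n−1) as stated in the caption.

```python
import random

def single_branch_expected_length(p, T):
    """Expected final length of one branch after T timesteps: terminal
    length n < T has probability q**(n-1) * p; surviving all T steps
    (final length T) has probability q**(T-1)."""
    q = 1.0 - p
    return sum(n * q ** (n - 1) * p for n in range(1, T)) + T * q ** (T - 1)

def simulate_tree(p, T, rng):
    """Evolve one tree for T timesteps; return the lengths of all branches."""
    active, finished = [0], []
    for _ in range(T):
        nxt = []
        for length in active:
            length += 1                  # every active branch extends one unit
            if rng.random() < p:         # branching: parent terminates and
                finished.append(length)  # two zero-length daughters spawn
                nxt.extend([0, 0])
            else:
                nxt.append(length)
        active = nxt
    return finished + active

rng = random.Random(0)
p, T = 0.5, 4
runs = [simulate_tree(p, T, rng) for _ in range(2000)]
tree_mean = sum(sum(r) for r in runs) / sum(len(r) for r in runs)
single = single_branch_expected_length(p, T)
# As in panels (c)-(f), the tree-population average sits below the
# single-branch expectation, because late-spawned daughters are short.
```

The discrepancy arises because the population average weights every branch equally, and trees that branch often contribute many young, short daughters; as p approaches 1 both estimates collapse toward length 1, so the difference vanishes.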

    Probability table for branches that are allowed to evolve for two time steps.

    <p>In the first possible scenario, the neurite branches at the end of the first timestep and ceases to evolve. Its final length will be 1, and the probability of this outcome is <b>p</b>. In the second scenario, the neurite grows to a total length of two by the end of the second time step, and it has the potential to grow even further at later times. The probability of the second scenario is <b>q = 1−p</b>.</p>

    Comparison of equal length trees under different branching and pruning scenarios.

    <p>(a) L<sub>o</sub> = 0.9, P<sub>prune</sub> = 0; (b) L<sub>o</sub> = 0.7, P<sub>prune</sub> = 0.25; (c) L<sub>o</sub> = 0.7, P<sub>prune</sub> = 0.25; L = 20, D = 2.75 in all cases. Gray branches are pruned. Note that pruned trees achieve a wider coverage of targets, extending outside of the dashed lines in panels (b) and (c); by the same token, pruning creates less uniform coverage, leaving the cross-hatched regions unsearched.</p>

    Time sequences showing branching and pruning of dissociated E11 chick dorsal root ganglion neurites.

    <p>(a) Branching (red arrow) and extension (blue arrowheads) of primary axons. (b) Extension and retraction (blue arrowheads) of a neurite tip. (c) Tertiary branching and pruning (encircled). Cultures are grown in the presence of glia at 37°C in 5% CO<sub>2</sub> on poly-L-lysine/laminin in N3 complete serum-free media. Phase-contrast live imaging at 28 hrs post-plating. The time interval between acquisitions for each time series is as follows: (a) 30 mins, (b) 75 mins, (c) 75 mins. Snapshots are contrast-enhanced for visual clarity of the neurites.</p>

    Estimates for the number of hits at a distance D for evolving neurons.

    <p>Cases shown use the following branching probabilities: (a) P<sub>branch</sub> = 0.3; (b) P<sub>branch</sub> = 0.5; (c) P<sub>branch</sub> = 0.7. All three panels show results from (i) full-fledged numerical simulation (blue, averaged over 100 runs), (ii) simplified trees with stochastic branching times (red; the time intervals between branching decisions do not have a fixed length), and (iii) theoretical considerations (green). As expected, lower branching probabilities reduce the number of hits at smaller distances but allow targets that are further away to be reached. The position of the ‘optimal’ targeting distance for distinct branching probabilities is in qualitative agreement over the range of probabilities considered here. Note that the stochastic (red) and theoretical (green) curves both have discontinuous first derivatives, in contrast to the numerical (blue) curves. (d) Statistical results for neural trees with branching or pruning for L<sub>max</sub> = 1000 units. Plots of targeting rates from numerical simulations indicate that neurites with low branching probabilities reach further but fill less of the surrounding space. (e) A complementary result is that at low branching probabilities, it takes longer for a neuron to exhaust its resources and reach the maximum allowable arbor length, L<sub>max</sub>. (f) Neural trees generated at higher pruning probabilities reach further from the origin and (g) take more time to finalize the ultimate arbor. We note that P<sub>branch</sub> = 0.3 in panels (c) and (d).</p>
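The qualitative trend in panels (d) and (e), where low branching probability yields longer reach under a fixed arbor-length budget, can be reproduced with a toy 2-D growth model. This is an illustrative sketch under assumed dynamics (unit steps, Gaussian heading noise, fixed daughter turn angles), not the simulation used to generate the figure.

```python
import math
import random

def grow_tree(p_branch, L_max, rng):
    """Grow a toy 2-D arbor until the summed branch length reaches L_max.
    Each growing tip advances one unit along a meandering heading; with
    probability p_branch it stops and spawns two daughters at turned headings.
    Returns the farthest Euclidean distance from the soma reached by any tip."""
    tips = [(0.0, 0.0, rng.uniform(0.0, 2.0 * math.pi))]
    total, reach = 0, 0.0
    while total < L_max:
        nxt = []
        for x, y, th in tips:
            th += rng.gauss(0.0, 0.2)            # meandering growth direction
            x, y = x + math.cos(th), y + math.sin(th)
            total += 1
            reach = max(reach, math.hypot(x, y))
            if rng.random() < p_branch:          # terminate tip, spawn daughters
                nxt += [(x, y, th + 0.5), (x, y, th - 0.5)]
            else:
                nxt.append((x, y, th))
            if total >= L_max:
                break
        tips = nxt
    return reach

rng = random.Random(1)
n = 20
reach_sparse = sum(grow_tree(0.05, 1000, rng) for _ in range(n)) / n  # few branches
reach_dense  = sum(grow_tree(0.50, 1000, rng) for _ in range(n)) / n  # bushy trees
# Sparse branching spends the same length budget on longer, farther-reaching
# paths; dense branching fills space near the soma instead.
```

Averaged over many trees, the sparsely branching arbors extend much farther from the origin than the densely branching ones, mirroring the trade-off between reach and space-filling described in the caption.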