
    The mean and standard deviation of the correlations between graph measures (see legend) and the activity measures (spike count, burst count, burst length, and burst size).

    Eqn. 1 is used to calculate the correlation coefficients for each simulation setting separately. The set of networks consists of 150 repetitions of each of the (29) network types. In the panels on the left, the mean correlation is taken over the correlation coefficients in the twelve simulation settings that use the binomial in-degree distribution, while in the panels on the right the twelve simulation settings with the power-law distribution are used. The faded bars represent pairs of measures with absolute mean correlations smaller than 0.25. The graph measures that were finally chosen for the structure-dynamics study are bolded in the legend.
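
    A minimal sketch (not the paper's code) of how these per-setting correlations could be computed, assuming Eqn. 1 denotes a standard correlation coefficient (Pearson is used here) and that each graph and activity measure is available as an array with one value per simulated network; all function and variable names are hypothetical.

        import numpy as np
        from scipy.stats import pearsonr

        def setting_correlations(graph_measures, activity_measures):
            # Correlation of every (graph measure, activity measure) pair within one
            # simulation setting; each array holds one value per network
            # (150 repetitions of each network type).
            return {(g, a): pearsonr(g_vals, a_vals)[0]
                    for g, g_vals in graph_measures.items()
                    for a, a_vals in activity_measures.items()}

        def mean_std_over_settings(settings):
            # Mean and std of each pair's correlation over the twelve simulation
            # settings that share an in-degree distribution (binomial or power-law).
            pairs = settings[0].keys()
            return {p: (np.mean([s[p] for s in settings]),
                        np.std([s[p] for s in settings]))
                    for p in pairs}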

    Structure-Dynamics Relationships in Bursting Neuronal Networks Revealed Using a Prediction Framework

    The question of how the structure of a neuronal network affects its functionality has gained a lot of attention in neuroscience. However, the vast majority of studies on structure-dynamics relationships consider only a few types of network structures and assess a limited number of structural measures. In this in silico study, we employ a wide diversity of network topologies and search, among many possibilities, for the aspects of structure that have the greatest effect on network excitability. The network activity is simulated using two point-neuron models, where the neurons are activated by noisy fluctuations of the membrane potential and their connections are described by chemical synapse models, and statistics on the number and quality of the emergent network bursts are collected for each network type. We apply a prediction framework to the obtained data in order to identify the most relevant aspects of network structure. In this framework, predictors that use different sets of graph-theoretic measures are trained to estimate the activity properties, such as burst count or burst length, of the networks. The performances of these predictors are compared with each other. We show that the best performance in predicting activity properties of networks with a sharp in-degree distribution is obtained when the prediction is based on the clustering coefficient. By contrast, for networks with a broad in-degree distribution, the maximum eigenvalue of the connectivity graph gives the most accurate prediction. The results shown for small networks hold with few exceptions when different neuron models, different choices of neuron population and different average degrees are applied. We confirm our conclusions using larger networks as well. Our findings reveal the relevance of different aspects of network structure from the viewpoint of network excitability, and our integrative method could serve as a general framework for structure-dynamics studies in biosciences.
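
    As a rough illustration of the prediction framework described above, the sketch below predicts one activity property from a set of graph-theoretic measures and compares the resulting error with that of a null predictor. The linear-regression model, the cross-validation scheme and the absolute-error metric are assumptions made here for illustration, not the paper's exact procedure (which, per the figure captions below, also conditions on the realized degree of the network).

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import cross_val_predict

        def prediction_error(X, y):
            # Mean absolute error when an activity property y (e.g. burst count) is
            # predicted from the graph measures in X via cross-validated regression.
            y_hat = cross_val_predict(LinearRegression(), X, y, cv=10)
            return np.mean(np.abs(y_hat - y))

        def null_error(y):
            # Error of the null predictor that ignores graph structure and always
            # predicts the mean activity over all networks.
            return np.mean(np.abs(np.mean(y) - y))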

    The 13 network motifs of three connected nodes.

    See [31] (http://www.plosone.org/article/info:doi/10.1371/journal.pone.0069373#pone.0069373-Milo1) for reference.

    CC brings greatest improvements to the predictions of burst count (BC) and burst length (BL) in networks with binomial in-degree distribution.

    Left: The y-axis shows the relative improvements with respect to the null prediction. For each simulation setting, the prediction errors of the null predictor and of the predictor with the considered graph property are calculated. The relative improvements are averaged over all 12 simulation settings with binomial in-degree distribution. Plotted is the improvement (mean and std) over repetitions. The improvement obtained by using CC (*) in the prediction is significantly greater than that obtained by any other single graph measure. Right: The y-axis shows relative improvements with respect to prediction by other graph properties. As an example, the first bar shows the relative improvement averaged over the graph properties NB, OD, MEig, Mot5 and Mot12. The improvements are further averaged over all 12 simulation settings, and the mean + std over repetitions are shown. The procedure is similar for the other bars. The improvement obtained by using CC (*) in the co-prediction is significantly greater than that obtained by any other graph measure (U-test).
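
    The exact expression for the relative improvement is not reproduced above (the equation references were lost in extraction). A plausible reading, sketched below under that assumption, is the reduction in prediction error relative to a reference predictor, averaged over simulation settings; names are hypothetical.

        import numpy as np

        def relative_improvement(reference_errors, predictor_errors):
            # Per-setting relative reduction in prediction error with respect to a
            # reference predictor (the null predictor in the left panel, or a
            # predictor using another graph property in the right panel).
            ref = np.asarray(reference_errors, dtype=float)
            new = np.asarray(predictor_errors, dtype=float)
            rel = (ref - new) / ref
            return rel.mean(), rel.std()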

    Illustration of the HH (upper panels) and the LIF (lower panels) model dynamics.

    Left: Single-cell membrane potential, with the spike magnified in the inset. The membrane potential at the time of the spike in the LIF model is explicitly set to 30 mV for the sake of illustration. Middle: Network spike train in an excitatory-inhibitory RN with connectivity and binomial in-degree distribution. The uppermost 20 neurons represent the inhibitory population. The red spike corresponds to the (first) spike shown in the left panel, and the burst with the red borders corresponds to the burst shown in the right panel. Right: The selected burst, highlighted.

    Burst count is best predicted using CC when in-degree is binomial.

    A: The bars in the three panels show the prediction errors (mean + std) when different graph properties are used as predictors. The errors are calculated as the difference between the predicted and realized number of bursts. The HH model is used in purely excitatory networks with binomial in-degree distribution and three average connectivities (upper, middle and lower panels). The leftmost bar (white) shows the mean prediction error of the null predictor. The next group of six bars shows the prediction errors of predictors with one additional graph property, in order of descending prediction error. The next three bars correspond to the best three predictors that use two graph measures, and the following three bars to the ones with three measures. The final bar (black) shows the prediction error of the predictor that uses all available structural data (20 measures in addition to the realized degree). If the error is significantly smaller (U-test) than that of the null predictor, an asterisk (*) is plotted, whereas (**) indicates that the error is also significantly smaller than that of the best predictor using one graph property (here always CC). The more graph measures are included in the prediction, the more accurate the prediction is. The error values shown are absolute: for reference, the mean burst counts (averaged over all network types) at the three connection probabilities are 3.4, 11.7 and 31.5. B: Values of burst count plotted w.r.t. CC in networks with one fixed connection probability. Different network classes are plotted with different colors, and the different markers of the WS1, WS2, FF, L2, L3, L4 and L6 networks represent different values of the network parameter ('+' for the lowest value and stars for the highest value). The burst count ascends with increasing CC, as suggested by the positive correlation of burst count and CC in Fig. 5 (http://www.plosone.org/article/info:doi/10.1371/journal.pone.0069373#pone-0069373-g005).
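
    A sketch of the significance test referred to in the caption, assuming the U-test is the two-sample Mann-Whitney test applied to the prediction errors collected over repetitions; the significance threshold was lost in extraction and is an assumed value here.

        from scipy.stats import mannwhitneyu

        def significantly_smaller(errors_a, errors_b, alpha=0.01):
            # One-sided Mann-Whitney U-test: are the errors of predictor A (over
            # repetitions) significantly smaller than those of predictor B, e.g. the
            # null predictor (*) or the best single-measure predictor (**)?
            # alpha is an assumption; the caption's value is not reproduced above.
            _, p = mannwhitneyu(errors_a, errors_b, alternative='less')
            return p < alpha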

    CC gives the best prediction in most simulation settings with binomial in-degree distribution.

    The best predictor is named for each simulation setting (the 12 rows) and each activity property (the 4 columns: spike count, burst count, burst length, and burst size). The color of each box indicates the graph measure that gives the smallest prediction error when used together with the realized degree of the network. Striped boxes indicate that one or several other graph measures have an error statistically indistinguishable from that of the best predictor. Missing boxes indicate that no predictor is statistically better than the null predictor (U-test). CC achieves 38 unique or shared best performances, while the respective counts for Mot12, OD, NB, Mot5 and MEig are 11, 5, 5, 4, 1 and 1.
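
    A sketch of how a "unique or shared" best performance could be determined for one table cell: take the graph measure with the smallest mean prediction error and add every measure whose errors are statistically indistinguishable from it. The two-sided U-test and the threshold are assumptions made for illustration.

        import numpy as np
        from scipy.stats import mannwhitneyu

        def best_predictors(errors_by_measure, alpha=0.01):
            # errors_by_measure: graph-measure name -> array of prediction errors over
            # repetitions, for one simulation setting and one activity property.
            means = {m: np.mean(e) for m, e in errors_by_measure.items()}
            best = min(means, key=means.get)
            tied = [best]
            for m, e in errors_by_measure.items():
                if m != best:
                    _, p = mannwhitneyu(e, errors_by_measure[best],
                                        alternative='two-sided')
                    if p >= alpha:          # statistically indistinguishable
                        tied.append(m)
            return tied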