20 research outputs found

    Noise can maintain the network in a high-complexity state.

    (A–B): Set complexity trajectories of single simulations of Poisson networks with moderate (A) and high (B) levels of noise. (C): Medians of set complexity trajectories for noisy Poisson networks with different degrees and flip probabilities. The complexity trajectory of the maximally noisy network, which is identical for all degrees, is plotted in grey. 100 independent samples were used.

    The propagation of NCD distributions explains the time course of the set complexity.

    The panels show the distributions of NCD values in noiseless (left), moderately noisy (middle) and highly noisy (right) Poisson networks. The time instant of observation grows downwards in the plotted curves: the first curve corresponds to the distribution of off-diagonal elements of the NCD matrix at the earliest time step, the next curve to the following time step, and so forth. The distributions are pooled across 100 network realizations and smoothed with a Gaussian filter with standard deviation 0.02. The mean of the NCD distribution in noiseless critical networks (left) passes 0.5 around the time instant expected from the complexity peak in Fig. 1. The small peaks of noiseless networks in the regime of low NCD correspond to point attractors. In these attractors the state remains constant, and since the Kolmogorov complexity of a duplicated string is not much higher than that of the original, the resulting NCD values are very small. The mean of the NCD distribution in Poisson networks with moderate noise (middle) approaches 0.5 as time passes, accounting for the high set complexity values at late times in Fig. 2. In highly noisy networks (right) the NCD distributions contain only values notably higher than 0.5 due to the excess of randomness, hence the low set complexity of these networks in Fig. 2.
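    The NCD behind these distributions is the standard normalized compression distance, approximated in practice with a real-world compressor. The following minimal sketch is an illustration, not the authors' implementation; the choice of zlib and the byte encoding of the state strings are assumptions:

        import zlib

        def compressed_size(s: bytes) -> int:
            # zlib-compressed length as a practical stand-in for Kolmogorov complexity
            return len(zlib.compress(s, 9))

        def ncd(x: bytes, y: bytes) -> float:
            # NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))
            cx, cy = compressed_size(x), compressed_size(y)
            return (compressed_size(x + y) - min(cx, cy)) / max(cx, cy)

        # A point attractor repeats the same state, so the concatenation of two such
        # strings compresses to roughly the size of one and the NCD stays small,
        # consistent with the low-NCD peaks noted in the caption.
        frozen = b"0110101001011100" * 8
        print(ncd(frozen, frozen))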

    Balance between Noise and Information Flow Maximizes Set Complexity of Network Dynamics

    Boolean networks have been used as a discrete model for several biological systems, including metabolic and genetic regulatory networks. Due to their simplicity they offer a firm foundation for generic studies of physical systems. In this work we show, using a measure of context-dependent information, set complexity, that prior to reaching an attractor, random Boolean networks pass through a transient state characterized by high complexity. We corroborate this finding with another measure of complexity, the statistical complexity. We show that the networks can be tuned to the regime of maximal complexity by adding a suitable amount of noise to the deterministic Boolean dynamics. In fact, we show that for networks with Poisson degree distributions, all networks ranging from subcritical to slightly supercritical can be tuned with noise to reach maximal set complexity in their dynamics. For networks with a fixed number of inputs this is true for near-critical networks. This increase in complexity is obtained at the expense of disruption in information flow. For a large ensemble of networks showing maximal complexity, there exists a balance between noise and contracting dynamics in the state space. In networks that are close to critical, the intrinsic noise required for the tuning is smaller and thus has the smallest effect on information processing in the system. Our results suggest that the maximization of complexity near the state transition might be a more general phenomenon in physical systems, and that noise present in a system may in fact be useful in retaining the system in a state with high information content.
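    As a rough illustration of the noisy dynamics described above, the sketch below implements one noisy synchronous update of a random Boolean network with Poisson in-degrees. It is a minimal sketch under assumed conventions (zero-input nodes are kept frozen, truth tables are unbiased random bits, and the noise flips each node independently with probability p), not the authors' code:

        import numpy as np

        def noisy_rbn_step(state, inputs, tables, p, rng):
            # One synchronous update of a Boolean network, followed by independent
            # bit flips with probability p (the intrinsic noise varied in the paper).
            n = len(state)
            new_state = np.empty(n, dtype=np.int8)
            for i in range(n):
                if len(inputs[i]) == 0:            # convention here: input-less nodes keep their state
                    new_state[i] = state[i]
                    continue
                idx = 0
                for bit in state[inputs[i]]:       # encode the input pattern as an integer
                    idx = (idx << 1) | int(bit)
                new_state[i] = tables[i][idx]      # look up the node's random Boolean function
            flips = rng.random(n) < p              # flip each node's new state independently
            return np.where(flips, 1 - new_state, new_state).astype(np.int8)

        # A small Poisson network: in-degrees drawn from a Poisson distribution,
        # truth tables filled with unbiased random bits.
        rng = np.random.default_rng(0)
        n, mean_k, p = 50, 2.0, 0.01
        inputs = [rng.choice(n, size=min(rng.poisson(mean_k), n), replace=False) for _ in range(n)]
        tables = [rng.integers(0, 2, 2 ** len(inp), dtype=np.int8) for inp in inputs]
        state = rng.integers(0, 2, n, dtype=np.int8)
        for _ in range(100):
            state = noisy_rbn_step(state, inputs, tables, p, rng)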

    Set complexity time series for random Poisson Boolean networks show a temporal maximum prior to reaching the attractor for several different mean numbers of inputs

    (A–B): Set complexity trajectories of single simulations of networks with two different mean numbers of inputs (A and B). The first arrivals to the attractor are marked with stars. (C): The median set complexity of 100 simulation results for five different mean numbers of inputs. The stars above the curves show the median time instant of the first arrival to the attractor.
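    In deterministic synchronous dynamics, the first arrival to the attractor can be detected as the first recurrence of an already-visited state. A minimal sketch, assuming the state is hashable (e.g. a tuple of 0/1 values) and step is the deterministic update function:

        def first_arrival_time(initial_state, step):
            # Time step at which a deterministic trajectory first enters its attractor,
            # i.e. the first time index of a state that is later revisited.
            seen = {}
            state, t = initial_state, 0
            while state not in seen:
                seen[state] = t
                state = step(state)
                t += 1
            return seen[state]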

    Poisson networks can be set to a noise level that maximizes the steady-state set complexity.

    The color of the plot shows the steady-state set complexity of Boolean network dynamics, for both Poisson networks (left) and fixed in-degree networks (right), as a function of sensitivity and flip probability. For each simulation, the median of the set complexities is taken over the late time steps; the color then shows the median over simulations, smoothed with bilinear interpolation. The lower panels show the maximum of the plane, taken over the flip probability.
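    For reference, the set complexity of a set of state strings is commonly written as Psi(S) = 1/(m(m-1)) * sum_i C(x_i) * sum_{j != i} d_ij (1 - d_ij), with d_ij the pairwise NCD. The sketch below follows this commonly cited form (Galas et al.); the exact normalization and string encoding used in the paper may differ:

        import zlib

        def C(s: bytes) -> int:
            # compressed size as a proxy for Kolmogorov complexity
            return len(zlib.compress(s, 9))

        def ncd(x: bytes, y: bytes) -> float:
            cx, cy = C(x), C(y)
            return (C(x + y) - min(cx, cy)) / max(cx, cy)

        def set_complexity(strings):
            # Psi(S) = 1/(m(m-1)) * sum_i C(x_i) * sum_{j != i} d_ij * (1 - d_ij);
            # the weight d*(1-d) peaks at d = 0.5, so a set whose members are neither
            # copies of each other nor mutually random scores highest.
            m = len(strings)
            total = 0.0
            for i, xi in enumerate(strings):
                w = 0.0
                for j, xj in enumerate(strings):
                    if i != j:
                        d = ncd(xi, xj)
                        w += d * (1.0 - d)
                total += C(xi) * w
            return total / (m * (m - 1))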

    The subcritical Poisson networks lose their high steady-state complexity when nodes with zero inputs are neglected.

    In this figure, the set complexity is calculated similarly to the Poisson network steady-state complexity in Fig. 5, but only the states of those nodes that receive at least one input from the system are included in the strings.

    Asynchronous Poisson RBNs show qualitatively the same set complexity statistics as the synchronous ones.

    The color of the plot shows the steady-state set complexities of asynchronous Boolean network dynamics for Poisson networks as a function of sensitivity and flip probability. The synchronous state update described in the Methods section is replaced by successive single-node state updates. The node to update is picked at random at every time instant, so that after the state updates some nodes have most probably been updated several times and some not at all. The set complexities are calculated for states sampled at regularly spaced time steps. As in Fig. 5, the median of the set complexities is taken over the late time steps, and the color of the plot shows the median over simulations, smoothed with bilinear interpolation. The lower panels show the maximum of the plane, taken over the flip probability. A slight difference to Fig. 5 is that in asynchronous networks the high-complexity regime extends further into the chaotic regime. This is in agreement with [46], where networks with random asynchronous updating schemes were observed to reside in an attractor more often than their synchronous counterparts, suggesting that their dynamics are on average more redundant.
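    A minimal sketch of the random asynchronous updating scheme described above, with inputs and tables as in the synchronous sketch; whether the flip noise acts only on the updated node (as assumed here) or on the whole state per macro-step is not specified in the caption:

        import numpy as np

        def asynchronous_sweep(state, inputs, tables, p, n_updates, rng):
            # Random asynchronous updating: at each micro-step one node, picked
            # uniformly at random, is updated from its inputs and then flipped with
            # probability p. After n_updates micro-steps some nodes have typically
            # been updated several times and some not at all.
            state = state.copy()
            n = len(state)
            for _ in range(n_updates):
                i = int(rng.integers(n))
                if len(inputs[i]) > 0:
                    idx = 0
                    for bit in state[inputs[i]]:
                        idx = (idx << 1) | int(bit)
                    state[i] = tables[i][idx]
                if rng.random() < p:           # noise on the updated node only (assumption)
                    state[i] = 1 - state[i]
            return state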

    Structure-Dynamics Relationships in Bursting Neuronal Networks Revealed Using a Prediction Framework

    The question of how the structure of a neuronal network affects its functionality has gained a lot of attention in neuroscience. However, the vast majority of studies on structure-dynamics relationships consider few types of network structures and assess limited numbers of structural measures. In this in silico study, we employ a wide diversity of network topologies and search among many possible candidates for the aspects of structure that have the greatest effect on network excitability. The network activity is simulated using two point-neuron models, where the neurons are activated by noisy fluctuations of the membrane potential and their connections are described by chemical synapse models, and statistics on the number and quality of the emergent network bursts are collected for each network type. We apply a prediction framework to the obtained data in order to find the most relevant aspects of network structure. In this framework, predictors that use different sets of graph-theoretic measures are trained to estimate the activity properties, such as burst count or burst length, of the networks. The performances of these predictors are compared with each other. We show that the best performance in predicting activity properties of networks with a sharp in-degree distribution is obtained when the prediction is based on the clustering coefficient. By contrast, for networks with a broad in-degree distribution, the maximum eigenvalue of the connectivity graph gives the most accurate prediction. The results, shown for small networks, hold with few exceptions when different neuron models, different choices of neuron population and different average degrees are applied. We confirm our conclusions using larger networks as well. Our findings reveal the relevance of different aspects of network structure from the viewpoint of network excitability, and our integrative method could serve as a general framework for structure-dynamics studies in biosciences.
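    The predictor type and error measure are not spelled out in this summary; as an assumed stand-in, the sketch below evaluates a linear least-squares predictor with leave-one-out error, where measures (an array of graph measures per network) and burst_count are hypothetical placeholder arrays:

        import numpy as np

        def loo_error(features, target):
            # Leave-one-out squared error of a linear predictor that estimates an
            # activity property (e.g. burst count) from a chosen set of graph measures.
            X_all = np.column_stack([np.ones(len(target)), features])
            errs = []
            for k in range(len(target)):
                mask = np.arange(len(target)) != k
                coef, *_ = np.linalg.lstsq(X_all[mask], target[mask], rcond=None)
                errs.append((X_all[k] @ coef - target[k]) ** 2)
            return float(np.mean(errs))

        # Comparing predictors built on different graph measures:
        #   err_cc   = loo_error(measures[:, [cc_column]], burst_count)
        #   err_null = np.mean((burst_count - burst_count.mean()) ** 2)   # null predictor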

    CC brings the greatest improvements to the predictions of burst count (BC) and burst length (BL) in networks with a binomial in-degree distribution.

    Left: The y-axis shows the relative improvements with respect to the null prediction. For each simulation setting, the prediction error is calculated for the null predictor and for the predictor that uses the considered graph property. The relative improvements are averaged over all 12 simulation settings with a binomial in-degree distribution, and the mean and standard deviation of the improvement over repetitions are plotted. The improvement obtained by using CC (*) in the prediction is significantly greater than that obtained by any other single graph measure. Right: The y-axis shows relative improvements with respect to predictions based on other graph properties. As an example, the first bar shows the relative improvement obtained when CC is added to a prediction based on another graph property, averaged over the graph properties NB, OD, MEig, Mot5 and Mot12. The improvements are further averaged over all 12 simulation settings, and the mean and standard deviation over repetitions are shown. The procedure is similar for the other bars. The improvement obtained by using CC (*) in the coprediction is significantly greater than that obtained by any other graph measure (U-test).
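    The exact expression for the relative improvement was lost in extraction; the natural reading of an improvement taken with respect to a reference prediction is sketched below as an assumption, not the paper's formula:

        def relative_improvement(err_reference, err_with_measure):
            # Relative improvement of a predictor over a reference predictor:
            # (E_reference - E_with_measure) / E_reference.
            # Left panel: the reference is the null predictor; right panel: the
            # reference is a prediction using another graph property alone.
            return (err_reference - err_with_measure) / err_reference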

    The mean and standard deviation of the correlations between graph measures (see legend) and the activity measures (spike count, burst count, burst length, and burst size).

    Eqn. 1 is used for calculating the correlation coefficients for each simulation setting separately. The set of networks consists of 150 repetitions of each of the 29 network types. In the panels on the left, the mean correlation is taken over the correlation coefficients of the twelve simulation settings that use a binomial in-degree distribution, while the panels on the right use the twelve simulation settings with a power-law distribution. The faded bars represent pairs of measures with absolute mean correlations smaller than 0.25. The graph measures that were finally chosen for the structure-dynamics study are shown in bold in the legend.
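    Eqn. 1 itself is not reproduced in this caption; as an assumed stand-in, a standard Pearson correlation taken across the network ensemble of one simulation setting would look like this:

        import numpy as np

        def ensemble_correlation(graph_measure, activity_measure):
            # Pearson-type correlation between a graph measure and an activity measure,
            # taken across the ensemble of networks within one simulation setting.
            x = np.asarray(graph_measure, dtype=float)
            y = np.asarray(activity_measure, dtype=float)
            x = (x - x.mean()) / x.std()
            y = (y - y.mean()) / y.std()
            return float(np.mean(x * y))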