
    Experimental analysis and computational modeling of interburst intervals in spontaneous activity of cortical neuronal culture

    Rhythmic bursting is the most striking behavior of cultured cortical networks and may start in the second week after plating. In this study, we focus on the intervals between spontaneously occurring bursts and compare experimentally recorded values with model simulations. In the models, we use standard neurons and synapses, with physiologically plausible parameters taken from the literature. All networks had a random recurrent architecture with sparsely connected neurons, and the number of neurons varied between 500 and 5,000. We find that network models with homogeneous synaptic strengths produce either asynchronous spiking or stable regular bursts; the latter, however, have interburst intervals in a range not seen in recordings. By increasing the synaptic strength in a (randomly chosen) subset of neurons, our simulations show interburst intervals (IBIs) that agree better with in vitro experiments. In this regime, called weakly synchronized, the models produce irregular network bursts, which are initiated by neurons with relatively stronger synapses. In some noise-driven networks, a subthreshold, deterministic input is applied to neurons with strong synapses to mimic pacemaker network drive. We show that models with such “intrinsically active neurons” (pacemaker-driven models) tend to generate IBIs that are determined by the frequency of the fastest pacemaker and do not resemble experimental data. Noise-driven models, in contrast, yield realistic IBIs. In general, we found that large-scale noise-driven neuronal network models required synaptic strengths with a bimodal distribution to reproduce the experimentally observed IBI range. Our results imply that results obtained from small network models cannot simply be extrapolated to models of more realistic size: synaptic strengths in large-scale neuronal network simulations need readjustment to a bimodal distribution, whereas small networks do not require such a change.
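    A minimal sketch of the bimodal synaptic-strength idea, assuming simple Erdős–Rényi connectivity; the parameter names and values (w_weak, w_strong, frac_strong, p_connect) are illustrative placeholders, not parameters from the paper:

```python
import numpy as np

def bimodal_weights(n_neurons=1000, p_connect=0.01, frac_strong=0.1,
                    w_weak=0.5, w_strong=2.5, seed=0):
    """Sparse random recurrent connectivity whose synaptic strengths
    follow a bimodal distribution: a randomly chosen subset of
    presynaptic neurons is given stronger outgoing synapses."""
    rng = np.random.default_rng(seed)
    # sparse random (Erdos-Renyi) connectivity, no self-connections
    mask = rng.random((n_neurons, n_neurons)) < p_connect
    np.fill_diagonal(mask, False)
    # subset of neurons whose outgoing synapses are strong
    strong = rng.random(n_neurons) < frac_strong
    weights = np.where(strong[None, :], w_strong, w_weak)  # columns = presynaptic
    return mask * weights

W = bimodal_weights()
print(W[W > 0].mean())  # mean strength of the existing synapses
```

    Feeding such a weight matrix into any standard spiking-network simulator would then let one test whether the strong subset initiates the irregular network bursts described above.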

    Knotting probabilities after a local strand passage in unknotted self-avoiding polygons

    We investigate the knotting probability after a local strand passage is performed in an unknotted self-avoiding polygon on the simple cubic lattice. We assume that two polygon segments have already been brought close together for the purpose of performing a strand passage, and model this using Theta-SAPs, polygons that contain the pattern Theta at a fixed location. It is proved that the number of n-edge Theta-SAPs grows exponentially (with n) at the same rate as the total number of n-edge unknotted self-avoiding polygons, and that the same holds for subsets of n-edge Theta-SAPs that yield a specific after-strand-passage knot-type. Thus the probability of a given after-strand-passage knot-type does not grow (or decay) exponentially with n, and we conjecture that it instead approaches a knot-type-dependent amplitude ratio lying strictly between 0 and 1. This is supported by critical exponent estimates obtained from a new maximum likelihood method for Theta-SAPs that are generated by a composite (also known as multiple) Markov chain Monte Carlo BFACF algorithm. We also give strong numerical evidence that the after-strand-passage knotting probability depends on the local structure around the strand passage site. Considering both the local structure and the crossing sign at the strand passage site, we observe that the more "compact" the local structure, the less likely the after-strand-passage polygon is to be knotted. This trend is consistent with results from other strand-passage models; however, we are the first to note the influence of the crossing-sign information. Two measures of "compactness" are used: the size of the smallest polygon that contains the structure, and the structure's "opening" angle. The opening-angle definition is consistent with one that is measurable from single-molecule DNA experiments.
    Comment: 31 pages, 12 figures, submitted to Journal of Physics
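    A generic skeleton of the composite (multiple) Markov chain idea, with the lattice-polygon BFACF moves abstracted behind a user-supplied local_move; the function names and the inverse-temperature parametrisation are illustrative assumptions, not the paper's actual implementation:

```python
import math
import random

def composite_mcmc(init_state, energy, local_move, betas,
                   n_sweeps=10_000, swap_interval=10, seed=0):
    """Composite (multiple) Markov chain Monte Carlo skeleton:
    several Metropolis chains run at different inverse temperatures,
    with occasional replica swaps between neighbouring chains to
    improve mixing across the chain of states."""
    rng = random.Random(seed)
    states = [init_state() for _ in betas]
    for sweep in range(n_sweeps):
        # local Metropolis update in every chain
        for i, beta in enumerate(betas):
            proposal = local_move(states[i], rng)
            d_e = energy(proposal) - energy(states[i])
            if d_e <= 0 or rng.random() < math.exp(-beta * d_e):
                states[i] = proposal
        # attempt swaps between neighbouring chains
        if sweep % swap_interval == 0:
            for i in range(len(betas) - 1):
                delta = (betas[i] - betas[i + 1]) * (
                    energy(states[i]) - energy(states[i + 1]))
                if delta >= 0 or rng.random() < math.exp(delta):
                    states[i], states[i + 1] = states[i + 1], states[i]
    return states
```

    In the BFACF setting the control parameter is a fugacity on polygon length rather than a temperature, but the swap structure is the same: chains at nearby parameter values exchange configurations to decorrelate long polygons faster.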

    The size distribution of innovations revisited: an application of extreme value statistics to citation and value measures of patent significance

    This paper focuses on the analysis of size distributions of innovations, which are known to be highly skewed. We use patent citations as one indicator of innovation significance, constructing two large datasets from the European and US Patent Offices at a high level of aggregation, and the Trajtenberg (1990) dataset on CT scanners at a very low one. We also study self-assessed reports of patented innovation values, using two very recent patent valuation datasets from the Netherlands and the UK, as well as a small dataset of patent license revenues of Harvard University. Statistical methods are applied to analyse the properties of the empirical size distributions, with special emphasis on testing for the existence of ‘heavy tails’, i.e., whether or not the probability of very large innovations declines more slowly than exponentially. While overall the distributions appear to resemble a lognormal, we argue that the tails are indeed fat. We invoke some recent results from extreme value statistics and apply the Hill (1975) estimator with data-driven cut-offs to determine the tail index for the right tails of all datasets except the NL and UK patent valuations. On these latter datasets we use a maximum likelihood estimator for grouped data to estimate the Pareto exponent for varying definitions of the right tail. We find significantly and consistently lower tail estimates for the returns data than for the citation data (around 0.7 vs. 3-5). The EPO and US patent citation tail indices are roughly constant over time (although the US one does grow somewhat in the last periods), but the US estimates are significantly lower than the EPO ones. We argue that the heaviness of the tails, particularly as measured by the financial indices, has significant implications for technology policy and growth theory, since the second and possibly even the first moments of these distributions may not exist. (JEL codes: C16, O31, O33. Keywords: returns to invention, patent citations, extreme-value statistics, skewed distributions, heavy tails.)
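    A minimal sketch of the Hill (1975) tail-index estimator on synthetic Pareto data; the cut-off k is fixed by hand here, whereas the paper selects it in a data-driven way:

```python
import numpy as np

def hill_estimator(data, k):
    """Hill (1975) estimator of the tail index alpha, computed
    from the k largest observations in the sample."""
    x = np.sort(np.asarray(data, dtype=float))[::-1]  # descending order
    logs = np.log(x[:k + 1])
    gamma = np.mean(logs[:k] - logs[k])  # mean log-excess over x_(k+1)
    return 1.0 / gamma  # tail index alpha = 1 / gamma

# sanity check on synthetic Pareto data with true alpha = 3
rng = np.random.default_rng(1)
sample = rng.pareto(3.0, size=100_000) + 1.0
print(hill_estimator(sample, k=2_000))  # should be close to 3
```

    The delicate step in practice is the choice of k: too few order statistics gives a noisy estimate, too many pulls in the non-Pareto bulk of the distribution, which is why data-driven cut-offs matter for the comparisons reported above.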

    A Quantile Variant of the EM Algorithm and Its Applications to Parameter Estimation with Interval Data

    The expectation-maximization (EM) algorithm is a powerful computational technique for finding the maximum likelihood estimates for parametric models when the data are not fully observed. The EM algorithm is best suited for situations where the expectation in each E-step and the maximization in each M-step are straightforward. A difficulty with the implementation of the EM algorithm is that each E-step requires the integration of the log-likelihood function in closed form. The explicit integration can be avoided by using what is known as the Monte Carlo EM (MCEM) algorithm, which uses a random sample to estimate the integral at each E-step. The problem with the MCEM algorithm, however, is that it often converges to the integral quite slowly and its convergence behavior can be unstable, which causes a computational burden. In this paper, we propose what we refer to as the quantile variant of the EM (QEM) algorithm. We prove that the proposed QEM method has an accuracy of $O(1/K^2)$, while the MCEM method has an accuracy of $O_p(1/\sqrt{K})$. Thus, the proposed QEM method possesses faster and more stable convergence properties than the MCEM algorithm. The improved performance is illustrated through numerical studies. Several practical examples illustrating its use in interval-censored data problems are also provided.
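    A minimal sketch of the quantile idea on a toy model, assuming interval-censored exponential lifetimes: the E-step expectation is approximated by averaging the inverse conditional CDF over a deterministic grid of K midpoint quantiles instead of K random draws as in MCEM. The general QEM formulation and its $O(1/K^2)$ analysis are not reproduced here:

```python
import numpy as np

def qem_exponential(intervals, mu0=1.0, K=32, n_iter=50):
    """Toy QEM-style fit of an exponential mean to interval-censored
    data. E-step: E[Z | l < Z <= u] is approximated by averaging the
    inverse conditional CDF at the K midpoint quantiles (k - 0.5)/K,
    a deterministic grid, rather than at K random draws (MCEM)."""
    lo = np.array([l for l, _ in intervals], dtype=float)
    hi = np.array([u for _, u in intervals], dtype=float)
    p = (np.arange(K) + 0.5) / K  # midpoint quantile grid
    mu = mu0
    for _ in range(n_iter):
        cdf = 1.0 - np.exp(-np.stack([lo, hi]) / mu)  # F(l), F(u)
        q = cdf[0][:, None] + p[None, :] * (cdf[1] - cdf[0])[:, None]
        e_z = (-mu * np.log1p(-q)).mean(axis=1)  # E-step via quantiles
        mu = e_z.mean()                          # M-step: exponential MLE
    return mu

data = [(0.5, 1.5), (2.0, 3.0), (0.0, 0.8), (1.0, 2.5)]
print(qem_exponential(data))
```

    Because the quantile grid is deterministic, repeated runs give identical iterates, which is the source of the stability advantage over the Monte Carlo E-step.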

    Density estimation for grouped data with application to line transect sampling

    Line transect sampling is a method used to estimate wildlife populations, with the resulting data often grouped in intervals. Estimating the density from grouped data can be challenging. In this paper we propose a kernel density estimator of wildlife population density for such grouped data. Our method uses a combined cross-validation and smoothed bootstrap approach to select the optimal bandwidth for grouped data. Our simulation study shows that, with the smoothing parameter selected by this method, the estimated density from grouped data matches the true density more closely than with other approaches. Using the smoothed bootstrap, we also construct bias-adjusted confidence intervals for the value of the density at the boundary. We apply the proposed method to two grouped data sets: one from a wooden-stake study where the true density is known, and the other from a survey of kangaroos in Australia.
    Comment: Published at http://dx.doi.org/10.1214/09-AOAS307 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org)
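    A minimal sketch of one standard way to form a kernel density estimate from grouped data: spread each bin's count uniformly over its bin and convolve, in closed form, with a Gaussian kernel. The bandwidth h is fixed by hand here; the paper's cross-validation plus smoothed-bootstrap selection (and its boundary bias adjustment) is not reproduced:

```python
import numpy as np
from scipy.stats import norm

def grouped_kde(x, edges, counts, h):
    """Kernel density estimate from grouped (binned) data: each
    bin's count is spread uniformly over its bin, and the resulting
    histogram density is convolved with a Gaussian kernel of
    bandwidth h (exactly, via the normal CDF)."""
    x = np.asarray(x, dtype=float)[:, None]
    a, b = np.asarray(edges[:-1]), np.asarray(edges[1:])
    weights = counts / (counts.sum() * (b - a))  # histogram density per bin
    # integral of the kernel over each bin, evaluated at every x
    smooth = norm.cdf((x - a[None, :]) / h) - norm.cdf((x - b[None, :]) / h)
    return (weights[None, :] * smooth).sum(axis=1)

edges = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
counts = np.array([40, 25, 20, 15])
print(grouped_kde(np.linspace(0.0, 4.0, 9), edges, counts, h=0.4))
```

    The resulting estimate integrates to one by construction; what the bandwidth selection in the paper adds is a principled choice of h tailored to grouped observations rather than raw distances.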