
    Neural complexity: a graph theoretic interpretation

    One of the central challenges facing modern neuroscience is to explain the ability of the nervous system to coherently integrate information across distinct functional modules in the absence of a central executive. To this end, Tononi et al. [Proc. Nat. Acad. Sci. USA 91, 5033 (1994)] proposed a measure of neural complexity that purports to capture this property, based on the mutual information between complementary subsets of a system. Neural complexity, so defined, is one of a family of information-theoretic metrics developed to measure the balance between the segregation and integration of a system's dynamics. A key question for such measures is how they are influenced by network topology. Sporns et al. [Cereb. Cortex 10, 127 (2000)] employed numerical models to determine the dependence of neural complexity on the topological features of a network; however, a complete picture has yet to be established. While De Lucia et al. [Phys. Rev. E 71, 016114 (2005)] made the first attempt at an analytical account of this relationship, their work used a formulation of neural complexity that, we argue, did not reflect the intuitions of the original work. In this paper we start by describing weighted connection matrices formed by applying a random continuous weight distribution to binary adjacency matrices. This allows us to derive an approximation for neural complexity in terms of the moments of the weight distribution and elementary graph motifs. In particular, we explicitly establish a dependency of neural complexity on cyclic graph motifs.
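    As a concrete illustration of the measure discussed in this abstract, the following is a minimal Python sketch, assuming the simple linear Gaussian model x = W x + r with unit-variance independent noise that is commonly used in this literature, so that the stationary covariance is (I - W)^{-1}(I - W)^{-T}; the example network, weights, and function names are illustrative only, not taken from the paper. It evaluates, by exact enumeration, the sum over subset sizes of the average mutual information between each subset and its complement, which is feasible only for small networks.

```python
# Minimal sketch (illustrative, not the paper's derivation): exact evaluation of
# neural complexity for a small weighted network, assuming the linear Gaussian
# model x = W x + r with unit-variance independent noise r, so that the
# stationary covariance is Sigma = A A^T with A = (I - W)^{-1}.
import numpy as np
from itertools import combinations

def gaussian_entropy(cov):
    """Differential entropy (nats) of a zero-mean Gaussian with covariance cov."""
    k = cov.shape[0]
    sign, logdet = np.linalg.slogdet(cov)
    return 0.5 * (k * np.log(2.0 * np.pi * np.e) + logdet)

def mutual_information(cov, subset, complement):
    """MI between two complementary subsets of a jointly Gaussian system."""
    return (gaussian_entropy(cov[np.ix_(subset, subset)])
            + gaussian_entropy(cov[np.ix_(complement, complement)])
            - gaussian_entropy(cov))

def neural_complexity(W):
    """Sum over subset sizes k of the average MI between size-k subsets and
    their complements (mutual information between complementary subsets)."""
    n = W.shape[0]
    A = np.linalg.inv(np.eye(n) - W)        # x = W x + r  =>  x = A r
    cov = A @ A.T                           # covariance under unit-variance noise
    total = 0.0
    for k in range(1, n // 2 + 1):
        mis = [mutual_information(cov, list(s), [i for i in range(n) if i not in s])
               for s in combinations(range(n), k)]
        total += np.mean(mis)
    return total

# Example: a sparse random weighted connection matrix on 6 nodes, rescaled so
# the spectral radius stays below 1 and the linear system is stable.
rng = np.random.default_rng(0)
W = rng.uniform(0.0, 1.0, (6, 6)) * (rng.uniform(size=(6, 6)) < 0.3)
np.fill_diagonal(W, 0.0)
rho = np.max(np.abs(np.linalg.eigvals(W)))
if rho > 0:
    W *= 0.9 / rho
print(f"neural complexity C_N ~ {neural_complexity(W):.4f} nats")
```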

    Some Results on the Complexity of Numerical Integration

    This is a survey (21 pages, 124 references) written for the MCQMC 2014 conference in Leuven, April 2014. We start with the seminal paper of Bakhvalov (1959) and end with new results on the curse of dimension and on the complexity of oscillatory integrals. Some small errors of earlier versions are corrected.
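    As background for the kind of complexity results the survey discusses, the short Python sketch below (illustrative only, not taken from the survey) shows the dimension-independent O(n^{-1/2}) behaviour of plain Monte Carlo integration, the textbook baseline against which results on the complexity of numerical integration and the curse of dimension are usually set.

```python
# Illustrative sketch (not taken from the survey): plain Monte Carlo integration
# of f(x) = prod_j cos(x_j) over [0,1]^d.  The error decays roughly like
# n^{-1/2} independently of d.
import numpy as np

def mc_integrate(f, d, n, rng):
    """Crude Monte Carlo estimate of the integral of f over [0,1]^d."""
    x = rng.uniform(size=(n, d))
    return f(x).mean()

d = 10
f = lambda x: np.cos(x).prod(axis=1)
exact = np.sin(1.0) ** d                 # each factor integrates to sin(1)
rng = np.random.default_rng(1)

for n in (10**3, 10**4, 10**5):
    est = mc_integrate(f, d, n, rng)
    print(f"n = {n:6d}   |error| = {abs(est - exact):.2e}")  # shrinks roughly like n^{-1/2}
```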

    Construction of quasi-Monte Carlo rules for multivariate integration in spaces of permutation-invariant functions

    We study multivariate integration of functions that are invariant under the permutation (of a subset) of their arguments. Recently, in Nuyens, Suryanarayana, and Weimar (Adv. Comput. Math. (2016), 42(1):55--84), the authors derived an upper estimate for the $n$th minimal worst-case error for such problems, and showed that under certain conditions this upper bound depends only weakly on the dimension. We extend these results by proposing two (semi-)explicit construction schemes. We develop a component-by-component algorithm to find the generating vector for a shifted rank-$1$ lattice rule that achieves a rate of convergence arbitrarily close to $\mathcal{O}(n^{-\alpha})$, where $\alpha > 1/2$ denotes the smoothness of our function space and $n$ is the number of cubature nodes. Further, we develop a semi-constructive algorithm that builds on point sets which can be used to approximate the integrands of interest with a small error; the cubature error is then bounded by the error of approximation. Here the same rate of convergence is achieved while the dependence of the error bounds on the dimension $d$ is significantly improved.
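    The Python sketch below shows how a randomly shifted rank-$1$ lattice rule of the kind constructed in the paper is applied; the generating vector z used here is hypothetical and purely for illustration (in the paper it would come from the component-by-component construction for permutation-invariant spaces), and the test integrand is a simple coordinate-symmetric product.

```python
# Minimal sketch: applying a randomly shifted rank-1 lattice rule on [0,1)^d.
# The generating vector z below is hypothetical (for illustration only); the
# paper's contribution is a component-by-component construction of such a z
# tailored to spaces of permutation-invariant functions.
import numpy as np

def shifted_lattice_rule(f, z, n, shift):
    """Q(f) = (1/n) * sum_{i=0}^{n-1} f( frac(i*z/n + shift) )."""
    i = np.arange(n)[:, None]
    points = np.mod(i * z[None, :] / n + shift[None, :], 1.0)
    return f(points).mean()

d, n = 4, 1009                                # n prime, d-dimensional integrand
z = np.array([1, 433, 181, 758])              # hypothetical generating vector
rng = np.random.default_rng(42)

# A permutation-invariant test integrand (symmetric in all coordinates);
# each factor integrates to 1 over [0,1], so the exact integral is 1.
f = lambda x: np.prod(1.0 + 0.5 * (x - 0.5), axis=1)

# Averaging over a few independent random shifts gives an unbiased estimate
# together with a simple error indicator.
estimates = [shifted_lattice_rule(f, z, n, rng.uniform(size=d)) for _ in range(8)]
print(f"estimate = {np.mean(estimates):.6f}   (exact 1.0)   std = {np.std(estimates):.1e}")
```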