
    Propagation of chaos in neural fields

    We consider the limit of bio-inspired spatially extended neuronal networks with an infinite number of neuronal types (space locations) and space-dependent propagation delays modeling neural fields. The propagation of chaos property is proved in this setting under mild assumptions on the neuronal dynamics, valid for most models used in neuroscience, in a mesoscopic limit, the neural-field limit, in which we can resolve the quite fine structure of the neurons' activity in space and where averaging effects occur. The mean-field equations obtained are of a new type: they take the form of well-posed infinite-dimensional delayed integro-differential equations with a nonlocal mean-field term and a singular spatio-temporal Brownian motion. We also show how these intricate equations can be used in practice to uncover mathematically the precise mesoscopic dynamics of the neural field in a particular model where the mean-field equations reduce exactly to deterministic nonlinear delayed integro-differential equations. These results have several theoretical implications in neuroscience, which we review in the discussion.
    Comment: Updated to correct an erroneous suggestion of extension of the results in Appendix B, and to clarify some measurability questions in the proof of Theorem
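    For orientation, the deterministic delayed neural field equations that such mean-field descriptions reduce to in special cases are of the classical Wilson–Cowan/Amari type. The equation below is a generic sketch of that form, not the stochastic mean-field equation derived in the paper; the symbols (connectivity kernel J, firing-rate nonlinearity S, delay tau_d, input I) are standard illustrative notation rather than the paper's.

```latex
% Generic activity-based neural field with space-dependent propagation delays
% (Wilson-Cowan/Amari type); illustrative notation only.
\begin{equation}
  \tau \,\frac{\partial V(x,t)}{\partial t}
    \;=\; -V(x,t)
    \;+\; \int_{\Omega} J(x,y)\, S\bigl(V(y,\, t - \tau_d(x,y))\bigr)\,\mathrm{d}y
    \;+\; I(x,t)
\end{equation}
% V(x,t): mean activity at location x and time t; J: spatial connectivity kernel;
% S: firing-rate nonlinearity; tau_d(x,y): propagation delay from y to x; I: external input.
```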

    Front propagation in stochastic neural fields

    We analyse the effects of extrinsic multiplicative noise on front propagation in a scalar neural field with excitatory connections. Using a separation of time scales, we represent the fluctuating front in terms of a diffusive-like displacement (wandering) of the front from its uniformly translating position at long time scales, and fluctuations in the front profile around its instantaneous position at short time scales. One major result of our analysis is a comparison between freely propagating fronts and fronts locked to an externally moving stimulus. We show that the latter are much more robust to noise, since the stochastic wandering of the mean front profile is described by an Ornstein–Uhlenbeck process rather than a Wiener process, so that the variance in front position saturates in the long time limit rather than increasing linearly with time. Finally, we consider a stochastic neural field that supports a pulled front in the deterministic limit, and show that the wandering of such a front is now subdiffusive.
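    The contrast drawn above, Wiener-process wandering for free fronts (variance growing linearly in time) versus Ornstein–Uhlenbeck wandering for stimulus-locked fronts (variance saturating), is easy to reproduce numerically. The sketch below simulates both displacement processes with an Euler–Maruyama scheme; the drift rate kappa, noise amplitude sigma, and time step are arbitrary illustrative values, not parameters taken from the paper.

```python
import numpy as np

def simulate_front_displacement(T=100.0, dt=0.01, sigma=0.5, kappa=1.0,
                                locked=False, n_trials=2000, seed=0):
    """Euler-Maruyama simulation of the front displacement Delta(t).

    locked=False: Wiener process       dDelta = sigma dW                    (free front)
    locked=True : Ornstein-Uhlenbeck   dDelta = -kappa*Delta dt + sigma dW  (stimulus-locked)
    Returns the time grid and the trial-averaged variance of Delta(t).
    """
    rng = np.random.default_rng(seed)
    n_steps = int(T / dt)
    delta = np.zeros(n_trials)
    var = np.empty(n_steps)
    for i in range(n_steps):
        drift = -kappa * delta * dt if locked else 0.0
        delta = delta + drift + sigma * np.sqrt(dt) * rng.standard_normal(n_trials)
        var[i] = delta.var()
    t = dt * np.arange(1, n_steps + 1)
    return t, var

t, var_free = simulate_front_displacement(locked=False)
_, var_locked = simulate_front_displacement(locked=True)
# var_free grows roughly as sigma**2 * t, while var_locked saturates near sigma**2 / (2 * kappa).
print(var_free[-1], var_locked[-1])
```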

    Neural Networks: Implementations and Applications

    Artificial neural networks, also called neural networks, have been used successfully in many fields, including engineering, science, and business. This paper presents the implementation of several neural network simulators and their applications in character recognition and other engineering areas.

    Local/global analysis of the stationary solutions of some neural field equations

    Neural or cortical fields are continuous assemblies of mesoscopic models, also called neural masses, of neural populations that are fundamental in the modeling of macroscopic parts of the brain. Neural fields are described by nonlinear integro-differential equations. The solutions of these equations represent the state of activity of these populations when submitted to inputs from neighbouring brain areas. Understanding the properties of these solutions is essential in advancing our understanding of the brain. In this paper we study the dependency of the stationary solutions of the neural field equations with respect to the stiffness of the nonlinearity and the contrast of the external inputs. This is done by using degree theory and bifurcation theory in the context of functional, in particular infinite-dimensional, spaces. The joint use of these two theories allows us to make new detailed predictions about the global and local behaviours of the solutions. We also provide a generic finite-dimensional approximation of these equations which allows us to study two models in great detail. The first model is a neural mass model of a cortical hypercolumn of orientation-sensitive neurons, the ring model. The second model is a general neural field model where the spatial connectivity is described by heterogeneous Gaussian-like functions.
    Comment: 38 pages, 9 figures
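    The object of study, and the two parameters it is tracked against, can be made concrete with the standard stationary (time-independent) neural field equation written below. The notation is illustrative: sigma stands for the stiffness (slope) of the nonlinearity S and lambda for the contrast of the external input; the paper's own notation may differ.

```latex
% Stationary solutions V(x) of a rate-based neural field; the analysis follows how
% solution branches depend on the stiffness sigma of S and the input contrast lambda.
\begin{equation}
  V(x) \;=\; \int_{\Omega} w(x,y)\, S\bigl(\sigma\, V(y)\bigr)\,\mathrm{d}y
           \;+\; \lambda\, I_{\mathrm{ext}}(x),
  \qquad x \in \Omega
\end{equation}
```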

    Central auditory neurons have composite receptive fields

    High-level neurons processing complex, behaviorally relevant signals are sensitive to conjunctions of features. Characterizing the receptive fields of such neurons is difficult with standard statistical tools, however, and the principles governing their organization remain poorly understood. Here, we demonstrate multiple distinct receptive-field features in individual high-level auditory neurons in a songbird, the European starling, in response to natural vocal signals (songs). We then show that receptive fields with similar characteristics can be reproduced by an unsupervised neural network trained to represent starling songs with a single learning rule that enforces sparseness and divisive normalization. We conclude that central auditory neurons have composite receptive fields that can arise through a combination of sparseness and normalization in neural circuits. Our results, along with descriptions of random, discontinuous receptive fields in the central olfactory neurons of mammals and insects, suggest general principles of neural computation across sensory systems and animal classes.
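    The proposed mechanism, a single unsupervised rule combining sparseness with divisive normalization, can be sketched in a few lines of code. The toy example below is an assumption-laden illustration (Hebbian/Oja-style weight update, hard sparsity threshold, quadratic normalization pool, random data standing in for song spectrogram patches); it is not the network or learning rule used in the study.

```python
import numpy as np

def divisive_normalize(a, sigma=0.1):
    """Scale each unit's response by the pooled quadratic activity of the layer."""
    return a / (sigma ** 2 + np.sum(a ** 2))

def train_sparse_normalized_features(X, n_units=32, lr=1e-2, threshold=0.1,
                                     n_epochs=10, seed=0):
    """Toy unsupervised learner combining sparseness and divisive normalization.

    X: (n_samples, n_dims) array, e.g. flattened spectrogram patches (hypothetical format).
    Returns an (n_units, n_dims) matrix of learned features.
    """
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(n_units, X.shape[1]))
    for _ in range(n_epochs):
        for x in X:
            a = W @ x                                    # linear responses
            r = divisive_normalize(a)                    # divisive normalization
            r = np.where(np.abs(r) > threshold, r, 0.0)  # sparseness: keep only strong responses
            W += lr * np.outer(r, x - W.T @ r)           # Hebbian/Oja-style update
            W /= np.linalg.norm(W, axis=1, keepdims=True) + 1e-12
    return W

# Example with random data standing in for song spectrogram patches:
X_demo = np.random.default_rng(1).normal(size=(200, 64))
features = train_sparse_normalized_features(X_demo)
```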

    Neural Network Applications

    Artificial neural networks, also called neural networks, have been used successfully in many fields, including engineering, science, and business. This paper presents the implementation of several neural network simulators and their applications in character recognition and other engineering areas.

    Foundations and modelling of dynamic networks using Dynamic Graph Neural Networks: A survey

    Dynamic networks are used in a wide range of fields, including social network analysis, recommender systems, and epidemiology. Representing complex networks as structures changing over time allows network models to leverage not only structural but also temporal patterns. However, as the dynamic network literature stems from diverse fields and makes use of inconsistent terminology, it is challenging to navigate. Meanwhile, graph neural networks (GNNs) have gained a lot of attention in recent years for their ability to perform well on a range of network science tasks, such as link prediction and node classification. Despite the popularity of graph neural networks and the proven benefits of dynamic network models, there has been little focus on graph neural networks for dynamic networks. To address the challenges resulting from the fact that this research crosses diverse fields, as well as to survey dynamic graph neural networks, this work is split into two main parts. First, to address the ambiguity of dynamic network terminology, we establish a foundation of dynamic networks with consistent, detailed terminology and notation. Second, we present a comprehensive survey of dynamic graph neural network models using the proposed terminology.
    Comment: 28 pages, 9 figures, 8 tables
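    The survey's subject matter can be grounded with a minimal example of one common pattern for discrete-time dynamic networks: a sequence of graph snapshots, each passed through a message-passing layer, with node embeddings carried across time by a gated recurrent update. The numpy sketch below illustrates that stacked architecture in general terms; it is not a specific model from the survey, and all layer choices and parameter shapes are assumptions made for the example.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One mean-aggregation message-passing layer over adjacency A and node features H."""
    A_hat = A + np.eye(A.shape[0])              # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)      # node degrees (including self)
    return np.tanh((A_hat / deg) @ H @ W)       # average neighbour features, project, squash

def dynamic_gnn(snapshots, X, d_hidden=16, seed=0):
    """Discrete-time dynamic GNN: a GNN per snapshot, embeddings carried by a gated update.

    snapshots: list of (n, n) adjacency matrices, one per time step.
    X: (n, d_in) static node features.
    Returns (n, d_hidden) node embeddings after the last snapshot.
    """
    rng = np.random.default_rng(seed)
    n, d_in = X.shape
    W_g = rng.normal(scale=0.1, size=(d_in, d_hidden))          # message-passing weights
    W_z = rng.normal(scale=0.1, size=(2 * d_hidden, d_hidden))  # update-gate weights
    H = np.zeros((n, d_hidden))                                 # temporal node state
    for A in snapshots:
        M = gcn_layer(A, X, W_g)                                # structural embedding at time t
        z = 1.0 / (1.0 + np.exp(-np.concatenate([H, M], axis=1) @ W_z))  # update gate
        H = (1.0 - z) * H + z * M                               # gated temporal update
    return H

# Example: three random undirected snapshots over 5 nodes with 8-dimensional features.
rng = np.random.default_rng(1)
snaps = [np.triu((rng.random((5, 5)) < 0.4).astype(float), 1) for _ in range(3)]
snaps = [A + A.T for A in snaps]
H_final = dynamic_gnn(snaps, rng.normal(size=(5, 8)))
```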