
    Generalized cable formalism to calculate the magnetic field of single neurons and neuronal populations

    Neurons generate magnetic fields that can be recorded with macroscopic techniques such as magneto-encephalography. The theory that accounts for the genesis of neuronal magnetic fields involves dendritic cable structures in homogeneous resistive extracellular media. Here, we generalize this model by considering dendritic cables in extracellular media with arbitrarily complex electric properties. The method is based on a multi-scale mean-field theory in which the neuron interacts with a "mean" extracellular medium characterized by a specific impedance. We first show that, as expected, both the generalized and the standard cable equations produce magnetic fields that depend mostly on the axial current in the cable, with a moderate contribution from extracellular currents. Less expectedly, we also show that the nature of the extracellular and intracellular media influences the axial current, and thus the neuronal magnetic fields as well. We illustrate these properties with numerical simulations and suggest experiments to test these findings. Comment: Physical Review E (in press); 24 pages, 16 figures
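    The abstract's central point, that the field depends mostly on the axial current in the cable, can be illustrated with a textbook Biot-Savart sum over cable segments. This is a minimal sketch, not the paper's generalized cable formalism; the cable geometry, the 1 nA current, and the midpoint approximation are illustrative assumptions.

    ```python
    import numpy as np

    MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

    def segment_field(r, seg_start, seg_end, current):
        """B field at point r from a finite segment carrying `current` (A),
        via the Biot-Savart law with a midpoint approximation."""
        dl = seg_end - seg_start              # segment vector
        mid = 0.5 * (seg_start + seg_end)     # segment midpoint
        rvec = r - mid
        dist = np.linalg.norm(rvec)
        return MU0 / (4 * np.pi) * current * np.cross(dl, rvec) / dist**3

    # 100 um cable along the z-axis carrying a uniform 1 nA axial current
    z = np.linspace(0.0, 100e-6, 101)
    points = np.stack([np.zeros_like(z), np.zeros_like(z), z], axis=1)
    obs = np.array([50e-6, 0.0, 50e-6])       # observation point, 50 um away
    B = sum(segment_field(obs, points[i], points[i + 1], 1e-9)
            for i in range(len(z) - 1))
    print(B)  # the field is azimuthal: only the y-component is nonzero here
    ```

    For this geometry the result is a few picotesla, consistent with the infinite-wire estimate mu0*I/(2*pi*d).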

    Formal Modeling of Connectionism using Concurrency Theory, an Approach Based on Automata and Model Checking

    This paper illustrates a framework for applying formal methods techniques, which are symbolic in nature, to specifying and verifying neural networks, which are sub-symbolic in nature. The paper describes a communicating automata [Bowman & Gomez, 2006] model of neural networks. We also implement the model using timed automata [Alur & Dill, 1994] and then verify these models with the model checker Uppaal [Pettersson, 2000] in order to evaluate the performance of learning algorithms. The paper also discusses a number of broad issues concerning cognitive neuroscience and the debate over whether symbolic processing or connectionism is a suitable representation of cognitive systems. Additionally, the issue of integrating symbolic techniques, such as formal methods, with complex neural networks is discussed. We then argue that symbolic verification may give theoretically well-founded ways to evaluate and justify neural learning systems in both theoretical research and real-world applications.
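    The verification idea can be sketched without Uppaal: at its core, model checking a reachability property means exhaustively searching the state graph of the model. The sketch below is an explicit-state reachability check over a plain finite transition system, not the paper's timed-automata model; the state names are illustrative, not taken from the paper.

    ```python
    from collections import deque

    def reachable(initial, transitions, target):
        """Breadth-first search over the state graph.
        `transitions` maps each state to its successor states."""
        seen, frontier = {initial}, deque([initial])
        while frontier:
            state = frontier.popleft()
            if state == target:
                return True
            for nxt in transitions.get(state, ()):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
        return False

    # Toy model: "err" would stand for a violated property
    system = {"idle": ["learn"], "learn": ["idle", "done"], "done": []}
    print(reachable("idle", system, "done"))  # True
    print(reachable("idle", system, "err"))   # False
    ```

    A timed-automata checker extends this search with clock-zone abstractions, but the reachability question it answers has the same shape.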

    Phase synchronization of coupled bursting neurons and the generalized Kuramoto model

    Bursting neurons fire rapid sequences of action-potential spikes followed by a quiescent period. The basic dynamical mechanism of bursting is slow currents that modulate the fast spiking activity caused by rapid ionic currents; minimal models of bursting neurons must include both effects. We considered one of these models and its relation to a generalized Kuramoto model, made possible by defining a geometrical phase for bursting and a corresponding frequency. We considered neuronal networks with different connection topologies and investigated the transition from a non-synchronized to a partially phase-synchronized state as the coupling strength is varied. The numerically determined critical coupling strength for this transition is compared with theoretical results valid for the generalized Kuramoto model. Comment: 31 pages, 5 figures
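    The synchronization transition the abstract describes can be reproduced in the classical all-to-all Kuramoto model (the paper's generalized version and its bursting-phase definition are not reproduced here; the frequency distribution and network size are illustrative assumptions). The order parameter r measures phase synchronization and grows with the coupling strength K past a critical value.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def order_parameter(theta):
        # r = |<exp(i*theta)>|: 0 for incoherent phases, 1 for full sync
        return np.abs(np.exp(1j * theta).mean())

    def simulate(K, N=200, dt=0.01, steps=4000):
        omega = rng.normal(0.0, 1.0, N)        # natural frequencies
        theta = rng.uniform(0, 2 * np.pi, N)   # random initial phases
        for _ in range(steps):
            # mean-field form of (K/N) * sum_j sin(theta_j - theta_i)
            mean = np.exp(1j * theta).mean()
            coupling = K * np.abs(mean) * np.sin(np.angle(mean) - theta)
            theta += dt * (omega + coupling)   # Euler step
        return order_parameter(theta)

    r_weak, r_strong = simulate(K=0.5), simulate(K=4.0)
    print(r_weak, r_strong)  # weak coupling: r near 0; strong: r near 1
    ```

    For unit-variance Gaussian frequencies the mean-field critical coupling is K_c = 2/(pi*g(0)) = 2*sqrt(2/pi), about 1.6, so K = 0.5 sits below the transition and K = 4 well above it.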

    Theoretical Properties of Projection Based Multilayer Perceptrons with Functional Inputs

    Many real-world data are sampled functions. As shown by Functional Data Analysis (FDA) methods, spectra, time series, images, gesture recognition data, etc. can be processed more efficiently if their functional nature is taken into account during the data analysis process. This is done by extending standard data analysis methods so that they apply to functional inputs. A general way to achieve this goal is to project the functional data onto a finite-dimensional sub-space of the functional space: the coordinates of the data on a basis of this sub-space provide standard vector representations of the functions, and the resulting vectors can be processed by any standard method. In our previous work, this general approach was used to define projection-based Multilayer Perceptrons (MLPs) with functional inputs. In this paper we study important theoretical properties of the proposed model. We show in particular that MLPs with functional inputs are universal approximators: they can approximate to arbitrary accuracy any continuous mapping from a compact sub-space of a functional space to R. Moreover, we provide a consistency result showing that any mapping from a functional space to R can be learned from examples by a projection-based MLP: the generalization mean square error of the MLP decreases to the smallest possible mean square error on the data as the number of examples goes to infinity.
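    The projection-then-MLP pipeline can be sketched end to end: project each sampled function onto a small basis, then feed the coefficient vector to an MLP. This is an illustration of the general approach, not the paper's exact construction; the cosine basis, the toy functional (absolute value of the function's integral), and the network size are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    t = np.linspace(0, 1, 100)                   # sampling grid
    basis = np.stack([np.cos(np.pi * k * t) for k in range(5)])

    def project(f_samples):
        # least-squares coordinates of the function on the basis sub-space
        return np.linalg.lstsq(basis.T, f_samples, rcond=None)[0]

    # Toy functional regression: f -> |integral of f over [0, 1]|
    coeffs_true = rng.normal(size=(300, 5))
    F = coeffs_true @ basis                      # 300 sampled functions
    X = np.array([project(f) for f in F])        # projected inputs (300, 5)
    y = np.abs(F.mean(axis=1))                   # nonlinear functional target

    # One-hidden-layer MLP trained by full-batch gradient descent
    W1 = rng.normal(0, 0.5, (5, 16)); b1 = np.zeros(16)
    W2 = rng.normal(0, 0.5, 16);      b2 = 0.0
    for _ in range(3000):
        H = np.tanh(X @ W1 + b1)                 # hidden activations
        pred = H @ W2 + b2
        g = 2 * (pred - y) / len(y)              # dMSE/dpred
        gW2 = H.T @ g; gb2 = g.sum()
        gH = np.outer(g, W2) * (1 - H**2)        # backprop through tanh
        W1 -= 0.05 * (X.T @ gH); b1 -= 0.05 * gH.sum(axis=0)
        W2 -= 0.05 * gW2;        b2 -= 0.05 * gb2
    mse = ((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2).mean()
    print(mse)  # well below the variance of y, so the MLP has learned
    ```

    The projection step makes the functional input finite-dimensional, which is what lets the standard MLP machinery (and the universal approximation argument) apply.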