
    Model-free reconstruction of neuronal network connectivity from calcium imaging signals

    A systematic assessment of global neural network connectivity through direct electrophysiological assays has remained technically unfeasible, even in dissociated neuronal cultures. We introduce an improved algorithmic approach based on Transfer Entropy to reconstruct approximations to network structural connectivities from network activity monitored through calcium fluorescence imaging. Based on information theory, our method requires no prior assumptions on the statistics of neuronal firing or of neuronal connections. The performance of our algorithm is benchmarked on surrogate time series of calcium fluorescence generated by the simulated dynamics of a network with known ground-truth topology. We find that the effective network topology revealed by Transfer Entropy depends qualitatively on the time-dependent dynamic state of the network (e.g., bursting or non-bursting). We thus demonstrate how conditioning with respect to the global mean activity improves the performance of our method. [...] Compared to other reconstruction strategies such as cross-correlation or Granger Causality methods, our method based on improved Transfer Entropy is substantially more accurate. In particular, it provides a good reconstruction of the network clustering coefficient, allowing weakly and strongly clustered topologies to be discriminated, whereas an approach based on cross-correlations invariably detects artificially high levels of clustering. Finally, we demonstrate the applicability of our method to real recordings of in vitro cortical cultures, and show that these networks are characterized by an elevated level of clustering compared to a random graph (although not extreme) and by a markedly non-local connectivity. Comment: 54 pages, 8 figures (+9 supplementary figures), 1 table; submitted for publication
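    As a rough illustration of the kind of pairwise measure the abstract refers to, the sketch below computes a basic Transfer Entropy between two discretized activity traces with a history length of one. It is not the paper's improved, state-conditioned algorithm; the discretization, history length, and toy signals are assumptions made purely for illustration.

```python
# Minimal sketch (not the paper's algorithm): pairwise Transfer Entropy
# between two binarized activity traces, with history length 1.
# Assumes the signals have already been discretized (e.g. thresholded
# calcium fluorescence); variable names are illustrative only.
import numpy as np

def transfer_entropy(x, y, n_states=2):
    """TE from x to y: I(y_{t+1}; x_t | y_t) for discrete-valued series."""
    x, y = np.asarray(x, int), np.asarray(y, int)
    y_next, y_past, x_past = y[1:], y[:-1], x[:-1]
    # Joint histogram over (y_{t+1}, y_t, x_t).
    joint = np.zeros((n_states,) * 3)
    for a, b, c in zip(y_next, y_past, x_past):
        joint[a, b, c] += 1
    joint /= joint.sum()
    p_yb = joint.sum(axis=2)          # p(y_{t+1}, y_t)
    p_bx = joint.sum(axis=0)          # p(y_t, x_t)
    p_b = joint.sum(axis=(0, 2))      # p(y_t)
    te = 0.0
    for a in range(n_states):
        for b in range(n_states):
            for c in range(n_states):
                p = joint[a, b, c]
                if p > 0:
                    te += p * np.log2(p * p_b[b] / (p_yb[a, b] * p_bx[b, c]))
    return te

# Toy usage: y loosely follows x with a one-step delay,
# so TE(x -> y) should be large and TE(y -> x) near zero.
rng = np.random.default_rng(0)
x = rng.integers(0, 2, 5000)
y = np.roll(x, 1) ^ (rng.random(5000) < 0.1)
print(transfer_entropy(x, y), transfer_entropy(y, x))
```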

    Elemental Spiking Neuron Model for Reproducing Diverse Firing Patterns and Predicting Precise Firing Times

    In simulating realistic neuronal circuitry composed of diverse types of neurons, we need an elemental spiking neuron model that is capable not only of quantitatively reproducing the spike times of biological neurons given in vivo-like fluctuating inputs, but also of qualitatively representing a variety of firing responses to transient current inputs. Simplistic models based on leaky integrate-and-fire mechanisms have demonstrated the ability to adapt to biological neurons. In particular, the multi-timescale adaptive threshold (MAT) model reproduces and predicts precise spike times of regular-spiking, intrinsic-bursting, and fast-spiking neurons under any fluctuating current; however, this model is incapable of reproducing such specific firing responses as inhibitory rebound spiking and resonate spiking. In this paper, we augment the MAT model by adding a voltage-dependency term to the adaptive threshold so that the model can exhibit the full variety of firing responses to various transient current pulses while maintaining the high adaptability inherent in the original MAT model. Furthermore, with this addition, our model is able to predict spike times even more accurately. Despite the augmentation, the model has only four free parameters and, owing to its linearity, can be implemented in an efficient algorithm for large-scale simulation, serving as an elemental neuron model in the simulation of realistic neuronal circuitry.
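    To make the model class concrete, here is a minimal sketch of a MAT-style neuron: a non-resetting leaky integrator whose spike threshold is a sum of exponentially decaying components with different time constants, plus an illustrative voltage-dependent term. The parameter values and the exact form of the voltage dependency are placeholders, not the fitted model from the paper.

```python
# Minimal sketch of a multi-timescale adaptive-threshold (MAT-style)
# neuron with an illustrative voltage-dependent threshold term.
# All parameter values below are placeholders for demonstration.
import numpy as np

def simulate_mat(I, dt=0.1, tau_m=10.0, R=50.0,
                 alphas=(30.0, 2.0), taus=(10.0, 200.0),
                 omega=15.0, beta=0.2, refractory=2.0):
    n = len(I)
    V = 0.0
    comp = np.zeros(len(alphas))        # threshold components, one per timescale
    last_spike = -np.inf
    spikes, V_trace = [], np.empty(n)
    for t in range(n):
        # Non-resetting leaky integration of the input current.
        V += dt * (-V + R * I[t]) / tau_m
        # Each threshold component decays with its own time constant.
        comp -= dt * comp / np.asarray(taus)
        # Illustrative voltage-dependent term added to the baseline threshold.
        theta = omega + comp.sum() + beta * V
        time = t * dt
        if V >= theta and time - last_spike > refractory:
            spikes.append(time)
            comp += np.asarray(alphas)  # threshold jumps after each spike
            last_spike = time
        V_trace[t] = V
    return np.array(spikes), V_trace

# Toy usage: noisy step current (in arbitrary units), 500 ms at dt = 0.1 ms.
rng = np.random.default_rng(1)
I = 0.5 + 0.2 * rng.standard_normal(5000)
spike_times, V = simulate_mat(I)
print(len(spike_times), "spikes")
```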

    Advances in point process filters and their application to sympathetic neural activity

    This thesis is concerned with the development of techniques for analyzing the sequences of stereotypical electrical impulses within neurons known as spikes. Sequences of spikes, also called spike trains, transmit neural information; decoding them often provides details about the physiological processes generating the neural activity. Here, the statistical theory of event arrivals, called point processes, is applied to human muscle sympathetic spike trains, a peripheral nerve signal responsible for cardiovascular regulation. A novel technique that uses observed spike trains to dynamically derive information about the physiological processes generating them is also introduced. Despite the emerging use of individual spikes in the analysis of human muscle sympathetic nerve activity, the majority of studies in this field remain focused on bursts of activity at or below cardiac rhythm frequencies. Point process theory applied to multi-neuron spike trains captured both fast and slow spiking rhythms. First, analysis of high-frequency spiking patterns within cardiac cycles was performed and, surprisingly, revealed fibers with no cardiac rhythmicity. Modeling spikes as a function of average firing rates showed that individual nerves contribute substantially to the differences in the sympathetic stressor response across experimental conditions. Subsequent investigation of low-frequency spiking identified two physiologically relevant frequency bands, and modeling spike trains as a function of hemodynamic variables uncovered complex associations between spiking activity and biophysical covariates at these two frequencies. For example, exercise-induced neural activation enhances the relationship of spikes to respiration but does not affect the extremely precise alignment of spikes to diastolic blood pressure. Additionally, a novel method of utilizing point process observations to estimate an internal state process with partially linear dynamics was introduced. Separation of the linear components of the process model and reduction of the sampled space dimensionality improved the computational efficiency of the estimator. The method was tested on an established biophysical model by concurrently computing the dynamic electrical currents of a simulated neuron and estimating its conductance properties. Computational load reduction, improved accuracy, and applicability outside neuroscience establish the new technique as a valuable tool for decoding large dynamical systems with linear substructure and point process observations.
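    As background for the point-process machinery described above, the sketch below shows a generic scalar state-space point-process filter that tracks a slowly varying log firing rate from a binned spike train. It is a textbook-style illustration, not the thesis's estimator; the random-walk state model, noise levels, and toy rates are assumptions.

```python
# Minimal sketch of a scalar point-process (state-space) filter that
# tracks a slowly varying log firing rate from a binned spike train.
# Generic illustration only; not the thesis's partially linear estimator.
import numpy as np

def point_process_filter(dN, dt=0.001, q=1e-4, x0=0.0, p0=1.0):
    """dN: 0/1 spike counts per bin. Returns posterior mean of the log-rate."""
    x, p = x0, p0
    x_hat = np.empty(len(dN))
    for t, n in enumerate(dN):
        # Predict: random-walk model for the log-rate state.
        p_pred = p + q
        lam = np.exp(x)                       # conditional intensity (spikes/s)
        # Update with the point-process (Poisson) observation.
        p = 1.0 / (1.0 / p_pred + lam * dt)
        x = x + p * (n - lam * dt)
        x_hat[t] = x
    return x_hat

# Toy usage: spikes generated from a rate that steps from 5 Hz to 30 Hz.
rng = np.random.default_rng(2)
rate = np.r_[np.full(5000, 5.0), np.full(5000, 30.0)]
dN = (rng.random(10000) < rate * 0.001).astype(int)
est = np.exp(point_process_filter(dN))
print(est[:5000].mean(), est[5000:].mean())
```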

    Functional identification of biological neural networks using reservoir adaptation for point processes

    The complexity of biological neural networks does not allow their biophysical properties to be related directly to the dynamics of their electrical activity. We present a reservoir computing approach for functionally identifying a biological neural network, i.e., for building an artificial system that is functionally equivalent to the reference biological network. Employing feed-forward and recurrent networks with fading memory, i.e., reservoirs, we propose a point-process-based learning algorithm to train the internal parameters of the reservoir and the connectivity between the reservoir and the memoryless readout neurons. Specifically, the model is an Echo State Network (ESN) with leaky integrator neurons, whose individual leakage time constants are also adapted. The proposed ESN algorithm learns a predictive model of stimulus-response relations in in vitro and simulated networks, i.e., it models their response dynamics. Receiver Operating Characteristic (ROC) curve analysis indicates that these ESNs can imitate the response signal of a reference biological network. Reservoir adaptation improved the performance of an ESN over readout-only training methods in many cases. This also held for adaptive feed-forward reservoirs, which had no recurrent dynamics. We demonstrate the predictive power of these ESNs on various tasks with cultured and simulated biological neural networks.
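    For readers unfamiliar with the architecture, the sketch below implements a minimal Echo State Network with leaky-integrator units and a ridge-regression readout. It omits the paper's point-process learning rule and per-neuron leak adaptation; the reservoir size, spectral radius, and toy task are illustrative assumptions.

```python
# Minimal sketch of an Echo State Network with leaky-integrator units
# and ridge-regression readout training. Per-unit leak rates and the
# reservoir adaptation described in the paper are not implemented here.
import numpy as np

class LeakyESN:
    def __init__(self, n_in, n_res=200, leak=0.3, rho=0.9, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
        W = rng.standard_normal((n_res, n_res))
        W *= rho / np.max(np.abs(np.linalg.eigvals(W)))  # set spectral radius
        self.W, self.leak = W, leak
        self.W_out = None

    def _states(self, U):
        x = np.zeros(self.W.shape[0])
        X = np.empty((len(U), len(x)))
        for t, u in enumerate(U):
            pre = np.tanh(self.W_in @ u + self.W @ x)
            x = (1 - self.leak) * x + self.leak * pre    # leaky integration
            X[t] = x
        return X

    def fit(self, U, Y, ridge=1e-4):
        X = self._states(U)
        self.W_out = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]),
                                     X.T @ Y)

    def predict(self, U):
        return self._states(U) @ self.W_out

# Toy usage: learn to reproduce a delayed copy of a random input signal.
rng = np.random.default_rng(3)
U = rng.uniform(-1, 1, (2000, 1))
Y = np.roll(U, 3, axis=0)
esn = LeakyESN(n_in=1)
esn.fit(U[:1500], Y[:1500])
print(np.mean((esn.predict(U[1500:]) - Y[1500:]) ** 2))
```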

    Emergence of assortative mixing between clusters of cultured neurons

    The analysis of the activity of neuronal cultures is considered to be a good proxy of the functional connectivity of in vivo neuronal tissues. Thus, the functional complex network inferred from activity patterns is a promising way to unravel the interplay between structure and functionality of neuronal systems. Here, we monitor the spontaneous self-sustained dynamics in neuronal cultures formed by interconnected aggregates of neurons (clusters). The dynamics are characterized by the fast activation of groups of clusters in sequences termed bursts. Analysis of the time delays between cluster activations within the bursts allows the reconstruction of the directed functional connectivity of the network. We propose a method to statistically infer this connectivity and analyze the resulting properties of the associated complex networks. Surprisingly, and in contrast to what has been reported for many biological networks, the clustered neuronal cultures present assortative mixing in their connectivity, meaning that clusters prefer to link to other clusters with similar functional connectivity, as well as a rich-club core, which shapes a "connectivity backbone" in the network. These results indicate that the grouping of neurons and the assortative connectivity between clusters are intrinsic survival mechanisms of the culture.
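    As a small illustration of the assortativity measure discussed above, the sketch below builds a directed graph from a randomly generated, purely illustrative connectivity matrix and computes its out-in degree assortativity with networkx; positive values indicate the kind of assortative mixing reported for the cultures. The paper's statistical inference of the matrix from burst delays is not reproduced here.

```python
# Minimal sketch: given a directed functional connectivity matrix
# (e.g. one inferred from inter-cluster activation delays within bursts),
# quantify assortative mixing by degree. The matrix below is random and
# purely illustrative; it is not the paper's inference procedure.
import numpy as np
import networkx as nx

rng = np.random.default_rng(4)
n_clusters = 40
A = (rng.random((n_clusters, n_clusters)) < 0.15).astype(int)
np.fill_diagonal(A, 0)                       # no self-connections

G = nx.from_numpy_array(A, create_using=nx.DiGraph)

# Pearson correlation of degrees across the ends of each directed edge;
# positive values indicate assortative mixing (hubs connect to hubs).
r = nx.degree_assortativity_coefficient(G, x="out", y="in")
print(f"out-in degree assortativity: {r:.3f}")
```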

    Dynamics of embodied dissociated cortical cultures for the control of hybrid biological robots.

    The thesis presents a new paradigm for studying the importance of interactions between an organism and its environment using a combination of biology and technology: embodying cultured cortical neurons via robotics. From this platform, explanations of the emergent neural network properties leading to cognition are sought through detailed electrical observation of neural activity. By growing networks of neurons and glia over multi-electrode arrays (MEAs), which can be used both to stimulate and to record the activity of multiple neurons in parallel over months, long-term real-time two-way communication with the neural network becomes possible. A better understanding of the processes leading to biological cognition can, in turn, facilitate progress in understanding neural pathologies, designing neural prosthetics, and creating fundamentally different types of artificial cognition. Here, methods were first developed to reliably induce and detect neural plasticity using MEAs. This knowledge was then applied to construct sensory-motor mappings and training algorithms that produced adaptive goal-directed behavior. To paraphrase the results, nearly any stimulation could induce neural plasticity, while the inclusion of temporal and/or spatial information about neural activity was needed to identify plasticity. Interestingly, plasticity of action potential propagation in axons was observed. This notion runs counter to the dominant theories of neural plasticity, which focus on synaptic efficacies, and is suggestive of a vast and novel computational mechanism for learning and memory in the brain. Adaptive goal-directed behavior was achieved by using patterned training stimuli, contingent on behavioral performance, to sculpt the network into behaviorally appropriate functional states: network plasticity was not only induced, but could be customized. Clinically, understanding the relationships between electrical stimulation, neural activity, and the functional expression of neural plasticity could assist neuro-rehabilitation and the design of neuroprosthetics. In a broader context, the networks were also embodied with a robotic drawing machine exhibited in galleries throughout the world. This provided a forum to educate the public and critically discuss neuroscience, robotics, neural interfaces, cybernetics, bio-art, and the ethics of biotechnology. Ph.D. Committee Chair: Steve M. Potter; Committee Members: Eric Schumacher, Robert J. Butera, Stephan P. DeWeerth, Thomas D. DeMars
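    The closed-loop, performance-contingent training described above can be summarized by a very small control loop like the one sketched below. Every function in it is a hypothetical placeholder for the MEA stimulation/recording interface and the robotic embodiment; the loop only illustrates the contingency structure, not the thesis's actual training algorithms.

```python
# Minimal sketch of a closed-loop, performance-contingent training loop
# (stimulate, record, evaluate behavior, repeat). All functions below are
# hypothetical placeholders, not a real MEA or robot API.
import random

def deliver_patterned_stimulus(pattern):       # hypothetical MEA interface
    pass

def record_network_response(duration_s=0.1):   # hypothetical MEA interface
    return [random.random() for _ in range(60)]

def behavioral_error(response):                # hypothetical embodiment mapping
    return abs(sum(response) / len(response) - 0.5)

TRAINING_PATTERN, BACKGROUND_PATTERN = "patterned", "background"
for trial in range(100):
    deliver_patterned_stimulus(BACKGROUND_PATTERN)
    response = record_network_response()
    error = behavioral_error(response)
    # Training stimulation is delivered only when performance is poor,
    # i.e. contingent on behavior, which is the contingency described above.
    if error > 0.05:
        deliver_patterned_stimulus(TRAINING_PATTERN)
```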