
    Automatic Construction of Predictive Neuron Models through Large Scale Assimilation of Electrophysiological Data.

    We report on the construction of neuron models by assimilating electrophysiological data with large-scale constrained nonlinear optimization. The method implements an interior-point line search to determine parameters from the responses of zebra finch HVC neurons to intracellular current injections. We incorporated these parameters into a nine-ionic-channel conductance model to obtain complete models, which we then used to predict the state of the neuron under arbitrary current stimulation. Each model was validated by successfully predicting the dynamics of the membrane potential induced by 20-50 different current protocols. The dispersion of parameters extracted from different assimilation windows was studied. Differences in constraints from current protocols, stochastic variability in neuron output, and noise behave as a residual temperature which broadens the global minimum of the objective function to an ellipsoid domain whose principal axes follow an exponentially decaying distribution. The maximum likelihood expectation of the extracted parameters was found to provide an excellent approximation of the global minimum and yields highly consistent kinetics for both neurons studied. Large-scale assimilation absorbs the intrinsic variability of electrophysiological data over wide assimilation windows. It builds models automatically, treating all data as equal quantities and requiring minimal additional insight.
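
    The fitting strategy above can be illustrated on a toy problem. The sketch below (all names, parameter values, and the passive membrane model are illustrative assumptions, not from the paper) fits two parameters to a synthetic voltage trace by exhaustive search over a bounded parameter box, a crude stand-in for the interior-point optimizer and nine-channel conductance model used in the study:

    ```python
    import numpy as np

    def simulate(g_L, C, I, dt=0.1, E_L=-65.0):
        """Forward-Euler passive membrane: C dV/dt = -g_L*(V - E_L) + I(t)."""
        V, v = np.empty(len(I)), E_L
        for k, i_ext in enumerate(I):
            v += dt * (-g_L * (v - E_L) + i_ext) / C
            V[k] = v
        return V

    # Synthetic "recording" generated from known ground-truth parameters
    I = np.zeros(400)
    I[50:350] = 0.5                        # current step (arbitrary units)
    V_data = simulate(g_L=0.05, C=1.0, I=I)

    # Exhaustive search over a bounded parameter box, standing in for the
    # constrained interior-point line search applied to the full model
    g_grid = np.linspace(0.01, 0.1, 40)
    C_grid = np.linspace(0.5, 2.0, 40)
    err, g_hat, C_hat = min(
        (np.sum((simulate(g, c, I) - V_data) ** 2), g, c)
        for g in g_grid for c in C_grid
    )
    ```

    The step response constrains both parameters: the steady-state deflection fixes g_L and the rise time constant C/g_L fixes C, which is why a single protocol suffices in this toy setting while the paper needs many protocols for its far larger parameter space.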

    Nonlinear Dynamics of Neural Circuits


    Distributed online estimation of biophysical neural networks

    In this work, we propose a distributed adaptive observer for a class of networked systems inspired by biophysical conductance-based neural network models. Neural systems learn by adjusting intrinsic and synaptic weights in a distributed fashion, with neuronal membrane voltages carrying information from neighbouring neurons in the network. Using contraction analysis, we show that this learning principle can be used to design an adaptive observer based on a decentralized learning rule that greatly reduces the number of observer states required for consistent exponential convergence of parameter estimates. This novel design is relevant for biological, biomedical and neuromorphic applications.
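
    The idea of an adaptive observer driven by a voltage error can be illustrated on a minimal scalar example (the system, gains, and input below are illustrative assumptions, not the paper's network model): an unknown gain is recovered from the output error alone via a gradient adaptation rule.

    ```python
    import math

    a, theta = 1.0, 2.5      # known leak rate; unknown "synaptic" gain to estimate
    k, gamma = 5.0, 10.0     # output-injection gain and adaptation rate (assumed)
    dt, T = 1e-3, 50.0
    x, x_hat, theta_hat = 0.0, 0.0, 0.0

    for n in range(int(T / dt)):
        u = math.sin(n * dt)           # persistently exciting input
        e = x - x_hat                  # measured "voltage" error
        x += dt * (-a * x + theta * u)                        # true system
        x_hat += dt * (-a * x_hat + theta_hat * u + k * e)    # adaptive observer
        theta_hat += dt * gamma * u * e                       # gradient adaptation rule
    ```

    With a persistently exciting input, theta_hat converges to the true gain; the paper's contribution is showing, via contraction analysis, how such rules can be decentralized across a conductance-based network with far fewer observer states.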

    Doctor of Philosophy

    The goal of this work is to construct a simulation toolset for studying and improving neuroprosthetic devices for restoring neural functionality to patients with neural disorders or diseases. This involves the construction and validation of coupled electromagnetic-neural computational models of retina and hippocampus, compiling knowledge from a broad multidisciplinary background into a single computational platform, with features specific to implant electronics, bulk tissue, cellular and neural network behavior, and diseased tissue. The application of a retina prosthetic device for restoring partial vision to patients blinded by degenerative diseases was first considered. This began with the conceptualization of the retina model, translating features of a connectome, implant electronics, and medical images into a computational model that was "degenerated." It was then applied to the design of novel electrode geometries for increasing the resolution of induced visual percepts, and of stimulation waveform shapes for increasing control of induced neural activity in diseased retina. Throughout this process, features of the simulation toolset itself were modified to increase the precision of the results, leading to a novel method for computing effective bulk resistivity for use in such multiscale modeling. This simulation strategy was then extended to the application of a hippocampus prosthetic device, which has been proposed to restore and/or enhance memory in patients with memory disorders such as Alzheimer's disease or dementia. Using this multiscale modeling approach, we are able to provide recommendations for electrode geometry, placement, and stimulation magnitude for increased safety and efficacy in future experimental trials.
In an attempt to model neural activity in dense hippocampal tissue, a simulation platform was constructed to consider the effects that the electrical activity of neural networks has on the extracellular electric field, and therefore on neighboring cells, further increasing the predictive ability of the proposed methodology for modeling electrical stimulation of neural tissue.
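
A minimal sketch of the field computation underlying such multiscale coupling, assuming an infinite homogeneous volume conductor (the dissertation's effective bulk resistivity method would replace the illustrative constant below, and the source positions and currents are hypothetical):

```python
import math

RHO = 3.0  # ohm*m, assumed bulk resistivity of neural tissue (illustrative)

def extracellular_v(sources, p):
    """Potential at point p from point current sources (x, y, z, I) in an
    infinite homogeneous volume conductor: V = sum_i RHO*I_i / (4*pi*r_i)."""
    return sum(RHO * i_amp / (4 * math.pi * math.dist(p, (sx, sy, sz)))
               for sx, sy, sz, i_amp in sources)

# Two equal and opposite membrane currents form a crude current dipole, the
# elementary source whose superposition gives a network's extracellular field
srcs = [(0.0, 0.0, 0.0, 1e-9),        # 1 nA source (coordinates in m)
        (0.0, 0.0, 100e-6, -1e-9)]    # matching 1 nA sink 100 um away
```

Summing such contributions from every compartment of every simulated neuron is what couples network activity back into the extracellular field that neighboring cells experience.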

    Spike-Threshold Adaptation Predicted by Membrane Potential Dynamics In Vivo

    Neurons encode information in sequences of spikes, which are triggered when their membrane potential crosses a threshold. In vivo, the spike threshold displays large variability, suggesting that threshold dynamics have a profound influence on how the combined input of a neuron is encoded in its spiking. Threshold variability could be explained by adaptation to the membrane potential. However, it could also be the case that most threshold variability reflects noise and processes other than threshold adaptation. Here, we investigated threshold variation in auditory neuron responses recorded in vivo in barn owls. We found that the spike threshold is quantitatively predicted by a model in which the threshold adapts, tracking the membrane potential on a short timescale. As a result, in these neurons, slow voltage fluctuations do not contribute to spiking because they are filtered out by threshold adaptation. More importantly, these neurons can only respond to input spikes arriving together on a millisecond timescale. These results demonstrate that fast adaptation to the membrane potential captures spike-threshold variability in vivo.
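
    The adaptation mechanism described above can be sketched as a first-order filter on the voltage. In the toy model below (time constants, coefficients, and the input trace are illustrative, not the fitted in vivo values), the threshold tracks a slow depolarizing ramp so that only a brief fast transient produces a crossing:

    ```python
    import numpy as np

    def adaptive_threshold(V, dt=0.1, tau=5.0, alpha=0.5, theta0=-50.0, Vi=-60.0):
        """Threshold relaxing toward theta0 + alpha*max(V - Vi, 0) with time
        constant tau (ms): depolarization above Vi drags the threshold up."""
        theta, th = np.empty_like(V), theta0
        for k, v in enumerate(V):
            th += dt * (theta0 + alpha * max(v - Vi, 0.0) - th) / tau
            theta[k] = th
        return theta

    t = np.arange(0.0, 200.0, 0.1)                       # ms
    slow = -65.0 + 10.0 * t / 200.0                      # slow ramp: tracked
    fast = np.where((t > 150) & (t < 151), 20.0, 0.0)    # 1 ms fast transient
    V = slow + fast
    theta = adaptive_threshold(V)
    crossed = V > theta    # only the fast transient outruns the threshold
    ```

    Because tau is short relative to the ramp, the threshold stays just above the slow component of the voltage, implementing exactly the high-pass filtering and coincidence-detection behavior the abstract describes.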

    Training deep neural density estimators to identify mechanistic models of neural dynamics

    Mechanistic modeling in neuroscience aims to explain observed phenomena in terms of underlying causes. However, determining which model parameters agree with complex and stochastic neural data presents a significant challenge. We address this challenge with a machine learning tool which uses deep neural density estimators, trained using model simulations, to carry out Bayesian inference and retrieve the full space of parameters compatible with raw data or selected data features. Our method is scalable in parameters and data features, and can rapidly analyze new data after initial training. We demonstrate the power and flexibility of our approach on receptive fields, ion channels, and Hodgkin-Huxley models. We also characterize the space of circuit configurations giving rise to rhythmic activity in the crustacean stomatogastric ganglion, and use these results to derive hypotheses for underlying compensation mechanisms. Our approach will help close the gap between data-driven and theory-driven models of neural dynamics.
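
    As a rough illustration of the posterior-retrieval idea, the sketch below uses rejection ABC on a toy simulator (the simulator, prior, and tolerance are illustrative assumptions). This is a deliberately simpler stand-in for the paper's approach, which instead trains neural density estimators on simulated (parameter, data) pairs and thereby amortizes inference across new observations:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def simulator(theta):
        """Toy stand-in for a mechanistic model: the summary statistic is the
        parameter plus observation noise. A real use would run e.g. a
        Hodgkin-Huxley simulation and extract spike-train features."""
        return theta + rng.normal(0.0, 0.1)

    x_obs = 1.2    # "observed" summary statistic

    # Rejection ABC: keep prior draws whose simulations land near the data.
    # A trained density estimator replaces this accept/reject step and
    # scales to many parameters and data features.
    prior_draws = rng.uniform(-3.0, 3.0, size=20000)
    sims = np.array([simulator(th) for th in prior_draws])
    posterior = prior_draws[np.abs(sims - x_obs) < 0.05]
    ```

    The accepted draws approximate the full set of parameters compatible with the observation, the same object the paper's density estimators return without rerunning any simulations at inference time.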