
    Action potential energy efficiency varies among neuron types in vertebrates and invertebrates.

    The initiation and propagation of action potentials (APs) places high demands on the energetic resources of neural tissue. Each AP forces ATP-driven ion pumps to work harder to restore the ionic concentration gradients, thus consuming more energy. Here, we ask whether the ionic currents underlying the AP can be predicted theoretically from the principle of minimum energy consumption. A long-held supposition that APs are energetically wasteful, based on theoretical analysis of the squid giant axon AP, has recently been overturned by studies that measured the currents contributing to the AP in several mammalian neurons. In the single-compartment models studied here, AP energy consumption varies greatly among vertebrate and invertebrate neurons, with several mammalian neuron models using close to the capacitive minimum of energy needed. Strikingly, energy consumption can increase by more than ten-fold simply by changing the overlap of the Na+ and K+ currents during the AP without changing the AP's shape. As a consequence, the height and width of the AP are poor predictors of energy consumption. In the Hodgkin–Huxley model of the squid axon, optimizing the kinetics or number of Na+ and K+ channels can whittle down the number of ATP molecules needed for each AP by a factor of four. In contrast to the squid AP, the temporal profile of the currents underlying the APs of some mammalian neurons is nearly perfectly matched to the optimized properties of ionic conductances so as to minimize the ATP cost.
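    As a rough illustration of the kind of accounting this abstract describes, the Python sketch below (not the paper's code) triggers a single AP in the standard Hodgkin–Huxley squid-axon model, integrates the Na+ current to obtain the total Na+ charge entry, compares it with the capacitive minimum C·ΔV, and converts the charge to an ATP count under the usual assumption that the Na+/K+ pump extrudes three Na+ ions per ATP. The stimulus amplitude, time step and simulation window are arbitrary choices made for the example.

```python
# Minimal sketch (not the paper's code): estimate the ATP cost of a single action
# potential in the standard Hodgkin-Huxley squid-axon model by integrating the
# Na+ charge entry and assuming the Na+/K+ pump hydrolyses one ATP per 3 Na+.
import numpy as np

C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3          # uF/cm^2, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.387                # mV

def a_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def b_m(V): return 4.0 * np.exp(-(V + 65.0) / 18.0)
def a_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def b_h(V): return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def a_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def b_n(V): return 0.125 * np.exp(-(V + 65.0) / 80.0)

dt, T = 0.01, 20.0                                   # ms
V, m, h, n = -65.0, 0.053, 0.596, 0.317              # approximate resting state
V_rest, V_peak, Q_Na = V, V, 0.0                     # Q_Na: integrated Na+ charge, nC/cm^2

for step in range(int(T / dt)):
    t = step * dt
    I_ext = 20.0 if 1.0 <= t < 3.0 else 0.0          # brief suprathreshold pulse, uA/cm^2
    I_Na = g_Na * m ** 3 * h * (V - E_Na)
    I_K = g_K * n ** 4 * (V - E_K)
    I_L = g_L * (V - E_L)
    m += dt * (a_m(V) * (1 - m) - b_m(V) * m)        # forward-Euler gate updates
    h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
    n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
    V += dt * (I_ext - I_Na - I_K - I_L) / C_m
    Q_Na += abs(I_Na) * dt                           # uA/cm^2 * ms = nC/cm^2
    V_peak = max(V_peak, V)

Q_min = C_m * (V_peak - V_rest)                      # capacitive minimum charge, nC/cm^2
atp_per_cm2 = Q_Na * 1e-9 / (3.0 * 1.602e-19)        # 3 Na+ extruded per ATP hydrolysed
print(f"Na+ charge {Q_Na:.0f} nC/cm^2 vs capacitive minimum {Q_min:.0f} nC/cm^2 "
      f"(excess factor {Q_Na / Q_min:.1f}); ~{atp_per_cm2:.2e} ATP per cm^2 per AP")
```

    With these settings the integrated Na+ charge should come out several-fold above the capacitive minimum, consistent with the squid axon's well-known inefficiency and with the roughly four-fold saving the abstract attributes to optimized channel kinetics and numbers.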

    Probing the dynamics of identified neurons with a data-driven modeling approach

    In controlling animal behavior the nervous system has to perform within the operational limits set by the requirements of each specific behavior. The implications for the corresponding range of suitable network, single-neuron, and ion-channel properties have remained elusive. In this article we approach the question of how tightly constrained the properties of neuronal systems may be at the neuronal level. We used large data sets of the activity of isolated invertebrate identified cells and built an accurate conductance-based model for this cell type using customized automated parameter estimation techniques. By direct inspection of the data we found that the variability of the neurons is larger when they are isolated from the circuit than when in the intact system. Furthermore, the responses of the neurons to perturbations appear to be more consistent than their autonomous behavior under stationary conditions. In the developed model, the constraints on different parameters that enforce appropriate model dynamics vary widely, from some very tightly controlled parameters to others that are almost arbitrary. The model also allows us to predict the effect of blocking selected ionic currents and to prove that the origin of irregular dynamics in the neuron model is proper chaoticity, and that this chaoticity is typical in an appropriate sense. Our results indicate that data-driven models are useful tools for the in-depth analysis of neuronal dynamics. The better consistency of responses to perturbations, in the real neurons as well as in the model, suggests a paradigm shift away from measuring autonomous dynamics alone and towards protocols of controlled perturbations. Our predictions for the impact of channel blockers on the neuronal dynamics and the proof of chaoticity underscore the wide scope of our approach.
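    A hedged sketch of the parameter-estimation step described above: the authors' customized techniques are not specified in this abstract, so the example below uses SciPy's differential evolution to recover the conductances and leak reversal of a deliberately simple conductance-based toy neuron from a noisy surrogate "recording". The toy model, its parameter values and the pointwise trace cost are placeholders, not the published method.

```python
# Minimal sketch (a stand-in, not the authors' customized pipeline): automated
# parameter estimation for a conductance-based model by minimizing the mismatch
# between simulated and "recorded" voltage traces. A toy leak + delayed-rectifier
# model replaces the full identified-neuron model; g_K, g_L and E_L are fitted.
import numpy as np
from scipy.optimize import differential_evolution

dt, T = 0.05, 100.0                                  # ms
t = np.arange(0.0, T, dt)
I_inj = np.where((t > 20.0) & (t < 80.0), 2.0, 0.0)  # current step, uA/cm^2

def simulate(params):
    g_K, g_L, E_L = params
    C_m, E_K = 1.0, -90.0
    V, n = E_L, 0.0
    trace = np.empty_like(t)
    for i, I in enumerate(I_inj):
        n_inf = 1.0 / (1.0 + np.exp(-(V + 30.0) / 10.0))   # toy K+ activation curve
        n += dt * (n_inf - n) / 5.0                          # tau_n = 5 ms
        V += dt * (I - g_K * n * (V - E_K) - g_L * (V - E_L)) / C_m
        trace[i] = V
    return trace

rng = np.random.default_rng(1)
true_params = (3.0, 0.2, -65.0)
recorded = simulate(true_params) + rng.normal(0.0, 0.5, t.size)   # surrogate recording

def cost(params):                  # pointwise trace error; one of many possible objectives
    return float(np.mean((simulate(params) - recorded) ** 2))

bounds = [(0.1, 8.0), (0.01, 1.0), (-80.0, -50.0)]
fit = differential_evolution(cost, bounds, maxiter=30, seed=0, polish=False)
print("estimated (g_K, g_L, E_L):", np.round(fit.x, 2), "true:", true_params)
```

    In practice one would fit many traces and richer features (spike shapes, responses to controlled perturbations) rather than a single mean-squared trace error.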

    Quantitative Genetics and Functional-Structural Plant Growth Models: Simulation of Quantitative Trait Loci Detection for Model Parameters and Application to Potential Yield Optimization

    Background and Aims: Prediction of phenotypic traits from new genotypes under untested environmental conditions is crucial to build simulations of breeding strategies to improve target traits. Although the plant response to environmental stresses is characterized by both architectural and functional plasticity, recent attempts to integrate biological knowledge into genetics models have mainly concerned specific physiological processes or crop models without architecture, and thus may prove limited when studying genotype × environment interactions. Consequently, this paper presents a simulation study introducing genetics into a functional-structural growth model, which gives access to more fundamental traits for quantitative trait loci (QTL) detection and thus to promising tools for yield optimization. Methods: The GreenLab model was selected as a reasonable choice to link growth-model parameters to QTL. Virtual genes and virtual chromosomes were defined to build a simple genetic model that drove the settings of the species-specific parameters of the model. The QTL Cartographer software was used to study QTL detection of simulated plant traits. A genetic algorithm was implemented to define the ideotype for yield maximization based on the model parameters and the associated allelic combination. Key Results and Conclusions: By keeping the environmental factors constant and using a virtual population with a large number of individuals generated by a Mendelian genetic model, results for an ideal case could be simulated. Virtual QTL detection was compared between the case where the traits were phenotypic, such as cob weight, and the case where they were model parameters, and was found to be more accurate in the latter case. The practical interest of this approach is illustrated by calculating the parameters (and the corresponding genotype) associated with yield optimization of a GreenLab maize model. The paper discusses the potential of GreenLab to represent environment × genotype interactions, in particular through its main state variable, the ratio of biomass supply over demand.
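    To make the gene-to-parameter-to-yield chain concrete, here is a hypothetical sketch (not the GreenLab/QTL Cartographer pipeline): virtual biallelic loci map additively onto two growth-model-like parameters, a made-up yield surrogate stands in for the GreenLab source-sink simulation, and a plain genetic algorithm searches allelic combinations for the yield-maximizing ideotype. All names and numbers below are illustrative.

```python
# Minimal sketch (hypothetical, not the GreenLab/QTL Cartographer pipeline): a
# virtual genotype of biallelic loci maps additively onto two growth-model-like
# parameters, a made-up yield surrogate replaces the source-sink simulation, and
# a simple genetic algorithm searches for the yield-maximizing ideotype.
import numpy as np

rng = np.random.default_rng(0)
N_LOCI, POP, GENERATIONS = 10, 60, 40

def genotype_to_params(g):
    # Additive genetic model: allele counts (0, 1 or 2) at the first and second
    # half of the loci set a sink-strength-like and a resistance-like parameter.
    sink = 0.5 + 0.3 * g[:N_LOCI // 2].sum()
    resistance = 5.0 + 1.0 * g[N_LOCI // 2:].sum()
    return sink, resistance

def yield_proxy(g):
    # Illustrative surrogate with an interior optimum; the real objective would
    # be cob weight computed by the functional-structural growth model.
    sink, resistance = genotype_to_params(g)
    supply = 80.0 * np.exp(-((resistance - 8.0) / 4.0) ** 2)
    return supply * sink * np.exp(-0.4 * sink)

pop = rng.integers(0, 3, size=(POP, N_LOCI))           # allele counts per locus
for _ in range(GENERATIONS):
    fitness = np.array([yield_proxy(g) for g in pop])
    parents = pop[np.argsort(fitness)[-POP // 2:]]      # truncation selection
    children = []
    while len(children) < POP - len(parents):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, N_LOCI)
        child = np.concatenate([a[:cut], b[cut:]])      # one-point crossover
        flip = rng.random(N_LOCI) < 0.05                # mutation
        child[flip] = rng.integers(0, 3, size=int(flip.sum()))
        children.append(child)
    pop = np.vstack([parents, np.array(children)])

best = pop[np.argmax([yield_proxy(g) for g in pop])]
print("ideotype allele counts:", best, "-> parameters:", genotype_to_params(best))
```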

    Automated optimization of a reduced layer 5 pyramidal cell model based on experimental data.

    The construction of compartmental models of neurons involves tuning a set of parameters to make the model neuron behave as realistically as possible. While the parameter space of single-compartment models or other simple models can be exhaustively searched, the introduction of dendritic geometry causes the number of parameters to balloon. As parameter tuning is a daunting and time-consuming task when performed manually, and as only optimized models can capture the behavior of real neurons, reliable methods for automatically optimizing compartmental models are desperately needed. Here we present a three-step strategy to automatically build reduced models of layer 5 pyramidal neurons that closely reproduce experimental data. First, we reduce the pattern of dendritic branches of a detailed model to a set of equivalent primary dendrites. Second, the ion channel densities are estimated using a multi-objective optimization strategy to fit the voltage traces recorded under two conditions: with and without the apical dendrite occluded by pinching. Finally, we tune dendritic calcium channel parameters to model the initiation of dendritic calcium spikes and the coupling between soma and dendrite. More generally, this new method can be applied to construct families of models of different neuron types, with applications ranging from the study of information processing in single neurons to realistic simulations of large-scale network dynamics.
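    The second step of the strategy, multi-objective fitting of channel densities against two recording conditions, can be pictured with the hedged sketch below. The two error functions are toy surrogates for the model-versus-experiment trace errors with the apical dendrite intact and pinched, and the Pareto filter keeps the candidate density sets that no other candidate beats on both objectives; the optimizer and objectives actually used in the paper may differ.

```python
# Minimal sketch (not the authors' optimizer): a multi-objective fitness for
# channel-density fitting, with one error per recording condition (apical
# dendrite intact vs. pinched), and simple Pareto (non-dominated) filtering of
# candidate parameter sets. The error functions are toy surrogates.
import numpy as np

rng = np.random.default_rng(2)

def error_intact(params):      # surrogate objective 1: fit with the apical dendrite intact
    g_na, g_kv, g_ca = params
    return (g_na - 40.0) ** 2 + 0.5 * (g_kv - 20.0) ** 2 + 0.1 * g_ca ** 2

def error_pinched(params):     # surrogate objective 2: fit with the apical dendrite occluded
    g_na, g_kv, g_ca = params
    return (g_na - 35.0) ** 2 + (g_kv - 25.0) ** 2

def pareto_front(points):
    # Keep candidates that no other candidate dominates in both objectives.
    keep = []
    for i, p in enumerate(points):
        dominated = any(np.all(q <= p) and np.any(q < p)
                        for j, q in enumerate(points) if j != i)
        if not dominated:
            keep.append(i)
    return keep

candidates = rng.uniform([10, 5, 0], [80, 50, 10], size=(200, 3))   # (g_Na, g_Kv, g_Ca)
scores = np.array([[error_intact(c), error_pinched(c)] for c in candidates])
front = pareto_front(scores)
print(f"{len(front)} non-dominated channel-density sets out of {len(candidates)}")
print("example trade-off point:", np.round(candidates[front[0]], 1),
      "errors:", np.round(scores[front[0]], 1))
```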

    A 3D discrete model of the diaphragm and human trunk

    In this paper, a 3D discrete model is presented to model the movements of the trunk during breathing. In this model, objects are represented by physical particles on their contours. A simple notion of force generated by a linear actuator allows the model to create forces on each particle by way of a geometrical attractor. Tissue elasticity and contractility are modeled by local shape memory and muscular-fiber attractors. A specific dynamic MRI study was used to build a simple trunk model composed of three compartments: lungs, diaphragm and abdomen. This model was registered on the real geometry. Simulation results were compared qualitatively as well as quantitatively to the experimental data, in terms of volume and geometry. A good correlation was obtained between the model and the real data. Thanks to this model, pathologies such as hemidiaphragm paralysis can also be simulated.
    Comment: published in "Lung Modelling", France (2006).
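    One possible reading of the particle-and-attractor scheme, sketched under assumptions of our own (a 2D contour, made-up stiffness, damping and attractor values, not the published implementation): each contour particle is pulled toward a geometric attractor, standing in for the linear-actuator force, and back toward its rest position by a shape-memory spring, and the enclosed area serves as a crude volume proxy.

```python
# Minimal sketch (an interpretation, not the published implementation): contour
# particles driven toward a geometric attractor (standing in for the "linear
# actuator" force) plus a local shape-memory spring toward each particle's rest
# position, integrated with damped explicit Euler steps.
import numpy as np

N, dt, steps = 40, 0.01, 400
theta = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
rest = np.stack([np.cos(theta), 0.5 * np.sin(theta)], axis=1)  # resting contour (toy slice)
pos = rest.copy()
vel = np.zeros_like(pos)

attractor = np.array([0.0, -0.4])    # geometric attractor: contraction pulls the contour down
k_attract, k_shape, damping, mass = 4.0, 10.0, 2.0, 1.0

for _ in range(steps):
    f_attract = k_attract * (attractor - pos)   # actuator-like force toward the attractor
    f_shape = k_shape * (rest - pos)            # shape memory: elastic return to rest geometry
    acc = (f_attract + f_shape - damping * vel) / mass
    vel += dt * acc
    pos += dt * vel

# Crude "volume" proxy: area enclosed by the deformed contour (shoelace formula).
x, y = pos[:, 0], pos[:, 1]
area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
print(f"enclosed area after contraction: {area:.3f} (rest ellipse area: {np.pi * 0.5:.3f})")
```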

    Cardiac cell modelling: Observations from the heart of the cardiac physiome project

    In this manuscript we review the state of cardiac cell modelling in the context of international initiatives such as the IUPS Physiome and Virtual Physiological Human Projects, which aim to integrate computational models across scales and physics. In particular we focus on the relationship between experimental data and model parameterisation across a range of model types and cellular physiological systems. Finally, in the context of parameter identification and model reuse within the Cardiac Physiome, we suggest some future priority areas for this field.

    Neuronal Synchronization Can Control the Energy Efficiency of Inter-Spike Interval Coding

    The role of synchronous firing in sensory coding and cognition remains controversial. While studies focusing on its mechanistic consequences in attentional tasks suggest that synchronization dynamically boosts sensory processing, others have failed to find significant synchronization levels in such tasks. We attempt to understand both lines of evidence within a coherent theoretical framework. We conceptualize synchronization as an independent control parameter to study how the postsynaptic neuron transmits the average firing activity of a presynaptic population in the presence of synchronization. We apply the Berger-Levy theory of energy-efficient information transmission to interpret simulations of a Hodgkin-Huxley-type postsynaptic neuron model, in which we varied the firing rate and synchronization level in the presynaptic population independently. We find that for a fixed presynaptic firing rate the simulated postsynaptic interspike interval distribution depends on the synchronization level and is well described by a generalized extreme value distribution. For synchronization levels between 15% and 50%, we find that the mutual information per unit cost, computed with the optimal distribution of presynaptic firing rates, is maximized at a synchronization level of ~30%. These results suggest that the statistics and energy efficiency of neuronal communication channels, through which the input rate is communicated, can be dynamically adapted by the synchronization level.
    Comment: 47 pages, 14 figures, 2 tables
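    A small sketch of the analysis pipeline implied by the abstract, using surrogate data rather than the paper's Hodgkin-Huxley simulations: interspike intervals are drawn from gamma distributions as placeholders for two synchronization levels, a generalized extreme value distribution is fitted with SciPy, and a crude "bits per ATP" ratio is formed from the fitted entropy and an assumed metabolic cost per interval. The ISI generators and cost numbers are assumptions, not values from the paper.

```python
# Minimal sketch (surrogate data, not the paper's simulations): fit a generalized
# extreme value (GEV) distribution to interspike intervals and form a crude
# "information per unit cost" ratio, i.e. ISI entropy in bits divided by an
# assumed metabolic cost per interval.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Surrogate ISIs (ms) for two presynaptic synchronization levels: more synchrony
# is mimicked here by shorter, more regular intervals.
isi_low_sync = rng.gamma(shape=2.0, scale=15.0, size=5000)
isi_high_sync = rng.gamma(shape=6.0, scale=3.0, size=5000)

def efficiency(isi, atp_per_spike=1e8, atp_per_ms_rest=1e6):
    # GEV fit (SciPy parameterization: c = shape, plus loc and scale).
    c, loc, scale = stats.genextreme.fit(isi)
    # Differential entropy of the fitted GEV, converted from nats to bits.
    h_bits = stats.genextreme.entropy(c, loc=loc, scale=scale) / np.log(2.0)
    # Assumed energy per interval: one spike plus resting cost over the mean ISI.
    cost = atp_per_spike + atp_per_ms_rest * isi.mean()
    return (c, loc, scale), h_bits / cost

for name, isi in [("low sync", isi_low_sync), ("high sync", isi_high_sync)]:
    params, bits_per_atp = efficiency(isi)
    print(f"{name}: GEV shape={params[0]:.2f}, bits per ATP ~ {bits_per_atp:.2e}")
```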