
    Effective Stimuli for Constructing Reliable Neuron Models

    The rich dynamical nature of neurons poses major conceptual and technical challenges for unraveling their nonlinear membrane properties. Traditionally, various current waveforms have been injected at the soma to probe neuron dynamics, but the rationale for selecting specific stimuli has never been rigorously justified. The present experimental and theoretical study proposes a novel framework, inspired by learning theory, for objectively selecting the stimuli that best unravel a neuron's dynamics. The efficacy of stimuli is assessed by their ability to constrain the parameter space of biophysically detailed conductance-based models that faithfully replicate the neuron's dynamics, as attested by their ability to generalize well to the neuron's response to novel experimental stimuli. We used this framework to evaluate a variety of stimuli in different cortical neuron types, ages, and animals. Despite their simplicity, a set of stimuli consisting of step and ramp current pulses outperforms synaptic-like noisy stimuli in revealing the dynamics of these neurons. The general framework that we propose paves the way for defining, evaluating, and standardizing effective electrical probing of neurons, and will thus lay the foundation for a much deeper understanding of the electrical nature of these highly sophisticated nonlinear devices and of the neuronal networks that they compose.
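
    As an illustration of the stimulus-ranking logic described above, here is a minimal sketch: fit a model to the response evoked by each candidate training stimulus, then score the stimulus by how well the fitted model generalizes to a held-out novel stimulus. The toy model, `simulate`, `fit`, and all parameter values are illustrative assumptions, not the authors' code.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative stand-in for a conductance-based model: maps parameters
# and an injected current waveform to a "voltage" trace.
def simulate(params, stimulus):
    g, tau = params
    v = np.zeros_like(stimulus)
    for t in range(1, len(stimulus)):
        v[t] = v[t - 1] + (stimulus[t] - g * v[t - 1]) / tau
    return v

def fit(train_stimulus, train_response):
    """Fit model parameters to the response evoked by one stimulus."""
    loss = lambda p: np.mean((simulate(p, train_stimulus) - train_response) ** 2)
    return minimize(loss, x0=[1.0, 10.0], method="Nelder-Mead").x

def generalization_error(params, test_stimulus, test_response):
    """Score a stimulus by how well the fitted model predicts a NOVEL stimulus."""
    return np.mean((simulate(params, test_stimulus) - test_response) ** 2)

# Rank candidate training stimuli (e.g., step vs. noise) by the
# generalization error of the models they constrain.
rng = np.random.default_rng(0)
t = np.arange(1000)
candidates = {"step": (t > 300).astype(float),
              "noise": rng.normal(0.5, 0.3, t.size)}
test_stim = np.clip(t / 1000.0, 0, 1)  # held-out ramp stimulus
true_params = (0.8, 12.0)
test_resp = simulate(true_params, test_stim)
for name, stim in candidates.items():
    params = fit(stim, simulate(true_params, stim))
    print(name, generalization_error(params, test_stim, test_resp))
```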

    Automated optimization of a reduced layer 5 pyramidal cell model based on experimental data.

    The construction of compartmental models of neurons involves tuning a set of parameters to make the model neuron behave as realistically as possible. While the parameter space of single-compartment models or other simple models can be exhaustively searched, the introduction of dendritic geometry causes the number of parameters to balloon. As parameter tuning is a daunting and time-consuming task when performed manually, reliable methods for automatically optimizing compartmental models are desperately needed, as only optimized models can capture the behavior of real neurons. Here we present a three-step strategy to automatically build reduced models of layer 5 pyramidal neurons that closely reproduce experimental data. First, we reduce the pattern of dendritic branches of a detailed model to a set of equivalent primary dendrites. Second, the ion channel densities are estimated using a multi-objective optimization strategy to fit the voltage traces recorded under two conditions: with and without the apical dendrite occluded by pinching. Finally, we tune dendritic calcium channel parameters to model the initiation of dendritic calcium spikes and the coupling between soma and dendrite. More generally, this new method can be applied to construct families of models of different neuron types, with applications ranging from the study of information processing in single neurons to realistic simulations of large-scale network dynamics.
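
    The core of the second step is multi-objective optimization: there is no single best parameter set, only a Pareto front of trade-offs between the fit errors under the two recording conditions. The sketch below illustrates that idea with a toy random search in place of the evolutionary algorithms typically used for such fitting; the error functions and density ranges are illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(1)

def errors(channel_densities):
    """Hypothetical objectives: voltage-trace error with the apical
    dendrite intact (e1) and with it occluded by pinching (e2)."""
    gna, gk, gca = channel_densities
    e1 = (gna - 0.12) ** 2 + (gk - 0.03) ** 2 + (gca - 0.002) ** 2
    e2 = (gna - 0.10) ** 2 + (gk - 0.04) ** 2
    return np.array([e1, e2])

def dominates(a, b):
    """a Pareto-dominates b if it is no worse on all objectives
    and strictly better on at least one."""
    return np.all(a <= b) and np.any(a < b)

# Random search: sample candidate channel-density vectors and keep
# the non-dominated set (the Pareto front).
candidates = rng.uniform([0, 0, 0], [0.3, 0.1, 0.01], size=(500, 3))
scores = np.array([errors(c) for c in candidates])
front = [i for i in range(len(candidates))
         if not any(dominates(scores[j], scores[i]) for j in range(len(candidates)))]
print(f"{len(front)} non-dominated parameter sets")
```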

    Homogeneous and Narrow Bandwidth of Spike Initiation in Rat L1 Cortical Interneurons

    Cortical layer 1 (L1) contains a population of GABAergic interneurons, considered a key component of information integration, processing, and relaying in neocortical networks. In fact, L1 interneurons combine top-down information with feed-forward sensory inputs onto layer 2/3 and 5 pyramidal cells (PCs), while filtering their incoming signals. Despite the importance of L1 for emergent network phenomena, little is known about the dynamics of spike initiation and the encoding properties of its neurons. Using acute brain tissue slices from the rat neocortex, combined with the analysis of an existing database of model neurons, we investigated the dynamical transfer properties of these cells by sampling an entire population of known "electrical classes" and comparing experiments and model predictions. We found the bandwidth of spike initiation to be significantly narrower than in L2/3 and 5 PCs, with values below 100 cycles/s, but without significant heterogeneity in the cell response properties across distinct electrical types. The upper limit of the neuronal bandwidth was significantly correlated with the mean firing rate, as anticipated by theoretical studies but not previously reported for PCs. At high spectral frequencies, the magnitude of the neuronal response attenuated as a power law, with an exponent significantly smaller than that reported for pyramidal neurons and reminiscent of the dynamics of a "leaky" integrate-and-fire model of spike initiation. Finally, most of our in vitro results quantitatively matched the numerical simulations of the models, further contributing to the independent validation of these models against novel experimental data.
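
    The kind of transfer-function measurement referred to above can be illustrated with a leaky integrate-and-fire neuron: drive it with noisy current carrying a small sinusoidal modulation, and read out the spike response at the modulation frequency. This is a minimal sketch with illustrative parameter values, not the study's protocol.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, T = 1e-4, 20.0                     # time step and duration (s)
t = np.arange(0, T, dt)
tau_m, v_th, v_reset = 0.02, 1.0, 0.0  # membrane time constant (s), threshold, reset
f = 40.0                               # modulation frequency (Hz)

# Noisy input current with a small sinusoidal modulation.
I = 1.2 + 0.1 * np.sin(2 * np.pi * f * t) + 0.5 * rng.normal(size=t.size)

# Leaky integrate-and-fire dynamics (forward-Euler integration).
v, spikes = 0.0, np.zeros(t.size)
for i in range(t.size):
    v += dt / tau_m * (-v + I[i])
    if v >= v_th:
        spikes[i], v = 1.0, v_reset

rate = spikes.sum() / T
# Response magnitude at the stimulus frequency: the Fourier component
# of the spike train at f, normalized by the total spike count.
r_f = np.abs(np.sum(spikes * np.exp(-2j * np.pi * f * t))) / spikes.sum()
print(f"mean rate {rate:.1f} Hz, normalized response at {f:.0f} Hz: {r_f:.3f}")
```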

    Training deep neural density estimators to identify mechanistic models of neural dynamics

    Mechanistic modeling in neuroscience aims to explain observed phenomena in terms of underlying causes. However, determining which model parameters agree with complex and stochastic neural data presents a significant challenge. We address this challenge with a machine learning tool that uses deep neural density estimators, trained on model simulations, to carry out Bayesian inference and retrieve the full space of parameters compatible with raw data or selected data features. Our method is scalable in parameters and data features, and can rapidly analyze new data after initial training. We demonstrate the power and flexibility of our approach on receptive fields, ion channels, and Hodgkin-Huxley models. We also characterize the space of circuit configurations giving rise to rhythmic activity in the crustacean stomatogastric ganglion, and use these results to derive hypotheses about underlying compensation mechanisms. Our approach will help close the gap between data-driven and theory-driven models of neural dynamics.
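
    A minimal sketch of this simulation-based inference workflow on a toy simulator is shown below, using the open-source `sbi` toolbox: sample parameters from a prior, simulate, train a neural density estimator on the (parameter, simulation) pairs, then query the posterior at an observed data point. The simulator, prior bounds, and observation are illustrative, and the API shown follows older `sbi` releases and may differ in current versions.

```python
import torch
from sbi.inference import SNPE
from sbi.utils import BoxUniform

# Toy "mechanistic model": two parameters -> two summary features.
def simulator(theta):
    x = torch.stack([theta[:, 0] + theta[:, 1],
                     theta[:, 0] * theta[:, 1]], dim=1)
    return x + 0.05 * torch.randn_like(x)

prior = BoxUniform(low=torch.zeros(2), high=torch.ones(2))
theta = prior.sample((2000,))
x = simulator(theta)

# Train a neural density estimator on (parameter, simulation) pairs,
# then build the posterior over parameters given data.
inference = SNPE(prior=prior)
density_estimator = inference.append_simulations(theta, x).train()
posterior = inference.build_posterior(density_estimator)

# Retrieve the space of parameters compatible with an "observation".
x_obs = torch.tensor([[1.0, 0.24]])
samples = posterior.sample((1000,), x=x_obs)
print(samples.mean(dim=0))
```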

    Multiscale Exploration of Mouse Brain Microstructures Using the Knife-Edge Scanning Microscope Brain Atlas

    Connectomics is the study of the full connection matrix of the brain. Recent advances in high-throughput, high-resolution 3D microscopy have enabled the imaging of whole small-animal brains at sub-micrometer resolution, potentially opening the road to full-blown connectomics research. One of the first such instruments to achieve whole-brain-scale imaging at sub-micrometer resolution is the Knife-Edge Scanning Microscope (KESM). KESM whole-brain data sets now include Golgi (neuronal circuits), Nissl (soma distribution), and India ink (vascular networks) stains. KESM data can contribute greatly to connectomics research, since they fill the gap between lower-resolution, large-volume imaging methods (such as diffusion MRI) and higher-resolution, small-volume methods (e.g., serial-sectioning electron microscopy). Furthermore, KESM data are by their nature multiscale, ranging from the subcellular to the whole-organ scale. As a result, even visualizing these data is a major challenge, let alone performing quantitative connectivity analysis. To address this, we developed a web-based neuroinformatics framework for efficient visualization and analysis of the multiscale KESM data sets. In this paper, we first provide an overview of KESM, then discuss in detail the KESM data sets and the web-based neuroinformatics framework, called the KESM brain atlas (KESMBA). Finally, we discuss the relevance of the KESMBA to connectomics research and identify challenges and future directions.

    Explainability for Large Language Models: A Survey

    Large language models (LLMs) have demonstrated impressive capabilities in natural language processing. However, their internal mechanisms remain unclear, and this lack of transparency poses unwanted risks for downstream applications. Understanding and explaining these models is therefore crucial for elucidating their behaviors, limitations, and social impacts. In this paper, we introduce a taxonomy of explainability techniques and provide a structured overview of methods for explaining Transformer-based language models. We categorize techniques according to the training paradigms of LLMs: the traditional fine-tuning-based paradigm and the prompting-based paradigm. For each paradigm, we summarize the goals and dominant approaches for generating local explanations of individual predictions and global explanations of overall model knowledge. We also discuss metrics for evaluating generated explanations and how explanations can be leveraged to debug models and improve performance. Lastly, we examine key challenges and emerging opportunities for explanation techniques in the era of LLMs, in comparison to conventional machine learning models.
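
    One widely used local-explanation technique in the fine-tuning-based family is input-gradient saliency: the gradient of a prediction with respect to the input embeddings scores each token's contribution. The sketch below demonstrates the mechanism on a tiny stand-in classifier; the model and "sentence" are illustrative, not an actual LLM or any method specific to this survey.

```python
import torch
import torch.nn as nn

# Tiny stand-in classifier over a 5-token "sentence" with embedding size 8.
torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(5 * 8, 2))
embeddings = torch.randn(1, 5, 8, requires_grad=True)

# Saliency: gradient of the predicted class score with respect to the
# input embeddings; its norm per token scores local token importance.
logits = model(embeddings)
logits[0, logits.argmax()].backward()
token_importance = embeddings.grad.norm(dim=-1).squeeze()
print(token_importance)  # one relevance score per token
```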

    Simulation Intelligence: Towards a New Generation of Scientific Methods

    The original "Seven Motifs" set forth a roadmap of essential methods for the field of scientific computing, where a motif is an algorithmic method that captures a pattern of computation and data movement. We present the "Nine Motifs of Simulation Intelligence", a roadmap for the development and integration of the essential algorithms necessary for a merger of scientific computing, scientific simulation, and artificial intelligence. We call this merger simulation intelligence (SI), for short. We argue the motifs of simulation intelligence are interconnected and interdependent, much like the components within the layers of an operating system. Using this metaphor, we explore the nature of each layer of the simulation intelligence operating system stack (SI-stack) and the motifs therein: (1) Multi-physics and multi-scale modeling; (2) Surrogate modeling and emulation; (3) Simulation-based inference; (4) Causal modeling and inference; (5) Agent-based modeling; (6) Probabilistic programming; (7) Differentiable programming; (8) Open-ended optimization; (9) Machine programming. We believe coordinated efforts between motifs offers immense opportunity to accelerate scientific discovery, from solving inverse problems in synthetic biology and climate science, to directing nuclear energy experiments and predicting emergent behavior in socioeconomic settings. We elaborate on each layer of the SI-stack, detailing the state-of-art methods, presenting examples to highlight challenges and opportunities, and advocating for specific ways to advance the motifs and the synergies from their combinations. Advancing and integrating these technologies can enable a robust and efficient hypothesis-simulation-analysis type of scientific method, which we introduce with several use-cases for human-machine teaming and automated science

    Machine Learning Guided Discovery and Design for Inertial Confinement Fusion

    Inertial confinement fusion (ICF) experiments at the National Ignition Facility (NIF) and their corresponding computer simulations produce an immense amount of rich data. However, quantitatively interpreting those data remains a grand challenge: design spaces are vast, data volumes are large, and the relationship between models and experiments may be uncertain. We propose using machine learning to aid in the design and understanding of ICF implosions by integrating simulation and experimental data into a common framework. We begin by illustrating an early success of this data-driven design approach, which resulted in the discovery of a new class of high-performing ovoid-shaped implosion simulations. The ovoids achieve robust performance through the generation of zonal flows within the hotspot, revealing physics that had not previously been observed in ICF capsules. The ovoid discovery also revealed deficiencies in common machine learning algorithms for modeling ICF data. To overcome these inadequacies, we developed a novel algorithm, deep jointly-informed neural networks (DJINN), which enables non-data-scientists to quickly train neural networks on their own datasets. DJINN is routinely used for modeling ICF data and for a variety of other applications (uncertainty quantification; climate, nuclear, and atomic physics data). We demonstrate how DJINN is used to perform parameter inference tasks for NIF data, and how transfer learning with DJINN enables us to create predictive models of direct-drive experiments at the Omega laser facility. Much of this work focuses on scalar or modest-size vector data; however, many ICF diagnostics produce images, spectra, and sequential data. We end with a brief exploration of sequence-to-sequence models for emulating time-dependent multiphysics systems of varying complexity. This is a first step toward incorporating multimodal time-dependent data into our analyses to better constrain our predictive models.
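
    The basic pattern underlying such design-space emulators, a cheap learned surrogate trained on a database of simulation inputs and outputs, can be sketched generically as below. This is not DJINN's API; the design variables and the toy "yield" surface are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)

# Illustrative stand-in for a simulation database: design inputs
# (e.g., laser/capsule parameters) -> a scalar output (e.g., yield).
X = rng.uniform(0, 1, size=(5000, 4))
y = np.exp(-((X - 0.5) ** 2).sum(axis=1) / 0.1)  # toy yield surface

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Neural-network surrogate: cheap to evaluate once trained, so it can
# drive design-space exploration and parameter inference.
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                         random_state=0).fit(X_tr, y_tr)
print("held-out R^2:", surrogate.score(X_te, y_te))
```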