6 research outputs found

    Learning Bayes-optimal dendritic opinion pooling

    In functional network models, neurons are commonly conceptualized as linearly summing presynaptic inputs before applying a non-linear gain function to produce output activity. In contrast, synaptic coupling between neurons in the central nervous system is regulated by dynamic permeabilities of ion channels. So far, the computational role of these membrane conductances remains unclear and is often considered an artifact of the biological substrate. Here we demonstrate that conductance-based synaptic coupling allows neurons to represent, process and learn uncertainties. We suggest that membrane potentials and conductances on dendritic branches code opinions with associated reliabilities. The biophysics of the membrane combines these opinions by taking their reliabilities into account, and the soma thus acts as a decision maker. We derive a gradient-based plasticity rule, allowing neurons to learn desired target distributions and weight synaptic inputs by their relative reliabilities. Our theory explains various experimental findings at the system and single-cell level related to multi-sensory integration, and makes testable predictions on dendritic integration and synaptic plasticity.
    Comment: 36 pages, 10 figures; Mihai A. Petrovici and Walter Senn share senior authorship
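
    As a minimal illustration of the pooling principle described above (a sketch under our own simplifying assumptions, not the paper's full model): the steady-state somatic potential of a conductance-based neuron is the conductance-weighted mean of the branch reversal potentials, which coincides with precision-weighted Bayesian fusion of Gaussian opinions.

        import numpy as np

        # Each dendritic branch i contributes an "opinion" E[i] (reversal potential, mV)
        # with reliability g[i] (conductance, arbitrary units). Values are illustrative.
        E = np.array([-55.0, -70.0, -60.0])
        g = np.array([2.0, 0.5, 1.0])

        # Conductance-weighted pooling = precision-weighted fusion of Gaussian estimates.
        v_soma = np.sum(g * E) / np.sum(g)   # pooled somatic potential
        var = 1.0 / np.sum(g)                # pooled uncertainty (inverse total conductance)

        print(f"pooled potential: {v_soma:.2f} mV, pooled variance: {var:.3f}")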

    NMDA-driven dendritic modulation enables multitask representation learning in hierarchical sensory processing pathways.

    While sensory representations in the brain depend on context, it remains unclear how such modulations are implemented at the biophysical level, and how processing layers further along the hierarchy can extract useful features for each possible contextual state. Here, we demonstrate that dendritic N-Methyl-D-Aspartate spikes can, within physiological constraints, implement contextual modulation of feedforward processing. Such neuron-specific modulations exploit prior knowledge, encoded in stable feedforward weights, to achieve transfer learning across contexts. In a network of biophysically realistic neuron models with context-independent feedforward weights, we show that modulatory inputs to dendritic branches can solve linearly nonseparable learning problems with a Hebbian, error-modulated learning rule. We also demonstrate that local prediction of whether representations originate from different inputs or from different contextual modulations of the same input results in representation learning of hierarchical feedforward weights across processing layers that accommodate a multitude of contexts.
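
    To make the learning rule concrete, here is a minimal sketch under assumptions of our own (a single unit, fixed feedforward weights W, a one-hot context signal, and a multiplicative dendritic gain); it illustrates an error-modulated Hebbian update of the modulatory weights, not the paper's exact model.

        import numpy as np

        rng = np.random.default_rng(0)
        W = rng.normal(size=(4, 3))     # fixed, context-independent feedforward weights
        M = np.zeros((4, 2))            # learnable modulatory weights (branches x contexts)
        eta = 0.1                       # learning rate

        def forward(x, c):
            branch = W @ x                  # feedforward drive of each dendritic branch
            gain = 1.0 + np.tanh(M @ c)     # context-dependent dendritic gain (NMDA-like)
            return np.sum(branch * gain)    # somatic output

        for _ in range(200):
            x = rng.normal(size=3)              # sensory input
            c = np.eye(2)[rng.integers(2)]      # one-hot context signal
            target = c[0] * np.sum(x)           # toy context-dependent target
            err = target - forward(x, c)        # scalar error signal
            # Hebbian term (branch drive x context presence), gated by the error
            M += eta * err * np.outer(W @ x, c)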

    A Sparse Reformulation of the Green's Function Formalism Allows Efficient Simulations of Morphological Neuron Models

    We prove that when a class of partial differential equations, generalized from the cable equation, is defined on tree graphs and the inputs are restricted to a spatially discrete, well-chosen set of points, the Green's function (GF) formalism can be rewritten to scale as O(n) with the number n of input locations, contrary to the previously reported O(n²) scaling. We show that the linear scaling can be combined with an expansion of the remaining kernels as sums of exponentials to allow efficient simulations of equations from the aforementioned class. We furthermore validate this simulation paradigm on models of nerve cells and explore its relation with more traditional finite difference approaches. Situations in which a gain in computational performance is expected are discussed.
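
    The sum-of-exponentials expansion mentioned above is what makes the time-domain simulation cheap: a convolution with an exponential kernel can be advanced recursively with O(1) work per time step instead of re-integrating the full input history. The following sketch uses illustrative kernel parameters of our own choosing, not values from the paper.

        import numpy as np

        dt = 0.1                          # time step (ms)
        taus = np.array([5.0, 50.0])      # decay time constants of the exponential terms (ms)
        amps = np.array([0.8, 0.2])       # amplitudes of the exponential terms
        decay = np.exp(-dt / taus)

        inputs = np.zeros(1000)
        inputs[100] = 1.0                 # single impulse input

        state = np.zeros_like(taus)       # one state variable per exponential term
        response = []
        for i in inputs:
            state = decay * state + amps * i   # recursive update: O(#terms) per step
            response.append(state.sum())       # kernel response = sum of the terms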

    The Green's function formalism as a bridge between single- and multi-compartmental modeling

    Neurons are spatially extended structures that receive and process inputs on their dendrites. It is generally accepted that neuronal computations arise from the active integration of synaptic inputs along a dendrite between the input location and the location of spike generation in the axon initial segment. However, many applications, such as simulations of brain networks, use point neurons (neurons without a morphological component) as computational units to keep the conceptual complexity and computational costs low. Inevitably, these applications thus omit a fundamental property of neuronal computation. In this work, we present an approach to model an artificial synapse that mimics dendritic processing without the need to explicitly simulate dendritic dynamics. The model synapse employs an analytic solution of the cable equation to compute the neuron's membrane potential following dendritic inputs. The Green's function formalism is used to derive this closed-form version of the cable equation. We show that by using this synapse model, point neurons can achieve results that were previously limited to the realm of multi-compartmental models. Moreover, a computational advantage is achieved when only a small number of simulated synapses impinge on a morphologically elaborate neuron. Opportunities and limitations are discussed.
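
    The gist of such a synapse model can be sketched as a convolution of the presynaptic input with a precomputed dendrite-to-soma transfer kernel that stands in for the analytic Green's function; the kernel shape and parameters below are illustrative assumptions, not the paper's.

        import numpy as np

        dt = 0.1                                   # time step (ms)
        t = np.arange(0, 100, dt)
        # Toy dendrite-to-soma transfer kernel: a delayed, low-pass-filtered response.
        kernel = (t / 20.0) * np.exp(-t / 20.0)

        spikes = np.zeros_like(t)
        spikes[[200, 450, 460]] = 1.0              # presynaptic spike train

        # Somatic potential deflection as seen by the point neuron.
        v_soma = np.convolve(spikes, kernel)[:len(t)] * dt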
