19 research outputs found

    The Theoretical Foundation of Dendritic Function

    Wilfrid Rall was a pioneer in establishing the integrative functions of neuronal dendrites, work that has provided a foundation for neurobiology in general and computational neuroscience in particular. This collection of fifteen previously published papers, some of them not widely available, has been carefully chosen and annotated by Rall's colleagues and other leading neuroscientists. It brings together Rall's work over more than forty years, including his first papers extending cable theory to complex dendritic trees, his ground-breaking paper introducing compartmental analysis to computational neuroscience, and his studies of synaptic integration in motoneurons, dendrodendritic interactions, plasticity of dendritic spines, and active dendritic properties. Today it is well known that the brain's synaptic information is processed mostly in the dendrites, where many of the plastic changes underlying learning and memory take place. It is particularly timely to look again at the work of a major creator of the field, to appreciate where things started and where they have led, and to correct any misinterpretations of Rall's work. The editors' introduction highlights the major insights that were gained from Rall's studies, as well as from those of his collaborators and followers. It poses the questions that Rall raised during his scientific career and briefly summarizes the answers. The papers include commentaries by Milton Brightman, Robert E. Burke, William R. Holmes, Donald R. Humphrey, Julian J. B. Jack, John Miller, Stephen Redman, John Rinzel, Idan Segev, Gordon M. Shepherd, and Charles Wilson.

    Neuronal computation on complex dendritic morphologies

    When we think about neural cells, we immediately recall the wealth of electrical behaviour which, eventually, brings about consciousness. Hidden deep in the frequencies and timings of action potentials, in subthreshold oscillations, and in the cooperation of tens of billions of neurons, are synchronicities and emergent behaviours that result in high-level, system-wide properties such as thought and cognition. However, neurons are even more remarkable for their elaborate morphologies, unique among biological cells. The principal, and most striking, component of neuronal morphologies is the dendritic tree. Despite comprising the vast majority of the surface area and volume of a neuron, dendrites are often neglected in many neuron models due to their sheer complexity. The vast array of dendritic geometries, combined with the heterogeneous properties of the cell membrane, continues to challenge scientists in predicting neuronal input-output relationships, even in the case of subthreshold dendritic currents. In this thesis, we will explore the properties of neuronal dendritic trees, and how they alter and integrate the electrical signals that diffuse along them. After an introduction to neural cell biology and membrane biophysics, we will review Abbott's dendritic path integral in detail, and derive the theoretical convergence of its infinite-sum solution. On certain symmetric structures, closed-form solutions will be found; for arbitrary geometries, we will propose algorithms using various heuristics for constructing the solution, and assess their computational convergence on real neuronal morphologies. We will demonstrate how generating terms for the path-integral solution in an order that optimises convergence is non-trivial, and how a computationally significant number of terms is required for reasonable accuracy. We will, however, derive a highly efficient and accurate algorithm for application to discretised dendritic trees. Finally, a modular method for constructing a solution in the Laplace domain will be developed.
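The path-integral idea above can be illustrated on the simplest geometry. A minimal sketch (mine, not from the thesis; illustrative units with membrane time constant and diffusion coefficient set to 1): for a single unbranched cable with sealed ends, the sum-over-trips solution reduces to a method-of-images series over reflected copies of the infinite-cable Green's function, and its convergence as terms are added can be checked directly.

```python
import numpy as np

def g_inf(x, t, tau=1.0, D=1.0):
    """Green's function of the infinite passive cable (heat kernel with leak)."""
    return np.exp(-t / tau) * np.exp(-x**2 / (4 * D * t)) / np.sqrt(4 * np.pi * D * t)

def g_sealed(x, y, t, L=1.0, n_images=5):
    """Image-sum Green's function on [0, L] with sealed (reflecting) ends."""
    total = 0.0
    for n in range(-n_images, n_images + 1):
        # each "trip" is a reflected copy of the source at y
        total += g_inf(x - y + 2 * n * L, t) + g_inf(x + y + 2 * n * L, t)
    return total

# successive truncations of the infinite sum converge rapidly
vals = [g_sealed(0.3, 0.7, 0.5, n_images=k) for k in (0, 1, 2, 5, 10)]
print(vals)
```

Because every image term is positive and decays like a Gaussian in the trip length, the truncated sums increase monotonically and settle to machine precision after a handful of reflections.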

    Temporal and spatial factors affecting synaptic transmission in cortex

    Synaptic transmission in cortex depends on both the history of synaptic activity and the location of individual anatomical contacts within the dendritic tree. This thesis analyses key aspects of the roles of both these factors and, in particular, extends many of the results for deterministic synaptic transmission to a more naturalistic stochastic framework. Firstly, I consider how correlations in neurotransmitter vesicle occupancy arising from synchronous activity in a presynaptic population interact with the number of independent release sites, a parameter recently shown to be modified during long-term plasticity. I study a model of multiple-release-site short-term plasticity and derive exact results for the postsynaptic voltage variance. Using approximate results for the postsynaptic firing rate in the limits of low and high correlations, I demonstrate that short-term depression leads to a maximum response for an intermediate number of presynaptic release sites, and that this in turn leads to a tuning-curve response peaked at an optimal presynaptic synchrony set by the number of neurotransmitter release sites per presynaptic neuron. As the nervous system operates under constraints of efficient metabolism it is likely that this phenomenon provides an activity-dependent constraint on network architecture. Secondly, I consider how synapses exhibiting short-term plasticity transmit spike trains when spike times are autocorrelated. I derive exact results for vesicle occupancy and postsynaptic voltage variance in the case that spiking is a renewal process, with uncorrelated interspike intervals (ISIs). The vesicle occupancy predictions are tested experimentally and shown to be in good agreement with the theory. I demonstrate that neurotransmitter is released at a higher rate when the presynaptic spike train is more regular, but that positively autocorrelated spike trains are better drivers of the postsynaptic voltage when the vesicle release probability is low. 
I provide accurate approximations to the postsynaptic firing rate, allowing future studies of neuronal circuits and networks with dynamic synapses to incorporate physiologically relevant spiking statistics. Thirdly, I develop a Bayesian inference method for synaptic parameters. This expands on recent Bayesian approaches in that the likelihood function is exact for both the quantal and dynamic synaptic parameters. This means that it can be used to directly estimate parameters for common synaptic models with few release sites. I apply the method to simulated and real data, demonstrating a substantial improvement over analysis techniques that are based around the mean and variance. Finally, I consider a spatially extended neuron model where the dendrites taper away from the soma. I derive an accurate asymptotic solution for the voltage profile in a dendritic cable of arbitrary radius profile and use this to determine the profile that optimally transfers voltages to the soma. I find a precise quadratic form that matches results from non-parametric numerical optimisation. The equation predicts diameter profiles from reconstructed cells, suggesting that dendritic diameters optimise passive transfer of synaptic currents.
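The vesicle-occupancy dynamics described above can be sketched in a few lines. This is a hedged, illustrative model (my parameters, not the thesis's): a single release site empties with probability p_release at each presynaptic spike and refills after an exponentially distributed recovery time with mean tau_D. For Poisson input, the mean occupancy seen at spike times should match the mean-field steady state 1 / (1 + p_release * rate * tau_D).

```python
import numpy as np

rng = np.random.default_rng(0)

p_release = 0.5      # probability of release when the site is occupied
tau_D = 0.5          # mean vesicle recovery time (s)
rate = 10.0          # presynaptic Poisson firing rate (Hz)

isis = rng.exponential(1.0 / rate, 50000)   # Poisson interspike intervals
occupied, recovery_left = True, 0.0
samples = []
for isi in isis:
    if not occupied:
        recovery_left -= isi                # recovery is memoryless: count down
        if recovery_left <= 0.0:
            occupied = True
    samples.append(occupied)                # occupancy seen by the arriving spike
    if occupied and rng.random() < p_release:
        occupied = False
        recovery_left = rng.exponential(tau_D)

mean_occ = float(np.mean(samples))
print(mean_occ, 1.0 / (1.0 + p_release * rate * tau_D))  # simulation vs mean field
```

For Poisson arrivals the occupancy sampled at spike times agrees exactly with the mean-field rate equation, which is one reason autocorrelated (non-Poisson) spike trains, as studied in the thesis, are the interesting case.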

    Roles of gap junctions in neuronal networks

    This dissertation studies the roles of gap junctions in the dynamics of neuronal networks in three distinct problems. First, we study the circumstances under which a network of excitable cells coupled by gap junctions exhibits sustained activity. We investigate how network connectivity and refractory length affect the sustainment of activity in an abstract network. Second, we build a mathematical model of gap-junctionally coupled cables to understand the voltage response along the cables as a function of cable diameter. For the coupled cables, as cable diameter increases, the electrotonic distance decreases, which causes the voltage to attenuate less; but the input to the second cable also decreases, which allows the voltage in the second cable to attenuate more. Thus we show that there exists an optimal diameter for which the voltage amplitude in the second cable is maximized. Third, we investigate the dynamics of two gap-junctionally coupled theta neurons. A single theta neuron model is a canonical form of Type I neural oscillator that yields a very low frequency oscillation. The coupled system also yields a very low frequency oscillation, in the sense that the ratio of the two cells' spiking frequencies can take very small values. Thus the network exhibits several types of solutions, including stable suppressed and 1:N spiking solutions. Using phase plane analysis and Denjoy's Theorem, we show the existence of these solutions and investigate some of their properties.
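The theta neuron mentioned above is simple enough to simulate directly. A minimal sketch (a single uncoupled cell, not the dissertation's two-cell system; parameters are illustrative): the canonical Type I oscillator theta' = 1 - cos(theta) + (1 + cos(theta)) * I spikes whenever theta crosses pi, with exact period pi / sqrt(I), which becomes arbitrarily long as I approaches the bifurcation at I = 0.

```python
import numpy as np

def theta_spike_times(I=0.04, T=100.0, dt=1e-3):
    """Forward-Euler integration of a single theta neuron on [0, T]."""
    theta, t, spikes = -np.pi, 0.0, []
    while t < T:
        theta += dt * (1.0 - np.cos(theta) + (1.0 + np.cos(theta)) * I)
        t += dt
        if theta >= np.pi:          # spike: wrap the phase back by one cycle
            spikes.append(t)
            theta -= 2.0 * np.pi
    return np.array(spikes)

spikes = theta_spike_times()
isis = np.diff(spikes)
print(isis.mean(), np.pi / np.sqrt(0.04))   # Euler estimate vs exact period
```

Coupling two such cells through a gap-junction current, as the dissertation does, perturbs this slow oscillation and produces the suppressed and 1:N firing patterns described in the abstract.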

    Digital reconstruction, quantitative morphometric analysis, and membrane properties of bipolar cells in the rat retina.

    A basic principle of neuroscience is that structure reflects function. This has led to numerous attempts to characterize the complete morphology of types of neurons throughout the central nervous system. The ability to acquire and analyze complete neuronal morphologies has advanced with continuous technological developments for over 150 years, with progressive refinements and increased understanding of the precise anatomical details of different types of neurons. Bipolar cells of the mammalian retina are short-range projection neurons that link the outer and inner retina. Their dendrites contact and receive input from the terminals of the light-sensing photoreceptors in the outer plexiform layer and their axons descend through the inner nuclear and inner plexiform layers to stratify at different levels of the inner plexiform layer. The stratification level of the axon terminals of different types of bipolar cells in the inner plexiform layer determines their synaptic connectivity and is an important basis for the morphological classification of these cells. Between 10 and 15 different types of cone bipolar cells have been identified in different species and they can be divided into ON-cone bipolar cells (that depolarize to the onset of light) and OFF-cone bipolar cells (that depolarize to the offset of light). Different types of cone bipolar cells are thought to be responsible for coding and transmitting different features of our visual environment and generating parallel channels that uniquely filter and transform the inputs from the photoreceptors. There is a lack of detailed morphological data for bipolar cells, especially for the rat, where biophysical mechanisms have been most extensively studied. The work presented in this thesis provides the groundwork for the future goal of developing morphologically realistic compartmental models for cone and rod bipolar cells.
First, the contribution of gap junctions to the membrane properties, specifically input resistance, of bipolar cells was investigated. Gap junctions are ubiquitous within the retina, but it remains to be determined whether the strength of coupling between specific cell types is sufficiently strong for the cells to be functionally coupled via electrical synapses. There are gap junctions between cells of the same class of bipolar cells, and this appears to be a common circuit motif in the vertebrate retina. Surprisingly, our results suggested that the gap junctions between OFF-cone bipolar cells do not support consequential electrical coupling. This provides an important first step, both for elucidating the potential roles of these gap junctions and for the development of compartmental models for cone bipolar cells. Second, from image stacks acquired with multiphoton excitation microscopy, quantitative morphological reconstructions and detailed morphological analysis were performed on fluorescent dye-filled cone and rod bipolar cells. Compared to previous descriptions, the extent and complexity of branching of the axon terminals was surprisingly high. By precisely quantifying the level of stratification of the axon terminals in the inner plexiform layer, we have generated a reference system for the reliable classification of individual cells in future studies that are focused on correlating physiological and morphological properties. The workflow that we have implemented can be readily extended to the development of morphologically realistic compartmental models for these neurons.
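The input-resistance argument above has a simple lumped-circuit form. A hedged sketch (illustrative values, not data from the thesis): for an isopotential cell with leak resistance R1 coupled to a neighbour with leak resistance R2 through a gap junction of resistance Rgj, the coupled pathway Rgj + R2 sits in parallel with the cell's own leak, so weak coupling (large Rgj) leaves the input resistance near the uncoupled value.

```python
def input_resistance(R1, R2, Rgj):
    """Input resistance at cell 1: R1 in parallel with (Rgj + R2), in ohms."""
    return R1 * (Rgj + R2) / (R1 + Rgj + R2)

R1 = R2 = 1e9                       # 1 GOhm leak resistance per cell (illustrative)
for Rgj in (1e8, 1e9, 1e10, 1e12):  # strong -> negligible coupling
    print(Rgj, input_resistance(R1, R2, Rgj))
# As Rgj grows, the measured input resistance approaches the uncoupled value R1,
# which is why input-resistance measurements can bound the coupling strength.
```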

    Neuronal Signal Modulation By Dendritic Geometry

    Neurons are the basic units of nervous systems. They transmit signals along neurites and at synapses in electrical and chemical forms. Neuronal morphology, mainly dendritic geometry, is renowned for its anatomical diversity, and the names of many neuronal types reflect their morphologies directly. Dendritic geometries, as well as the distributions of ion channels on cell membranes, contribute significantly to the distinct electrical signal filtration and integration behaviours of different neuronal types (even when receiving identical inputs in vitro). In this thesis I address the importance of dendritic geometry by studying its effects on electrical signal modulation at the level of single neurons, via mathematical and computational approaches. By 'geometry', I mean both the branching structures of entire dendritic trees and the tapered structures of individual dendritic branches. The mathematical model of dendritic membrane potential dynamics is established by generalising classical cable theory. It forms the theoretical benchmark for this thesis to study neuronal signal modulation on dendritic trees with tapered branches. A novel method to obtain analytical response functions in algebraically compact forms on such dendrites is developed. It permits theoretical analysis, as well as accurate and efficient numerical calculation, of a neuron as an electrical circuit. By investigating simplified but representative dendritic geometries, it is found that a tapered dendrite amplifies distal signals in comparison with a non-tapered dendrite. This modulation is largely a local effect, only weakly influenced by global dendritic geometry. Nonetheless, global geometry has a stronger impact on signal amplitudes, and even more on signal phases. In addition, the methodology employed in this thesis is fully compatible with existing methods dealing with neuronal stochasticity and active behaviours.
Future work on large-scale neural networks can readily adopt this approach to improve computational efficiency while preserving a large amount of biophysical detail.
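The effect of a tapered diameter profile can be explored numerically. A hedged sketch (my finite-difference discretisation with illustrative parameters, not the thesis's analytical method): solve the steady-state passive cable equation for a given diameter profile, inject current at the distal tip, and read off the voltage at the somatic end to compare tapered and uniform cables.

```python
import numpy as np

def cable_voltage(diam, dx=1e-5, Rm=1.0, Ra=1.0, I_inject=1e-12):
    """Steady-state voltage along a passive cable with per-node diameters diam.

    Builds the conductance matrix G (leak on the diagonal, axial coupling
    off-diagonal) and solves G @ V = I for current injected at the distal tip.
    """
    N = len(diam)
    d_mid = 0.5 * (diam[:-1] + diam[1:])          # diameter between nodes
    g_ax = np.pi * d_mid**2 / (4 * Ra * dx)       # axial conductances (S)
    g_lk = np.pi * diam * dx / Rm                 # leak conductance per node (S)
    G = np.diag(g_lk)
    for i in range(N - 1):
        G[i, i] += g_ax[i];     G[i + 1, i + 1] += g_ax[i]
        G[i, i + 1] -= g_ax[i]; G[i + 1, i] -= g_ax[i]
    I = np.zeros(N)
    I[-1] = I_inject                              # inject at the distal tip
    return np.linalg.solve(G, I)

N = 100
tapered = np.linspace(2e-6, 0.5e-6, N)            # wide at the soma (node 0)
uniform = np.full(N, 2e-6)
V_t, V_u = cable_voltage(tapered), cable_voltage(uniform)
print(V_t[0], V_u[0])                             # somatic voltage, each profile
```

Which profile transfers more depends on the comparison chosen (same somatic diameter, same membrane area, and so on); the thesis resolves this analytically, whereas the sketch only lets one experiment with particular profiles.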

    Using the Green's function to simplify and understand dendrites

    Neurons are endowed with dendrites: tree-like structures that collect and transform inputs. These arborizations are believed to substantially enhance the computational repertoire of neurons. While it has long been known that dendrites are not iso-potential units, only in the last few decades was it shown experimentally that dendritic branches can transform local inputs in a non-linear fashion. This finding led to the subunit hypothesis, which states that within the dendritic tree, inputs arriving in one branch are transformed non-linearly and independently from what happens in other branches. Recent progress in experimental recording techniques shows that this localized dendritic integration contributes to shaping behavior. While it is generally accepted that the dendritic tree induces multiple subunits, many questions remain unanswered. For instance, it is not known how much separation there needs to be between different branches for them to function as subunits. Consequently, there is no information on how many subunits can coexist along a dendritic arborization. It is also not known what the input-output relation of these subunits would be, or whether these subunits can be modified by input patterns. As a consequence, assessing the effects of dendrites on the workings of networks of neurons remains mere guesswork. During this work, we choose a theory-driven approach to advance our knowledge about dendrites. Theory can help us understand dendrites by deriving accurate but conceptually simple models of dendrites that still capture their main computational effects. These models can then be analyzed and fully understood, which in turn teaches us how actual dendrites function computationally. Such simple models typically require fewer computational operations to simulate than highly detailed dendrite models. Hence, they may also increase the speed of network simulations that incorporate dendrites. The Green's function forms the basis for our theory-driven approach.
We first explored whether it could be used to reduce the cost of simulating dendrite models. One mathematically interesting finding in this regard is that, because this function is defined on a tree graph, the number of equations can be reduced drastically. Nevertheless, we were forced to conclude that reducing dendrites in this way does not yield new information about the subunit hypothesis. We then focused our attention on another way of decomposing the Green's function. We found that the dendrite model obtained in this way reveals much information about dendritic subunits. In particular, we found that the occurrence of subunits is well predicted by the ratio of input over transfer impedance in dendrites. This allowed us to estimate the number of subunits that can coexist on dendritic trees. We also found that this ratio can be modified by other inputs, in particular shunting conductances, so that the number of subunits on a dendritic tree can be modified dynamically. Finally, we were able to show that, due to this dynamic increase in the number of subunits, individual branches that would otherwise respond to inputs as a single unit could become sensitive to different stimulus features. We believe that this model can be implemented in such a way that it simulates dendrites in a highly efficient manner. Thus, after incorporation into standard neural network simulation software, it can substantially improve the accessibility of dendritic network simulations to modelers.
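The input/transfer impedance ratio above has a direct computational reading. A hedged sketch (a toy compartmental model of my own, not the thesis's decomposition): for a passive model the steady-state Green's function is the inverse of the conductance matrix, so Z[i, j] is the transfer impedance between compartments i and j and Z[i, i] the input impedance at i; a large ratio Z[i, i] / Z[i, j] marks electrically separated branches.

```python
import numpy as np

def impedance_matrix(edges, g_leak, g_axial):
    """Steady-state Green's function Z = G^{-1} of a passive compartmental tree.

    edges: (i, j) compartment pairs coupled by conductance g_axial (S);
    g_leak: per-compartment leak conductances (S).
    """
    n = len(g_leak)
    G = np.diag(np.asarray(g_leak, dtype=float))
    for i, j in edges:
        G[i, i] += g_axial; G[j, j] += g_axial
        G[i, j] -= g_axial; G[j, i] -= g_axial
    return np.linalg.inv(G)

# A small Y-shaped tree: soma (0) with two two-compartment branches.
edges = [(0, 1), (1, 2), (0, 3), (3, 4)]
Z = impedance_matrix(edges, g_leak=[10e-9] * 5, g_axial=100e-9)

ratio = Z[2, 2] / Z[2, 4]   # input impedance at one tip / transfer to the other tip
print(ratio)                # values well above 1 suggest separable subunits
```

Adding a shunting conductance to g_leak at the branch point raises this ratio, which is the matrix-level analogue of the dynamic subunit creation described in the abstract.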

    Irregularity in the cortical spike code: noise or information?

    How random is the discharge pattern of cortical neurons? We examined recordings from primary visual cortex (V1) and extrastriate cortex (MT) of awake, behaving macaque monkeys, and compared them to analytical predictions. We measured two indices of firing variability: the ratio of the variance to the mean for the number of action potentials evoked by a constant stimulus, and the rate-normalized coefficient of variation (C_v) of the interspike interval distribution. Firing in virtually all V1 and MT neurons was nearly consistent with a completely random process (e.g., C_v ≈ 1). We tried to model this high variability by small, independent, and random EPSPs converging onto a leaky integrate-and-fire neuron (Knight, 1972). Both this and related models predicted very low firing variability (C_v ≪ 1) for realistic EPSP depolarizations and membrane time constants. We also simulated a biophysically very detailed compartmental model of an anatomically reconstructed and physiologically characterized layer V cat pyramidal cell with passive dendrites and active soma. If independent, excitatory synaptic input fired the model cell at the high rates observed in monkey, the C_v and the variability in the number of spikes were both very low, in agreement with the integrate-and-fire models but in strong disagreement with the majority of our monkey data. The simulated cell only produced highly variable firing when Hodgkin-Huxley-like currents (I_Na and very strong I_DR) were placed on the distal basal dendrites. Now the simulated neuron acted more as a millisecond-resolution detector of dendritic spike coincidences than as a temporal integrator, thereby increasing its bandwidth by an order of magnitude above traditional estimates. This hypothetical submillisecond coincidence detection mainly uses the cell's capacitive localization of very transient signals in thin dendrites.
For millisecond-level events, different dendrites in the cell are electrically isolated from one another by dendritic capacitance, so that the cell can contain many independent computational units. This de-coupling occurs because charge takes time to equilibrate inside the cell, and can occur even in the presence of long membrane time constants. Simple approximations using cellular parameters (e.g., R_m, C_m, R_i, G_Na, etc.) can predict many effects of dendritic spiking, as confirmed by detailed compartmental simulations of the reconstructed pyramidal cell. Such expressions allow the extension of simulated results to untested parameter regimes. Coincidence detection can occur by two methods: (1) fast charge equalization inside dendritic branches creates submillisecond EPSPs in those dendrites, so that individual branches can spike in response to coincidences among those fast EPSPs; (2) strong delayed-rectifier currents in dendrites allow the soma to fire only upon the submillisecond coincidence of two or more dendritic spikes. Such fast EPSPs and dendritic spikes produce somatic voltages consistent with intracellular observations. A simple measure of coincidence-detection "effectiveness" shows that cells containing these hypothetical dendritic spikes are far more sensitive to coincident EPSPs than to temporally separated ones, and suggests a conceptual mechanism for fast, parallel, nonlinear computations inside single cells. If a simplified model neuron acts as a coincidence detector of single pulses, networks of such neurons can solve a simple but important perceptual problem, the "binding problem", more easily and flexibly than traditional neurons can. In a simple toy model, different classes of coincidence-detecting neurons respond to different aspects of simple visual stimuli, for example shape and motion.
The task of the population of neurons is to respond to multiple simultaneous stimuli while still identifying those neurons which respond to a particular stimulus. Because a coincidence-detecting neuron's output spike train retains some very precise information about the timing of its input spikes, all neurons which respond to the same stimulus will produce output spikes with an above-random chance of coincidence, and hence will be easily distinguished from neurons responding to a different stimulus. This scheme uses the traditional average-rate code to represent each stimulus separately, while using precise single-spike times to multiplex information about the relation of different aspects of the stimuli to each other. In this manner the model's highly irregular spiking actually reflects information rather than noise.
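The two firing regimes contrasted in this abstract are easy to illustrate through the C_v statistic. A hedged sketch (synthetic spike trains of my own, not the study's recordings): a Poisson process has exponentially distributed interspike intervals with C_v near 1, while a classical temporal integrator that fires after summing roughly n small EPSPs has gamma-distributed intervals with C_v near 1/sqrt(n).

```python
import numpy as np

rng = np.random.default_rng(1)

def cv(isis):
    """Coefficient of variation of an interspike-interval sample."""
    isis = np.asarray(isis)
    return isis.std() / isis.mean()

# Poisson-like cortical firing: exponential ISIs, C_v ~ 1.
poisson_isis = rng.exponential(1.0, 10000)

# Idealized temporal integrator summing n = 100 inputs per spike:
# the ISI is a sum of 100 exponentials (gamma), so C_v ~ 1/sqrt(100) = 0.1.
n = 100
integrator_isis = rng.gamma(n, 1.0 / n, 10000)

print(cv(poisson_isis), cv(integrator_isis))
```

The gap between these two numbers is exactly the puzzle the study addresses: recorded cortical neurons sit near the first value while simple integrator models predict the second.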

    Channelrhodopsin assisted synapse identity mapping reveals clustering of layer 5 intralaminar inputs
