
    Axonal Computations

    Axons functionally link the somato-dendritic compartment to synaptic terminals. Structurally and functionally diverse, they play a central role in determining the delays and reliability with which neuronal ensembles communicate. By combining active and passive biophysical properties, they support a wide range of physiological computations. In this review, we revisit the biophysics of the generation and propagation of electrical signals in the axon and their dynamics. We further place the computational abilities of axons in the context of intracellular and intercellular coupling. We discuss how, by means of sophisticated biophysical mechanisms, axons expand the repertoire of axonal computation and, thereby, of neural computation.
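As a toy illustration of the passive properties the review builds on, the sketch below computes steady-state attenuation along a passive cable and the conduction delay of a propagating spike. The function names and parameter values are illustrative choices, not taken from the review.

```python
import math

def passive_attenuation(v0, x_mm, lambda_mm):
    """Steady-state voltage (mV) at distance x along a passive cable.

    v0 is the amplitude at the injection site; lambda_mm is the cable's
    length constant. Values here are illustrative, not measurements.
    """
    return v0 * math.exp(-x_mm / lambda_mm)

def conduction_delay(distance_mm, velocity_m_per_s):
    """Delay (ms) for an action potential travelling at a given velocity."""
    return distance_mm / velocity_m_per_s  # mm / (m/s) comes out in ms

# A 10 mV signal decays to ~3.7 mV one length constant away:
v = passive_attenuation(10.0, 1.0, 1.0)
# An unmyelinated axon conducting at 0.5 m/s adds 4 ms of delay over 2 mm:
d = conduction_delay(2.0, 0.5)
```

The delay function is where axonal diversity enters: myelination and diameter change the velocity, and hence the timing with which ensembles communicate.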

    Neuromorphic analogue VLSI

    Neuromorphic systems emulate the organization and function of nervous systems. They are usually composed of analogue electronic circuits that are fabricated in the complementary metal-oxide-semiconductor (CMOS) medium using very large-scale integration (VLSI) technology. However, these neuromorphic systems are not another kind of digital computer in which abstract neural networks are simulated symbolically in terms of their mathematical behavior. Instead, they directly embody, in the physics of their CMOS circuits, analogues of the physical processes that underlie the computations of neural systems. The significance of neuromorphic systems is that they offer a method of exploring neural computation in a medium whose physical behavior is analogous to that of biological nervous systems and that operates in real time irrespective of size. The implications of this approach are both scientific and practical. The study of neuromorphic systems provides a bridge between levels of understanding. For example, it provides a link between the physical processes of neurons and their computational significance. In addition, the synthesis of neuromorphic systems transposes our knowledge of neuroscience into practical devices that can interact directly with the real world in the same way that biological nervous systems do.
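The subthreshold analogue dynamics such circuits exploit can be caricatured in software by a leaky integrate-and-fire neuron. The model below is a generic textbook sketch with invented parameter values, not the behaviour of any particular chip.

```python
def lif_spike_times(i_in, dt=0.1, tau=10.0, v_th=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron driven by an input current trace.

    Forward-Euler integration of tau * dV/dt = -V + I; returns spike
    times in ms. Parameters are illustrative textbook values.
    """
    v, spikes = 0.0, []
    for k, i in enumerate(i_in):
        v += dt / tau * (-v + i)      # leaky integration step
        if v >= v_th:                 # threshold crossing -> spike
            spikes.append(k * dt)
            v = v_reset
    return spikes

# Constant suprathreshold drive produces regular firing:
times = lif_spike_times([2.0] * 1000)
```

In a neuromorphic chip the same leak-and-integrate behaviour is realised physically by transistor and capacitor currents rather than by explicit numerical steps.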

    Computational implications of biophysical diversity and multiple timescales in neurons and synapses for circuit performance

    Despite advances in experimental and theoretical neuroscience, we are still trying to identify the key biophysical details that are important for characterizing the operation of brain circuits. Biological mechanisms at the level of single neurons and synapses can be combined as ‘building blocks’ to generate circuit function. We focus on the importance of capturing multiple timescales when describing these intrinsic and synaptic components. Whether inherent in the ionic currents, the neuron’s complex morphology, or the neurotransmitter composition of synapses, these multiple timescales prove crucial for capturing the variability and richness of circuit output and enhancing the information-carrying capacity observed across nervous systems.
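A minimal way to see how a single synaptic "building block" can carry two timescales is a conductance with a fast and a slow exponential component. The time constants below are textbook orders of magnitude (fast AMPA-like vs slow NMDA-like), not values from the paper.

```python
import math

def mixed_synaptic_current(t_ms, g_fast=1.0, tau_fast=2.0,
                           g_slow=0.3, tau_slow=100.0):
    """Synaptic conductance at time t after a spike at t = 0, mixing a
    fast and a slow exponential decay. Amplitudes and time constants
    are illustrative, not fitted values.
    """
    if t_ms < 0:
        return 0.0
    return (g_fast * math.exp(-t_ms / tau_fast)
            + g_slow * math.exp(-t_ms / tau_slow))

# At 1 ms the fast component dominates; by 50 ms only the slow tail remains:
early = mixed_synaptic_current(1.0)
late = mixed_synaptic_current(50.0)
```

Because the two components decay at very different rates, the same synapse shapes both fast, spike-by-spike signalling and slower integration over tens of milliseconds.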

    Probing the dynamics of identified neurons with a data-driven modeling approach

    In controlling animal behavior, the nervous system has to perform within the operational limits set by the requirements of each specific behavior. The implications for the corresponding range of suitable network, single-neuron, and ion-channel properties have remained elusive. In this article we approach the question of how tightly constrained the properties of neuronal systems are at the single-neuron level. We used large data sets of the activity of isolated, identified invertebrate cells and built an accurate conductance-based model for this cell type using customized automated parameter-estimation techniques. By direct inspection of the data we found that the variability of the neurons is larger when they are isolated from the circuit than when they are in the intact system. Furthermore, the responses of the neurons to perturbations appear to be more consistent than their autonomous behavior under stationary conditions. In the developed model, the constraints on different parameters that enforce appropriate model dynamics vary widely, from some very tightly controlled parameters to others that are almost arbitrary. The model also allows us to predict the effect of blocking selected ionic currents and to show that the origin of irregular dynamics in the neuron model is genuine chaoticity, and that this chaoticity is typical in an appropriate sense. Our results indicate that data-driven models are useful tools for the in-depth analysis of neuronal dynamics. The better consistency of responses to perturbations, in the real neurons as well as in the model, suggests a paradigm shift away from measuring autonomous dynamics alone and towards protocols of controlled perturbations. Our predictions for the impact of channel blockers on the neuronal dynamics and the demonstration of chaoticity underscore the wide scope of our approach.
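The chaoticity claim rests on the standard criterion of a positive largest Lyapunov exponent. The sketch below estimates that exponent for the logistic map, a toy one-dimensional stand-in for the conductance-based model (which is not reproduced here).

```python
import math

def largest_lyapunov_logistic(r=4.0, x0=0.2, n=10000):
    """Estimate the largest Lyapunov exponent of x -> r*x*(1-x) by
    averaging log |f'(x)| along an orbit. A positive result indicates
    exponential divergence of nearby trajectories, i.e. chaos.
    """
    x, acc = x0, 0.0
    for _ in range(n):
        acc += math.log(abs(r * (1.0 - 2.0 * x)))  # log |f'(x)| at current x
        x = r * x * (1.0 - x)                      # iterate the map
    return acc / n

lam = largest_lyapunov_logistic()  # ~ln(2) ≈ 0.693 for r = 4
```

For a neuron model the same divergence-rate idea is applied to the full state vector (voltage plus gating variables), typically with the Benettin reorthonormalisation procedure rather than an analytic derivative.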

    Mechanistic Information and Causal Continuity

    Some biological processes (our examples are DNA expression and a reflex response in the leech) move from step to step in a way that cannot be completely understood solely in terms of causes and correlations. This paper develops a notion of mechanistic information that can be used to explain the continuities of such processes. We compare them to processes (including the Krebs cycle) that do not involve information. We compare our conception of mechanistic information to some familiar notions, including Crick’s idea of genetic information, Shannon-Weaver information, and Millikan’s biosemantic information.
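For the Shannon-Weaver notion the paper contrasts with its mechanistic account, the basic quantity is the entropy of a symbol distribution. The sketch below computes it; the example sequences are invented.

```python
import math
from collections import Counter

def shannon_entropy(sequence):
    """Shannon entropy in bits per symbol of a discrete sequence,
    H = -sum p_i * log2(p_i) over observed symbol frequencies.
    """
    counts = Counter(sequence)
    n = len(sequence)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A fair two-symbol sequence carries 1 bit/symbol; a constant one carries 0:
h_fair = shannon_entropy("ATATATAT")   # 1.0
h_flat = shannon_entropy("AAAAAAAA")   # 0.0
```

Shannon's measure quantifies statistical uncertainty only; the paper's point is that it does not by itself capture the step-to-step continuity of a mechanism.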

    Fractals in the Nervous System: Conceptual Implications for Theoretical Neuroscience

    This essay is presented with two principal objectives in mind: first, to document the prevalence of fractals at all levels of the nervous system, giving credence to the notion of their functional relevance; and second, to draw attention to the still unresolved issues of the detailed relationships among power-law scaling, self-similarity, and self-organized criticality. As regards criticality, I will document that it has become a pivotal reference point in neurodynamics. Furthermore, I will emphasize the not yet fully appreciated significance of allometric control processes. For dynamic fractals, I will assemble reasons for attributing to them the capacity to adapt task execution to contextual changes across a range of scales. The final section consists of general reflections on the implications of the reviewed data, and identifies what appear to be issues of fundamental importance for future research in the rapidly evolving topic of this review.
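Power-law scaling of the kind discussed here is typically quantified by the slope of a log-log fit: for a fractal, box counts follow N(s) ~ s^-D, so the fitted slope estimates -D. The sketch below fits that slope on synthetic data constructed to have D = 1.5; no data from the essay are used.

```python
import math

def power_law_slope(scales, counts):
    """Least-squares slope of log(count) versus log(scale).

    For box-counting data obeying N(s) ~ s^-D, the slope is -D.
    """
    xs = [math.log(s) for s in scales]
    ys = [math.log(c) for c in counts]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

scales = [1.0, 0.5, 0.25, 0.125]
counts = [s ** -1.5 for s in scales]      # exact power law with D = 1.5
d_est = -power_law_slope(scales, counts)  # recovers 1.5
```

With real data the same fit is run over a limited scaling range, and a straight log-log segment alone is weak evidence; the essay's caution about conflating power laws, self-similarity, and criticality applies exactly here.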

    Biophysics of Purkinje computation

    Although others have reported and characterised different patterns of Purkinje firing (Womack and Khodakhah, 2002, 2003, 2004; McKay and Turner, 2005), this thesis is the first study that moves beyond their description and investigates the actual basis of their generation. Purkinje cells can intrinsically fire action potentials in a repeating trimodal or bimodal pattern. The trimodal pattern consists of tonic spiking, bursting and quiescence. The bimodal pattern consists of tonic spiking and quiescence. How these firing patterns are generated, and what determines which firing pattern is selected, has not been established to date. We have constructed a detailed biophysical Purkinje cell model that can replicate these patterns and which shows that Na+/K+ pump activity sets the model’s operating mode. We propose that Na+/K+ pump modulation switches the Purkinje cell between different firing modes in a physiological setting, and so hypothesise the Na+/K+ pump to be a computational element in Purkinje information coding. We present supporting in vitro Purkinje cell recordings in the presence of ouabain, which irreversibly blocks the Na+/K+ pump. Climbing fiber (CF) input has been shown experimentally to toggle a Purkinje cell between an up (firing) and down (quiescent) state and to set the gain of its response to parallel fiber (PF) input (McKay et al., 2007). Our Purkinje cell model captures these toggle and gain computations with a novel intracellular calcium computation that we hypothesise to be applicable in real Purkinje cells. Notably, then, our Purkinje cell model can compute and, importantly, relates biophysics to biological information processing. Our Purkinje cell model is biophysically detailed and as a result is very computationally intensive. This means that, whilst it is appropriate for studying properties of the individual Purkinje cell (e.g. relating channel densities to firing properties), it is unsuitable for incorporation into network simulations. We have overcome this by deploying mathematical transforms to produce a simpler, surrogate version of our model that has the same electrical properties but a lower computational overhead. Our hope is that this model, of intermediate biological fidelity and medium computational complexity, will be used in the future to bridge cellular and network studies and to identify how distinctive Purkinje behaviours are important to network and system function.
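The pump hypothesis can be caricatured with a toy model in which each spike loads Na+ and a slow, outward pump current grows with the Na+ load, silencing the cell until the sodium is cleared. All parameters below are invented for illustration; this is a cartoon of the idea, not the thesis's detailed Purkinje model.

```python
def pump_gated_firing(n_steps=20000, dt=0.1):
    """Toy integrate-and-fire cell with a slow, activity-dependent pump
    current. The cell fires an initial run of closely spaced spikes, then
    the accumulated pump current enforces long pauses between spikes.
    All parameter values are invented.
    """
    tau_v, tau_na = 10.0, 2000.0      # fast membrane, slow Na+ clearance (ms)
    drive, na_per_spike, pump_gain = 1.5, 0.05, 2.0
    v, na, spikes = 0.0, 0.0, []
    for k in range(n_steps):
        i_pump = pump_gain * na                  # outward pump current
        v += dt / tau_v * (-v + drive - i_pump)  # membrane integration
        na += dt / tau_na * (-na)                # slow Na+ extrusion
        if v >= 1.0:                             # spike: reset and load Na+
            spikes.append(k * dt)
            v = 0.0
            na += na_per_spike
    return spikes
```

The switch between dense and sparse firing here is driven entirely by the slow pump variable, mirroring (in cartoon form) the proposal that pump activity selects the cell's operating mode.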

    Information processing in dissociated neuronal cultures of rat hippocampal neurons

    One of the major aims of Systems Neuroscience is to understand how the nervous system transforms sensory inputs into appropriate motor reactions. In very simple cases sensory neurons are directly coupled to motoneurons and the entire transformation becomes a simple reflex, in which a noxious signal is immediately transformed into an escape reaction. In more complex behaviours, however, the nervous system analyses the sensory inputs in detail and performs some kind of information processing (IP). IP takes place at many different levels of the nervous system: from the peripheral nervous system, where sensory stimuli are detected and converted into electrical pulses, to the central nervous system, where features of sensory stimuli are extracted, perception takes place, and actions and motions are coordinated. Moreover, understanding the basic computational properties of the nervous system, besides being at the core of Neuroscience, also attracts great interest in Neuroengineering and Computer Science. In fact, being able to decode neural activity can lead to the development of a new generation of neuroprosthetic devices aimed, for example, at restoring motor functions in severely paralysed patients (Chapin, 2004). On the other side, the development of Artificial Neural Networks (ANNs) (Marr, 1982; Rumelhart & McClelland, 1988; Herz et al., 1981; Hopfield, 1982; Minsky & Papert, 1988) has already proved that the study of biological neural networks may lead to the design of new computing algorithms and devices. All nervous systems are based on the same elements, the neurons: computing devices that, compared to silicon components, are much slower and far less reliable. How are the nervous systems of all living species able to survive while being based on slow and poorly reliable components? This obvious and naïve question is equivalent to characterizing IP in a more quantitative way. In order to study IP and to capture the basic computational properties of the nervous system, two major questions arise. First, what is the fundamental unit of information processing: single neurons or neuronal ensembles? Second, how is information encoded in the neuronal firing? These questions - in my view - summarize the problem of the neural code. The subject of my PhD research was to study information processing in dissociated neuronal cultures of rat hippocampal neurons. These cultures, with random connections, provide a more general view of neuronal networks and assemblies, not depending on the circuitry of a neuronal network in vivo, and allow a more detailed and careful experimental investigation. In order to record the activity of a large ensemble of neurons, the neurons were cultured on multielectrode arrays (MEAs) and multi-site stimulation was used to activate different neurons and pathways of the network. In this way, it was possible to vary the properties of the applied stimulus under a controlled extracellular environment. Given this experimental system, my investigation had two major approaches. On one side, I focused on the problem of the neural code, studying information processing at the single-neuron level and at the ensemble level, and investigating putative neural coding mechanisms. On the other side, I explored the possibility of using biological neurons as computing elements in a task commonly solved by conventional silicon devices: image processing and pattern recognition. The results reported in the first two chapters of my thesis have been published in two separate articles. The third chapter represents an article in preparation.
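One standard quantitative handle on the neural-code questions above is the mutual information between stimulus identity and a discrete response such as a trial's spike count. The sketch below computes it from trial data; the toy stimuli and counts are invented, not recordings.

```python
import math
from collections import Counter

def mutual_information_bits(stimuli, responses):
    """Mutual information (bits) between paired discrete stimuli and
    responses, I(S;R) = sum p(s,r) * log2(p(s,r) / (p(s) * p(r))),
    estimated from raw trial counts (no bias correction).
    """
    n = len(stimuli)
    count_s = Counter(stimuli)
    count_r = Counter(responses)
    count_sr = Counter(zip(stimuli, responses))
    mi = 0.0
    for (s, r), c in count_sr.items():
        # p(s,r)/(p(s)p(r)) simplifies to c*n / (count_s * count_r)
        mi += (c / n) * math.log2(c * n / (count_s[s] * count_r[r]))
    return mi

# Perfectly discriminable spike counts carry log2(#stimuli) bits:
stims = ["A", "B", "A", "B"]
counts = [5, 12, 5, 12]                      # spike counts per trial
mi = mutual_information_bits(stims, counts)  # 1.0 bit
```

With a handful of trials this plug-in estimator is upwardly biased, which is why experimental studies of MEA data pair it with shuffling or bias-correction procedures.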