
    Nanodiamonds-induced effects on neuronal firing of mouse hippocampal microcircuits

    Fluorescent nanodiamonds (FNDs) are carbon-based nanomaterials that can efficiently incorporate optically active photoluminescent centers such as the nitrogen-vacancy complex, making them promising candidates as optical biolabels and drug-delivery agents. FNDs exhibit bright fluorescence without photobleaching, combined with a high uptake rate and low cytotoxicity. Focusing on FND interference with neuronal function, here we examined their effect on cultured hippocampal neurons, monitoring whole-network development as well as the electrophysiological properties of single neurons. We observed that FNDs drastically decreased the frequency of inhibitory (from 1.81 Hz to 0.86 Hz) and excitatory (from 1.61 Hz to 0.68 Hz) miniature postsynaptic currents, and consistently reduced action potential (AP) firing frequency (by 36%), as measured by microelectrode arrays. In contrast, burst synchronization was preserved, as was the amplitude of spontaneous inhibitory and excitatory events. Current-clamp recordings revealed that the ratio of neurons responding with high-frequency AP trains (fast-spiking) to neurons responding with low-frequency trains (slow-spiking) was unaltered, suggesting that FNDs exerted a comparable action on neuronal subpopulations. At the single-cell level, the rapid onset of the somatic AP ("kink") was drastically reduced in FND-treated neurons, suggesting a reduced contribution of axonal and dendritic components while preserving neuronal excitability. Comment: 34 pages, 9 figures

    Spike Code Flow in Cultured Neuronal Networks

    We observed spike trains produced by one-shot electrical stimulation with 8 × 8 multielectrodes in cultured neuronal networks. Each electrode accepted spikes from several neurons. We extracted short codes from the spike trains and obtained a code spectrum with a nominal time accuracy of 1%. We then constructed code flow maps as movies of the electrode array to observe the flow of the codes "1101" and "1011," typical pseudorandom sequences of the kind often encountered in the literature and in our experiments. They appeared to flow from one electrode to a neighboring one while maintaining their shape to some extent. To quantify the flow, we calculated the "maximum cross-correlations" among neighboring electrodes to find the direction of maximum flow of the codes with lengths less than 8. Normalized maximum cross-correlations were almost constant irrespective of the code. Furthermore, when the spike trains were shuffled in interval order or across electrodes, these correlations became significantly smaller. The analysis thus suggested that local codes of approximately constant shape propagated and conveyed information across the network. Hence, the codes can serve as visible and trackable marks of propagating spike waves, as well as a means of evaluating information flow in the neuronal network.
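The normalized maximum cross-correlation used in this abstract can be sketched roughly as follows. This is a toy illustration, not the authors' pipeline: the bin size, lag window, and binned spike trains are all assumptions, and the sign convention for the lag is ours.

```python
import numpy as np

def max_norm_xcorr(a, b, max_lag=20):
    """Normalized cross-correlation of two binned spike trains, maximized
    over lags up to max_lag bins. Returns (peak value, peak lag); a
    negative lag means activity on `b` trails activity on `a`."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    if denom == 0:
        return 0.0, 0  # at least one train is empty/constant
    best, best_lag = -np.inf, 0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            c = np.dot(a[lag:], b[: len(b) - lag])  # a shifted forward
        else:
            c = np.dot(a[:lag], b[-lag:])           # b shifted forward
        c /= denom
        if c > best:
            best, best_lag = c, lag
    return best, best_lag
```

Applied to every neighboring electrode pair, the lag of the peak gives a local flow direction, and shuffling intervals or electrodes (as in the abstract) should collapse the peak value toward zero.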

    Emergence of Spatio-Temporal Pattern Formation and Information Processing in the Brain.

    The spatio-temporal patterns of neuronal activity are thought to underlie cognitive functions, such as our thoughts, perceptions, and emotions. Neurons and glial cells, specifically astrocytes, are interconnected in complex networks, where large-scale dynamical patterns emerge from local chemical and electrical signaling between individual network components. How these emergent patterns form and encode information is the focus of this dissertation. I investigate how various mechanisms that can coordinate collections of neurons in their patterns of activity can give rise to the interactions across spatial and temporal scales that are necessary for emergent macroscopic phenomena. My work explores the coordination of network dynamics through pattern formation and synchrony in both experiments and simulations. I concentrate on two potential mechanisms: astrocyte signaling and neuronal resonance properties. Due to their ability to modulate neurons, we investigate the role of astrocytic networks as a potential source for coordinating neuronal assemblies. In cultured networks, I image patterns of calcium signaling between astrocytes, and reproduce observed properties of the network calcium patterning and perturbations with a simple model that incorporates the mechanisms of astrocyte communication. Understanding the modes of communication in astrocyte networks, and how they form spatio-temporal patterns in their calcium dynamics, is important to understanding their interaction with neuronal networks. We investigate this interaction between networks and how glial cells modulate neuronal dynamics through microelectrode array measurements of neuronal network dynamics. We quantify the spontaneous electrical activity patterns of neurons and show the effect of glia on the neuronal dynamics and synchrony. Through a computational approach I investigate an entirely different theoretical mechanism for coordinating ensembles of neurons.
I show in a computational model how biophysical resonance shifts in individual neurons can interact with the network topology to influence pattern formation and separation. I show that sub-threshold neuronal depolarization, potentially from astrocytic modulation among other sources, can shift neurons into and out of resonance with specific bands of existing extracellular oscillations. This can act as a dynamic readout mechanism during information storage and retrieval. Exploring these mechanisms that facilitate emergence is necessary for understanding information processing in the brain.
Ph.D., Applied Physics, University of Michigan, Horace H. Rackham School of Graduate Studies
http://deepblue.lib.umich.edu/bitstream/2027.42/111493/1/lshtrah_1.pd

    Fractals in the Nervous System: Conceptual Implications for Theoretical Neuroscience

    This essay is presented with two principal objectives in mind: first, to document the prevalence of fractals at all levels of the nervous system, giving credence to the notion of their functional relevance; and second, to draw attention to the as yet unresolved issues of the detailed relationships among power-law scaling, self-similarity, and self-organized criticality. As regards criticality, I will document that it has become a pivotal reference point in neurodynamics. Furthermore, I will emphasize the not yet fully appreciated significance of allometric control processes. For dynamic fractals, I will assemble reasons for attributing to them the capacity to adapt task execution to contextual changes across a range of scales. The final section consists of general reflections on the implications of the reviewed data, and identifies what appear to be issues of fundamental importance for future research in the rapidly evolving topic of this review.

    Stochasticity in the miR-9/Hes1 oscillatory network can account for clonal heterogeneity in the timing of differentiation.

    Recent studies suggest that cells make stochastic choices with respect to differentiation or division. However, the molecular mechanism underlying such stochasticity is unknown. We previously proposed that the timing of vertebrate neuronal differentiation is regulated by molecular oscillations of a transcriptional repressor, HES1, tuned by a post-transcriptional repressor, miR-9. Here, we computationally model the effects of intrinsic noise on the Hes1/miR-9 oscillator as a consequence of low molecular numbers of interacting species, determined experimentally. We report that increased stochasticity spreads the timing of differentiation in a population, such that initially equivalent cells differentiate over a period of time. Surprisingly, inherent stochasticity also increases the robustness of the progenitor state and lessens the impact of unequal, random distribution of molecules at cell division on the temporal spread of differentiation at the population level. This advantageous use of biological noise contrasts with the view that noise needs to be counteracted.
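A Gillespie-type stochastic simulation makes the population-level spread concrete. The sketch below is a deliberately stripped-down stand-in, a simple birth-death process with hypothetical rate constants and threshold, not the authors' delay-coupled Hes1/miR-9 oscillator model:

```python
import random

def differentiation_time(n0=30, k_prod=0.5, k_deg=0.2, threshold=5, seed=None):
    """Gillespie (SSA) simulation of a birth-death process. Returns the
    time at which the copy number first drops to `threshold`, a toy
    stand-in for HES1 falling low enough to permit differentiation."""
    rng = random.Random(seed)
    n, t = n0, 0.0
    while n > threshold:
        a_prod = k_prod                     # zeroth-order production
        a_deg = k_deg * n                   # first-order degradation
        a_tot = a_prod + a_deg
        t += rng.expovariate(a_tot)         # waiting time to next reaction
        if rng.random() < a_prod / a_tot:   # which reaction fired?
            n += 1
        else:
            n -= 1
    return t

# identical "cells" (same parameters, different noise realizations)
# nonetheless differentiate over a spread of times
times = [differentiation_time(seed=i) for i in range(200)]
```

Because copy numbers are low, intrinsic noise alone produces a wide distribution of first-passage times across the population, which is the qualitative effect the abstract describes.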

    Contrastive Hebbian Learning with Random Feedback Weights

    Neural networks are commonly trained to make predictions through learning algorithms. Contrastive Hebbian learning, a powerful rule inspired by gradient backpropagation, is based on Hebb's rule and the contrastive divergence algorithm. It operates in two phases: a forward (or free) phase, where the data are fed to the network, and a backward (or clamped) phase, where the target signals are clamped to the output layer and the feedback signals are transformed through the transposed synaptic weight matrices. This implies symmetry at the synaptic level, for which there is no evidence in the brain. In this work, we propose a new variant of the algorithm, called random contrastive Hebbian learning, which does not rely on any synaptic weight symmetry. Instead, it uses random matrices to transform the feedback signals during the clamped phase, and the neural dynamics are described by first-order non-linear differential equations. The algorithm is experimentally verified by solving a Boolean logic task, classification tasks (handwritten digits and letters), and an autoencoding task. This article also shows how the parameters affect learning, especially the random matrices. We use pseudospectra analysis to investigate further how the random matrices impact the learning process. Finally, we discuss the biological plausibility of the proposed algorithm, and how it can give rise to better computational models for learning.
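The two-phase idea can be sketched as follows. This is a loose illustration of the core trick (a fixed random matrix carrying feedback in the clamped phase, in place of the transposed weights), not the paper's settling dynamics or benchmarks; the toy OR task, network sizes, learning rate, and feedback scale are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# toy task: 2-bit OR (an assumption, not the paper's experiments)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0.0], [1.0], [1.0], [1.0]])

W1 = rng.normal(0.0, 0.1, (4, 2))  # input  -> hidden
W2 = rng.normal(0.0, 0.1, (1, 4))  # hidden -> output
G = rng.normal(0.0, 0.1, (4, 1))   # fixed random feedback, used instead of W2.T

def mse():
    Y = sigmoid(sigmoid(X @ W1.T) @ W2.T)
    return float(np.mean((Y - T) ** 2))

loss_before = mse()
lr = 0.2
for _ in range(2000):
    for x, t in zip(X, T):
        # free phase: plain feedforward pass
        h_free = sigmoid(W1 @ x)
        y_free = sigmoid(W2 @ h_free)
        # clamped phase: output fixed to the target, fed back through random G
        h_clamp = sigmoid(W1 @ x + G @ t)
        # contrastive Hebbian update: clamped minus free co-activity products
        W2 += lr * (np.outer(t, h_clamp) - np.outer(y_free, h_free))
        W1 += lr * np.outer(h_clamp - h_free, x)
loss_after = mse()
```

Note that nothing here requires the feedback pathway to mirror `W2`; the random matrix `G` merely has to inject target information into the hidden layer during the clamped phase.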

    Learning intrinsic excitability in medium spiny neurons

    We present an unsupervised, local, activation-dependent learning rule for intrinsic plasticity (IP) which affects the composition of ion channel conductances for single neurons in a use-dependent way. We use a single-compartment conductance-based model for medium spiny striatal neurons in order to show the effects of parametrization of individual ion channels on the neuronal activation function. We show that parameter changes within the physiological ranges are sufficient to create an ensemble of neurons with significantly different activation functions. We emphasize that the effects of intrinsic neuronal variability on spiking behavior require a distributed mode of synaptic input and can be eliminated by strongly correlated input. We show how variability and adaptivity in ion channel conductances can be utilized to store patterns without an additional contribution by synaptic plasticity (SP). The adaptation of the spike response may result in either "positive" or "negative" pattern learning. However, read-out of stored information depends on a distributed pattern of synaptic activity to let intrinsic variability determine the spike response. We briefly discuss the implications of this conditional memory on learning and addiction. Comment: 20 pages, 8 figures
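The central point, that changing a conductance parameter reshapes a neuron's activation function, can be illustrated with a much simpler model than the authors' conductance-based medium spiny neuron. Below is a leaky integrate-and-fire stand-in with illustrative (assumed) parameter values, using the closed-form interspike interval:

```python
import math

def lif_rate(I, g_L=10e-9, C=200e-12, E_L=-70e-3,
             V_th=-50e-3, V_reset=-65e-3):
    """Steady-state firing rate (Hz) of a leaky integrate-and-fire
    neuron driven by constant current I (amps), from the closed-form
    interspike interval; no refractory period is modeled."""
    tau = C / g_L                  # membrane time constant
    V_inf = E_L + I / g_L          # voltage the membrane relaxes toward
    if V_inf <= V_th:
        return 0.0                 # threshold never reached: silent
    T = tau * math.log((V_inf - V_reset) / (V_inf - V_th))
    return 1.0 / T
```

Sweeping `g_L` within a plausible range yields a family of distinct f-I curves, which is the qualitative analogue of the abstract's claim that physiological parameter changes create an ensemble of neurons with significantly different activation functions.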