39 research outputs found

    Effects of the Histone Deacetylase Inhibitor Valproic Acid on Human Pericytes In Vitro

    Microvascular pericytes are of key importance in the neoformation of blood vessels, the stabilization of newly formed vessels, and the maintenance of angiostasis in resting tissues. Furthermore, pericytes are capable of differentiating into pro-fibrotic, collagen type I-producing fibroblasts. The present study investigates the effects of the histone deacetylase (HDAC) inhibitor valproic acid (VPA) on pericyte proliferation, cell viability, migration and differentiation. The results show that HDAC inhibition through exposure of pericytes to VPA in vitro inhibits pericyte proliferation and migration with no effect on cell viability. Exposure of pericytes to the potent HDAC inhibitor Trichostatin A had similar effects on pericyte proliferation, migration and cell viability. HDAC inhibition also prevented pericyte differentiation into collagen type I-producing fibroblasts. Given the importance of pericytes in blood vessel biology, a qPCR array was performed focusing on the expression of mRNAs coding for proteins that regulate angiogenesis. The results showed that HDAC inhibition promoted transcription of genes involved in vessel stabilization/maturation in human microvascular pericytes. The present in vitro study demonstrates that VPA influences several aspects of microvascular pericyte biology and suggests an alternative mechanism by which HDAC inhibition affects blood vessels. The results raise the possibility that HDAC inhibition suppresses angiogenesis partly by promoting a pericyte phenotype associated with stabilization/maturation of blood vessels.

    A multimodal cell census and atlas of the mammalian primary motor cortex

    We report the generation of a multimodal cell census and atlas of the mammalian primary motor cortex (MOp or M1) as the initial product of the BRAIN Initiative Cell Census Network (BICCN). This was achieved by coordinated large-scale analyses of single-cell transcriptomes, chromatin accessibility, DNA methylomes, spatially resolved single-cell transcriptomes, morphological and electrophysiological properties, and cellular resolution input-output mapping, integrated through cross-modal computational analysis. Together, our results advance the collective knowledge and understanding of brain cell type organization. First, our study reveals a unified molecular genetic landscape of cortical cell types that congruently integrates their transcriptome, open chromatin and DNA methylation maps. Second, cross-species analysis achieves a unified taxonomy of transcriptomic types and their hierarchical organization that are conserved from mouse to marmoset and human. Third, cross-modal analysis provides compelling evidence for the epigenomic, transcriptomic, and gene regulatory basis of neuronal phenotypes such as their physiological and anatomical properties, demonstrating the biological validity and genomic underpinning of neuron types and subtypes. Fourth, in situ single-cell transcriptomics provides a spatially resolved cell type atlas of the motor cortex. Fifth, integrated transcriptomic, epigenomic and anatomical analyses reveal the correspondence between neural circuits and transcriptomic cell types. We further present an extensive genetic toolset for targeting and fate mapping glutamatergic projection neuron types, toward linking their developmental trajectory to their circuit function. Together, our results establish a unified and mechanistic framework of neuronal cell type organization that integrates multi-layered molecular genetic and spatial information with multi-faceted phenotypic properties.

    Neural signals and control of the larynx

    The ability of the human brain to represent senses reliably and command a motor response is central to our ability to respond to the world around us. Here, I aim to understand these processes through simulation and experimental analysis. First, I developed a recurrent neural network as a model of the brain receiving approximate sensory input. By learning to capture the distribution of the representation of a sense, the network learns the dynamics of the underlying stimulus and learns to integrate information near-optimally over time, using recent estimates of position and velocity to inform the current estimate of the state of the object. Next, I analyzed the cortical representation of auditory speech. Syllables were played to human subjects while recording voltage fluctuations directly from their brains using electrocorticography. I found that the variability of the neural activity in the superior temporal gyrus, an auditory cortical region, was “quenched” upon stimulus presentation. Furthermore, this decrease in variability is coincident with stimulus representation, and enables the brain to represent a stimulus more accurately. Then, I examined the cortical control of laryngeal functions in humans using electrocorticography during produced speech. I found that the dorsal laryngeal motor cortex controls modulations of vocal pitch. Activity in that region is correlated with pitch in speech and in song. The representation of pitch in this region is separable from voicing, showing multiple dimensions of control represented in the cortex. Through cortical stimulation, I show that activity in this area causes proportional laryngeal muscle activation. I discuss how these findings may add important information furthering our understanding of the evolution of speech in humans. Finally, I discuss how these signals can be used to decode prosodic patterns directly from neural activity for use in a speech prosthetic.

    Learning to Estimate Dynamical State with Probabilistic Population Codes.

    Tracking moving objects, including one's own body, is a fundamental ability of higher organisms, playing a central role in many perceptual and motor tasks. While it is unknown how the brain learns to follow and predict the dynamics of objects, it is known that this process of state estimation can be learned purely from the statistics of noisy observations. When the dynamics are simply linear with additive Gaussian noise, the optimal solution is the well-known Kalman filter (KF), the parameters of which can be learned via latent-variable density estimation (the EM algorithm). The brain does not, however, directly manipulate matrices and vectors, but instead appears to represent probability distributions with the firing rates of populations of neurons, "probabilistic population codes." We show that a recurrent neural network, a modified form of an exponential family harmonium (EFH), that takes a linear probabilistic population code as input can learn, without supervision, to estimate the state of a linear dynamical system. After observing a series of population responses (spike counts) to the position of a moving object, the network learns to represent the velocity of the object and forms nearly optimal predictions about the position at the next time step. This result builds on our previous work showing that a similar network can learn to perform multisensory integration and coordinate transformations for static stimuli. The receptive fields of the trained network also make qualitative predictions about the developing and learning brain: tuning gradually emerges for higher-order dynamical states not explicitly present in the inputs, appearing as delayed tuning for the lower-order states.
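    As a rough illustration of the setup described in this abstract (not the authors' implementation), the sketch below simulates a one-dimensional object with linear-Gaussian dynamics, encodes its position in Poisson spike counts from Gaussian tuning curves (a linear probabilistic population code), and tracks it with a standard Kalman filter, the optimal baseline the abstract refers to. All tuning parameters, noise levels, and variable names are assumptions chosen for illustration.

        # Minimal sketch (not the paper's code): a linear probabilistic population
        # code observing a 1-D object with linear-Gaussian dynamics, tracked by a
        # standard Kalman filter as the optimal baseline.
        import numpy as np

        rng = np.random.default_rng(0)

        # Linear dynamics: state = [position, velocity] (constant-velocity model)
        A = np.array([[1.0, 1.0],
                      [0.0, 1.0]])
        Q = np.diag([1e-4, 1e-5])                 # process noise covariance (assumed)

        # Population of Poisson neurons with Gaussian tuning over position
        centers = np.linspace(-4, 4, 40)          # preferred positions (assumed layout)
        width, gain = 0.3, 5.0                    # tuning width and peak spike count

        def population_response(pos):
            rates = gain * np.exp(-(pos - centers) ** 2 / (2 * width ** 2))
            return rng.poisson(rates)

        def decode(spikes):
            # Center-of-mass decode; its variance shrinks as the total spike count
            # grows, which is what makes the population code "probabilistic".
            total = spikes.sum()
            return (spikes @ centers) / total if total > 0 else 0.0

        # Kalman filter observing the decoded position
        H = np.array([[1.0, 0.0]])
        R = np.array([[0.005]])                   # rough decoder noise (assumed)

        x, P = np.zeros(2), np.eye(2)
        true_state = np.array([0.0, 0.02])
        for t in range(100):
            true_state = A @ true_state + rng.multivariate_normal(np.zeros(2), Q)
            z = decode(population_response(true_state[0]))
            x, P = A @ x, A @ P @ A.T + Q                      # predict
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)       # Kalman gain
            x = x + (K @ (np.array([z]) - H @ x)).ravel()      # update mean
            P = (np.eye(2) - K @ H) @ P                        # update covariance

        print("final position error:", abs(x[0] - true_state[0]))

    In the abstract's setting, the rEFH is trained directly on the raw spike counts rather than on a hand-decoded position; the Kalman filter here only plays the role of the optimal benchmark against which such a network would be compared.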

    Dynamic Structure of Neural Variability in the Cortical Representation of Speech Sounds

    Accurate sensory discrimination is commonly believed to require precise representations in the nervous system; however, neural stimulus responses can be highly variable, even to identical stimuli. Recent studies suggest that cortical response variability decreases during stimulus processing, but the implications of such effects on stimulus discrimination are unclear. To address this, we examined electrocorticographic cortical field potential recordings from the human nonprimary auditory cortex (superior temporal gyrus) while subjects listened to speech syllables. Compared with a prestimulus baseline, activation variability decreased upon stimulus onset, similar to findings from microelectrode recordings in animal studies. We found that this decrease was simultaneous with encoding and spatially specific for those electrodes that most strongly discriminated speech sounds. We also found that variability was predominantly reduced in a correlated subspace across electrodes. We then compared signal and variability (noise) correlations and found that noise correlations reduce more for electrodes with strong signal correlations. Furthermore, we found that this decrease in variability is strongest in the high gamma band, which correlates with firing rate response. Together, these findings indicate that the structure of single-trial response variability is shaped to enhance discriminability despite non–stimulus-related noise. SIGNIFICANCE STATEMENT Cortical responses can be highly variable to auditory speech sounds. Despite this, sensory perception can be remarkably stable. Here, we recorded from the human superior temporal gyrus, a high-order auditory cortex, and studied the changes in the cortical representation of speech stimuli across multiple repetitions. We found that neural variability is reduced upon stimulus onset across electrodes that encode speech sounds.
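    As a rough sketch of the quantities discussed above (synthetic data; the array shapes, names, and noise levels are assumptions, not the study's pipeline), the example below contrasts across-trial variance before and after stimulus onset and compares signal correlations of trial-averaged responses with noise correlations of trial-to-trial residuals across electrodes.

        # Sketch with synthetic data (not the study's recordings): quantify
        # variability "quenching" and signal vs. noise correlations across electrodes.
        import numpy as np

        rng = np.random.default_rng(1)
        n_stim, n_trials, n_elec = 8, 50, 16

        # Simulated responses: stimulus tuning appears after onset and trial-to-trial
        # noise shrinks (the "quenching" effect). Noise is kept independent across
        # electrodes here purely for simplicity.
        tuning = rng.normal(0, 1, (n_stim, n_elec))
        pre  = rng.normal(0, 1.0, (n_stim, n_trials, n_elec))               # baseline
        post = tuning[:, None, :] + rng.normal(0, 0.6, (n_stim, n_trials, n_elec))

        # 1) Variability quenching: across-trial variance per electrode, pre vs. post
        quench = pre.var(axis=1).mean(axis=0) - post.var(axis=1).mean(axis=0)
        print("mean variance reduction per electrode:", quench.mean())

        # 2) Signal correlations: correlations of trial-averaged responses across stimuli
        signal_corr = np.corrcoef(post.mean(axis=1).T)                      # (n_elec, n_elec)

        # 3) Noise correlations: correlations of residuals around each stimulus mean
        resid = (post - post.mean(axis=1, keepdims=True)).reshape(-1, n_elec)
        noise_corr = np.corrcoef(resid.T)

        iu = np.triu_indices(n_elec, k=1)
        print("mean signal corr:", signal_corr[iu].mean(),
              "mean noise corr:", noise_corr[iu].mean())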

    Network sensitivity to instantaneous reliability.

    The instantaneous (one-time-step) reliability of sensory information is determined by the total number of spikes across the sensory population within one time step. An optimal filter will up-weight sensory information that is more reliable (and vice versa). If such a filter is run on noiseless sensory data, then its errors will be smaller for sensory input with more total spikes (higher gain), since it will up-weight the perfect sensory information. (A) Box-and-whisker plot (interpretation as in Fig 4) of mean squared errors for the optimal model (OPT), when tested on noiseless sensory data and a range of gains. For each gain on the abscissa, the filter was tested on 12 sets of 320 trajectories apiece, for which the sensory gain was fixed throughout. Higher-gain trajectories yield lower mean errors, as expected. (B) The same plot for the network (rEFH). The magnitude of the MSEs is larger than for the optimal filter, as in Fig 1E, but the pattern is the same, showing that the rEFH has indeed learned to treat higher-gain (more-spikes) sensory information as more reliable.
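    The gain-reliability relationship this caption describes can be illustrated with a short sketch (synthetic Gaussian tuning curves and gains chosen for illustration, not the paper's analysis): the error of a simple population decode shrinks as the total spike count per time step grows, which is why an optimal filter should up-weight high-gain observations.

        # Sketch (illustrative only): decoding error of a linear probabilistic
        # population code shrinks as the total spike count (gain) grows.
        import numpy as np

        rng = np.random.default_rng(2)
        centers = np.linspace(-2, 2, 40)          # preferred positions (assumed)
        width, true_pos, n_trials = 0.3, 0.5, 2000

        for gain in (2, 5, 10, 20):
            rates = gain * np.exp(-(true_pos - centers) ** 2 / (2 * width ** 2))
            spikes = rng.poisson(rates, size=(n_trials, centers.size))
            totals = spikes.sum(axis=1)
            est = (spikes @ centers) / np.maximum(totals, 1)   # center-of-mass decode
            mse = np.mean((est - true_pos) ** 2)
            print(f"gain={gain:2d}  mean spikes={totals.mean():5.1f}  decode MSE={mse:.4f}")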