
    Species distribution and in vitro antifungal susceptibility of oral yeast isolates from Tanzanian HIV-infected patients with primary and recurrent oropharyngeal candidiasis

    In Tanzania, little is known about the species distribution and antifungal susceptibility profiles of yeast isolates from HIV-infected patients with primary and recurrent oropharyngeal candidiasis. A total of 296 clinical oral yeasts were isolated from 292 HIV-infected patients with oropharyngeal candidiasis at the Muhimbili National Hospital, Dar es Salaam, Tanzania. Identification of the yeasts was performed using standard phenotypic methods. Antifungal susceptibility to fluconazole, itraconazole, miconazole, clotrimazole, amphotericin B and nystatin was assessed using a broth microdilution format according to the guidelines of the Clinical and Laboratory Standards Institute (CLSI; M27-A2). Candida albicans was the most frequently isolated species, recovered from 250 (84.5%) patients, followed by C. glabrata from 20 (6.8%) patients and C. krusei from 10 (3.4%) patients. No significant difference in species distribution was observed between patients with primary and recurrent oropharyngeal candidiasis, but isolates cultured from previously treated patients were significantly less susceptible to the azole compounds than those cultured from antifungal-naïve patients. C. albicans was the most frequently isolated species from patients with oropharyngeal candidiasis. Oral yeast isolates from Tanzania showed high-level susceptibility to the antifungal agents tested. Recurrent oropharyngeal candidiasis and previous antifungal therapy correlated significantly with reduced susceptibility to azole antifungal agents.

    Intrinsic gain modulation and adaptive neural coding

    In many cases, the computation of a neural system can be reduced to a receptive field, or a set of linear filters, and a thresholding function, or gain curve, which determines the firing probability; this is known as a linear/nonlinear model. In some forms of sensory adaptation, these linear filters and gain curve adjust very rapidly to changes in the variance of a randomly varying driving input. An apparently similar but previously unrelated issue is the observation of gain control by background noise in cortical neurons: the slope of the firing rate vs. current (f-I) curve changes with the variance of background random input. Here, we show a direct correspondence between these two observations by relating variance-dependent changes in the gain of f-I curves to characteristics of the changing empirical linear/nonlinear model obtained by sampling. When the underlying system is fixed, we derive relationships between the change of gain with respect to both mean and variance and the receptive fields obtained from reverse correlation on a white-noise stimulus. Using two conductance-based model neurons that display distinct gain modulation properties through a simple change in parameters, we show that the coding properties of both models quantitatively satisfy the predicted relationships. Our results describe how both variance-dependent gain modulation and adaptive neural computation result from intrinsic nonlinearity. (Comment: 24 pages, 4 figures, 1 supporting information file)
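
    As a rough illustration of the linear/nonlinear picture above, the sketch below (with all signal lengths, filter shapes and parameter values purely illustrative, not taken from the paper) estimates the linear filter of a simulated LN neuron as the spike-triggered average under white-noise stimulation, then reads off an empirical gain curve as spike probability versus filtered stimulus.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth LN "neuron" used only to generate synthetic data (illustrative).
n_lags, n_steps = 20, 200_000
true_filter = np.exp(-np.arange(n_lags) / 5.0)   # decaying receptive field
true_filter /= np.linalg.norm(true_filter)

def gain(x):
    """Sigmoidal gain curve mapping filtered stimulus to firing probability."""
    return 1.0 / (1.0 + np.exp(-4.0 * (x - 0.5)))

stimulus = rng.standard_normal(n_steps)          # white-noise drive
drive = np.convolve(stimulus, true_filter, mode="full")[:n_steps]
spikes = rng.random(n_steps) < 0.1 * gain(drive)  # Bernoulli spiking

# Reverse correlation: the spike-triggered average estimates the linear filter.
spike_times = np.nonzero(spikes)[0]
spike_times = spike_times[spike_times >= n_lags]
sta = np.mean([stimulus[t - n_lags + 1 : t + 1][::-1] for t in spike_times], axis=0)
sta /= np.linalg.norm(sta)

# Empirical gain curve: spike probability as a function of the filtered stimulus.
est_drive = np.convolve(stimulus, sta, mode="full")[:n_steps]
bins = np.linspace(est_drive.min(), est_drive.max(), 20)
bin_idx = np.digitize(est_drive, bins)
empirical_gain = [spikes[bin_idx == i].mean() for i in range(1, len(bins))
                  if np.any(bin_idx == i)]

print("filter recovery (correlation):", np.corrcoef(sta, true_filter)[0, 1])
```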

    Spike Timing Dependent Plasticity Finds the Start of Repeating Patterns in Continuous Spike Trains

    Experimental studies have observed Long-Term synaptic Potentiation (LTP) when a presynaptic neuron fires shortly before a postsynaptic neuron, and Long-Term Depression (LTD) when the presynaptic neuron fires shortly after, a phenomenon known as Spike Timing Dependent Plasticity (STDP). When a neuron is presented successively with discrete volleys of input spikes, STDP has been shown to learn 'early spike patterns', that is, to concentrate synaptic weights on afferents that consistently fire early, with the result that the postsynaptic spike latency decreases until it reaches a minimal and stable value. Here, we show that these results still stand in a continuous regime where afferents fire continuously with a constant population rate. As such, STDP is able to solve a very difficult computational problem: to localize a repeating spatio-temporal spike pattern embedded in equally dense 'distractor' spike trains. STDP thus enables some form of temporal coding, even in the absence of an explicit time reference. Given that the mechanism exposed here is simple and cheap, it is hard to believe that the brain did not evolve to use it.
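
    A minimal sketch of a pair-based STDP rule of the kind described above, using exponentially decaying pre- and postsynaptic traces; the amplitudes, time constants and spike trains are illustrative assumptions, not parameters from the study.

```python
import numpy as np

# Pair-based STDP: potentiate when a presynaptic spike precedes the
# postsynaptic spike, depress when it follows, via decaying eligibility traces.
A_PLUS, A_MINUS = 0.01, 0.012      # LTP / LTD amplitudes (illustrative)
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # trace time constants (ms)
DT = 1.0                           # simulation step (ms)

def stdp(pre_spikes, post_spikes, w_init=0.5, w_max=1.0):
    """pre_spikes, post_spikes: boolean arrays of shape (n_steps,)."""
    w = w_init
    x_pre, x_post = 0.0, 0.0       # presynaptic / postsynaptic traces
    for pre, post in zip(pre_spikes, post_spikes):
        x_pre *= np.exp(-DT / TAU_PLUS)
        x_post *= np.exp(-DT / TAU_MINUS)
        if pre:
            x_pre += 1.0
            w -= A_MINUS * x_post  # pre after post -> depression
        if post:
            x_post += 1.0
            w += A_PLUS * x_pre    # pre before post -> potentiation
        w = min(max(w, 0.0), w_max)
    return w

# A pre spike 5 ms before each post spike repeatedly potentiates the synapse.
n = 1000
pre = np.zeros(n, dtype=bool)
post = np.zeros(n, dtype=bool)
pre[100::100] = True
post[105::100] = True
print(stdp(pre, post))
```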

    A Model of Late Long-Term Potentiation Simulates Aspects of Memory Maintenance

    Late long-term potentiation (L-LTP) appears essential for the formation of long-term memory, with memories at least partly encoded by patterns of strengthened synapses. How memories are preserved for months or years, despite molecular turnover, is not well understood. Ongoing recurrent neuronal activity, during memory recall or during sleep, has been hypothesized to preferentially potentiate strong synapses, preserving memories. This hypothesis has not been evaluated in the context of a mathematical model representing biochemical pathways important for L-LTP. I incorporated ongoing activity into two such models: a reduced model that represents some of the essential biochemical processes, and a more detailed published model. The reduced model represents synaptic tagging and gene induction intuitively, and the detailed model adds activation of essential kinases by Ca2+. Ongoing activity was modeled as continual brief elevations of [Ca2+]. In each model, two stable states of synaptic weight resulted. Positive feedback between synaptic weight and the amplitude of ongoing Ca2+ transients underlies this bistability. A tetanic or theta-burst stimulus switches a model synapse from a low weight to a high weight stabilized by ongoing activity. Bistability was robust to parameter variations. Simulations illustrated that prolonged decreased activity reset synapses to low weights, suggesting a plausible forgetting mechanism. However, episodic activity with shorter inactive intervals maintained strong synapses. Both models yield testable experimental predictions. Tests of these predictions are expected to further understanding of how neuronal activity is coupled to maintenance of synaptic strength. (Comment: Accepted to PLoS ONE. 8 figures at end.)
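
    The positive-feedback mechanism described above can be caricatured in a few lines: if the amplitude of activity-evoked Ca2+ transients grows with synaptic weight and potentiation depends supralinearly on that amplitude, two stable weights emerge. The sketch below is an illustrative toy model with arbitrary parameters, not either of the published models.

```python
# Reduced caricature: ongoing activity evokes Ca transients whose amplitude
# scales with synaptic weight W; potentiation depends on that amplitude through
# a Hill function, while W also decays passively. Parameters are arbitrary,
# chosen only so that two stable fixed points exist.
K, HILL, DECAY, GAIN, DT = 0.6, 4, 0.1, 0.12, 0.01

def dW(w):
    ca = w                                  # Ca transient amplitude tracks W
    potentiation = GAIN * ca**HILL / (K**HILL + ca**HILL)
    return potentiation - DECAY * w

def relax(w0, t_max=500.0):
    """Euler-integrate the weight from w0 until it settles."""
    w = w0
    for _ in range(int(t_max / DT)):
        w += DT * dW(w)
    return w

print(relax(0.2))   # sub-threshold start -> settles in the low-weight state
print(relax(0.9))   # tetanus-like boost  -> settles in the high-weight state
```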

    Multiple-Color Optical Activation, Silencing, and Desynchronization of Neural Activity, with Single-Spike Temporal Resolution

    The quest to determine how precise neural activity patterns mediate computation, behavior, and pathology would be greatly aided by a set of tools for reliably activating and inactivating genetically targeted neurons in a temporally precise and rapidly reversible fashion. Having earlier adapted a light-activated cation channel, channelrhodopsin-2 (ChR2), to allow neurons to be stimulated by blue light, we searched for a complementary tool that would enable optical neuronal inhibition, driven by light of a second color. Here we report that targeting the codon-optimized form of the light-driven chloride pump halorhodopsin from the archaebacterium Natronomonas pharaonis (hereafter abbreviated Halo) to genetically specified neurons enables them to be silenced reliably, and reversibly, by millisecond-timescale pulses of yellow light. We show that trains of yellow and blue light pulses can drive high-fidelity sequences of hyperpolarizations and depolarizations in neurons simultaneously expressing yellow light-driven Halo and blue light-driven ChR2, allowing for the first time manipulations of neural synchrony without perturbation of other parameters such as spiking rates. The Halo/ChR2 system thus constitutes a powerful toolbox for multichannel photoinhibition and photostimulation of virally or transgenically targeted neural circuits without need for exogenous chemicals, enabling systematic analysis and engineering of the brain and quantitative bioengineering of excitable cells.

    NeuroML: A Language for Describing Data Driven Models of Neurons and Networks with a High Degree of Biological Detail

    Biologically detailed single-neuron and network models are important for understanding how ion channels, synapses and anatomical connectivity underlie the complex electrical behavior of the brain. While neuronal simulators such as NEURON, GENESIS, MOOSE, NEST, and PSICS facilitate the development of these data-driven neuronal models, the specialized languages they employ are generally not interoperable, limiting model accessibility and preventing reuse of model components and cross-simulator validation. To overcome these problems we have used an Open Source software approach to develop NeuroML, a neuronal model description language based on XML (Extensible Markup Language). This enables these detailed models and their components to be defined in a standalone form, allowing them to be used across multiple simulators and archived in a standardized format. Here we describe the structure of NeuroML and demonstrate its scope by converting into NeuroML models of a number of different voltage- and ligand-gated conductances, models of electrical coupling, synaptic transmission and short-term plasticity, together with morphologically detailed models of individual neurons. We have also used these NeuroML-based components to develop a highly detailed cortical network model. NeuroML-based model descriptions were validated by demonstrating similar model behavior across five independently developed simulators. Although our results confirm that simulations run on different simulators converge, they reveal limits to model interoperability by showing that for some models convergence only occurs at high levels of spatial and temporal discretisation, when the computational overhead is high. Our development of NeuroML as a common description language for biophysically detailed neuronal and network models enables interoperability across multiple simulation environments, thereby improving model transparency, accessibility and reuse in computational neuroscience.
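
    For a flavor of what a simulator-independent XML model description looks like, the sketch below builds a small NeuroML-style document with Python's standard library. The element and attribute names only approximate the NeuroML v2 schema and have not been validated against the official XSD, so treat them as placeholders rather than a conforming NeuroML file.

```python
import xml.etree.ElementTree as ET

# Sketch of a simulator-independent, XML-based model description in the spirit
# of NeuroML. Element/attribute names below are approximations (assumptions),
# not checked against the NeuroML v2 schema.
NS = "http://www.neuroml.org/schema/neuroml2"   # assumed namespace

doc = ET.Element("neuroml", {"xmlns": NS, "id": "ExampleDocument"})
cell = ET.SubElement(doc, "izhikevichCell", {
    "id": "izhBurster", "v0": "-70mV",
    "a": "0.02", "b": "0.2", "c": "-50.0", "d": "2", "thresh": "30mV",
})
net = ET.SubElement(doc, "network", {"id": "net1"})
ET.SubElement(net, "population", {
    "id": "pop0", "component": cell.get("id"), "size": "10",
})

# Serialize to a standalone file that any schema-aware tool could inspect.
ET.ElementTree(doc).write("example.cell.nml", xml_declaration=True,
                          encoding="UTF-8")
print(ET.tostring(doc, encoding="unicode"))
```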

    Effective Stimuli for Constructing Reliable Neuron Models

    The rich dynamical nature of neurons poses major conceptual and technical challenges for unraveling their nonlinear membrane properties. Traditionally, various current waveforms have been injected at the soma to probe neuron dynamics, but the rationale for selecting specific stimuli has never been rigorously justified. The present experimental and theoretical study proposes a novel framework, inspired by learning theory, for objectively selecting the stimuli that best unravel the neuron's dynamics. The efficacy of stimuli is assessed in terms of their ability to constrain the parameter space of biophysically detailed conductance-based models that faithfully replicate the neuron's dynamics, as attested by their ability to generalize well to the neuron's response to novel experimental stimuli. We used this framework to evaluate a variety of stimuli in different types of cortical neurons and across ages and animals. Despite their simplicity, a set of stimuli consisting of step and ramp current pulses outperforms synaptic-like noisy stimuli in revealing the dynamics of these neurons. The general framework that we propose paves a new way for defining, evaluating and standardizing effective electrical probing of neurons and will thus lay the foundation for a much deeper understanding of the electrical nature of these highly sophisticated and nonlinear devices and of the neuronal networks that they compose.
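
    The probing stimuli highlighted above (step and ramp current pulses) are easy to generate programmatically; the sketch below shows one hypothetical protocol, with all amplitudes, onsets and durations chosen arbitrarily for illustration rather than taken from the study.

```python
import numpy as np

DT = 0.1  # ms per sample (illustrative sampling step)

def step_current(amplitude_nA, onset_ms, duration_ms, total_ms):
    """Square current pulse: zero, then a constant amplitude, then zero."""
    i = np.zeros(int(total_ms / DT))
    start, stop = int(onset_ms / DT), int((onset_ms + duration_ms) / DT)
    i[start:stop] = amplitude_nA
    return i

def ramp_current(peak_nA, onset_ms, duration_ms, total_ms):
    """Current rising linearly from zero to a peak over the pulse duration."""
    i = np.zeros(int(total_ms / DT))
    start, stop = int(onset_ms / DT), int((onset_ms + duration_ms) / DT)
    i[start:stop] = np.linspace(0.0, peak_nA, stop - start)
    return i

# A small probing protocol: several step amplitudes plus one slow ramp,
# each injected into the model (or neuron) as a separate sweep.
protocol = [step_current(a, 100, 500, 1000) for a in (0.1, 0.2, 0.4)]
protocol.append(ramp_current(0.4, 100, 800, 1000))
```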

    Representation of Dynamical Stimuli in Populations of Threshold Neurons

    Many sensory or cognitive events are associated with dynamic current modulations in cortical neurons. This raises an urgent demand for tractable model approaches addressing the merits and limits of potential encoding strategies. Yet, current theoretical approaches addressing the response to mean- and variance-encoded stimuli rarely provide complete response functions for both modes of encoding in the presence of correlated noise. Here, we investigate the neuronal population response to dynamical modifications of the mean or variance of the synaptic bombardment using an alternative threshold-model framework. For both the mean and variance channels, we provide explicit expressions for the linear and nonlinear frequency-response functions in the presence of correlated noise and use them to derive the population rate response to step-like stimuli. For mean-encoded signals, we find that the complete response function depends only on the temporal width of the input correlation function, but not on other functional specifics. Furthermore, we show that both mean- and variance-encoded signals can relay high-frequency inputs, and in both schemes step-like changes can be detected instantaneously. Finally, we obtain the pairwise spike correlation function and the spike-triggered average from the linear mean-evoked response function. These results provide a maximally tractable limiting case that complements and extends previous results obtained in the integrate-and-fire framework.
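
    A toy numerical counterpart to the threshold-model framework above: a population of units driven by temporally correlated (Ornstein-Uhlenbeck) noise plus a common mean, emitting a spike on each upward threshold crossing, so that the population rate response to a step in the mean channel can be read off directly. All parameters are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Population of threshold units: each unit spikes whenever its input crosses a
# fixed threshold from below. Input = correlated (OU) noise + common mean,
# with the mean stepped halfway through the simulation.
N, T, DT = 5_000, 2_000, 0.1                  # units, time steps, ms per step
TAU_NOISE, SIGMA, THETA = 5.0, 1.0, 1.0       # noise correlation time, std, threshold
kick = SIGMA * np.sqrt(2.0 * DT / TAU_NOISE)  # OU increment scale

noise = np.zeros(N)
prev_x = np.zeros(N)
rate = np.zeros(T)
for t in range(T):
    mean = 0.0 if t < T // 2 else 0.5          # step stimulus in the mean channel
    noise += -noise * DT / TAU_NOISE + kick * rng.standard_normal(N)
    x = mean + noise
    crossed = (prev_x < THETA) & (x >= THETA)  # upward threshold crossings = spikes
    rate[t] = crossed.mean() / (DT * 1e-3)     # instantaneous population rate (Hz)
    prev_x = x

print(f"rate before step: {rate[:T//2].mean():.1f} Hz, "
      f"after: {rate[T//2:].mean():.1f} Hz")
```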

    Calmodulin Activation by Calcium Transients in the Postsynaptic Density of Dendritic Spines

    The entry of calcium into dendritic spines can trigger a sequence of biochemical reactions that begins with the activation of calmodulin (CaM) and ends with long-term changes to synaptic strengths. The degree of activation of CaM can depend on highly local elevations in the concentration of calcium and the duration of transient increases in calcium concentration. Accurate measurement of these local changes in calcium is difficult because the spaces are so small and the numbers of molecules are so low. We have therefore developed a Monte Carlo model of intracellular calcium dynamics within the spine that included calcium-binding proteins, calcium transporters and ion channels activated by voltage and glutamate binding. The model reproduced optical recordings using calcium indicator dyes and showed that, without the dye, the free intracellular calcium concentration transient was much higher than predicted from the fluorescent signal. Excitatory postsynaptic potentials induced large, long-lasting calcium gradients across the postsynaptic density, which activated CaM. When glutamate was released at the synapse 10 ms before an action potential occurred, simulating activity patterns that strengthen hippocampal synapses, the calcium gradient and activation of CaM in the postsynaptic density were much greater than when the order was reversed, a condition that decreases synaptic strengths, suggesting a possible mechanism underlying the induction of long-term changes in synaptic strength. The spatial and temporal mechanisms for selectivity in CaM activation demonstrated here could be used in other signaling pathways.
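
    To convey why a Monte Carlo treatment matters at spine-like copy numbers, the sketch below simulates discrete, stochastic binding of Ca2+ to calmodulin sites after a brief Ca2+ influx; the rates, copy numbers and pump model are toy assumptions, not those of the published model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stochastic sketch of Ca2+ binding to calmodulin in a tiny volume, where
# copy numbers are low enough that discrete, random binding events matter.
DT = 1e-5                 # s per step
P_BIND_PER_PAIR = 0.002   # prob. one Ca binds one free CaM site per step
P_UNBIND = 0.001          # prob. a bound site releases its Ca per step
P_PUMP = 0.0005           # prob. a free Ca ion is extruded per step

free_ca, cam_free_sites, cam_bound_sites = 0, 400, 0
trace = []
for step in range(20_000):
    if step == 1000:
        free_ca += 200    # brief Ca transient (e.g. channel opening)
    # Binding: each free Ca "sees" every free CaM site independently this step.
    p_bind = 1.0 - (1.0 - P_BIND_PER_PAIR) ** cam_free_sites
    bound = min(rng.binomial(free_ca, p_bind), cam_free_sites)
    unbound = rng.binomial(cam_bound_sites, P_UNBIND)
    pumped = rng.binomial(free_ca - bound, P_PUMP)
    free_ca += unbound - bound - pumped
    cam_free_sites += unbound - bound
    cam_bound_sites += bound - unbound
    trace.append(cam_bound_sites)

print("peak number of Ca-bound CaM sites:", max(trace))
```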