
    The complete mitochondrial genome of Yarrowia lipolytica

    We here report the complete nucleotide sequence of the 47.9 kb mitochondrial (mt) genome from the obligate aerobic yeast Yarrowia lipolytica. It encodes, all on the same strand, seven subunits of NADH:ubiquinone oxidoreductase (ND1-6, ND4L), apocytochrome b (COB), three subunits of cytochrome oxidase (COX1, 2, 3), three subunits of ATP synthetase (ATP6, 8 and 9), small and large ribosomal RNAs and an incomplete set of tRNAs. The Y. lipolytica mt genome is very similar to the Hansenula wingei mt genome, as judged from blocks of conserved gene order and from sequence homology. The extra DNA in the Y. lipolytica mt genome consists of 17 group I introns and stretches of A+T-rich sequence, interspersed with potentially transposable GC clusters. The usual mould mt genetic code is used. Interestingly, there is no tRNA able to read CGN (arginine) codons. CGN codons could not be found in exonic open reading frames, whereas they do occur in intronic open reading frames. However, several of the intronic open reading frames have accumulated mutations and must be regarded as pseudogenes. We propose that this may have been triggered by the presence of untranslatable CGN codons. This sequence is available under EMBL Accession No. AJ307410.

    A biophysical model of decision making in an antisaccade task through variable climbing activity

    We present a biophysical model of saccade initiation based on competitive integration of planned and reactive cortical saccade decision signals in the intermediate layer of the superior colliculus. In the model, the variable slopes of the climbing activities of the input cortical decision signals are produced from variability in the conductances of Na+, K+, Ca2+-activated K+, NMDA and GABA currents. These cortical decision signals are integrated in the activities of buildup neurons in the intermediate layer of the superior colliculus, whose activities grow nonlinearly towards a preset criterion level. When the level is crossed, a movement is initiated. The resultant model reproduces the unimodal distributions of saccade reaction times (SRTs) for correct antisaccades and erroneous prosaccades, as well as the variability of SRTs (ranging from 80 ms to 600 ms) and the overall 25% rate of erroneous prosaccade responses in a large sample of 2006 young men performing an antisaccade task.
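Setting the conductance-based details aside, the core mechanism the abstract describes — climbing activity with a trial-to-trial variable slope that triggers a movement at a fixed criterion — can be sketched in a few lines. This is a simplified linear rise-to-threshold model, not the authors' implementation, and all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_srt(n_trials=2006, baseline=0.0, threshold=1.0,
                 slope_mean=5.0, slope_sd=1.5):
    """Rise-to-threshold sketch: activity climbs linearly from baseline
    with a trial-to-trial variable slope; a saccade fires at threshold."""
    slopes = rng.normal(slope_mean, slope_sd, n_trials)
    slopes = slopes[slopes > 0]                    # keep rising trials only
    srt_ms = 1000.0 * (threshold - baseline) / slopes
    return srt_ms

srts = simulate_srt()
print(round(float(np.median(srts))))  # median SRT in ms
```

Because reaction time is the reciprocal of the slope, Gaussian slope variability already yields the positively skewed, unimodal SRT distributions the abstract reports.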

    Amphetamine Exerts Dose-Dependent Changes in Prefrontal Cortex Attractor Dynamics during Working Memory

    Modulation of neural activity by monoamine neurotransmitters is thought to play an essential role in shaping computational neurodynamics in the neocortex, especially in prefrontal regions. Computational theories propose that monoamines may exert bidirectional (concentration-dependent) effects on cognition by altering prefrontal cortical attractor dynamics according to an inverted U-shaped function. To date, this hypothesis has not been addressed directly, in part because of the absence of appropriate statistical methods required to assess attractor-like behavior in vivo. The present study used a combination of advanced multivariate statistics, time series analysis, and machine learning methods to assess dynamic changes in network activity from multiple single-unit recordings from the medial prefrontal cortex (mPFC) of rats while the animals performed a foraging task guided by working memory after pretreatment with different doses of d-amphetamine (AMPH), which increases monoamine efflux in the mPFC. A dose-dependent, bidirectional effect of AMPH on neural dynamics in the mPFC was observed. Specifically, a 1.0 mg/kg dose of AMPH accentuated separation between task-epoch-specific population states and convergence toward these states. In contrast, a 3.3 mg/kg dose diminished separation and convergence toward task-epoch-specific population states, which was paralleled by deficits in cognitive performance. These results support the computationally derived hypothesis that moderate increases in monoamine efflux would enhance attractor stability, whereas high frontal monoamine levels would severely diminish it. Furthermore, they are consistent with the proposed inverted U-shaped and concentration-dependent modulation of cortical efficiency by monoamines.
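The abstract does not spell out how "separation" and "convergence" of task-epoch-specific population states were quantified; one plausible, minimal operationalization (on hypothetical data, not the authors' pipeline) treats each epoch as a cloud of trial vectors and measures centroid distance and within-epoch spread:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical population activity: trials x units, one block per task epoch
epoch_a = rng.normal(0.0, 1.0, (50, 30))
epoch_b = rng.normal(1.0, 1.0, (50, 30))

def separation(a, b):
    """Euclidean distance between the two epoch-specific population centroids."""
    return float(np.linalg.norm(a.mean(axis=0) - b.mean(axis=0)))

def convergence(a):
    """Mean distance of single trials to their own epoch centroid
    (smaller values = tighter convergence toward the population state)."""
    return float(np.linalg.norm(a - a.mean(axis=0), axis=1).mean())

print(separation(epoch_a, epoch_b), convergence(epoch_a))
```

In this picture, a moderate dose that "accentuates separation and convergence" would increase the first number and decrease the second.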

    Complexity without chaos: Plasticity within random recurrent networks generates robust timing and motor control

    It is widely accepted that the complex dynamics characteristic of recurrent neural circuits contribute in a fundamental manner to brain function. Progress has been slow in understanding and exploiting the computational power of recurrent dynamics for two main reasons: nonlinear recurrent networks often exhibit chaotic behavior, and most known learning rules do not work in a robust fashion in recurrent networks. Here we address both of these problems by demonstrating how random recurrent networks (RRNs) that initially exhibit chaotic dynamics can be tuned through a supervised learning rule to generate locally stable neural patterns of activity that are both complex and robust to noise. The outcome is a novel neural network regime that exhibits both transiently stable and chaotic trajectories. We further show that the recurrent learning rule dramatically increases the ability of RRNs to generate complex spatiotemporal motor patterns, and accounts for recent experimental data showing a decrease in neural variability in response to stimulus onset.
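The chaotic starting point the abstract refers to is easy to reproduce: a standard random rate network with recurrent gain above the chaos threshold amplifies tiny state perturbations exponentially. The sketch below (standard tanh rate dynamics with gain g = 1.5; parameters illustrative, and no training rule included) demonstrates that sensitivity:

```python
import numpy as np

rng = np.random.default_rng(3)

N, g, dt, tau, T = 200, 1.5, 0.1, 1.0, 2000
# Random recurrent weights; g > 1 places the network in the chaotic regime
J = g * rng.standard_normal((N, N)) / np.sqrt(N)

def run(x0):
    """Euler-integrate the rate dynamics tau * dx/dt = -x + J @ tanh(x)."""
    x = x0.copy()
    traj = np.empty((T, N))
    for t in range(T):
        x += (dt / tau) * (-x + J @ np.tanh(x))
        traj[t] = x
    return traj

x0 = rng.standard_normal(N)
a = run(x0)
b = run(x0 + 1e-6 * rng.standard_normal(N))  # tiny perturbation of the start
div = np.linalg.norm(a - b, axis=1)          # trajectory divergence over time
print(f"perturbation growth factor: {div[-1] / div[0]:.1e}")
```

The supervised learning rule in the paper reshapes exactly this regime so that selected trajectories become locally stable while the background dynamics stay complex.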

    A State Space Approach for Piecewise-Linear Recurrent Neural Networks for Reconstructing Nonlinear Dynamics from Neural Measurements

    The computational properties of neural systems are often thought to be implemented in terms of their network dynamics. Hence, recovering the system dynamics from experimentally observed neuronal time series, like multiple single-unit (MSU) recordings or neuroimaging data, is an important step toward understanding its computations. Ideally, one would not only seek a state space representation of the dynamics, but would wish to have access to its governing equations for in-depth analysis. Recurrent neural networks (RNNs) are a computationally powerful and dynamically universal formal framework which has been extensively studied from both the computational and the dynamical systems perspective. Here we develop a semi-analytical maximum-likelihood estimation scheme for piecewise-linear RNNs (PLRNNs) within the statistical framework of state space models, which accounts for noise in both the underlying latent dynamics and the observation process. The Expectation-Maximization algorithm is used to infer the latent state distribution, through a global Laplace approximation, and the PLRNN parameters iteratively. After validating the procedure on toy examples, the approach is applied to MSU recordings from the rodent anterior cingulate cortex obtained during performance of a classical working memory task, delayed alternation. A model with 5 states turned out to be sufficient to capture the essential computational dynamics underlying task performance, including stimulus-selective delay activity. The estimated models were rarely multi-stable, but rather were tuned to exhibit slow dynamics in the vicinity of a bifurcation point. In summary, the present work advances a semi-analytical (thus reasonably fast) maximum-likelihood estimation framework for PLRNNs that may enable recovery of the relevant dynamics underlying observed neuronal time series, and directly link them to computational properties.
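The generative side of such a state space model — not the EM estimation itself — can be sketched compactly. The forward model below uses one common PLRNN parameterization (diagonal linear auto-regression plus off-diagonal ReLU coupling, linear-Gaussian observations); the exact form and all dimensions and noise levels here are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes: M latent states, N observed units, T time steps
M, N, T = 5, 20, 200

A = np.diag(rng.uniform(0.6, 0.9, M))    # diagonal auto-regression weights
W = 0.1 * rng.standard_normal((M, M))    # off-diagonal piecewise-linear coupling
np.fill_diagonal(W, 0.0)
h = 0.05 * rng.standard_normal(M)        # bias
B = rng.standard_normal((N, M))          # observation loadings

z = np.zeros((T, M))
for t in range(1, T):
    # Piecewise-linear (ReLU) latent update with Gaussian process noise
    z[t] = (A @ z[t - 1] + W @ np.maximum(z[t - 1], 0.0) + h
            + 0.01 * rng.standard_normal(M))

# Noisy linear readout of the latent trajectory ("MSU recordings")
x = z @ B.T + 0.1 * rng.standard_normal((T, N))
print(x.shape)  # (200, 20)
```

Estimation then inverts this picture: given only x, EM with a Laplace approximation over the latent path infers z and the parameters (A, W, h, B) jointly.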

    Editorial: Metastable Dynamics of Neural Ensembles

    A classical view of neural computation is that it can be characterized in terms of convergence to attractor states or sequential transitions among states in a noisy background. After over three decades, is this still a valid model of how brain dynamics implements cognition? This book provides a comprehensive collection of recent theoretical and experimental contributions addressing the question of stable versus transient neural population dynamics from complementary angles. These studies showcase recent efforts for designing a framework that encompasses the multiple facets of metastability in neural responses, one of the most exciting topics currently in systems and computational neuroscience.

    Ready ... Go: Amplitude of the fMRI Signal Encodes Expectation of Cue Arrival Time

    What happens when the brain awaits a signal of uncertain arrival time, as when a sprinter waits for the starting pistol? And what happens just after the starting pistol fires? Using functional magnetic resonance imaging (fMRI), we have discovered a novel correlate of temporal expectations in several brain regions, most prominently in the supplementary motor area (SMA). Contrary to expectations, we found little fMRI activity during the waiting period; however, a large signal appears after the “go” signal, the amplitude of which reflects learned expectations about the distribution of possible waiting times. Specifically, the amplitude of the fMRI signal appears to encode a cumulative conditional probability, also known as the cumulative hazard function. The fMRI signal loses its dependence on waiting time in a “countdown” condition in which the arrival time of the go cue is known in advance, suggesting that the signal encodes temporal probabilities rather than simply elapsed time. The dependence of the signal on temporal expectation is present in “no-go” conditions, demonstrating that the effect is not a consequence of motor output. Finally, the encoding is not dependent on modality, operating in the same manner with auditory or visual signals. This finding extends our understanding of the relationship between temporal expectancy and measurable neural signals.
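The cumulative hazard function mentioned above is a standard quantity and is straightforward to compute from any waiting-time distribution. The toy example below (a hypothetical uniform distribution of go-cue times, not the study's actual design) shows how the conditional probability of the cue arriving "now, given it has not arrived yet" grows with elapsed time:

```python
import numpy as np

# Hypothetical discrete distribution of go-cue waiting times (in s)
p = np.full(10, 0.1)                 # uniform probability over 10 time bins

survival = 1.0 - np.cumsum(p) + p    # P(T >= t): cue has not yet arrived
hazard = p / survival                # P(T = t | T >= t): conditional probability
cum_hazard = np.cumsum(hazard)       # cumulative hazard, grows with elapsed time
print(np.round(cum_hazard, 2))
```

Even for a flat waiting-time distribution, the hazard rises from 0.1 to 1.0 across the interval, which is why a signal tracking it depends on elapsed time only when the arrival time is uncertain.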

    Detecting Multiple Change Points Using Adaptive Regression Splines With Application to Neural Recordings

    Time series, as is frequently the case in neuroscience, are rarely stationary, but often exhibit abrupt changes due to attractor transitions or bifurcations in the dynamical systems producing them. A plethora of methods for detecting such change points in time series statistics have been developed over the years, in addition to test criteria to evaluate their significance. Issues to consider when developing change point analysis methods include computational demands, difficulties arising from either a limited amount of data or a large number of covariates, and arriving at statistical tests with sufficient power to detect as many changes as are contained in potentially high-dimensional time series. Here, a general method called Paired Adaptive Regressors for Cumulative Sum is developed for detecting multiple change points in the mean of multivariate time series. The method's advantages over alternative approaches are demonstrated through a series of simulation experiments. This is followed by a real data application to neural recordings from rat medial prefrontal cortex during learning. Finally, the method's flexibility to incorporate useful features from state-of-the-art change point detection techniques is discussed, along with potential drawbacks and suggestions to remedy them.
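The cumulative-sum idea underlying the method's name can be illustrated in its classical single-change form (this is the textbook CUSUM statistic, not the paper's full adaptive-regression-spline procedure):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy series: the mean shifts from 0 to 1 at t = 100
y = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(1.0, 1.0, 100)])

def cusum_changepoint(y):
    """Classical CUSUM estimate of a single mean change point:
    the estimate maximizes |S_k|, where S_k = sum_{t<=k} (y_t - mean(y))."""
    s = np.cumsum(y - y.mean())
    return int(np.argmax(np.abs(s)))

k = cusum_changepoint(y)
print(k)  # index of the estimated change point, near 100
```

Centering by the global mean makes the cumulative sum drift away from zero up to the change and back afterwards, so its extremum locates the shift; handling multiple changes in high-dimensional series is where the paper's paired adaptive regressors come in.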