
    On the difficulty of learning chaotic dynamics with RNNs

    Recurrent neural networks (RNNs) are widespread machine learning tools for modeling sequential and time series data. They are notoriously hard to train because their loss gradients, backpropagated in time, tend to saturate or diverge during training; this is known as the exploding and vanishing gradient problem. Previous solutions to this issue either built on rather complicated, purpose-engineered architectures with gated memory buffers, or, more recently, imposed constraints that ensure convergence to a fixed point or restrict (the eigenspectrum of) the recurrence matrix. Such constraints, however, impose severe limitations on the expressivity of the RNN: essential intrinsic dynamics such as multistability or chaos are ruled out. This is inherently at odds with the chaotic nature of many, if not most, time series encountered in nature and society, and it is particularly problematic in scientific applications where one aims to reconstruct the underlying dynamical system. Here we offer a comprehensive theoretical treatment of this problem by relating the loss gradients during RNN training to the Lyapunov spectrum of RNN-generated orbits. We mathematically prove that RNNs producing stable equilibrium or cyclic behavior have bounded gradients, whereas the gradients of RNNs with chaotic dynamics always diverge. Based on these analyses and insights, we suggest ways to optimize the training process on chaotic data according to the system's Lyapunov spectrum, regardless of the employed RNN architecture.
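
    As a rough numerical illustration of the relationship described above (not the paper's own derivation), the following NumPy sketch estimates the largest Lyapunov exponent of a randomly initialized tanh RNN from its step-wise Jacobians and compares it with the growth rate of the Jacobian product that backpropagation through time multiplies gradients by. The network size, gain and horizon are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 64, 300
g = 1.5                                        # gain > 1 typically puts a random tanh RNN past stability
W = g * rng.standard_normal((N, N)) / np.sqrt(N)

# Free-running vanilla RNN: h_{t+1} = tanh(W h_t); collect the Jacobian at each step.
h = 0.1 * rng.standard_normal(N)
jacobians = []
for _ in range(T):
    h = np.tanh(W @ h)
    jacobians.append((1.0 - h**2)[:, None] * W)    # d h_{t+1} / d h_t

# Largest Lyapunov exponent: average log growth rate of a tangent vector.
v = rng.standard_normal(N)
lyap = 0.0
for J in jacobians:
    v = J @ v
    n = np.linalg.norm(v)
    lyap += np.log(n)
    v /= n
lyap /= T

# The BPTT factor d h_T / d h_0 is the product of the same Jacobians,
# so its norm grows (or shrinks) roughly like exp(lyap * T).
prod = np.eye(N)
for J in jacobians:
    prod = J @ prod
print(f"largest Lyapunov exponent   ~ {lyap:.3f}")
print(f"log ||d h_T / d h_0|| / T   ~ {np.log(np.linalg.norm(prod, 2)) / T:.3f}")
```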

    Complexity without chaos: Plasticity within random recurrent networks generates robust timing and motor control

    It is widely accepted that the complex dynamics characteristic of recurrent neural circuits contributes in a fundamental manner to brain function. Progress has been slow in understanding and exploiting the computational power of recurrent dynamics for two main reasons: nonlinear recurrent networks often exhibit chaotic behavior, and most known learning rules do not work robustly in recurrent networks. Here we address both of these problems by demonstrating how random recurrent networks (RRNs) that initially exhibit chaotic dynamics can be tuned through a supervised learning rule to generate locally stable neural patterns of activity that are both complex and robust to noise. The outcome is a novel neural network regime that exhibits both transiently stable and chaotic trajectories. We further show that the recurrent learning rule dramatically increases the ability of RRNs to generate complex spatiotemporal motor patterns, and accounts for recent experimental data showing a decrease in neural variability in response to stimulus onset.
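
    For orientation, the sketch below shows a rule from the same general family as the supervised rule referred to above: a FORCE-style recursive least-squares update (Sussillo-Abbott type) that trains a readout, fed back into an initially chaotic random rate network, to follow a target. It is an illustrative stand-in rather than the study's specific recurrent learning rule, and the network size, time constants and sine target are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T, dt, tau = 200, 2000, 1e-3, 10e-3
g = 1.5                                         # g > 1: the untrained rate network is typically chaotic
W = g * rng.standard_normal((N, N)) / np.sqrt(N)
w_out = np.zeros(N)                             # readout weights, trained online
w_fb = rng.uniform(-1, 1, N)                    # feedback of the readout into the network
target = np.sin(2 * np.pi * 5.0 * dt * np.arange(T))   # hypothetical target pattern

P = np.eye(N)                                   # running estimate of the inverse rate correlation
x = 0.5 * rng.standard_normal(N)
r = np.tanh(x)
for t in range(T):
    z = w_out @ r                               # current readout
    x += dt / tau * (-x + W @ r + w_fb * z)     # rate dynamics with readout feedback
    r = np.tanh(x)
    # Recursive least-squares (FORCE) update of the readout at every step.
    Pr = P @ r
    k = Pr / (1.0 + r @ Pr)
    w_out -= (w_out @ r - target[t]) * k
    P -= np.outer(k, Pr)

print("final absolute readout error:", abs(w_out @ r - target[-1]))
```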

    Can we identify non-stationary dynamics of trial-to-trial variability?

    Identifying sources of the apparent variability in non-stationary scenarios is a fundamental problem in many biological data analysis settings. For instance, neurophysiological responses to the same task often vary from one repetition of the experiment (trial) to the next. The origin and functional role of this observed variability is one of the fundamental questions in neuroscience, yet the nature of such trial-to-trial dynamics remains largely elusive to current data analysis approaches. A range of strategies have been proposed in modalities such as electroencephalography, but gaining fundamental insight into the latent sources of trial-to-trial variability in neural recordings remains a major challenge. In this paper, we present a proof-of-concept study of the analysis of trial-to-trial variability dynamics founded on non-autonomous dynamical systems. At this initial stage, we evaluate the capacity of a simple statistic based on the behaviour of trajectories in classification settings, the trajectory coherence, to identify trial-to-trial dynamics. First, we derive the conditions leading to observable changes in datasets generated by a compact dynamical system (the Duffing equation); this canonical system plays the role of a ubiquitous model of non-stationary supervised classification problems. Second, we estimate the coherence of class trajectories in the empirically reconstructed space of system states. We show how this analysis can discern variations attributable to non-autonomous deterministic processes from stochastic fluctuations. The analyses are benchmarked using simulated data and two different real datasets which have been shown to exhibit attractor dynamics. As an illustrative example, we focus on the analysis of rat frontal cortex ensemble dynamics during a decision-making task. Results suggest that, in line with recent hypotheses, it is a deterministic trend rather than internal noise that most likely underlies the observed trial-to-trial variability. Thus, the empirical tool developed within this study potentially allows us to infer the source of variability in in vivo neural recordings.
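
    To make the simulated benchmark concrete, the sketch below integrates the forced, damped Duffing equation and generates a set of "trials" that differ only in their initial conditions, so any across-trial variability is deterministic in origin. The parameter values, trial count and perturbation scheme are illustrative assumptions, not the protocol used in the study.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Duffing oscillator: x'' + delta*x' + alpha*x + beta*x^3 = gamma*cos(omega*t)
# (parameter values are illustrative only).
delta, alpha, beta, gamma, omega = 0.3, -1.0, 1.0, 0.37, 1.2

def duffing(t, y):
    x, v = y
    return [v, -delta * v - alpha * x - beta * x**3 + gamma * np.cos(omega * t)]

# "Trials": same deterministic system, slightly jittered initial conditions.
rng = np.random.default_rng(2)
t_eval = np.linspace(0.0, 100.0, 2000)
trials = []
for _ in range(20):
    y0 = [1.0 + 0.05 * rng.standard_normal(), 0.0]
    sol = solve_ivp(duffing, (0.0, 100.0), y0, t_eval=t_eval, rtol=1e-8)
    trials.append(sol.y[0])
trials = np.array(trials)                # shape: (n_trials, n_timepoints)

# Trial-to-trial variability emerges over time without any stochastic term.
print("across-trial variance at a few time points:", trials.var(axis=0)[::500])
```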

    Multiscale Computations on Neural Networks: From the Individual Neuron Interactions to the Macroscopic-Level Analysis

    We show how the Equation-Free approach for multiscale computations can be exploited to systematically study the dynamics of neural interactions on a random regular connected graph under a pairwise representation perspective. Using an individual-based microscopic simulator as a black-box coarse-grained timestepper, and with the aid of simulated annealing, we compute the coarse-grained equilibrium bifurcation diagram and analyze the stability of the stationary states, sidestepping the necessity of obtaining explicit closures at the macroscopic level. We also exploit the scheme to perform a rare-events analysis by estimating an effective Fokker-Planck equation describing the evolving probability density function of the corresponding coarse-grained observables.
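
    The equation-free workflow can be sketched with a deliberately toy setup: here the "microscopic simulator" is a made-up stochastic activation/deactivation process on a ring lattice (a stand-in for the pairwise dynamics on a random regular graph studied above), the coarse variable is the fraction of active nodes, and coarse projective integration repeatedly lifts, runs short microscopic bursts, restricts and extrapolates. All rates, burst lengths and step sizes are arbitrary choices for the illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 1000                                           # nodes on a ring lattice (each has 4 neighbours)
neigh = np.array([[(i - 2) % N, (i - 1) % N, (i + 1) % N, (i + 2) % N] for i in range(N)])

def micro_step(state, p_act=0.25, p_deact=0.1):
    """One step of a toy stochastic activation/deactivation process (the black box)."""
    frac_active_neigh = state[neigh].mean(axis=1)
    activate = (rng.random(N) < p_act * frac_active_neigh) & (state == 0)
    deactivate = (rng.random(N) < p_deact) & (state == 1)
    return np.where(activate, 1, np.where(deactivate, 0, state))

def lift(rho):
    """Construct a microscopic state consistent with coarse density rho."""
    return (rng.random(N) < rho).astype(int)

def restrict(state):
    """Coarse observable: fraction of active nodes."""
    return state.mean()

def coarse_timestep(rho, burst, copies=10):
    """Lift, run a short microscopic burst, restrict; average over independent copies."""
    vals = []
    for _ in range(copies):
        state = lift(rho)
        for _ in range(burst):
            state = micro_step(state)
        vals.append(restrict(state))
    return float(np.mean(vals))

# Coarse projective integration: estimate the coarse time derivative from two
# consecutive coarse timesteps and extrapolate over a larger projective step.
rho, burst, dt_proj = 0.05, 20, 100
for _ in range(30):
    r1 = coarse_timestep(rho, burst)
    r2 = coarse_timestep(r1, burst)
    slope = (r2 - r1) / burst
    rho = float(np.clip(r2 + slope * dt_proj, 0.0, 1.0))

print("approximate coarse equilibrium density:", rho)
```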

    Contribution of comorbidities to functional impairment is higher in heart failure with preserved than with reduced ejection fraction

    Background: Comorbidities negatively affect prognosis more strongly in heart failure with preserved (HFpEF) than with reduced (HFrEF) ejection fraction. Their comparative impact on physical impairment in HFpEF and HFrEF has not been evaluated so far. Methods and results: The frequency of 12 comorbidities and their impact on NYHA class and the SF-36 physical functioning score (SF-36 PF) were evaluated in 1,294 patients with HFpEF and 2,785 with HFrEF. HFpEF patients had lower NYHA class (2.0 ± 0.6 vs. 2.4 ± 0.6) […] (p < 0.05) negative effect in both groups. Obesity, coronary artery disease and peripheral arterial occlusive disease exerted a significantly (p < 0.05) more adverse effect in HFpEF, while hypertension and hyperlipidemia were associated with fewer (p < 0.05) symptoms in HFrEF only. The total impact of comorbidities on NYHA class (AUC for prediction of NYHA III/IV vs. I/II) and SF-36 PF (r²) in multivariate analyses was approximately 1.5-fold higher in HFpEF, and also much stronger than the impact of a 10% decrease in ejection fraction in HFrEF or a 5 mm decrease in left ventricular end-diastolic diameter in HFpEF. Conclusion: The impact of comorbidities on physical impairment is higher in HFpEF than in HFrEF. This should be considered in the differential diagnosis and in the treatment of patients with HFpEF.

    Ready ... Go: Amplitude of the fMRI Signal Encodes Expectation of Cue Arrival Time

    What happens when the brain awaits a signal of uncertain arrival time, as when a sprinter waits for the starting pistol? And what happens just after the starting pistol fires? Using functional magnetic resonance imaging (fMRI), we have discovered a novel correlate of temporal expectations in several brain regions, most prominently in the supplementary motor area (SMA). Contrary to expectations, we found little fMRI activity during the waiting period; however, a large signal appears after the “go” signal, the amplitude of which reflects learned expectations about the distribution of possible waiting times. Specifically, the amplitude of the fMRI signal appears to encode a cumulative conditional probability, also known as the cumulative hazard function. The fMRI signal loses its dependence on waiting time in a “countdown” condition in which the arrival time of the go cue is known in advance, suggesting that the signal encodes temporal probabilities rather than simply elapsed time. The dependence of the signal on temporal expectation is also present in “no-go” conditions, demonstrating that the effect is not a consequence of motor output. Finally, the encoding is not dependent on modality, operating in the same manner with auditory or visual signals. This finding extends our understanding of the relationship between temporal expectancy and measurable neural signals.
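
    The quantity named above can be made concrete in a few lines: given a waiting-time density for the go cue (here an arbitrary uniform distribution, not the one used in the experiment), the cumulative hazard is the integral of the hazard rate pdf/survival, and the hypothesis sketched in the abstract is that response amplitude at the moment the cue arrives scales with this quantity.

```python
import numpy as np

# Illustrative waiting-time distribution for the "go" cue: uniform on 2-10 s
# (an assumption for this example, not the experiment's distribution).
t = np.linspace(0.0, 12.0, 1201)
dt = t[1] - t[0]
pdf = np.where((t >= 2.0) & (t <= 10.0), 1.0 / 8.0, 0.0)

cdf = np.cumsum(pdf) * dt
survival = 1.0 - cdf                                # P(cue has not yet arrived by time t)
hazard = np.divide(pdf, survival, out=np.zeros_like(pdf), where=survival > 1e-9)
cum_hazard = np.cumsum(hazard) * dt                 # cumulative hazard H(t) = -log S(t)

# Hypothetical readout: response amplitude at the cue time proportional to H(t_cue).
for t_cue in (3.0, 6.0, 9.0):
    i = np.searchsorted(t, t_cue)
    print(f"cue at {t_cue:.0f} s: cumulative hazard H = {cum_hazard[i]:.2f}, "
          f"survival S = {survival[i]:.2f}")
```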

    Dopamine Modulates Persistent Synaptic Activity and Enhances the Signal-to-Noise Ratio in the Prefrontal Cortex

    The importance of dopamine (DA) for prefrontal cortical (PFC) cognitive functions is widely recognized, but its mechanisms of action remain controversial. DA is thought to increase signal gain in active networks according to an inverted-U dose-response curve, and these effects may depend on both tonic and phasic release of DA from midbrain ventral tegmental area (VTA) neurons. We used patch-clamp recordings in organotypic co-cultures of the PFC, hippocampus and VTA to study DA modulation of spontaneous network activity in the form of Up-states and of signals in the form of synchronous EPSP trains. These cultures possessed a tonic DA level, and stimulation of the VTA evoked DA transients within the PFC. The addition of high (≥1 µM) concentrations of exogenous DA to the cultures reduced Up-states and diminished excitatory synaptic inputs (EPSPs) evoked during the Down-state. Increasing endogenous DA via bath application of cocaine also reduced Up-states. Lower concentrations of exogenous DA (0.1 µM) had no effect on the Up-state itself, but they selectively increased the efficiency of a train of EPSPs to evoke spikes during the Up-state. When the background DA was eliminated by depleting DA with reserpine and alpha-methyl-p-tyrosine, or by preparing corticolimbic co-cultures without the VTA slice, Up-states could be enhanced by low concentrations (0.1–1 µM) of DA that had no effect in the VTA-containing cultures. Finally, in spite of the concentration-dependent effects on Up-states, exogenous DA at all but the lowest concentrations increased intracellular current-pulse evoked firing in all cultures, underlining the complexity of DA's effects in an active network. Taken together, these data show concentration-dependent effects of DA on global PFC network activity, and they demonstrate a mechanism through which optimal levels of DA can modulate signal gain to support cognitive functioning.

    Plasticity in D1-Like Receptor Expression Is Associated with Different Components of Cognitive Processes

    Dopamine D1-like receptors consist of D1 (D1A) and D5 (D1B) receptors and play a key role in working memory. However, their possibly differential contributions to working memory are unclear. We combined a working memory training protocol with a stepwise increase of cognitive subcomponents and real-time RT-PCR analysis of dopamine receptor expression in pigeons to identify molecular changes that accompany training of isolated cognitive subfunctions. In birds, the D1-like receptor family is extended and consists of the D1A, D1B, and D1D receptors. Our data show that D1B receptor plasticity follows training that includes active mental maintenance of information, whereas D1A and D1D receptor plasticity in addition accompanies learning of stimulus-response associations. Plasticity of D1-like receptors plays no role in processes such as response selection and stimulus discrimination. None of the tasks altered D2 receptor expression. Our study shows that different cognitive components of working memory training have distinguishable effects on D1-like receptor expression.

    A Visual Metaphor Describing Neural Dynamics in Schizophrenia

    Background: In many scientific disciplines the use of a metaphor as a heuristic aid is not uncommon. A well-known example in somatic medicine is the 'defense army metaphor' used to characterize the immune system. In fact, a large part of the everyday work of doctors probably consists of 'translating' scientific and clinical information (i.e. causes of disease, percentage of success versus risk of side effects) into information tailored to the needs and capacities of the individual patient. The ability to do so in an effective way is at least partly what makes a clinician a good communicator. Schizophrenia is a severe psychiatric disorder which affects approximately 1% of the population. Over the last two decades a large amount of molecular-biological, imaging and genetic data has been accumulated regarding the biological underpinnings of schizophrenia. However, it remains difficult to understand how the characteristic symptoms of schizophrenia, such as hallucinations and delusions, are related to disturbances at the molecular-biological level. In general, psychiatry seems to lack a conceptual framework with sufficient explanatory power to link the mental and molecular-biological domains. Methodology/Principal Findings: Here, we present an essay-like study in which we propose to use visualized concepts stemming from the theory of complex dynamical systems as a 'visual metaphor' to bridge the mental and molecular-biological domains in schizophrenia. We first describe a computer model of neural information processing and show how the information processing in this model can be visualized using concepts from the theory of complex systems. We then describe two computer models which have been used to investigate the primary theory on schizophrenia, the neurodevelopmental model, and show how disturbed information processing in these two computer models can be presented in terms of the visual metaphor previously described. Finally, we describe the effects of dopamine neuromodulation, disturbances of which have been frequently described in schizophrenia, in terms of the same visual metaphor. Conclusions/Significance: The conceptual framework and metaphor described offer a heuristic tool for understanding the relationship between the mental and molecular-biological domains in an intuitive way. The concepts we present may serve to facilitate communication.