988 research outputs found

    Can we identify non-stationary dynamics of trial-to-trial variability?

    Identifying the sources of apparent variability in non-stationary scenarios is a fundamental problem in many biological data analysis settings. For instance, neurophysiological responses to the same task often vary from one repetition of the experiment (trial) to the next. The origin and functional role of this observed variability is one of the fundamental questions in neuroscience, yet the nature of such trial-to-trial dynamics remains largely elusive to current data analysis approaches. A range of strategies have been proposed in modalities such as electroencephalography, but gaining fundamental insight into the latent sources of trial-to-trial variability in neural recordings is still a major challenge. In this paper, we present a proof-of-concept study of the analysis of trial-to-trial variability dynamics founded on non-autonomous dynamical systems. At this initial stage, we evaluate the capacity of a simple statistic based on the behaviour of trajectories in classification settings, the trajectory coherence, to identify trial-to-trial dynamics. First, we derive the conditions leading to observable changes in datasets generated by a compact dynamical system (the Duffing equation), which serves as a canonical model of non-stationary supervised classification problems. Second, we estimate the coherence of class trajectories in an empirically reconstructed space of system states. We show how this analysis can discern variations attributable to non-autonomous deterministic processes from stochastic fluctuations. The analyses are benchmarked on simulated data and on two real datasets previously shown to exhibit attractor dynamics. As an illustrative example, we focus on the ensemble dynamics of the rat frontal cortex during a decision-making task. Results suggest that, in line with recent hypotheses, it is a deterministic trend rather than internal noise that most likely underlies the observed trial-to-trial variability. Thus, the empirical tool developed in this study may allow us to infer the source of variability in in-vivo neural recordings.
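    The Duffing equation used as the benchmark above is a standard forced nonlinear oscillator. As an illustration only (the parameter values below are common textbook choices for the chaotic regime, not the ones used in the paper), a minimal sketch integrating it with a fourth-order Runge-Kutta step:

```python
import numpy as np

def duffing_rhs(t, state, delta=0.3, alpha=-1.0, beta=1.0, gamma=0.5, omega=1.2):
    """Forced Duffing oscillator: x'' + delta*x' + alpha*x + beta*x**3 = gamma*cos(omega*t)."""
    x, v = state
    return np.array([v, -delta * v - alpha * x - beta * x**3 + gamma * np.cos(omega * t)])

def rk4_step(f, state, t, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(t, state)
    k2 = f(t + dt / 2, state + dt / 2 * k1)
    k3 = f(t + dt / 2, state + dt / 2 * k2)
    k4 = f(t + dt, state + dt * k3)
    return state + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def simulate(x0=0.1, v0=0.0, dt=0.01, n_steps=10000):
    """Integrate the Duffing system; returns an (n_steps, 2) array of (x, v)."""
    traj = np.empty((n_steps, 2))
    state = np.array([x0, v0])
    for i in range(n_steps):
        traj[i] = state
        state = rk4_step(duffing_rhs, state, i * dt, dt)
    return traj

traj = simulate()
```

    Trajectories of this kind, with a slowly drifting forcing term, would provide the non-stationary classification datasets the abstract refers to.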

    Synaptic potentiation facilitates memory-like attractor dynamics in cultured in vitro hippocampal networks

    Collective rhythmic dynamics of neurons is vital for cognitive functions such as memory formation, but how neurons self-organize to produce such activity is not well understood. Attractor-based models have been successfully implemented as a theoretical framework for memory storage in networks of neurons. Activity-dependent modification of synaptic transmission is thought to be the physiological basis of learning and memory. The goal of this study is to demonstrate that a pharmacological perturbation known to increase synaptic strength, applied to in vitro networks of hippocampal neurons, produces dynamics that follow the postulates of attractor models. We use a grid of extracellular electrodes to study changes in network activity after this perturbation and show that there is a persistent increase in overall spiking and bursting activity after treatment. This increase in activity appears to recruit more "errant" spikes into bursts. Lastly, phase plots indicate a conserved activity pattern, suggesting that the network is operating in a stable dynamical state.

    Dynamical principles in neuroscience

    Dynamical modeling of neural systems and brain functions has a history of success over the last half century. This includes, for example, the explanation and prediction of some features of neural rhythmic behaviors. Many interesting dynamical models of learning and memory based on physiological experiments have been suggested over the last two decades. Dynamical models even of consciousness now exist. Usually these models and results are based on traditional approaches and paradigms of nonlinear dynamics including dynamical chaos. Neural systems are, however, an unusual subject for nonlinear dynamics for several reasons: (i) Even the simplest neural network, with only a few neurons and synaptic connections, has an enormous number of variables and control parameters. These make neural systems adaptive and flexible, and are critical to their biological function. (ii) In contrast to traditional physical systems described by well-known basic principles, first principles governing the dynamics of neural systems are unknown. (iii) Many different neural systems exhibit similar dynamics despite having different architectures and different levels of complexity. (iv) The network architecture and connection strengths are usually not known in detail and therefore the dynamical analysis must, in some sense, be probabilistic. (v) Since nervous systems are able to organize behavior based on sensory inputs, the dynamical modeling of these systems has to explain the transformation of temporal information into combinatorial or combinatorial-temporal codes, and vice versa, for memory and recognition. In this review these problems are discussed in the context of addressing the stimulating questions: What can neuroscience learn from nonlinear dynamics, and what can nonlinear dynamics learn from neuroscience? This work was supported by NSF Grant No. NSF/EIA-0130708 and Grant No. PHY 0414174; NIH Grant No. 1 R01 NS50945 and Grant No. NS40110; MEC BFI2003-07276; and Fundación BBVA.

    Locally embedded presages of global network bursts

    Spontaneous, synchronous bursting of neural populations is a widely observed phenomenon in nervous networks and is considered important for both function and dysfunction of the brain. However, how global synchrony across a large number of neurons emerges from an initially non-bursting network state is not fully understood. In this study, we develop a new state-space reconstruction method combined with high-resolution recordings of cultured neurons. This method extracts deterministic signatures of upcoming global bursts in the "local" dynamics of individual neurons during non-bursting periods. We find that local information within a single-cell time series can match or even outperform the global mean-field activity for predicting future global bursts. Moreover, the inter-cell variability in burst predictability is found to reflect the network structure realized in the non-bursting periods. These findings demonstrate deterministic mechanisms underlying the locally concentrated early warnings of the global state transition in self-organized networks.
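    State-space reconstruction from a single-cell time series is typically built on delay embedding; a minimal sketch, where the embedding dimension, delay, and toy signal are illustrative assumptions rather than the paper's settings:

```python
import numpy as np

def delay_embed(x, dim=3, tau=5):
    """Takens delay embedding: map a scalar series x(t) to vectors
    [x(t), x(t + tau), ..., x(t + (dim-1)*tau)], one row per time point."""
    n = len(x) - (dim - 1) * tau
    if n <= 0:
        raise ValueError("series too short for this (dim, tau)")
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# toy example: embed a noisy sine wave in 3 dimensions
t = np.linspace(0, 20 * np.pi, 2000)
x = np.sin(t) + 0.05 * np.random.default_rng(0).normal(size=t.size)
emb = delay_embed(x, dim=3, tau=25)
```

    In the reconstructed space, nearest-neighbor prediction over such embedded vectors is one common way to quantify the deterministic signatures the abstract describes.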

    Reconstruction of Underlying Nonlinear Deterministic Dynamics Embedded in Noisy Spike Trains

    An experimentally recorded time series formed by the exact times of occurrence of neuronal spikes (a spike train) is likely to be affected by observational noise that provokes events mistakenly confused with neuronal discharges, as well as missed detections of genuine neuronal discharges. The points of the spike train may also suffer a slight jitter in time due to stochastic processes in synaptic transmission and to delays in the detecting devices. This study presents a procedure for filtering out the embedded noise (denoising the spike trains), based on the hypothesis that recurrent temporal patterns of spikes are likely to represent the robust expression of a dynamic process associated with the information carried by the spike train. The rationale of this approach is tested on simulated spike trains generated by several nonlinear deterministic dynamical systems with embedded observational noise. Applying the pattern grouping algorithm (PGA) to the noisy time series allows us to extract a set of points that form the reconstructed time series. Three new indices are defined to assess the performance of the denoising procedure. The results show that this procedure can indeed retrieve the most relevant temporal features of the original dynamics. Moreover, we observe that additional spurious events degrade performance to a larger extent than the omission of original points. Thus, a strict criterion for spike detection under experimental conditions, which reduces the number of spurious spikes, may make it possible to apply PGA to detect endogenous deterministic dynamics in spike trains otherwise masked by observational noise.
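    The three observational-noise sources named above (spurious events, missed detections, temporal jitter) can be sketched as a toy corruption model; all rates and magnitudes below are illustrative assumptions, not values from the study:

```python
import numpy as np

def corrupt_spike_train(spikes, t_max, p_miss=0.1, rate_spurious=0.5,
                        jitter_sd=0.002, seed=0):
    """Apply the three noise sources to a spike train (times in seconds):
    missed detections, Gaussian temporal jitter, and spurious Poisson events."""
    rng = np.random.default_rng(seed)
    kept = spikes[rng.random(spikes.size) > p_miss]          # missed detections
    jittered = kept + rng.normal(0.0, jitter_sd, kept.size)  # synaptic/device jitter
    n_spurious = rng.poisson(rate_spurious * t_max)          # false events
    spurious = rng.uniform(0.0, t_max, n_spurious)
    noisy = np.sort(np.concatenate([jittered, spurious]))
    return np.clip(noisy, 0.0, t_max)

true_spikes = np.arange(0.1, 10.0, 0.25)   # regular toy train over 10 s
noisy = corrupt_spike_train(true_spikes, t_max=10.0)
```

    Simulated corruptions of this kind are what make it possible to score a denoiser against a known ground-truth train, as the three performance indices in the abstract require.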

    Fractals in the Nervous System: Conceptual Implications for Theoretical Neuroscience

    This essay is presented with two principal objectives in mind: first, to document the prevalence of fractals at all levels of the nervous system, giving credence to the notion of their functional relevance; and second, to draw attention to the as yet unresolved issues of the detailed relationships among power-law scaling, self-similarity, and self-organized criticality. As regards criticality, I will document that it has become a pivotal reference point in neurodynamics. Furthermore, I will emphasize the not yet fully appreciated significance of allometric control processes. For dynamic fractals, I will assemble reasons for attributing to them the capacity to adapt task execution to contextual changes across a range of scales. The final section consists of general reflections on the implications of the reviewed data and identifies what appear to be issues of fundamental importance for future research in the rapidly evolving topic of this review.
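    Power-law scaling of the kind discussed above is usually quantified by fitting an exponent to tail data; a minimal sketch using the standard continuous maximum-likelihood (Hill-type) estimator, with illustrative parameters:

```python
import numpy as np

def powerlaw_mle(x, xmin):
    """Maximum-likelihood estimate of alpha for a continuous power law
    p(x) ~ x**(-alpha), valid for x >= xmin."""
    x = np.asarray(x, dtype=float)
    x = x[x >= xmin]
    return 1.0 + x.size / np.sum(np.log(x / xmin))

# draw samples from a power law with alpha = 2.5 by inverse-transform sampling
rng = np.random.default_rng(0)
alpha_true, xmin = 2.5, 1.0
u = rng.random(100_000)
samples = xmin * (1.0 - u) ** (-1.0 / (alpha_true - 1.0))
alpha_hat = powerlaw_mle(samples, xmin)
```

    A least-squares fit to a log-log histogram is a common but biased alternative; the likelihood estimator above is generally preferred for tail exponents.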

    Modular deconstruction reveals the dynamical and physical building blocks of a locomotion motor program

    The neural substrates of motor programs are only well understood for small, dedicated circuits. Here we investigate how a motor program is constructed within a large network. We imaged populations of neurons in the Aplysia pedal ganglion during execution of a locomotion motor program. We found that the program was built from a very small number of dynamical building blocks, including both neural ensembles and low-dimensional rotational dynamics. These map onto physically discrete regions of the ganglion, so that the motor program has a corresponding modular organization in both dynamical and physical space. Using this dynamic map, we identify the population potentially implementing the rhythmic pattern generator and find that its activity physically traces a looped trajectory, recapitulating its low-dimensional rotational dynamics. Our results suggest that, even in simple invertebrates, neural motor programs are implemented by large, distributed networks containing multiple dynamical systems.
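    Low-dimensional rotational dynamics of the kind described can be made visible by projecting population activity onto its leading principal components; a toy sketch with synthetic phase-shifted oscillators (not the Aplysia data), where all sizes and noise levels are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_time = 50, 500
t = np.linspace(0, 4 * np.pi, n_time)
phases = rng.uniform(0, 2 * np.pi, n_neurons)

# each neuron: a phase-shifted oscillation plus noise -> population-level rotation
X = np.sin(t[:, None] + phases[None, :]) + 0.1 * rng.normal(size=(n_time, n_neurons))

# PCA via SVD of the mean-centered activity matrix (time x neurons)
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = S**2 / np.sum(S**2)   # fraction of variance per component
pcs = Xc @ Vt[:2].T               # projection onto the first two components
```

    Because phase-shifted sinusoids span a two-dimensional subspace, the first two components capture nearly all the variance, and plotting `pcs[:, 0]` against `pcs[:, 1]` traces the looped trajectory analogous to the one the abstract reports.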

    Storage of phase-coded patterns via STDP in fully-connected and sparse network: a study of the network capacity

    We study the storage and retrieval of phase-coded patterns as stable dynamical attractors in recurrent neural networks, for both an analog model and an integrate-and-fire spiking model. The synaptic strength is determined by a learning rule based on spike-timing-dependent plasticity, with an asymmetric time window depending on the relative timing between pre- and post-synaptic activity. We store multiple patterns and study the network capacity. For the analog model, we find that the capacity scales linearly with the network size, and that both the capacity and the oscillation frequency of the retrieval state depend on the asymmetry of the learning time window. In addition to fully-connected networks, we study sparse networks, where each neuron is connected only to a small number z << N of other neurons. Connections can be short range, between neighboring neurons placed on a regular lattice, or long range, between randomly chosen pairs of neurons. We find that a small fraction of long-range connections is able to amplify the capacity of the network. This implies that a small-world topology is optimal, as a compromise between the cost of long-range connections and the capacity increase. The crucial result, the storage and retrieval of multiple phase-coded patterns, is also observed in the spiking integrate-and-fire model. The capacity of the fully-connected spiking network is investigated, together with the relation between the oscillation frequency of the retrieval state and the window asymmetry.
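    An asymmetric STDP time window of the kind described is commonly modeled with two exponential branches; a minimal sketch, where the amplitudes and time constants are illustrative assumptions rather than the paper's values:

```python
import numpy as np

def stdp_window(dt, a_plus=1.0, a_minus=1.0, tau_plus=20.0, tau_minus=20.0):
    """Asymmetric STDP window (times in ms): potentiation when the post-synaptic
    spike follows the pre-synaptic one (dt = t_post - t_pre > 0), depression
    otherwise. Unequal tau_plus and tau_minus give an asymmetric window."""
    dt = np.asarray(dt, dtype=float)
    return np.where(dt > 0,
                    a_plus * np.exp(-dt / tau_plus),
                    -a_minus * np.exp(dt / tau_minus))
```

    Summing this window over all pre/post spike-time differences of a stored phase-coded pattern yields the synaptic weight matrix whose attractors the abstract analyzes.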