
    Recurrent kernel machines : computing with infinite echo state networks

    Echo state networks (ESNs) are large, random recurrent neural networks with a single trained linear readout layer. Despite the untrained nature of the recurrent weights, they are capable of performing universal computations on temporal input data, which makes them interesting for both theoretical research and practical applications. The key to their success lies in the fact that the network computes a broad set of nonlinear, spatiotemporal mappings of the input data, on which linear regression or classification can easily be performed. One could consider the reservoir as a spatiotemporal kernel, in which the mapping to a high-dimensional space is computed explicitly. In this letter, we build on this idea and extend the concept of ESNs to infinite-sized recurrent neural networks, which can be considered recursive kernels that can subsequently be used to create recursive support vector machines. We present the theoretical framework, provide several practical examples of recursive kernels, and apply them to typical temporal tasks.
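    As an illustration of the finite-size setting this work generalises, the sketch below builds a small ESN with untrained random weights and a ridge-regression readout; the reservoir size, weight scaling, ridge penalty, and toy delay task are assumptions for the example, not values from the letter.

```python
import numpy as np

# Minimal echo state network sketch (illustrative assumptions throughout:
# reservoir size, weight scaling, ridge penalty and the toy delay task).
rng = np.random.default_rng(0)
n_in, n_res = 1, 200

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))        # untrained input weights
W = rng.uniform(-0.5, 0.5, (n_res, n_res))          # untrained recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))     # keep spectral radius below 1

def run_reservoir(u):
    """Collect reservoir states for an input sequence u of shape (T, n_in)."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ u_t + W @ x)             # nonlinear spatiotemporal mapping
        states.append(x.copy())
    return np.array(states)

# Toy task: reproduce the input delayed by one step.
T = 1000
u = rng.uniform(-1, 1, (T, n_in))
y = np.roll(u, 1, axis=0)

X = run_reservoir(u)                                # explicit high-dimensional mapping
ridge = 1e-6                                        # only the linear readout is trained
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
y_hat = X @ W_out
```

    In the infinite-width limit studied in the letter, this explicit mapping is replaced by a recursive kernel, which can then be used to build recursive support vector machines.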

    Graded, Dynamically Routable Information Processing with Synfire-Gated Synfire Chains

    Coherent neural spiking and local field potentials are believed to be signatures of the binding and transfer of information in the brain. Coherent activity has now been measured experimentally in many regions of mammalian cortex. Synfire chains are one of the main theoretical constructs that have been invoked to describe coherent spiking phenomena. However, for some time, it has been known that synchronous activity in feedforward networks asymptotically either approaches an attractor with fixed waveform and amplitude, or fails to propagate. This has limited their ability to explain graded neuronal responses. Recently, we have shown that pulse-gated synfire chains are capable of propagating graded information coded in mean population current or firing rate amplitudes. In particular, we showed that it is possible to use one synfire chain to provide gating pulses and a second, pulse-gated synfire chain to propagate graded information. We called these circuits synfire-gated synfire chains (SGSCs). Here, we present SGSCs in which graded information can rapidly cascade through a neural circuit, and show a correspondence between this type of transfer and a mean-field model in which gating pulses overlap in time. We show that SGSCs are robust in the presence of variability in population size, pulse timing and synaptic strength. Finally, we demonstrate the computational capabilities of SGSC-based information coding by implementing a self-contained, spike-based, modular neural circuit that is triggered by streaming input, reads it in, processes it, then makes a decision based on the processed information and shuts itself down.
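    To make the gating idea concrete, here is a minimal mean-field sketch in which a square gating pulse lets a graded current amplitude pass from one feedforward layer to the next; the time constants, pulse length, and discretisation are illustrative assumptions, not the paper's model equations.

```python
import numpy as np

# Toy mean-field sketch of pulse-gated transfer of a graded amplitude along a
# feedforward chain. All parameters are illustrative assumptions.
dt, tau = 0.1, 1.0        # integration step and current decay time constant
T_pulse = 10              # gating-pulse length in steps
n_pop, n_steps = 5, 100

I = np.zeros((n_pop, n_steps))   # mean population current in each layer
I[0, :T_pulse] = 0.7             # graded amplitude injected into the first layer

def gate(j, t):
    """Square gating pulse: layer j is gated on while layer j-1 holds the amplitude."""
    return 1.0 if (j - 1) * T_pulse <= t < j * T_pulse else 0.0

for t in range(1, n_steps):
    for j in range(1, n_pop):
        # Input from the previous layer passes only while layer j is gated on;
        # outside its window the current simply decays.
        drive = gate(j, t) * I[j - 1, t - 1]
        I[j, t] = I[j, t - 1] + (dt / tau) * (-I[j, t - 1] + drive)
```

    In the paper, the gating pulses are themselves generated by a second synfire chain and can overlap in time, which is what lets graded information cascade rapidly through the circuit; the toy above only shows the gating mechanism itself.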

    Evolution of Theories of Mind

    This paper studies the evolution of people's models of how other people think -- their theories of mind. First, this is formalized within the level-k model, which postulates a hierarchy of types such that type k plays a k-times iterated best response to the uniform distribution. It is found that, under plausible conditions, lower types co-exist with higher types. The results are extended to a model of learning, in which type k plays a k-times iterated best response to the average of past play. The results are also extended to the cognitive hierarchy model, and to the introduction of a type that plays a Nash equilibrium.
    Keywords: Theory of Mind; Evolution; Learning; Level-k; Fictitious Play; Cognitive Hierarchy
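    For readers unfamiliar with the level-k construction, the sketch below evaluates k-times iterated best responses in a p-beauty contest; the game and its parameters are a standard illustration assumed here, not taken from the paper.

```python
# Level-k reasoning in a p-beauty contest (the guess closest to p times the
# average wins). The game and parameters are illustrative assumptions.
def level_k_guess(k, p=2/3, level0_mean=50.0):
    """Type k plays a k-times iterated best response, starting from level-0
    play drawn uniformly on [0, 100] (mean 50)."""
    guess = level0_mean
    for _ in range(k):
        guess = p * guess   # best response to the belief that others guess `guess`
    return guess

print([round(level_k_guess(k), 2) for k in range(5)])
# -> [50.0, 33.33, 22.22, 14.81, 9.88]; iterating forever reaches the Nash guess of 0
```

    The iteration is anchored on uniform level-0 play, matching the definition quoted in the abstract; the paper then asks which of these types survive under evolutionary and learning dynamics.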

    Correlation-based model of artificially induced plasticity in motor cortex by a bidirectional brain-computer interface

    Experiments show that spike-triggered stimulation performed with bidirectional brain-computer interfaces (BBCIs) can artificially strengthen connections between separate neural sites in motor cortex (MC). What are the neuronal mechanisms responsible for these changes, and how does targeted stimulation by a BBCI shape population-level synaptic connectivity? The present work describes a recurrent neural network model with probabilistic spiking mechanisms and plastic synapses capable of capturing both the neural and synaptic activity statistics relevant to BBCI conditioning protocols. When spikes from a neuron recorded at one MC site trigger stimuli at a second target site after a fixed delay, the connections between sites are strengthened for spike-stimulus delays consistent with experimentally derived spike-timing-dependent plasticity (STDP) rules. However, the relationship between STDP mechanisms at the level of networks and their modification with neural implants remains poorly understood. Using our model, we successfully reproduce key experimental results and support them with analytical derivations along with novel experimental data. We then derive optimal operational regimes for BBCIs, and formulate predictions concerning the efficacy of spike-triggered stimulation in different regimes of cortical activity.
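    The conditioning logic hinges on where the fixed spike-stimulus delay falls on an STDP window. The sketch below evaluates a generic additive STDP rule at a few delays; the amplitudes and time constants are placeholder assumptions, not the values fitted in the paper.

```python
import numpy as np

# Generic additive STDP window applied to a spike-stimulus delay, in the
# spirit of the conditioning protocol described above. Placeholder parameters.
A_plus, A_minus = 0.01, 0.012     # potentiation / depression amplitudes (assumed)
tau_plus, tau_minus = 20.0, 20.0  # STDP time constants in ms (assumed)

def stdp_dw(delta_t):
    """Weight change for a pre-before-post delay delta_t (ms, positive = pre leads)."""
    if delta_t >= 0:
        return A_plus * np.exp(-delta_t / tau_plus)     # potentiation
    return -A_minus * np.exp(delta_t / tau_minus)       # depression

# Spike-triggered stimulation with a fixed delay d: each recorded spike at the
# trigger site is followed d ms later by a stimulus-evoked spike at the target,
# so the trigger-to-target connection sees mostly pre-before-post pairings.
for d in (5.0, 10.0, 25.0, 50.0):
    print(f"delay {d:>4.0f} ms -> dW per pairing {stdp_dw(d):+.4f}")
```

    Short delays therefore produce the largest per-pairing potentiation under such a rule, which is consistent with the abstract's statement that connections strengthen for delays compatible with experimentally derived STDP windows.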

    Can we identify non-stationary dynamics of trial-to-trial variability?

    Identifying the sources of apparent variability in non-stationary scenarios is a fundamental problem in many biological data analysis settings. For instance, neurophysiological responses to the same task often vary from one repetition of the same experiment (trial) to the next. The origin and functional role of this observed variability is one of the fundamental questions in neuroscience. The nature of such trial-to-trial dynamics, however, remains largely elusive to current data analysis approaches. A range of strategies have been proposed in modalities such as electroencephalography, but gaining a fundamental insight into latent sources of trial-to-trial variability in neural recordings is still a major challenge. In this paper, we present a proof-of-concept study of the analysis of trial-to-trial variability dynamics founded on non-autonomous dynamical systems. At this initial stage, we evaluate the capacity of a simple statistic, the trajectory coherence, based on the behaviour of trajectories in classification settings, to identify trial-to-trial dynamics. First, we derive the conditions leading to observable changes in datasets generated by a compact dynamical system (the Duffing equation), a canonical system that plays the role of a ubiquitous model of non-stationary supervised classification problems. Second, we estimate the coherence of class trajectories in an empirically reconstructed space of system states. We show how this analysis can discern variations attributable to non-autonomous deterministic processes from stochastic fluctuations. The analyses are benchmarked using simulated data and two different real datasets that have been shown to exhibit attractor dynamics. As an illustrative example, we focus on the analysis of rat frontal cortex ensemble dynamics during a decision-making task. Results suggest that, in line with recent hypotheses, it is the deterministic trend, rather than internal noise, that most likely underlies the observed trial-to-trial variability. Thus, the empirical tool developed within this study potentially allows us to infer the source of variability in in-vivo neural recordings.
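    As a pointer to the kind of benchmark generator used here, the snippet below integrates a forced (hence non-autonomous) Duffing oscillator with scipy; the parameter values and initial condition are illustrative assumptions, not the settings used in the study.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Forced Duffing oscillator: x'' + delta*x' + alpha*x + beta*x**3 = gamma*cos(omega*t).
# Parameter values and initial condition are illustrative assumptions.
delta, alpha, beta = 0.3, -1.0, 1.0   # damping, linear and cubic stiffness
gamma, omega = 0.37, 1.2              # forcing amplitude and frequency

def duffing(t, state):
    x, v = state
    dxdt = v
    dvdt = -delta * v - alpha * x - beta * x**3 + gamma * np.cos(omega * t)
    return [dxdt, dvdt]

t_eval = np.linspace(0.0, 200.0, 4000)
sol = solve_ivp(duffing, (t_eval[0], t_eval[-1]), [0.1, 0.0], t_eval=t_eval)
x, v = sol.y   # simulated trajectories of the non-autonomous benchmark system
```

    Class labels and the trajectory-coherence statistic would then be computed on trajectories reconstructed from such simulated (or recorded) time series.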