347 research outputs found
Complexity without chaos: Plasticity within random recurrent networks generates robust timing and motor control
It is widely accepted that the complex dynamics characteristic of recurrent
neural circuits contributes in a fundamental manner to brain function. Progress
has been slow in understanding and exploiting the computational power of
recurrent dynamics for two main reasons: nonlinear recurrent networks often
exhibit chaotic behavior and most known learning rules do not work in robust
fashion in recurrent networks. Here we address both of these problems by
demonstrating how random recurrent networks (RRNs) that initially exhibit
chaotic dynamics can be tuned through a supervised learning rule to generate
locally stable neural patterns of activity that are both complex and robust to
noise. The outcome is a novel neural network regime that exhibits both
transiently stable and chaotic trajectories. We further show that the recurrent
learning rule dramatically increases the ability of RRNs to generate complex
spatiotemporal motor patterns, and accounts for recent experimental data
showing a decrease in neural variability in response to stimulus onset.
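The abstract describes the approach only at a high level. As a rough illustration of this family of methods (Python/NumPy, not the authors' code), the sketch below records the "innate" trajectory of a random rate network and then applies a FORCE-style recursive-least-squares update to the recurrent weights so that perturbed runs are pulled back toward that trajectory; the network size, gain g, number of training epochs, and the shared inverse-correlation matrix P are all illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
N, g, dt, tau, T = 200, 1.5, 1e-3, 1e-2, 800
W = g * rng.standard_normal((N, N)) / np.sqrt(N)   # random recurrent weights
x0 = rng.standard_normal(N)

def run(W, x0, steps):
    # simulate the rate network and record unit activities
    x, R = x0.copy(), []
    for _ in range(steps):
        R.append(np.tanh(x))
        x += dt / tau * (-x + W @ R[-1])
    return np.asarray(R)

target = run(W, x0, T)          # the network's "innate" trajectory

P = np.eye(N)                   # shared inverse-correlation estimate (RLS)
for epoch in range(10):         # train from slightly perturbed initial states
    x = x0 + 0.01 * rng.standard_normal(N)
    for t in range(T):
        r = np.tanh(x)
        e = r - target[t]       # deviation from the innate trajectory
        Pr = P @ r
        k = Pr / (1.0 + r @ Pr)
        P -= np.outer(k, Pr)
        W -= np.outer(e, k)     # pull the recurrent dynamics back to the target
        x += dt / tau * (-x + W @ r)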
Distortions of Subjective Time Perception Within and Across Senses
Background: The ability to estimate the passage of time is of fundamental importance for perceptual and cognitive processes. One experience of time is the perception of duration, which is not isomorphic to physical duration and can be distorted by a number of factors. Yet, the critical features generating these perceptual shifts in subjective duration are not understood.
Methodology/Findings: We used prospective duration judgments within and across sensory modalities to examine the effect of stimulus predictability and feature change on the perception of duration. First, we found robust distortions of perceived duration in auditory, visual and auditory-visual presentations despite the predictability of the feature changes in the stimuli. For example, a looming disc embedded in a series of steady discs led to time dilation, whereas a steady disc embedded in a series of looming discs led to time compression. Second, we addressed whether visual (auditory) inputs could alter the perception of duration of auditory (visual) inputs. When participants were presented with incongruent audio-visual stimuli, the perceived duration of auditory events could be shortened or lengthened by the presence of conflicting visual information; however, the perceived duration of visual events was seldom distorted by the presence of auditory information, and visual events were never perceived as shorter than their actual durations.
Conclusions/Significance: These results support the existence of multisensory interactions in the perception of duration and, importantly, suggest that vision can modify auditory temporal perception in a pure timing task. Because distortions in subjective duration cannot be accounted for by the unpredictability of an auditory, visual or auditory-visual event, we propose that it is the intrinsic features of the stimulus that critically affect subjective time distortions.
Attentive Learning of Sequential Handwriting Movements: A Neural Network Model
Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-95-1-0409, N00014-92-J-1309); National Science Foundation (IRI-97-20333); National Institutes of Health (I-R29-DC02952-01)
Challenges in Complex Systems Science
FuturICT foundations are social science, complex systems science, and ICT.
The main concerns and challenges in the science of complex systems in the
context of FuturICT are laid out in this paper with special emphasis on the
Complex Systems route to Social Sciences. These include complex systems having:
many heterogeneous interacting parts; multiple scales; complicated transition
laws; unexpected or unpredicted emergence; sensitive dependence on initial
conditions; path-dependent dynamics; networked hierarchical connectivities;
interaction of autonomous agents; self-organisation; non-equilibrium dynamics;
combinatorial explosion; adaptivity to changing environments; co-evolving
subsystems; ill-defined boundaries; and multilevel dynamics. In this context,
science is seen as the process of abstracting the dynamics of systems from
data. This presents many challenges including: data gathering by large-scale
experiment, participatory sensing and social computation, managing huge
distributed dynamic and heterogeneous databases; moving from data to dynamical
models, going beyond correlations to cause-effect relationships, understanding
the relationship between simple and comprehensive models with appropriate
choices of variables, ensemble modeling and data assimilation, modeling systems
of systems of systems with many levels between micro and macro; and formulating
new approaches to prediction, forecasting, and risk, especially in systems that
can reflect on and change their behaviour in response to predictions, and
systems whose apparently predictable behaviour is disrupted by apparently
unpredictable rare or extreme events. These challenges are part of the FuturICT
agenda.
Recognizing recurrent neural networks (rRNN): Bayesian inference for recurrent neural networks
Recurrent neural networks (RNNs) are widely used in computational
neuroscience and machine learning applications. In an RNN, each neuron computes
its output as a nonlinear function of its integrated input. While the
importance of RNNs, especially as models of brain processing, is undisputed, it
is also widely acknowledged that the computations in standard RNN models may be
an over-simplification of what real neuronal networks compute. Here, we suggest
that the RNN approach may be made both neurobiologically more plausible and
computationally more powerful by its fusion with Bayesian inference techniques
for nonlinear dynamical systems. In this scheme, we use an RNN as a generative
model of dynamic input caused by the environment, e.g. of speech or kinematics.
Given this generative RNN model, we derive Bayesian update equations that can
decode its output. Critically, these updates define a 'recognizing RNN' (rRNN),
in which neurons compute and exchange prediction and prediction error messages.
The rRNN has several desirable features that a conventional RNN does not have,
for example, fast decoding of dynamic stimuli and robustness to initial
conditions and noise. Furthermore, it implements a predictive coding scheme for
dynamic inputs. We suggest that the Bayesian inversion of recurrent neural
networks may be useful both as a model of brain function and as a machine
learning tool. We illustrate the use of the rRNN by an application to the
online decoding (i.e., recognition) of human kinematics.
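The abstract does not reproduce the update equations. As a minimal sketch of the prediction/prediction-error message passing it describes (Python/NumPy), the code below uses a tanh RNN as a generative model of noisy observations and corrects a propagated prediction with a gain-weighted prediction error; the fixed gain K is a stand-in assumption for the derived Bayesian gain, and all dimensions and noise levels are illustrative.

import numpy as np

rng = np.random.default_rng(1)
N, M, T = 50, 3, 300
W = 1.2 * rng.standard_normal((N, N)) / np.sqrt(N)   # generative RNN weights
C = rng.standard_normal((M, N)) / np.sqrt(N)         # observation map

# simulate the environment with the generative RNN
x, Y = rng.standard_normal(N), []
for _ in range(T):
    x = np.tanh(W @ x)
    Y.append(C @ x + 0.05 * rng.standard_normal(M))  # noisy observations

# recognition: propagate a prediction, correct it with the prediction error
K = 0.3 * C.T                   # fixed gain, standing in for the derived Bayesian gain
xh, sq_err = np.zeros(N), []
for y in Y:
    x_pred = np.tanh(W @ xh)    # prediction message
    eps = y - C @ x_pred        # prediction-error message
    xh = x_pred + K @ eps       # corrected state estimate
    sq_err.append(eps @ eps)
print("mean squared prediction error:", np.mean(sq_err))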
Searching for plasticity in dissociated cortical cultures on multi-electrode arrays
We attempted to induce functional plasticity in dense cultures of cortical cells using stimulation through extracellular electrodes embedded in the culture dish substrate (multi-electrode arrays, or MEAs). We looked for plasticity expressed in changes in spontaneous burst patterns, and in array-wide response patterns to electrical stimuli, following several induction protocols related to those used in the literature, as well as some novel ones. Experiments were performed with spontaneous culture-wide bursting suppressed either by distributed electrical stimulation or by elevated extracellular magnesium concentrations, as well as with spontaneous bursting left untreated. Changes concomitant with induction were no larger in magnitude than changes that occurred spontaneously, except in one novel protocol in which spontaneous bursts were quieted using distributed electrical stimulation.
How spiking neurons give rise to a temporal-feature map
A temporal-feature map is a topographic neuronal representation of temporal attributes of phenomena or objects that occur in the outside world. We explain the evolution of such maps by means of a spike-based Hebbian learning rule in conjunction with a presynaptically unspecific contribution: if a synapse changes, then all other synapses connected to the same axon change by a small fraction as well. The learning equation is solved for the case of an array of Poisson neurons. We discuss the evolution of a temporal-feature map and the synchronization of the single cells’ synaptic structures as a function of the strength of presynaptic unspecific learning. We also give an upper bound for the magnitude of the presynaptic interaction by estimating its impact on the noise level of synaptic growth. Finally, we compare the results with those obtained from a learning equation for nonlinear neurons and show that synaptic structure formation may profit from the nonlinearity.
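As a rate-based stand-in for the spike-based rule described above (the Poisson-neuron derivation is not reproduced here), the sketch below (Python/NumPy) adds to each synapse a fraction eps of the specific changes occurring at the other synapses of the same presynaptic axon; the learning rate eta and the fraction eps are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(2)
n_pre, n_post = 20, 10
eta, eps = 1e-3, 0.05               # learning rate; "small fraction" shared per axon
W = rng.random((n_post, n_pre))     # synapses: rows = postsynaptic, cols = presynaptic axon

def hebbian_step(W, pre_rate, post_rate):
    dW = eta * np.outer(post_rate, pre_rate)   # synapse-specific Hebbian term
    col = dW.sum(axis=0, keepdims=True)        # total specific change on each axon
    return W + dW + eps * (col - dW)           # axon-mates share a fraction eps

pre, post = rng.random(n_pre), rng.random(n_post)
W = hebbian_step(W, pre, post)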
Activity in perceptual classification networks as a basis for human subjective time perception
Despite being a fundamental dimension of experience, how the human brain generates the perception of time remains unknown. Here, we provide a novel explanation for how human time perception might be accomplished, based on non-temporal perceptual classification processes. To demonstrate this proposal, we build an artificial neural system centred on a feed-forward image classification network, functionally similar to human visual processing. In this system, input videos of natural scenes drive changes in network activation, and the accumulation of salient changes in activation is used to estimate duration. Estimates produced by this system match human reports made about the same videos, replicating key qualitative biases, including the difference between scenes of walking around a busy city and sitting in a cafe or office. Our approach provides a working model of duration perception from stimulus to estimation and presents a new direction for examining the foundations of this central aspect of human experience.
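A minimal sketch of the accumulation idea (Python/NumPy), with surrogate random-walk activations standing in for a real classification network; the threshold dynamics and the calibration from accumulated events to seconds are illustrative assumptions, not the paper's fitted model.

import numpy as np

rng = np.random.default_rng(3)
T, D = 900, 256                            # ~30 s of video at 30 fps; feature dimension
acts = np.cumsum(0.05 * rng.standard_normal((T, D)), axis=0)  # surrogate activations

theta0, decay = 1.0, 0.995                 # salience threshold and its relaxation per frame
theta, accumulated = theta0, 0
for t in range(1, T):
    change = np.linalg.norm(acts[t] - acts[t - 1])
    if change > theta:                     # salient change in classification activity
        accumulated += 1
        theta = theta0                     # reset the criterion after each event
    else:
        theta *= decay                     # criterion relaxes between events
seconds = 0.05 * accumulated               # illustrative mapping from events to seconds
print(f"estimated duration: {seconds:.1f} s")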
Emergence of Connectivity Motifs in Networks of Model Neurons with Short- and Long-term Plastic Synapses
Recent evidence in rodent cerebral cortex and olfactory bulb suggests that the short-term dynamics of excitatory synaptic transmission is correlated with stereotypical connectivity motifs. It was observed that neurons with short-term facilitating synapses form predominantly reciprocal pairwise connections, while neurons with short-term depressing synapses form unidirectional pairwise connections. The cause of these structural differences in synaptic microcircuits is unknown. We propose that these connectivity motifs emerge from the interactions between short-term synaptic dynamics (SD) and long-term spike-timing dependent plasticity (STDP). While the impact of STDP on SD was shown in vitro, the mutual interactions between STDP and SD in large networks are still the subject of intense research. We formulate a computational model by combining SD and STDP, which faithfully captures short- and long-term dependence on both spike timing and frequency. As a proof of concept, we simulate recurrent networks of spiking neurons with random initial connection efficacies and with synapses that are either all short-term facilitating or all depressing. For identical background inputs, and as a direct consequence of internally generated activity, we find that networks with depressing synapses evolve unidirectional connectivity motifs, while networks with facilitating synapses evolve reciprocal connectivity motifs. This holds for heterogeneous networks including both facilitating and depressing synapses. Our study highlights the conditions under which SD-STDP interactions might underlie the correlation between facilitation and reciprocal connectivity motifs, as well as between depression and unidirectional motifs. We further suggest experiments for the validation of the proposed mechanism.
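As a sketch of the ingredients such a combined model needs (not the authors' formulation), the code below pairs Tsodyks-Markram-style short-term dynamics with a pair-based STDP rule, so that the effective efficacy at a presynaptic spike is the product of the long-term weight and the short-term release; all parameters are illustrative.

import numpy as np

def tm_step(u, x, dt, pre_spike, U=0.2, tau_f=0.6, tau_d=0.2):
    # Tsodyks-Markram short-term dynamics (facilitation u, resources x)
    u += dt * (U - u) / tau_f       # facilitation relaxes toward baseline U
    x += dt * (1.0 - x) / tau_d     # resources recover toward 1
    release = 0.0
    if pre_spike:
        u += U * (1.0 - u)          # each presynaptic spike facilitates
        release = u * x             # fraction of resources released
        x -= release                # released resources must recover
    return u, x, release

def stdp_step(w, dt_post_minus_pre, A_plus=0.01, A_minus=0.012, tau=0.02):
    # pair-based STDP on the long-term weight
    if dt_post_minus_pre > 0:       # pre precedes post: potentiate
        w += A_plus * np.exp(-dt_post_minus_pre / tau)
    else:                           # post precedes pre: depress
        w -= A_minus * np.exp(dt_post_minus_pre / tau)
    return float(np.clip(w, 0.0, 1.0))

# transmission is shaped jointly by SD (release) and STDP (weight w)
u, x, w = 0.2, 1.0, 0.5
u, x, release = tm_step(u, x, 1e-3, pre_spike=True)
w = stdp_step(w, dt_post_minus_pre=0.01)
print("effective efficacy:", w * release)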
Spatially distributed dendritic resonance selectively filters synaptic input
An important task performed by a neuron is the selection of relevant inputs from among the thousands of synapses impinging on its dendritic tree. Synaptic plasticity enables this by strengthening a subset of synapses that are, presumably, functionally relevant to the neuron. A different selection mechanism exploits the resonance of the dendritic membranes to preferentially filter synaptic inputs based on their temporal rates. A widely held view is that a neuron has one resonant frequency and thus can pass inputs at only one rate. Here we demonstrate through mathematical analyses and numerical simulations that dendritic resonance is inevitably a spatially distributed property: the resonance frequency varies along the dendrites, endowing neurons with a powerful spatiotemporal selection mechanism that is sensitive both to the dendritic location and to the temporal structure of incoming synaptic inputs.
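A minimal sketch (Python/NumPy) of how such a resonance arises and shifts with local membrane properties: a linearized membrane with an Ih-like resonant branch. The parameter values, and varying g_h as a proxy for position along the dendrite, are illustrative assumptions, not the paper's model.

import numpy as np

def impedance(f, g_L=50e-9, C=100e-12, g_h=30e-9, tau_h=50e-3):
    # linearized membrane: leak + capacitance + slow Ih-like resonant branch
    w = 2 * np.pi * f
    Y = g_L + 1j * w * C + g_h / (1.0 + 1j * w * tau_h)  # total admittance
    return 1.0 / np.abs(Y)                               # impedance magnitude

freqs = np.linspace(0.5, 40.0, 400)
for g_h in (10e-9, 30e-9, 60e-9):   # vary the resonant conductance along the dendrite
    Z = impedance(freqs, g_h=g_h)
    print(f"g_h = {g_h:.0e} S -> resonance near {freqs[np.argmax(Z)]:.1f} Hz")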