Lifelong Learning of Spatiotemporal Representations with Dual-Memory Recurrent Self-Organization
Artificial autonomous agents and robots interacting in complex environments are required to continually acquire and fine-tune knowledge over sustained periods of time. The ability to learn from continuous streams of information is referred to as lifelong learning and represents a long-standing challenge for neural network models due to catastrophic forgetting. Computational models of lifelong learning typically alleviate catastrophic forgetting in experimental scenarios with given datasets of static images and limited complexity, thereby differing significantly from the conditions artificial agents are exposed to. In more natural settings, sequential information may become progressively available over time and access to previous experience may be restricted. In this paper, we propose a dual-memory self-organizing architecture for lifelong learning scenarios. The architecture comprises two growing recurrent networks with the complementary tasks of learning object instances (episodic memory) and categories (semantic memory). Both growing networks can expand in response to novel sensory experience: the episodic memory learns fine-grained spatiotemporal representations of object instances in an unsupervised fashion, while the semantic memory uses task-relevant signals to regulate structural plasticity levels and develop more compact representations from episodic experience. For the consolidation of knowledge in the absence of external sensory input, the episodic memory periodically replays trajectories of neural reactivations. We evaluate the proposed model on the CORe50 benchmark dataset for continuous object recognition, showing that we significantly outperform current methods of lifelong learning in three different incremental learning scenarios.
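The growing networks described above expand when a new input is poorly matched by existing prototypes and otherwise adapt the best-matching prototype. A minimal, non-recurrent sketch of that grow-or-adapt rule (the activity function, threshold, and learning rate are illustrative assumptions; the temporal context, connectivity, and habituation mechanisms of the full architecture are omitted):

```python
import numpy as np

class GrowingMemory:
    """Sketch of a growing self-organizing memory: add a node when no
    existing prototype matches a new input well enough, otherwise move
    the best-matching prototype toward the input."""

    def __init__(self, dim, a_threshold=0.8, lr=0.1):
        self.prototypes = np.empty((0, dim))  # no nodes yet
        self.a_threshold = a_threshold        # minimum activity to avoid growth
        self.lr = lr                          # adaptation rate for the winner

    def update(self, x):
        x = np.asarray(x, dtype=float)
        if len(self.prototypes) == 0:
            self.prototypes = x[None, :].copy()
            return 0
        d = np.linalg.norm(self.prototypes - x, axis=1)
        b = int(np.argmin(d))                 # best-matching unit
        activity = np.exp(-d[b])              # winner activity in [0, 1]
        if activity < self.a_threshold:
            # novel input: grow a new node at the sample
            self.prototypes = np.vstack([self.prototypes, x])
            return len(self.prototypes) - 1
        # familiar input: adapt the winner toward the sample
        self.prototypes[b] += self.lr * (x - self.prototypes[b])
        return b

mem = GrowingMemory(dim=2)
mem.update([0.0, 0.0])       # first node
mem.update([5.0, 5.0])       # far from node 0, so the network grows
mem.update([0.05, 0.0])      # close to node 0, so it adapts instead
print(len(mem.prototypes))   # -> 2
```

The same grow-or-adapt loop, with the growth threshold modulated by task-relevant signals, corresponds to the semantic memory's regulated structural plasticity.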
Neural mechanisms of social learning in the female mouse
Social interactions are often powerful drivers of learning. In female mice, mating creates a long-lasting sensory memory for the pheromones of the stud male that alters neuroendocrine responses to his chemosignals for many weeks. The cellular and synaptic correlates of pheromonal learning, however, remain unclear. We examined local circuit changes in the accessory olfactory bulb (AOB) using targeted ex vivo recordings of mating-activated neurons tagged with a fluorescent reporter. Imprinting led to striking plasticity in the intrinsic membrane excitability of projection neurons (mitral cells, MCs) that dramatically curtailed their responsiveness, suggesting a novel cellular substrate for pheromonal learning. Plasticity was selectively expressed in the MC ensembles activated by the stud male, consistent with formation of memories for specific individuals. Finally, MC excitability gained atypical activity-dependence whose slow dynamics strongly attenuated firing on timescales of several minutes. This unusual form of AOB plasticity may act to filter sustained or repetitive sensory signals.
R21 DC013894 - NIDCD NIH HHS
Lack of Pattern Separation in Sensory Inputs to the Olfactory Bulb during Perceptual Learning.
Recent studies revealed changes in odor representations in the olfactory bulb during active olfactory learning (Chu et al., 2016; Yamada et al., 2017). Specifically, mitral cell ensemble responses to very similar odorant mixtures sparsened and became more distinguishable as mice learned to discriminate the odorants over days (Chu et al., 2016). In this study, we explored whether changes in the sensory inputs to the bulb underlie the observed changes in mitral cell responses. Using two-photon calcium imaging to monitor the odor responses of the olfactory sensory neuron (OSN) axon terminals in the glomeruli of the olfactory bulb during a discrimination task, we found that OSN inputs to the bulb are stable during discrimination learning. During one week of training to discriminate between very similar odorant mixtures in a Go/No-go task, OSN responses did not show significant sparsening, and the responses to the trained similar odorants did not diverge throughout training. These results suggest that the adaptive changes of mitral cell responses during perceptual learning are ensured by mechanisms downstream of OSN input, possibly in local circuits within the olfactory bulb.
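The divergence (or lack of it) between responses to two similar odorants can be quantified as the pairwise similarity of population response vectors; one common choice is the Pearson correlation across the imaged population. A minimal sketch on synthetic data (the glomerulus count and noise level are invented for illustration, not taken from the study):

```python
import numpy as np

def ensemble_similarity(resp_a, resp_b):
    """Pearson correlation between two population response vectors.
    High values mean the two odorants evoke overlapping activity
    patterns; divergence over training would appear as a drop."""
    return float(np.corrcoef(resp_a, resp_b)[0, 1])

rng = np.random.default_rng(0)
base = rng.random(50)                    # 50 glomeruli, hypothetical
mix_a = base + 0.05 * rng.random(50)     # two very similar mixtures
mix_b = base + 0.05 * rng.random(50)
r = ensemble_similarity(mix_a, mix_b)
print(r)  # close to 1 for two nearly identical activity patterns
```

Tracking this value across training days would distinguish stable inputs (similarity unchanged, as reported for OSN terminals) from pattern separation (similarity decreasing, as reported for mitral cells).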
Synaptic state matching: a dynamical architecture for predictive internal representation and feature perception
Here we consider the possibility that a fundamental function of sensory cortex is the generation of an internal simulation of the sensory environment in real time. A logical elaboration of this idea leads to a dynamical neural architecture that oscillates between two fundamental network states, one driven by external input, and the other by recurrent synaptic drive in the absence of sensory input. Synaptic strength is modified by a proposed synaptic state matching (SSM) process that ensures equivalence of spike statistics between the two network states. Remarkably, SSM, operating locally at individual synapses, generates accurate and stable network-level predictive internal representations, enabling pattern completion and unsupervised feature detection from noisy sensory input. SSM is a biologically plausible substrate for learning and memory because it brings together sequence learning, feature detection, synaptic homeostasis, and network oscillations under a single parsimonious computational framework. Beyond its utility as a potential model of cortical computation, artificial networks based on this principle have remarkable capacity for internalizing dynamical systems, making them useful in a variety of application domains including time-series prediction and machine intelligence.
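The core idea of matching statistics between an externally driven state and a free-running recurrent state can be sketched with a binary recurrent network: measure coactivations in each state and nudge every synapse to cancel the difference. This is only a loose, rate-style stand-in (the noise level, learning rate, and coactivation statistic are illustrative assumptions, and the mechanism shown is closer to Boltzmann-machine two-phase matching than to the paper's spike-based SSM process):

```python
import numpy as np

def ssm_epoch(W, x_ext, rng, lr=0.02, steps=100):
    """One matching epoch: record pairwise coactivation statistics under
    external drive, then in a free-running state driven only by the
    recurrent weights, and adjust each synapse to cancel the difference."""
    n = W.shape[0]
    driven_stats = np.outer(x_ext, x_ext)      # externally driven statistics
    s = rng.choice([-1.0, 1.0], size=n)        # free-running network state
    for _ in range(steps):                     # noisy asynchronous updates
        i = rng.integers(n)
        drive = W[i] @ s + 0.1 * rng.standard_normal()
        s[i] = 1.0 if drive > 0 else -1.0
    free_stats = np.outer(s, s)
    W += lr * (driven_stats - free_stats)      # match the two statistics
    np.fill_diagonal(W, 0.0)                   # no self-connections
    return W

rng = np.random.default_rng(0)
n = 40
pattern = rng.choice([-1.0, 1.0], size=n)      # one "sensory" pattern
W = np.zeros((n, n))
for _ in range(100):
    W = ssm_epoch(W, pattern, rng)
# After matching, the pattern's direction dominates the learned weights:
print(np.sum(W * np.outer(pattern, pattern)) > 0)  # -> True
```

Updates stop (driven minus free statistics vanish) exactly when the free-running state reproduces the driven state's statistics, which is how this kind of two-phase rule yields pattern completion from the internal dynamics alone.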
Seven properties of self-organization in the human brain
The principle of self-organization has acquired a fundamental significance in the newly emerging field of computational philosophy. Self-organizing systems have been described in various domains in science and philosophy, including physics, neuroscience, biology and medicine, ecology, and sociology. While system architectures and their general purposes may depend on domain-specific concepts and definitions, there are (at least) seven key properties of self-organization clearly identified in brain systems: 1) modular connectivity, 2) unsupervised learning, 3) adaptive ability, 4) functional resiliency, 5) functional plasticity, 6) from-local-to-global functional organization, and 7) dynamic system growth. These are defined here in the light of insights from neurobiology, cognitive neuroscience and Adaptive Resonance Theory (ART), and physics to show that self-organization achieves stability and functional plasticity while minimizing structural system complexity. A specific example informed by empirical research is discussed to illustrate how modularity, adaptive learning, and dynamic network growth enable stable yet plastic somatosensory representation for human grip force control. Implications for the design of “strong” artificial intelligence in robotics are brought forward.
Nonlinear Hebbian learning as a unifying principle in receptive field formation
The development of sensory receptive fields has been modeled in the past by a variety of models, including normative models such as sparse coding or independent component analysis and bottom-up models such as spike-timing-dependent plasticity or the Bienenstock-Cooper-Munro model of synaptic plasticity. Here we show that the above variety of approaches can all be unified into a single common principle, namely Nonlinear Hebbian Learning. When Nonlinear Hebbian Learning is applied to natural images, receptive field shapes are strongly constrained by the input statistics and preprocessing, but exhibit only modest variation across different choices of nonlinearities in neuron models or synaptic plasticity rules. Neither overcompleteness nor sparse network activity is necessary for the development of localized receptive fields. The analysis of alternative sensory modalities such as auditory models or V2 development leads to the same conclusions. In all examples, receptive fields can be predicted a priori by reformulating an abstract model as nonlinear Hebbian learning. Thus nonlinear Hebbian learning and natural statistics can account for many aspects of receptive field formation across models and sensory modalities.
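The unifying rule has the form Δw ∝ f(w·x)·x for some nonlinearity f. A minimal sketch on whitened toy data, with divisive weight normalization standing in for homeostasis (the cubic nonlinearity, learning rate, and two-dimensional Laplace/Gaussian input are illustrative choices, not the paper's simulations):

```python
import numpy as np

def nonlinear_hebbian(X, f=lambda y: y ** 3, lr=0.005, epochs=100, seed=0):
    """Nonlinear Hebbian rule: dw ∝ f(w·x) x, with the weight vector
    renormalized after each update. With f(y) = y^3 on whitened inputs,
    w rotates toward the most non-Gaussian (heavy-tailed) direction,
    the projection-pursuit view of receptive field formation."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in X:
            w += lr * f(w @ x) * x     # Hebbian update through f
            w /= np.linalg.norm(w)     # homeostatic normalization, |w| = 1
    return w

# Whitened 2-D toy input: axis 0 heavy-tailed (Laplace), axis 1 Gaussian.
rng = np.random.default_rng(1)
X = np.c_[rng.laplace(size=2000), rng.standard_normal(2000)]
X = (X - X.mean(axis=0)) / X.std(axis=0)
w = nonlinear_hebbian(X)
print(w)  # expected to concentrate on the heavy-tailed axis 0
```

Swapping in a different f (e.g. a rectified power) changes the learned direction only modestly, which is the abstract's point that the input statistics, not the choice of nonlinearity, dominate the outcome.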
Large-scale changes in cortical dynamics triggered by repetitive somatosensory electrical stimulation.
Background: Repetitive somatosensory electrical stimulation (SES) of forelimb peripheral nerves is a promising therapy; studies have shown that SES can improve motor function in stroke subjects with chronic deficits. However, little is known about how SES directly modulates neural dynamics. Past studies using SES have primarily used noninvasive methods in human subjects. Here we used electrophysiological recordings from the rodent primary motor cortex (M1) to assess how SES affects neural dynamics at the level of single neurons as well as at the level of mesoscale dynamics. Methods: We performed acute extracellular recordings in 7 intact adult Long Evans rats under ketamine-xylazine anesthesia while they received transcutaneous SES. We recorded single-unit spiking and local field potentials (LFP) in the M1 contralateral to the stimulated arm. We then compared neural firing rate, spike-field coherence (SFC), and power spectral density (PSD) before and after stimulation. Results: Following SES, the firing rate of a majority of neurons changed significantly from their respective baseline values. There was, however, a diversity of responses; some neurons increased while others decreased their firing rates. Interestingly, SFC, a measure of how a neuron's firing is coupled to mesoscale oscillatory dynamics, increased specifically in the δ-band, also known as the low-frequency band (0.3-4 Hz). This increase appeared to be driven by a change in the phase-locking of broad-spiking, putative pyramidal neurons. These changes in the low-frequency range occurred without a significant change in the overall PSD. Conclusions: Repetitive SES significantly and persistently altered the local cortical dynamics of M1 neurons, changing both firing rates and the SFC magnitude in the δ-band. Thus, SES altered neural firing and its coupling to ongoing mesoscale dynamics. Our study provides evidence that SES can directly modulate cortical dynamics.
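Spike-field coupling of the kind measured above can be estimated in several ways; one simple variant is the phase-locking value of spike times against the δ-band-filtered LFP. A sketch on synthetic data (the FFT-based analytic signal, sampling rate, and spike trains below are illustrative constructions, not the study's SFC estimator):

```python
import numpy as np

def delta_plv(lfp, spike_idx, fs, band=(0.3, 4.0)):
    """Phase locking of spikes to the delta-band LFP: keep only the
    positive-frequency Fourier components inside the band (yielding a
    band-limited analytic signal), read off the instantaneous phase,
    and average the unit phase vectors at spike times.
    1 = perfect locking, 0 = no relationship."""
    lfp = np.asarray(lfp, dtype=float)
    X = np.fft.fft(lfp)
    freqs = np.fft.fftfreq(lfp.size, 1.0 / fs)
    keep = (freqs >= band[0]) & (freqs <= band[1])
    analytic = np.fft.ifft(np.where(keep, 2.0 * X, 0.0))
    phase = np.angle(analytic)
    return float(np.abs(np.mean(np.exp(1j * phase[spike_idx]))))

fs = 1000                                       # Hz, hypothetical
t = np.arange(0, 20, 1.0 / fs)                  # 20 s of synthetic data
rng = np.random.default_rng(0)
lfp = np.sin(2 * np.pi * 2.0 * t) + 0.3 * rng.standard_normal(t.size)
locked = np.where(np.sin(2 * np.pi * 2.0 * t) > 0.95)[0][::20]  # near-peak spikes
unlocked = rng.integers(0, t.size, size=100)                    # random spikes
print(delta_plv(lfp, locked, fs), delta_plv(lfp, unlocked, fs))
# locked spikes yield a value near 1; random spikes a much lower one
```

An increase in this value after stimulation, without a change in LFP power, is the signature the study reports: firing becomes more tightly coupled to the ongoing δ-band rhythm rather than the rhythm itself growing stronger.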
Towards a Theory of the Laminar Architecture of Cerebral Cortex: Computational Clues from the Visual System
One of the most exciting and open research frontiers in neuroscience is that of seeking to understand the functional roles of the layers of cerebral cortex. New experimental techniques for probing the laminar circuitry of cortex have recently been developed, opening up novel opportunities for investigating how its six-layered architecture contributes to perception and cognition. The task of trying to interpret this complex structure can be facilitated by theoretical analyses of the types of computations that cortex is carrying out, and of how these might be implemented in specific cortical circuits. We have recently developed a detailed neural model of how the parvocellular stream of the visual cortex utilizes its feedforward, feedback, and horizontal interactions for purposes of visual filtering, attention, and perceptual grouping. This model, called LAMINART, shows how these perceptual processes relate to the mechanisms which ensure stable development of cortical circuits in the infant, and to the continued stability of learning in the adult. The present article reviews this laminar theory of visual cortex, considers how it may be generalized towards a more comprehensive theory that encompasses other cortical areas and cognitive processes, and shows how its laminar framework generates a variety of testable predictions.
Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-95-0409); National Science Foundation (IRI 94-01659); Office of Naval Research (N00014-92-1-1309, N00014-95-1-0657)