14,654 research outputs found

    Lifelong Learning of Spatiotemporal Representations with Dual-Memory Recurrent Self-Organization

    Artificial autonomous agents and robots interacting in complex environments are required to continually acquire and fine-tune knowledge over sustained periods of time. The ability to learn from continuous streams of information is referred to as lifelong learning and represents a long-standing challenge for neural network models due to catastrophic forgetting. Computational models of lifelong learning typically alleviate catastrophic forgetting in experimental scenarios with given datasets of static images and limited complexity, thereby differing significantly from the conditions artificial agents are exposed to. In more natural settings, sequential information may become progressively available over time and access to previous experience may be restricted. In this paper, we propose a dual-memory self-organizing architecture for lifelong learning scenarios. The architecture comprises two growing recurrent networks with the complementary tasks of learning object instances (episodic memory) and categories (semantic memory). Both growing networks can expand in response to novel sensory experience: the episodic memory learns fine-grained spatiotemporal representations of object instances in an unsupervised fashion, while the semantic memory uses task-relevant signals to regulate structural plasticity levels and develop more compact representations from episodic experience. For the consolidation of knowledge in the absence of external sensory input, the episodic memory periodically replays trajectories of neural reactivations. We evaluate the proposed model on the CORe50 benchmark dataset for continuous object recognition, showing that we significantly outperform current methods of lifelong learning in three different incremental learning scenarios.
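    The dual-memory idea can be illustrated with a toy sketch. The Python snippet below is an assumption-laden simplification, not the paper's model: it drops the recurrent, spatiotemporal machinery and keeps only the growth dynamics, pairing an unsupervised "episodic" network that freely inserts instance prototypes with a "semantic" network whose node insertion is gated by a task-relevant label signal, plus a replay pass over stored prototypes. All class names, thresholds, and learning rates are illustrative.

```python
# Minimal sketch of a dual-memory growing architecture (illustrative, not the paper's code).
import numpy as np

class GrowingNetwork:
    def __init__(self, dim, insertion_threshold, lr=0.1):
        self.weights = np.empty((0, dim))     # one prototype vector per node
        self.labels = []                      # node labels (used by the semantic memory)
        self.threshold = insertion_threshold  # novelty criterion for node insertion
        self.lr = lr

    def bmu(self, x):
        """Index and distance of the best-matching unit for input x."""
        d = np.linalg.norm(self.weights - x, axis=1)
        i = int(np.argmin(d))
        return i, d[i]

    def update(self, x, label=None, grow=True):
        """Adapt toward x; insert a new node if x is novel and growth is allowed."""
        if len(self.weights) == 0 or (grow and self.bmu(x)[1] > self.threshold):
            self.weights = np.vstack([self.weights, x])
            self.labels.append(label)
            return len(self.weights) - 1
        i, _ = self.bmu(x)
        self.weights[i] += self.lr * (x - self.weights[i])  # move prototype toward input
        return i

# Episodic memory: fine-grained, always allowed to grow (unsupervised).
episodic = GrowingNetwork(dim=32, insertion_threshold=0.5)
# Semantic memory: compact, grows only when the task signal flags a misclassification.
semantic = GrowingNetwork(dim=32, insertion_threshold=1.5)

rng = np.random.default_rng(0)
stream = [(rng.normal(loc=c, scale=0.3, size=32), c) for c in (0.0, 1.0, 2.0) for _ in range(20)]

for x, category in stream:
    i = episodic.update(x, label=category)                  # instance-level prototype
    j, _ = semantic.bmu(episodic.weights[i]) if len(semantic.weights) else (None, None)
    wrong = (j is None) or (semantic.labels[j] != category)
    semantic.update(episodic.weights[i], label=category, grow=wrong)  # task signal gates growth

# "Replay" without new sensory input: re-present stored episodic prototypes
# to the semantic memory to consolidate category representations.
for w, label in zip(episodic.weights, episodic.labels):
    semantic.update(w, label=label, grow=False)

print(f"episodic nodes: {len(episodic.weights)}, semantic nodes: {len(semantic.weights)}")
```

    Running the sketch shows the intended asymmetry: the episodic network accumulates many instance prototypes, while the label-gated semantic network settles on roughly one compact node per category.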

    Neural mechanisms of social learning in the female mouse

    Social interactions are often powerful drivers of learning. In female mice, mating creates a long-lasting sensory memory for the pheromones of the stud male that alters neuroendocrine responses to his chemosignals for many weeks. The cellular and synaptic correlates of pheromonal learning, however, remain unclear. We examined local circuit changes in the accessory olfactory bulb (AOB) using targeted ex vivo recordings of mating-activated neurons tagged with a fluorescent reporter. Imprinting led to striking plasticity in the intrinsic membrane excitability of projection neurons (mitral cells, MCs) that dramatically curtailed their responsiveness, suggesting a novel cellular substrate for pheromonal learning. Plasticity was selectively expressed in the MC ensembles activated by the stud male, consistent with formation of memories for specific individuals. Finally, MC excitability gained atypical activity-dependence whose slow dynamics strongly attenuated firing on timescales of several minutes. This unusual form of AOB plasticity may act to filter sustained or repetitive sensory signals.
    Funding: R21 DC013894 - NIDCD NIH HHS

    Synaptic state matching: a dynamical architecture for predictive internal representation and feature perception

    Here we consider the possibility that a fundamental function of sensory cortex is the generation of an internal simulation of sensory environment in real-time. A logical elaboration of this idea leads to a dynamical neural architecture that oscillates between two fundamental network states, one driven by external input, and the other by recurrent synaptic drive in the absence of sensory input. Synaptic strength is modified by a proposed synaptic state matching (SSM) process that ensures equivalence of spike statistics between the two network states. Remarkably, SSM, operating locally at individual synapses, generates accurate and stable network-level predictive internal representations, enabling pattern completion and unsupervised feature detection from noisy sensory input. SSM is a biologically plausible substrate for learning and memory because it brings together sequence learning, feature detection, synaptic homeostasis, and network oscillations under a single parsimonious computational framework. Beyond its utility as a potential model of cortical computation, artificial networks based on this principle have remarkable capacity for internalizing dynamical systems, making them useful in a variety of application domains including time-series prediction and machine intelligence
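    One loose, rate-based reading of the two-state idea can be sketched in a few lines of Python. This is not the paper's SSM rule (which matches spike statistics locally at individual synapses); it simply alternates a stimulus-driven "open" state with a recurrently driven "closed" state and nudges the recurrent weights to reduce the mismatch between the two. The sequence, learning rate, and update form are assumptions for illustration.

```python
# Loose two-state matching sketch (illustrative stand-in, not the paper's SSM rule).
import numpy as np

rng = np.random.default_rng(1)
n, lr, steps = 50, 0.02, 2000
W = rng.normal(scale=0.1 / np.sqrt(n), size=(n, n))  # recurrent weights

def sequence(t):
    """A simple periodic input pattern for the network to internalize."""
    phase = 2 * np.pi * np.arange(n) / n
    return np.tanh(np.sin(phase + 0.2 * t))

for t in range(steps):
    x_prev, x_now = sequence(t), sequence(t + 1)
    r_open = x_now                      # open state: clamped to the external input
    r_closed = np.tanh(W @ x_prev)      # closed state: generated by recurrence alone
    # Matching step: reduce the mismatch between the two network states.
    W += lr * np.outer(r_open - r_closed, x_prev)

# After learning, run the network input-free from sequence(0) and compare
# its free-running trajectory to the true continuation of the sequence.
r = sequence(0)
for _ in range(5):
    r = np.tanh(W @ r)
print("free-running prediction error:", float(np.mean((r - sequence(5)) ** 2)))
```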

    Seven properties of self-organization in the human brain

    The principle of self-organization has acquired a fundamental significance in the newly emerging field of computational philosophy. Self-organizing systems have been described in various domains in science and philosophy including physics, neuroscience, biology and medicine, ecology, and sociology. While system architectures and their general purposes may depend on domain-specific concepts and definitions, there are (at least) seven key properties of self-organization clearly identified in brain systems: 1) modular connectivity, 2) unsupervised learning, 3) adaptive ability, 4) functional resiliency, 5) functional plasticity, 6) from-local-to-global functional organization, and 7) dynamic system growth. These are defined here in light of insights from neurobiology, cognitive neuroscience and Adaptive Resonance Theory (ART), and physics to show that self-organization achieves stability and functional plasticity while minimizing structural system complexity. A specific example informed by empirical research is discussed to illustrate how modularity, adaptive learning, and dynamic network growth enable stable yet plastic somatosensory representation for human grip force control. Implications for the design of “strong” artificial intelligence in robotics are brought forward.

    Nonlinear Hebbian learning as a unifying principle in receptive field formation

    The development of sensory receptive fields has been modeled in the past by a variety of models including normative models such as sparse coding or independent component analysis and bottom-up models such as spike-timing dependent plasticity or the Bienenstock-Cooper-Munro model of synaptic plasticity. Here we show that the above variety of approaches can all be unified into a single common principle, namely Nonlinear Hebbian Learning. When Nonlinear Hebbian Learning is applied to natural images, receptive field shapes are strongly constrained by the input statistics and preprocessing, but exhibit only modest variation across different choices of nonlinearities in neuron models or synaptic plasticity rules. Neither overcompleteness nor sparse network activity is necessary for the development of localized receptive fields. The analysis of alternative sensory modalities such as auditory models or V2 development leads to the same conclusions. In all examples, receptive fields can be predicted a priori by reformulating an abstract model as nonlinear Hebbian learning. Thus, nonlinear Hebbian learning and natural statistics can account for many aspects of receptive field formation across models and sensory modalities.
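    The core rule is compact enough to demonstrate directly: a weight update proportional to the input times a nonlinear function of the neuron's output, dw ∝ f(y)·x with y = w·x, under a norm constraint. The Python sketch below applies it to synthetic sparse (Laplacian) sources standing in for whitened natural image patches; the cubic nonlinearity, learning rate, and data are assumptions, chosen so the learned weight vector typically aligns with one source direction, in the spirit of the ICA-style results the abstract describes.

```python
# Minimal nonlinear Hebbian learning sketch on synthetic whitened data (illustrative only).
import numpy as np

rng = np.random.default_rng(2)
dim, n_samples, lr = 16, 50000, 0.002

# Sparse, super-Gaussian sources mixed by a random orthogonal matrix (whitened by construction).
mixing, _ = np.linalg.qr(rng.normal(size=(dim, dim)))
sources = rng.laplace(size=(n_samples, dim)) / np.sqrt(2)   # unit-variance Laplacian sources
X = sources @ mixing.T

w = rng.normal(size=dim)
w /= np.linalg.norm(w)

f = lambda y: y ** 3   # nonlinearity; per the paper, the exact choice matters little

for x in X:
    y = w @ x
    w += lr * f(y) * x           # nonlinear Hebbian update: dw ∝ f(y) * x
    w /= np.linalg.norm(w)       # homeostatic norm constraint

# Alignment with the closest mixing column; values near 1 indicate one source direction
# (one "receptive field") has been recovered.
alignment = np.max(np.abs(mixing.T @ w))
print(f"best alignment with a source direction: {alignment:.2f}")
```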

    Towards a Theory of the Laminar Architecture of Cerebral Cortex: Computational Clues from the Visual System

    One of the most exciting and open research frontiers in neuroscience is that of seeking to understand the functional roles of the layers of cerebral cortex. New experimental techniques for probing the laminar circuitry of cortex have recently been developed, opening up novel opportunities for investigating how its six-layered architecture contributes to perception and cognition. The task of trying to interpret this complex structure can be facilitated by theoretical analyses of the types of computations that cortex is carrying out, and of how these might be implemented in specific cortical circuits. We have recently developed a detailed neural model of how the parvocellular stream of the visual cortex utilizes its feedforward, feedback, and horizontal interactions for purposes of visual filtering, attention, and perceptual grouping. This model, called LAMINART, shows how these perceptual processes relate to the mechanisms that ensure stable development of cortical circuits in the infant, and to the continued stability of learning in the adult. The present article reviews this laminar theory of visual cortex, considers how it may be generalized towards a more comprehensive theory that encompasses other cortical areas and cognitive processes, and shows how its laminar framework generates a variety of testable predictions.
    Funding: Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-95-0409); National Science Foundation (IRI 94-01659); Office of Naval Research (N00014-92-1-1309, N00014-95-1-0657)