35,184 research outputs found

    Adaptive Resonance: An Emerging Neural Theory of Cognition

    Full text link
    Adaptive resonance is a theory of cognitive information processing which has been realized as a family of neural network models. In recent years, these models have evolved to incorporate new capabilities in the cognitive, neural, computational, and technological domains. Minimal models provide a conceptual framework for formulating questions about the nature of cognition; an architectural framework for mapping cognitive functions to cortical regions; a semantic framework for precisely defining terms; and a computational framework for testing hypotheses. These systems are here exemplified by the distributed ART (dART) model, which generalizes localist ART systems to allow arbitrarily distributed code representations, while retaining basic capabilities such as stable fast learning and scalability. Since each component is placed in the context of a unified real-time system, analysis can move from the level of neural processes, including learning laws and rules of synaptic transmission, to cognitive processes, including attention and consciousness. Local design is driven by global functional constraints, with each network synthesizing a dynamic balance of opposing tendencies. The self-contained working ART and dART models can also be transferred to technology, in areas that include remote sensing, sensor fusion, and content-addressable information retrieval from large databases. Office of Naval Research and the Defense Advanced Research Projects Agency (N00014-95-1-0409, N00014-1-95-0657); National Institutes of Health (20-316-4304-5).
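
    To make the ART mechanics concrete, here is a minimal sketch of Fuzzy ART, a simple localist member of the ART family (not the dART model described above); the vigilance parameter rho, choice parameter alpha, and fast-learning rate beta are standard Fuzzy ART quantities, and the class and variable names are illustrative, not taken from the paper.

        # Minimal Fuzzy ART sketch: category choice, vigilance-gated resonance,
        # and fast learning. Illustrative of localist ART, not the dART model.
        import numpy as np

        class FuzzyART:
            def __init__(self, dim, rho=0.75, alpha=0.001, beta=1.0):
                self.rho, self.alpha, self.beta = rho, alpha, beta
                self.w = np.empty((0, 2 * dim))  # one weight row per committed category

            def train(self, x):
                x = np.asarray(x, float)
                I = np.concatenate([x, 1.0 - x])          # complement coding
                if len(self.w) == 0:
                    self.w = I[None, :].copy()            # commit first category
                    return 0
                match = np.minimum(I, self.w)             # fuzzy AND with each category
                T = match.sum(1) / (self.alpha + self.w.sum(1))   # choice function
                for j in np.argsort(-T):                  # search in order of choice
                    if match[j].sum() / I.sum() >= self.rho:      # vigilance test: resonance
                        self.w[j] = self.beta * match[j] + (1 - self.beta) * self.w[j]
                        return j
                self.w = np.vstack([self.w, I])           # all categories reset: commit new one
                return len(self.w) - 1

    Re-presenting the same input resonates with the same category without eroding its weights, which illustrates the stable fast learning the abstract mentions.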

    Specialization within cortical models: An application to causality learning

    Get PDF
    Conference paper with proceedings and peer review. In this paper we present the principle of learning by specialization within a cortically-inspired framework. Specialization of neurons in the cortex has been observed, and many models use such "cortical-like" learning mechanisms, adapted for computational efficiency. These adaptations are discussed in light of experiments with our cortical model addressing causality learning from perceptual sequences.
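
    As a rough illustration of specialization through competition (the paper's cortical model is more elaborate; everything below is an assumed toy setup), a winner-take-all rule lets each unit gradually claim a distinct region of input space:

        # Toy competitive-learning sketch: units compete for each input and only
        # the winner moves toward it, so units specialize to different clusters.
        import numpy as np

        rng = np.random.default_rng(0)

        def specialize(inputs, n_units=4, lr=0.1, epochs=20):
            w = rng.normal(size=(n_units, inputs.shape[1]))    # random prototypes
            for _ in range(epochs):
                for x in inputs:
                    j = np.argmin(((w - x) ** 2).sum(axis=1))  # winner-take-all
                    w[j] += lr * (x - w[j])                    # only the winner learns
            return w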

    Synaptic state matching: a dynamical architecture for predictive internal representation and feature perception

    Get PDF
    Here we consider the possibility that a fundamental function of sensory cortex is the generation of an internal simulation of the sensory environment in real time. A logical elaboration of this idea leads to a dynamical neural architecture that oscillates between two fundamental network states, one driven by external input, and the other by recurrent synaptic drive in the absence of sensory input. Synaptic strength is modified by a proposed synaptic state matching (SSM) process that ensures equivalence of spike statistics between the two network states. Remarkably, SSM, operating locally at individual synapses, generates accurate and stable network-level predictive internal representations, enabling pattern completion and unsupervised feature detection from noisy sensory input. SSM is a biologically plausible substrate for learning and memory because it brings together sequence learning, feature detection, synaptic homeostasis, and network oscillations under a single parsimonious computational framework. Beyond its utility as a potential model of cortical computation, artificial networks based on this principle have a remarkable capacity for internalizing dynamical systems, making them useful in a variety of application domains including time-series prediction and machine intelligence.
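
    The two-state oscillation lends itself to a toy sketch (an assumed rate-based simplification; the paper works with spike statistics): the network alternates between an input-driven phase and a recurrently driven phase, and a local rule nudges each synapse to shrink the mismatch between the two.

        # Toy two-state sketch of the SSM idea: r_ext is activity driven by the
        # external input, r_int is activity from recurrent drive alone, and a
        # local rule moves each synapse to make the two match. Matching the
        # activities directly is a crude stand-in for matching spike statistics.
        import numpy as np

        rng = np.random.default_rng(1)
        n, lr = 50, 0.01
        W = rng.normal(scale=0.1, size=(n, n))               # recurrent weights
        seq = [np.sin(np.linspace(0, 2 * np.pi, n) + t / 5.0) for t in range(500)]

        for x_prev, x_next in zip(seq, seq[1:]):
            r_pre = np.tanh(x_prev)                          # presynaptic activity
            r_ext = np.tanh(x_next)                          # externally driven state
            r_int = np.tanh(W @ r_pre)                       # internally driven state
            W += lr * np.outer(r_ext - r_int, r_pre)         # local mismatch rule

        r = np.tanh(seq[0])
        for _ in range(10):                                  # internal simulation:
            r = np.tanh(W @ r)                               # run on recurrence alone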

    Computational Principles of Multiple-Task Learning in Humans and Artificial Neural Networks

    Get PDF
    While humans can learn to perform many specific and highly specialized behaviors, perhaps what is most distinctive about human cognitive capabilities is their capacity to generalize: to share information across contexts and adapt to the myriad problems that can arise in complex environments. While it is possible to imagine agents who learn to deal with each challenge they experience separately, humans instead integrate new situations into the framework of the tasks they have experienced over their lives, allowing them to reuse insights and strategies across situations. Yet the precise forms of representations shared across tasks, as well as the computational principles by which sharing insight across multiple learned tasks shapes behavior, remain uncertain. The complexity of cognition capable of generalizing across tasks has been both an inspiration and a significant impediment to building useful and insightful models. The increasing utilization of artificial neural networks (ANN) as a model for cortical computation provides a potent opportunity to identify mechanisms and principles underlying multiple-task learning and performance in the brain. In this work we use ANNs in conjunction with human behavior to explore how a single agent may utilize information across multiple tasks to create high-performing and general representations. First, we present a flexible framework to facilitate training recurrent neural networks (RNN), increasing the ease of training models on tasks of interest. Second, we explore how an ANN model can build shared representations to facilitate performance on a wide variety of delay-task problems, as well as how such a joint representation can explain observed phenomena identified in the firing rates of prefrontal cortical neurons. Third, we analyze human multiple-task learning in two tasks and use ANNs to provide insight into how the structure of representations can give rise to the specific learning patterns and generalization strategies observed in humans. Overall, we provide computational insight into mechanisms of multiple-task learning and generalization, and use those findings in conjunction with observed human behavior to constrain possible computational mechanisms employed in cortical circuits.
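
    A minimal sketch of the shared-representation setup (layer sizes, task count, and dummy data are assumptions for illustration, not the paper's configuration): one recurrent core serves every task, with a separate linear readout per task, trained by interleaving tasks.

        # Shared RNN core with per-task readouts, trained on interleaved tasks,
        # so any cross-task structure must live in the shared recurrent layer.
        import torch
        import torch.nn as nn

        class MultiTaskRNN(nn.Module):
            def __init__(self, n_in=10, n_hidden=64, task_outputs=(2, 3)):
                super().__init__()
                self.core = nn.RNN(n_in, n_hidden, batch_first=True)  # shared dynamics
                self.heads = nn.ModuleList(
                    nn.Linear(n_hidden, k) for k in task_outputs)     # one head per task

            def forward(self, x, task_id):
                h, _ = self.core(x)             # (batch, time, hidden)
                return self.heads[task_id](h)   # task-specific readout

        model = MultiTaskRNN()
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        for task_id in (0, 1):                  # interleave tasks during training
            k = model.heads[task_id].out_features
            x = torch.randn(8, 20, 10)          # dummy trials: batch x time x input
            target = torch.randint(0, k, (8, 20))
            loss = nn.functional.cross_entropy(
                model(x, task_id).reshape(-1, k), target.reshape(-1))
            opt.zero_grad()
            loss.backward()
            opt.step()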

    Precis of neuroconstructivism: how the brain constructs cognition

    Get PDF
    Neuroconstructivism: How the Brain Constructs Cognition proposes a unifying framework for the study of cognitive development that brings together (1) constructivism (which views development as the progressive elaboration of increasingly complex structures), (2) cognitive neuroscience (which aims to understand the neural mechanisms underlying behavior), and (3) computational modeling (which proposes formal and explicit specifications of information processing). The guiding principle of our approach is context dependence, within and (in contrast to Marr [1982]) between levels of organization. We propose that three mechanisms guide the emergence of representations: competition, cooperation, and chronotopy, which in turn allow for two central processes: proactivity and progressive specialization. We suggest that the main outcome of development is partial representations, distributed across distinct functional circuits. This framework is derived by examining development at the level of single neurons, brain systems, and whole organisms. We use the terms encellment, embrainment, and embodiment to describe the higher-level contextual influences that act at each of these levels of organization. To illustrate these mechanisms in operation we provide case studies in early visual perception, infant habituation, phonological development, and object representations in infancy. Three further case studies are concerned with interactions between levels of explanation: social development, atypical development and, within that, developmental dyslexia. We conclude that cognitive development arises from a dynamic, contextual change in embodied neural structures leading to partial representations across multiple brain regions and timescales, in response to a proactively specified physical and social environment.

    To Learn or Not to Learn Features for Deformable Registration?

    Full text link
    Feature-based registration has been popular with a variety of features ranging from voxel intensity to Self-Similarity Context (SSC). In this paper, we examine the question of how features learnt using various Deep Learning (DL) frameworks can be used for deformable registration, and whether this feature learning is necessary at all. We investigate the use of features learned by different DL methods in the current state-of-the-art discrete registration framework and analyze its performance on two publicly available datasets. We draw insights into the type of DL framework useful for feature learning and the impact, if any, of the complexity of different DL models and brain parcellation methods on the performance of discrete registration. Our results indicate that registration performance with DL features and SSC is comparable and stable across datasets, whereas this does not hold for low-level features. (Comment: 9 pages, 4 figures)
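
    To show where learned features would enter a discrete registration framework (a schematic with assumed names; the paper's pipeline is more involved), each patch of the fixed image scores a small set of integer displacements in the moving image, and extract_features is the slot where DL features, SSC, or raw intensities plug in:

        # Schematic per-patch discrete displacement search. extract_features is
        # a placeholder for learned or hand-crafted features; raw intensities
        # are used here. Boundary handling is omitted for brevity.
        import numpy as np

        def extract_features(img, y, x, r=2):
            return img[y - r:y + r + 1, x - r:x + r + 1].ravel()

        def best_displacement(fixed, moving, y, x, search=3, r=2):
            f = extract_features(fixed, y, x, r)
            costs = {}
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    m = extract_features(moving, y + dy, x + dx, r)
                    costs[(dy, dx)] = float(np.sum((f - m) ** 2))  # dissimilarity
            return min(costs, key=costs.get)     # lowest-cost displacement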