
    The Laminar Organization of Visual Cortex: A Unified View of Development, Learning, and Grouping

    Why is all sensory and cognitive neocortex organized into layered circuits? How do these layers organize circuits that form functional columns in cortical maps? How do bottom-up, top-down, and horizontal interactions within the cortical layers generate adaptive behaviors? This chapter summarizes an evolving neural model which suggests how these interactions help the visual cortex to realize: (1) the binding process whereby cortex groups distributed data into coherent object representations; (2) the attentional process whereby cortex selectively processes important events; and (3) the developmental and learning processes whereby cortex shapes its circuits to match environmental constraints. It is suggested that the mechanisms which achieve property (3) imply properties (1) and (2). New computational ideas about feedback systems suggest how neocortex develops and learns in a stable way, and why top-down attention requires converging bottom-up inputs to fully activate cortical cells, whereas perceptual groupings do not. Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-95-1-0409); National Science Foundation (IRI-97-20333); Office of Naval Research (N00014-95-1-0657)
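    A minimal toy sketch of the qualitative claim above, namely that top-down attention is modulatory: on its own it can only prime a cortical cell, while firing requires converging bottom-up input. The rule, threshold, and constants below are illustrative assumptions, not the model's published equations.

    def cell_activity(bottom_up, top_down, threshold=0.5):
        # Top-down input acts as a gain on bottom-up drive plus a weak
        # subthreshold priming term; it cannot fire the cell by itself.
        drive = bottom_up * (1.0 + top_down) + 0.2 * top_down
        return drive if drive >= threshold else 0.0

    print(cell_activity(bottom_up=1.0, top_down=0.0))  # bottom-up alone: cell fires
    print(cell_activity(bottom_up=1.0, top_down=1.0))  # matched top-down: enhanced firing
    print(cell_activity(bottom_up=0.0, top_down=1.0))  # top-down alone: priming only, no firing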

    A half century of progress towards a unified neural theory of mind and brain with applications to autonomous adaptive agents and mental disorders

    Invited article for the book Artificial Intelligence in the Age of Neural Networks and Brain Computing, R. Kozma, C. Alippi, Y. Choe, and F. C. Morabito, Eds. Cambridge, MA: Academic Press. This article surveys some of the main design principles, mechanisms, circuits, and architectures that have been discovered during a half century of systematic research aimed at developing a unified theory that links mind and brain, and shows how psychological functions arise as emergent properties of brain mechanisms. The article describes a theoretical method that has enabled such a theory to be developed in stages by carrying out a kind of conceptual evolution. It also describes revolutionary computational paradigms like Complementary Computing and Laminar Computing that constrain the kind of unified theory that can describe the autonomous adaptive intelligence that emerges from advanced brains. Adaptive Resonance Theory, or ART, is one of the core models that has been discovered in this way. ART proposes how advanced brains learn to attend, recognize, and predict objects and events in a changing world that is filled with unexpected events. ART is not, however, a “theory of everything”, if only because, due to Complementary Computing, different matching and learning laws tend to support perception and cognition on the one hand, and spatial representation and action on the other. The article mentions why a theory of this kind may be useful in the design of autonomous adaptive agents in engineering and technology. It also notes how the theory has led to new mechanistic insights about mental disorders such as autism, medial temporal amnesia, Alzheimer’s disease, and schizophrenia, along with mechanistically informed proposals about how their symptoms may be ameliorated.

    Visualizing and Understanding Sum-Product Networks

    Sum-Product Networks (SPNs) are recently introduced deep tractable probabilistic models by which several kinds of inference queries can be answered exactly and in tractable time. Up to now, they have been largely used as black-box density estimators, assessed only by comparing their likelihood scores. In this paper we explore and exploit the inner representations learned by SPNs. We do this with a threefold aim: first, we want to get a better understanding of the inner workings of SPNs; secondly, we seek additional ways to evaluate one SPN model and compare it against other probabilistic models, providing diagnostic tools to practitioners; lastly, we want to empirically evaluate how good and meaningful the extracted representations are, as in a classic Representation Learning framework. In order to do so, we revise their interpretation as deep neural networks and we propose to exploit several visualization techniques on their node activations and network outputs under different types of inference queries. To investigate these models as feature extractors, we plug some SPNs, learned in a greedy unsupervised fashion on image datasets, into supervised classification learning tasks. We extract several embedding types from node activations by filtering nodes by their type, by their associated feature abstraction level, and by their scope. In a thorough empirical comparison we prove them to be competitive against those generated from popular feature extractors such as Restricted Boltzmann Machines. Finally, we investigate embeddings generated from random probabilistic marginal queries as a means to compare other tractable probabilistic models on a common ground, extending our experiments to Mixtures of Trees. Comment: Machine Learning Journal paper (First Online), 24 pages
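    A minimal sketch of the feature-extraction idea described above, using a tiny hand-built SPN over two binary variables; the structure, parameters, data, and labels are illustrative assumptions, not the paper's learned models or datasets. Every node's activation is collected into an embedding vector that is then fed to an off-the-shelf classifier.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def bernoulli_leaf(x, p):
        # Likelihood of a Bernoulli leaf with success probability p.
        return p * x + (1.0 - p) * (1.0 - x)

    def spn_activations(sample):
        # Evaluate a small SPN bottom-up and return every node's activation.
        x1, x2 = sample
        l1a, l1b = bernoulli_leaf(x1, 0.8), bernoulli_leaf(x1, 0.3)  # leaves over x1
        l2a, l2b = bernoulli_leaf(x2, 0.6), bernoulli_leaf(x2, 0.1)  # leaves over x2
        p1, p2 = l1a * l2a, l1b * l2b              # product nodes with disjoint scopes
        root = 0.7 * p1 + 0.3 * p2                 # root sum node (mixture)
        return np.array([l1a, l1b, l2a, l2b, p1, p2, root])

    rng = np.random.default_rng(0)
    X = rng.integers(0, 2, size=(200, 2)).astype(float)   # toy binary data
    y = X[:, 0].astype(int)                               # illustrative labels
    E = np.vstack([spn_activations(x) for x in X])        # node-activation embeddings

    clf = LogisticRegression().fit(E, y)
    print("training accuracy:", clf.score(E, y))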

    Towards a Theory of the Laminar Architecture of Cerebral Cortex: Computational Clues from the Visual System

    One of the most exciting and open research frontiers in neuroscience is that of seeking to understand the functional roles of the layers of cerebral cortex. New experimental techniques for probing the laminar circuitry of cortex have recently been developed, opening up novel opportunities for investigating how its six-layered architecture contributes to perception and cognition. The task of trying to interpret this complex structure can be facilitated by theoretical analyses of the types of computations that cortex is carrying out, and of how these might be implemented in specific cortical circuits. We have recently developed a detailed neural model of how the parvocellular stream of the visual cortex utilizes its feedforward, feedback, and horizontal interactions for purposes of visual filtering, attention, and perceptual grouping. This model, called LAMINART, shows how these perceptual processes relate to the mechanisms which ensure stable development of cortical circuits in the infant, and to the continued stability of learning in the adult. The present article reviews this laminar theory of visual cortex, considers how it may be generalized towards a more comprehensive theory that encompasses other cortical areas and cognitive processes, and shows how its laminar framework generates a variety of testable predictions. Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-95-0409); National Science Foundation (IRI 94-01659); Office of Naval Research (N00014-92-1-1309, N00014-95-1-0657)