2,707 research outputs found

    The Importance of Forgetting: Limiting Memory Improves Recovery of Topological Characteristics from Neural Data

    We develop a line of work initiated by Curto and Itskov towards understanding the amount of information contained in the spike trains of hippocampal place cells via topological considerations. Previously, it was established that simply knowing which groups of place cells fire together in an animal's hippocampus is sufficient to extract the global topology of the animal's physical environment. We model a system where collections of place cells group and ungroup according to short-term plasticity rules. In particular, we obtain the surprising result that in experiments with spurious firing, the accuracy of the extracted topological information decreases with the persistence (beyond a certain regime) of the cell groups. This suggests that synaptic transience, or forgetting, is a mechanism by which the brain counteracts the effects of spurious place cell activity.
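    The core idea of the abstract, that co-firing alone can reveal the topology of the environment, can be illustrated with a minimal sketch (not the authors' code). Co-firing groups of place cells are treated as edges of a hypothetical "co-firing graph", and union-find counts its connected components, i.e. the zeroth Betti number of the environment; all cell indices and groups below are invented for illustration.

```python
# Minimal sketch: recover the number of connected components (Betti-0)
# of an environment from which place cells fire together.
# Co-firing groups act as edges of a graph; union-find merges them.

def betti0(n_cells, cofiring_groups):
    """Count connected components induced by co-firing groups."""
    parent = list(range(n_cells))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for group in cofiring_groups:
        cells = list(group)
        for a, b in zip(cells, cells[1:]):
            ra, rb = find(a), find(b)
            if ra != rb:
                parent[ra] = rb  # union the two components

    return len({find(i) for i in range(n_cells)})

# Hypothetical data: two spatially separate rooms, so two components.
print(betti0(6, [{0, 1}, {1, 2}, {3, 4}, {4, 5}]))  # → 2
```

    The paper's point about spurious firing can be read off this toy model: a single spurious group such as {2, 3} would bridge the two rooms and collapse the recovered topology to one component, which is why transient (forgotten) cell groups help.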

    Neural Computing in Quaternion Algebra

    University of Hyogo, 201

    Unmasking Clever Hans Predictors and Assessing What Machines Really Learn

    Current learning machines have successfully solved hard application problems, reaching high accuracy and displaying seemingly "intelligent" behavior. Here we apply recent techniques for explaining the decisions of state-of-the-art learning machines and analyze various tasks from computer vision and arcade games. This showcases a spectrum of problem-solving behaviors ranging from naive and short-sighted to well-informed and strategic. We observe that standard performance evaluation metrics can fail to distinguish these diverse problem-solving behaviors. Furthermore, we propose our semi-automated Spectral Relevance Analysis, which provides a practically effective way of characterizing and validating the behavior of nonlinear learning machines. This helps to assess whether a learned model indeed delivers reliably for the problem it was conceived for. Finally, our work intends to add a voice of caution to the ongoing excitement about machine intelligence and pledges to evaluate and judge some of these recent successes in a more nuanced manner.
    Comment: Accepted for publication in Nature Communications
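    The spectral idea behind Spectral Relevance Analysis can be sketched in a few lines of NumPy. This is a hedged illustration, not the paper's implementation: the synthetic "heatmaps" below stand in for real LRP relevance maps, and the eigengap of a normalized graph Laplacian is used to estimate how many distinct prediction strategies the classifier exhibits.

```python
import numpy as np

# Hypothetical relevance heatmaps (flattened 8x8), two distinct strategies:
# one concentrates relevance on a diagonal "object", the other on a
# watermark-like band. Real SpRAy would use LRP maps from a trained model.
rng = np.random.default_rng(0)
strategy_a = rng.normal(0.0, 0.1, (20, 64)) + np.eye(8).ravel()
strategy_b = rng.normal(0.0, 0.1, (20, 64))
strategy_b[:, :8] += 1.0
heatmaps = np.vstack([strategy_a, strategy_b])

# Gaussian affinity between flattened heatmaps.
d2 = ((heatmaps[:, None, :] - heatmaps[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / d2.mean())
np.fill_diagonal(W, 0.0)

# Normalized graph Laplacian: L = I - D^{-1/2} W D^{-1/2}.
d_inv_sqrt = 1.0 / np.sqrt(W.sum(1))
L = np.eye(len(W)) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]

# The largest gap in the low end of the spectrum indicates the number
# of well-separated clusters, i.e. distinct prediction strategies.
eigvals = np.linalg.eigvalsh(L)
gaps = np.diff(eigvals[:6])
n_strategies = int(np.argmax(gaps)) + 1
print(n_strategies)  # → 2
```

    A "Clever Hans" predictor shows up in this picture as an unexpected extra cluster of relevance maps, e.g. one concentrated on a watermark rather than on the object the model was supposed to recognize.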

    Brain Learning, Attention, and Consciousness

    The processes whereby our brains continue to learn about a changing world in a stable fashion throughout life are proposed to lead to conscious experiences. These processes include the learning of top-down expectations, the matching of these expectations against bottom-up data, the focusing of attention upon the expected clusters of information, and the development of resonant states between bottom-up and top-down processes as they reach an attentive consensus between what is expected and what is there in the outside world. It is suggested that all conscious states in the brain are resonant states, and that these resonant states trigger learning of sensory and cognitive representations. The models that summarize these concepts are therefore called Adaptive Resonance Theory, or ART, models. Psychophysical and neurobiological data in support of ART are presented from early vision, visual object recognition, auditory streaming, variable-rate speech perception, somatosensory perception, and cognitive-emotional interactions, among others. It is noted that ART mechanisms seem to be operative at all levels of the visual system, and it is proposed how these mechanisms are realized by known laminar circuits of visual cortex. It is predicted that the same circuit realization of ART mechanisms will be found in the laminar circuits of all sensory and cognitive neocortex. Concepts and data are summarized concerning how some visual percepts may be visibly, or modally, perceived, whereas amodal percepts may be consciously recognized even though they are perceptually invisible. It is also suggested that sensory and cognitive processing in the What processing stream of the brain obeys top-down matching and learning laws that are often complementary to those used for spatial and motor processing in the brain's Where processing stream. This enables our sensory and cognitive representations to maintain their stability as we learn more about the world, while allowing spatial and motor representations to forget learned maps and gains that are no longer appropriate as our bodies develop and grow from infanthood to adulthood. Procedural memories are proposed to be unconscious because the inhibitory matching process that supports these spatial and motor processes cannot lead to resonance.
    Defense Advanced Research Projects Agency; Office of Naval Research (N00014-95-1-0409, N00014-95-1-0657); National Science Foundation (IRI-97-20333)

    A statistical mechanics approach to autopoietic immune networks

    The aim of this work is to bridge theoretical immunology and disordered statistical mechanics. Our long-term hope is to contribute to the development of a quantitative theoretical immunology from which practical applications may stem. In order to make theoretical immunology appealing to the statistical physics audience, we work out a research article which, on one side, may hopefully act as a benchmark for future improvements and developments, and which, on the other side, is written in a very pedagogical way from both a theoretical physics viewpoint and a theoretical immunology one. Furthermore, we have chosen to test our model, which describes a wide range of features of the adaptive immune response, in a single paper: this has been necessary in order to emphasize the benefit available when using disordered statistical mechanics as a tool for the investigation. As a consequence, however, each section is not at all exhaustive and would deserve deeper investigation: for the sake of conciseness, we restricted the details in the analysis of each feature, with the aim of introducing a self-consistent model.
    Comment: 22 pages, 14 figures