290 research outputs found

    When and where do feed-forward neural networks learn localist representations?

    According to parallel distributed processing (PDP) theory in psychology, neural networks (NNs) learn distributed rather than interpretable localist representations. This view has been held so strongly that few researchers have analysed single units to determine whether the assumption is correct. However, recent results from psychology, neuroscience and computer science have shown that local codes occasionally emerge in artificial and biological neural networks. In this paper, we undertake the first systematic survey of when local codes emerge in a feed-forward neural network, using generated input and output data with known properties. We find that the number of local codes that emerge from an NN follows a well-defined distribution across the number of hidden-layer neurons, with a peak determined by the size of the input data, the number of examples presented and the sparsity of the input data. Using a 1-hot output code drastically decreases the number of local codes on the hidden layer. The number of emergent local codes increases with the percentage of dropout applied to the hidden layer, suggesting that localist encoding may offer resilience in noisy networks. These data suggest that localist coding can emerge from feed-forward PDP networks and indicate some of the conditions that may lead to interpretable localist representations in the cortex. The findings highlight that local codes should not be dismissed out of hand.
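    A minimal sketch of the kind of single-unit analysis described above, under our own assumption (not the authors' code or criterion) that a hidden unit counts as a local code for a class when its activations for that class sit entirely above its activations for every other class:

```python
import numpy as np

def find_localist_units(hidden_acts: np.ndarray, labels: np.ndarray) -> list[tuple[int, int]]:
    """Return (unit, class) pairs where the unit's activations for one class
    are completely disjoint from, and above, its activations for all other classes."""
    localist = []
    for unit in range(hidden_acts.shape[1]):
        acts = hidden_acts[:, unit]
        for cls in np.unique(labels):
            in_class = acts[labels == cls]
            out_class = acts[labels != cls]
            if in_class.min() > out_class.max():
                localist.append((unit, int(cls)))
    return localist

# Toy usage: 100 examples, 32 hidden units, 5 classes.
rng = np.random.default_rng(0)
acts = rng.random((100, 32))
labels = rng.integers(0, 5, size=100)
acts[labels == 2, 7] += 2.0               # make unit 7 respond selectively to class 2
print(find_localist_units(acts, labels))  # expect [(7, 2)]
```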

    Priorless Recurrent Networks Learn Curiously


    A promising new tool for literacy instruction: The morphological matrix

    There is growing interest in the role that morphological knowledge plays in literacy acquisition, but there is no research directly comparing the efficacy of different forms of morphological instruction. Here we compare two methods of teaching English morphology in the context of a memory experiment in which words were organized by affix during study (a list of words that all share an affix was presented) or by base during study (a list of words that all share a base was presented). We show that memory for morphologically complex words is better in both conditions than in a control condition that does not highlight the morphological composition of words and, most importantly, that studying words in a base-centric format improves memory further still. We argue that the morphological matrix that organizes words around a common base may provide an important new tool for literacy instruction.
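    The two study formats contrasted above (affix-organized versus base-organized lists) can be illustrated in a few lines; the word list and grouping below are our own toy example, not the study's stimuli:

```python
from collections import defaultdict

# Toy (base, affix) pairs; the study's actual stimuli are not reproduced here.
words = [("care", "ful"), ("help", "ful"), ("hope", "ful"),
         ("care", "less"), ("help", "less"), ("hope", "less")]

by_affix = defaultdict(list)  # affix-centric study lists: careful, helpful, hopeful, ...
by_base = defaultdict(list)   # base-centric (morphological matrix) lists: careful, careless, ...
for base, affix in words:
    by_affix[affix].append(base + affix)
    by_base[base].append(base + affix)

print(dict(by_affix))  # {'ful': ['careful', 'helpful', 'hopeful'], 'less': [...]}
print(dict(by_base))   # {'care': ['careful', 'careless'], 'help': [...], 'hope': [...]}
```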

    Harnessing the Symmetry of Convolutions for Systematic Generalisation


    Implicit memory and test awareness.


    Successes and critical failures of neural networks in capturing human-like speech recognition

    Natural and artificial audition can in principle evolve different solutions to a given problem. The constraints of the task, however, can nudge the cognitive science and engineering of audition to qualitatively converge, suggesting that a closer mutual examination would improve artificial hearing systems and process models of the mind and brain. Speech recognition - an area ripe for such exploration - is inherently robust in humans to a number of transformations at various spectrotemporal granularities. To what extent are these robustness profiles accounted for by high-performing neural network systems? We bring together experiments in speech recognition under a single synthesis framework to evaluate state-of-the-art neural networks as stimulus-computable, optimized observers. In a series of experiments, we (1) clarify how influential speech manipulations in the literature relate to each other and to natural speech, (2) show the granularities at which machines exhibit out-of-distribution robustness, reproducing classical perceptual phenomena in humans, (3) identify the specific conditions where model predictions of human performance differ, and (4) demonstrate a crucial failure of all artificial systems to perceptually recover where humans do, suggesting a key specification for theory and model building. These findings encourage a tighter synergy between the cognitive science and engineering of audition.
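    The general recipe behind such robustness probes can be sketched as follows; this is our assumption about the workflow, not the paper's synthesis framework. Perturb a waveform at a controlled signal-to-noise ratio, transcribe it with any recognizer of your choice, and score word error rate (WER) against the clean transcript:

```python
import numpy as np

def add_noise_at_snr(waveform: np.ndarray, snr_db: float, rng=None) -> np.ndarray:
    """Add white Gaussian noise so the perturbed signal has the requested SNR in dB."""
    rng = rng or np.random.default_rng(0)
    signal_power = np.mean(waveform ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    return waveform + rng.normal(0.0, np.sqrt(noise_power), size=waveform.shape)

def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level edit distance divided by reference length (standard WER)."""
    ref, hyp = reference.split(), hypothesis.split()
    d = np.zeros((len(ref) + 1, len(hyp) + 1), dtype=int)
    d[:, 0] = np.arange(len(ref) + 1)
    d[0, :] = np.arange(len(hyp) + 1)
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i, j] = min(d[i - 1, j] + 1, d[i, j - 1] + 1, d[i - 1, j - 1] + cost)
    return d[len(ref), len(hyp)] / max(len(ref), 1)

# One second of a 440 Hz tone at 16 kHz, perturbed at 0 dB SNR; in a real probe the
# perturbed audio would be fed to a neural recognizer and compared with human listeners.
noisy = add_noise_at_snr(np.sin(2 * np.pi * 440 * np.linspace(0, 1, 16000)), snr_db=0)
print(noisy.shape)
print(word_error_rate("the cat sat on the mat", "the cat sat on a mat"))  # 1/6 ≈ 0.167
```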

    The Practical and Principled Problems with Educational Neuroscience


