
    From neural PCA to deep unsupervised learning

    A network supporting deep unsupervised learning is presented. The network is an autoencoder with lateral shortcut connections from the encoder to the decoder at each level of the hierarchy. The lateral shortcut connections allow the higher levels of the hierarchy to focus on abstract invariant features. While standard autoencoders are analogous to latent variable models with a single layer of stochastic variables, the proposed network is analogous to hierarchical latent variable models. Learning combines the denoising autoencoder and denoising source separation frameworks. Each layer of the network contributes to the cost function a term which measures the distance between the representations produced by the encoder and the decoder. Since training signals originate from all levels of the network, all layers can learn efficiently even in deep networks. The speedup offered by cost terms from higher levels of the hierarchy and the ability to learn invariant features are demonstrated in experiments.
    Comment: A revised version of an article accepted for publication in Advances in Independent Component Analysis and Learning Machines (2015), edited by Ella Bingham, Samuel Kaski, Jorma Laaksonen and Jouko Lampinen.
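
    Below is a minimal PyTorch sketch of the kind of architecture the abstract describes: a denoising autoencoder whose decoder receives a lateral shortcut from the corrupted encoder path at every level, with each level contributing its own reconstruction cost term. The layer sizes, the input-only noise injection, and the linear combinator are illustrative assumptions, not the authors' exact model.

```python
import torch
import torch.nn as nn

class LadderLikeAE(nn.Module):
    """Autoencoder with lateral encoder-to-decoder shortcuts at every level."""
    def __init__(self, sizes=(784, 256, 64)):
        super().__init__()
        self.sizes = sizes
        self.enc = nn.ModuleList(
            nn.Linear(a, b) for a, b in zip(sizes[:-1], sizes[1:]))
        self.dec = nn.ModuleList(
            nn.Linear(b, a) for a, b in zip(sizes[:-1], sizes[1:]))
        # one "combinator" per level merges the top-down signal with the
        # lateral shortcut coming from the corrupted encoder path
        self.comb = nn.ModuleList(nn.Linear(2 * s, s) for s in sizes)

    def encode(self, x, noise=0.0):
        acts = [x + noise * torch.randn_like(x)]
        for layer in self.enc:
            acts.append(torch.relu(layer(acts[-1])))
        return acts

    def forward(self, x, noise=0.3):
        clean = self.encode(x, noise=0.0)      # targets for every level
        noisy = self.encode(x, noise=noise)    # corrupted (denoising) path
        top = noisy[-1]
        cost = 0.0
        # decode top-down; at each level the decoder sees both the top-down
        # signal and the lateral shortcut, and adds a reconstruction cost term
        for level in reversed(range(len(self.sizes))):
            merged = self.comb[level](torch.cat([top, noisy[level]], dim=-1))
            cost = cost + ((merged - clean[level]) ** 2).mean()
            if level > 0:
                top = self.dec[level - 1](merged)
        return cost

model = LadderLikeAE()
loss = model(torch.randn(32, 784))   # training signals originate at all levels
loss.backward()
```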

    Behaviourally meaningful representations from normalisation and context-guided denoising

    Many existing independent component analysis algorithms include a preprocessing stage in which the inputs are sphered. This amounts to normalising the data so that all correlations between the variables are removed. In this work, I show that sphering allows very weak contextual modulation to steer the development of meaningful features. Context-biased competition has been proposed as a model of covert attention, and I propose that sphering-like normalisation also allows a weaker top-down bias to guide attention.
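
    As a concrete illustration of what sphering does, here is a small NumPy sketch of ZCA whitening; the random dataset and its dimensions are invented for the example and are not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
# build a dataset with correlated variables by applying a random mixing matrix
X = rng.standard_normal((1000, 5)) @ rng.standard_normal((5, 5))

Xc = X - X.mean(axis=0)                      # centre the data
cov = Xc.T @ Xc / (len(Xc) - 1)              # sample covariance
eigvals, eigvecs = np.linalg.eigh(cov)
# ZCA whitening matrix: rescale along the principal axes
W = eigvecs @ np.diag(1.0 / np.sqrt(eigvals)) @ eigvecs.T
X_sphered = Xc @ W

# the sphered data has identity covariance: all correlations removed
print(np.round(np.cov(X_sphered, rowvar=False), 2))
```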

    Neural coding strategies and mechanisms of competition

    A long-running debate concerns whether neural representations are encoded using a distributed or a local coding scheme. In both schemes, individual neurons respond to certain specific patterns of pre-synaptic activity; hence, rather than being dichotomous, the two coding schemes are based on the same representational mechanism. We argue that a population of neurons needs to be capable of learning both local and distributed representations, as appropriate to the task, and should be capable of generating both local and distributed codes in response to different stimuli. Many neural network algorithms, which are often employed as models of cognitive processes, fail to meet all these requirements. In contrast, we present a neural network architecture which enables a single algorithm to efficiently learn, and respond using, both types of coding scheme.
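
    A toy NumPy illustration of the shared mechanism the abstract points to (this is not the paper's architecture): the same weighted-sum response rule yields a roughly local code for one input and a distributed code for another, depending only on how many neurons' preferred patterns the input matches. The weights and inputs below are invented for the example.

```python
import numpy as np

# Four neurons, each tuned to a specific pattern of pre-synaptic activity.
W = np.array([[1, 0, 0, 0],    # neuron 0: feature A alone
              [0, 1, 0, 0],    # neuron 1: feature B alone
              [1, 1, 0, 0],    # neuron 2: features A and B together
              [0, 0, 1, 1]])   # neuron 3: features C and D together

def respond(x):
    a = W @ x                  # identical weighted-sum mechanism for all neurons
    return a / a.max()         # normalised population response

print(respond(np.array([0, 0, 1, 1])))  # ~local code: one neuron dominates
print(respond(np.array([1, 1, 0, 0])))  # ~distributed code: several neurons active
```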

    Cortex, countercurrent context, and dimensional integration of lifetime memory

    The correlation between relative neocortex size and longevity in mammals encourages a search for a cortical function specifically related to the life-span. A candidate in the domain of permanent and cumulative memory storage is proposed and explored in relation to basic aspects of cortical organization. The pattern of cortico-cortical connectivity between functionally specialized areas and the laminar organization of that connectivity converge on a globally coherent representational space in which contextual embedding of information emerges as an obligatory feature of cortical function. This brings a powerful mode of inductive knowledge within reach of mammalian adaptations, a mode which combines item specificity with classificatory generality. Its neural implementation is proposed to depend on an obligatory interaction between the oppositely directed feedforward and feedback currents of cortical activity, in countercurrent fashion. Direct interaction of the two streams along their cortex-wide local interface supports a scheme of "contextual capture" for information storage responsible for the lifelong cumulative growth of a uniquely cortical form of memory termed "personal history." This approach to cortical function helps elucidate key features of cortical organization as well as cognitive aspects of mammalian life history strategies.

    A feedback model of perceptual learning and categorisation

    Top-down (feedback) influences are known to have significant effects on visual information processing. Such influences are also likely to affect perceptual learning. This article employs a computational model of the cortical region interactions underlying visual perception to investigate possible influences of top-down information on learning. The results suggest that feedback could bias the way in which perceptual stimuli are categorised, and could also facilitate the learning of subordinate-level representations suitable for object identification and perceptual expertise.
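
    A minimal, generic NumPy sketch (not the article's specific cortical model) of the core idea that a top-down feedback signal can bias which category wins a bottom-up competition. The evidence values, expectation vector, and bias strength are illustrative assumptions.

```python
import numpy as np

def categorise(bottom_up, top_down_bias, strength=0.0):
    # feedback is added to feedforward evidence before the competition
    drive = bottom_up + strength * top_down_bias
    return np.exp(drive) / np.exp(drive).sum()   # softmax "competition"

evidence = np.array([1.0, 1.1, 0.2])       # ambiguous between categories 0 and 1
expectation = np.array([1.0, 0.0, 0.0])    # top-down expectation favours category 0

print(categorise(evidence, expectation, strength=0.0).round(2))  # category 1 wins
print(categorise(evidence, expectation, strength=1.0).round(2))  # feedback flips it to 0
```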

    Review: Object vision in a structured world

    In natural vision, objects appear at typical locations, both with respect to visual space (e.g., an airplane in the upper part of a scene) and other objects (e.g., a lamp above a table). Recent studies have shown that object vision is strongly adapted to such positional regularities. In this review, we synthesize these developments, highlighting that adaptations to positional regularities facilitate object detection and recognition, and sharpen the representations of objects in visual cortex. These effects are pervasive across various types of high-level content. We posit that adaptations to real-world structure collectively support optimal usage of limited cortical processing resources. Taking positional regularities into account will thus be essential for understanding efficient object vision in the real world.