
    Reconciling Predictive Coding and Biased Competition Models of Cortical Function

    A simple variation of the standard biased competition model is shown, via some trivial mathematical manipulations, to be identical to predictive coding. Specifically, it is shown that a particular implementation of the biased competition model, in which nodes compete via inhibition that targets the inputs to a cortical region, is mathematically equivalent to the linear predictive coding model. This observation demonstrates that these two important and influential rival theories of cortical function are minor variations on the same underlying mathematical model.
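The claimed equivalence can be illustrated numerically. In the sketch below (variable names, the learning rate, and the random weights are illustrative, not taken from the paper), the linear predictive coding update and the input-targeting biased competition update are written out separately, and produce identical dynamics because they are the same computation rearranged:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(4)            # input to the cortical region
W = rng.random((3, 4))       # feedforward weights (3 nodes, 4 inputs)
eta = 0.1                    # integration rate

def pc_step(y):
    # Predictive coding: feedback subtracts the prediction W.T @ y from
    # the input, and nodes integrate the residual error.
    e = x - W.T @ y
    return y + eta * (W @ e)

def bc_step(y):
    # Biased competition with inhibition targeting the inputs: each
    # node's feedback inhibits the inputs before they are integrated.
    inhibited = x - W.T @ y
    return y + eta * (W @ inhibited)

y_pc = np.zeros(3)
y_bc = np.zeros(3)
for _ in range(50):
    y_pc, y_bc = pc_step(y_pc), bc_step(y_bc)

print(np.allclose(y_pc, y_bc))   # prints True: identical dynamics
```

The "trivial manipulation" is visible in the code: relabelling the prediction error as an inhibited input changes the interpretation, not the arithmetic.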

    Learning viewpoint invariant perceptual representations from cluttered images

    In order to perform object recognition, it is necessary to form perceptual representations that are sufficiently specific to distinguish between objects, but that are also sufficiently flexible to generalize across changes in location, rotation, and scale. A standard method for learning perceptual representations that are invariant to viewpoint is to form temporal associations across image sequences showing object transformations. However, this method requires that individual stimuli be presented in isolation and is therefore unlikely to succeed in real-world applications where multiple objects can co-occur in the visual input. This paper proposes a simple modification to the learning method that can overcome this limitation and results in more robust learning of invariant representations.
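The standard temporal-association method can be sketched as a "trace" learning rule, in which the post-synaptic term of a Hebbian update is a running average over the image sequence; successive views of one object then become associated with the same output node. The parameters and data below are made up for illustration, and this is the generic rule, not the specific modification the paper proposes:

```python
import numpy as np

rng = np.random.default_rng(1)
eta, decay = 0.05, 0.8
w = rng.random(8) * 0.1          # weights of one output node
trace = 0.0

# Successive frames show different views of the same object; because the
# trace persists across frames, all views reinforce the same weights.
for x in rng.random((20, 8)):    # stand-in for one sequence of views
    y = float(w @ x)
    trace = decay * trace + (1 - decay) * y
    w += eta * trace * x          # Hebbian update gated by the trace
    w /= np.linalg.norm(w)        # normalisation keeps weights bounded
```

The limitation the abstract identifies is visible here: if two objects appear in the same frames, the trace associates both with one node, which is what the proposed modification is designed to prevent.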

    Cortical region interactions and the functional role of apical dendrites

    The basal and distal apical dendrites of pyramidal cells occupy distinct cortical layers and are targeted by axons originating in different cortical regions. Hence, apical and basal dendrites receive information from distinct sources. Physiological evidence suggests that this anatomically observed segregation of input sources may have functional significance. This possibility has been explored in various connectionist models that employ neurons with functionally distinct apical and basal compartments. A neuron in which separate sets of inputs can be integrated independently has the potential to operate in a variety of ways which are not possible for the conventional model of a neuron in which all inputs are treated equally. This article thus considers how functionally distinct apical and basal dendrites can contribute to the information processing capacities of single neurons and, in particular, how information from different cortical regions could have disparate effects on neural activity and learning.
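One common way such a two-compartment neuron is formalised (this is an illustrative scheme with invented parameters, not the specific model from the article) is to let the basal input drive the cell while the apical input modulates its gain, so feedback amplifies responses it cannot create on its own:

```python
import numpy as np

def two_compartment_response(x_basal, x_apical, w_basal, w_apical, k=0.5):
    """Basal dendrites carry driving input (e.g. feedforward signals);
    apical dendrites carry modulatory input (e.g. feedback from higher
    regions) that scales the response without driving it directly."""
    drive = max(0.0, float(w_basal @ x_basal))
    gain = 1.0 + k * max(0.0, float(w_apical @ x_apical))
    return drive * gain

w_b = np.array([0.5, 0.5])
w_a = np.array([1.0])
quiet   = two_compartment_response(np.array([0.0, 0.0]), np.array([1.0]), w_b, w_a)
driven  = two_compartment_response(np.array([1.0, 1.0]), np.array([0.0]), w_b, w_a)
boosted = two_compartment_response(np.array([1.0, 1.0]), np.array([1.0]), w_b, w_a)
# apical input alone produces no response; combined with basal drive it
# amplifies the response
```

Because the two input sets are integrated independently before being combined, the same feedback signal can have a different effect on activity (and hence on learning) than an equivalent feedforward signal would, which is the kind of disparity the article considers.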

    Predictive Coding as a Model of Biased Competition in Visual Attention

    Attention acts, through cortical feedback pathways, to enhance the response of cells encoding expected or predicted information. Such observations are inconsistent with the predictive coding theory of cortical function which proposes that feedback acts to suppress information predicted by higher-level cortical regions. Despite this discrepancy, this article demonstrates that the predictive coding model can be used to simulate a number of the effects of attention. This is achieved via a simple mathematical rearrangement of the predictive coding model, which allows it to be interpreted as a form of biased competition model. Nonlinear extensions to the model are proposed that enable it to explain a wider range of data.
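One nonlinear variant along these lines replaces the subtractive error of linear predictive coding with a divisive one, so that feedback modulates rather than cancels the input. The sketch below uses my own constants and random weights and is not necessarily the article's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.random(6)                      # input
W = rng.random((4, 6))
W /= W.sum(axis=1, keepdims=True)      # normalised feedforward weights
V = W.T                                # feedback (reconstruction) weights
eps1, eps2 = 1e-6, 1e-3                # small constants to avoid 0/0

y = np.zeros(4)
for _ in range(200):
    e = x / (eps2 + V @ y)             # divisive "prediction error"
    y = (eps1 + y) * (W @ e)           # multiplicative node update

# near a fixed point the reconstruction V @ y approximates the input, so
# the error signal e is driven towards one rather than towards zero
```

Because the updates are multiplicative, a top-down signal that scales `y` enhances the responses of the predicted nodes instead of suppressing their inputs, which is how a predictive coding circuit can reproduce attentional enhancement.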

    Learning image components for object recognition

    In order to perform object recognition it is necessary to learn representations of the underlying components of images. Such components correspond to objects, object-parts, or features. Non-negative matrix factorisation is a generative model that has been specifically proposed for finding such meaningful representations of image data, through the use of non-negativity constraints on the factors. This article reports on an empirical investigation of the performance of non-negative matrix factorisation algorithms. It is found that such algorithms need to impose additional constraints on the sparseness of the factors in order to successfully deal with occlusion. However, these constraints can themselves result in these algorithms failing to identify image components under certain conditions. In contrast, a recognition model (a competitive learning neural network algorithm) reliably and accurately learns representations of elementary image features without such constraints.
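A minimal non-negative matrix factorisation loop, using the standard multiplicative updates of Lee and Seung, shows the basic method being evaluated (random data stands in for images here, and the additional sparseness constraints discussed in the article are not included):

```python
import numpy as np

rng = np.random.default_rng(3)
V = rng.random((20, 30))          # data matrix: 30 "images" of 20 pixels
W = rng.random((20, 5))           # basis images (learned components)
H = rng.random((5, 30))           # non-negative encodings

def nmf_step(V, W, H, eps=1e-9):
    # Multiplicative updates keep every factor non-negative, so the data
    # is explained as a purely additive combination of components.
    H = H * (W.T @ V) / (W.T @ W @ H + eps)
    W = W * (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

err_before = np.linalg.norm(V - W @ H)
for _ in range(100):
    W, H = nmf_step(V, W, H)
err_after = np.linalg.norm(V - W @ H)
# the reconstruction error is non-increasing under these updates
```

Occlusion is awkward for this scheme precisely because occlusion removes pixels rather than adding them, while the model can only add non-negative components, which motivates the extra sparseness constraints the article investigates.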

    A feedback model of visual attention

    Feedback connections are a prominent feature of cortical anatomy and are likely to have a significant functional role in neural information processing. We present a neural network model of cortical feedback that successfully simulates neurophysiological data associated with attention. In this domain our model can be considered a more detailed, and biologically plausible, implementation of the biased competition model of attention. However, our model is more general as it can also explain a variety of other top-down processes in vision, such as figure/ground segmentation and contextual cueing. This model thus suggests that a common mechanism, involving cortical feedback pathways, is responsible for a range of phenomena and provides a unified account of currently disparate areas of research.
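The core of the biased competition account can be sketched in a few lines: nodes representing competing stimuli inhibit one another, and a small top-down bias delivered by feedback decides the competition. All parameters below are illustrative rather than taken from the model:

```python
import numpy as np

def compete(drive, bias, steps=60, eta=0.5):
    # Each node integrates its (possibly biased) input minus inhibition
    # proportional to the total activity of the population.
    y = np.zeros_like(drive)
    for _ in range(steps):
        y = np.maximum(0.0, y + eta * (drive + bias - y.sum()))
    return y

drive = np.array([1.0, 1.05])                        # two similar stimuli
no_attention = compete(drive, np.array([0.0, 0.0]))
attend_first = compete(drive, np.array([0.2, 0.0]))  # feedback biases node 0
# without attention the slightly stronger stimulus wins the competition;
# attending to the weaker stimulus reverses the outcome
```

The same mechanism generalises beyond attention: any top-down signal (a figure/ground cue, a learned context) can be delivered as a bias through the feedback pathway, which is why a single feedback model can cover these disparate phenomena.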

    Exploring the functional significance of dendritic inhibition in cortical pyramidal cells

    Inhibitory synapses contacting the soma and axon initial segment are commonly presumed to participate in shaping the response properties of cortical pyramidal cells. Such an inhibitory mechanism has been explored in numerous computational models. However, the majority of inhibitory synapses target the dendrites of pyramidal cells, and recent physiological data suggests that this dendritic inhibition affects tuning properties. We describe a model that can be used to investigate the role of dendritic inhibition in the competition between neurons. With this model we demonstrate that dendritic inhibition significantly enhances the computational and representational properties of neural networks.

    Pre-integration lateral inhibition enhances unsupervised learning

    A large and influential class of neural network architectures use post-integration lateral inhibition as a mechanism for competition. We argue that these algorithms are computationally deficient in that they fail to generate, or learn, appropriate perceptual representations under certain circumstances. An alternative neural network architecture is presented in which nodes compete for the right to receive inputs rather than for the right to generate outputs. This form of competition, implemented through pre-integration lateral inhibition, does provide appropriate coding properties and can be used to efficiently learn such representations. Furthermore, this architecture is consistent with both neuro-anatomical and neuro-physiological data. We thus argue that pre-integration lateral inhibition has computational advantages over conventional neural network architectures while remaining equally biologically plausible.
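In this scheme each node effectively sees its own inhibited copy of the input: an active node suppresses, for rival nodes, the inputs it uses most strongly, before those rivals integrate their inputs. A rough sketch follows; the exact update rule in the paper differs in detail, and the `alpha` parameter and function names here are mine:

```python
import numpy as np

def pre_integration_step(x, W, y, alpha=1.0):
    # For each node j, every input i is first reduced by the strongest
    # "claim" any *other* node makes on that input (its weight for that
    # input times its activity), and only then integrated by node j.
    n = W.shape[0]
    y_new = np.empty(n)
    for j in range(n):
        others = [k for k in range(n) if k != j]
        claim = (W[others] * y[others][:, None]).max(axis=0)
        y_new[j] = W[j] @ np.maximum(0.0, x - alpha * claim)
    return y_new

# Two nodes with overlapping preferred patterns compete for the shared
# input; the node matching the stimulus comes to dominate.
W = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5]])
x = np.array([1.0, 1.0, 0.0])      # matches the first node's pattern
y = np.array([0.0, 0.0])
for _ in range(20):
    y = pre_integration_step(x, W, y)
```

Competition for inputs rather than outputs is what lets such a network represent several simultaneously present patterns: a node is only inhibited on the inputs another node has actually claimed, not on its activity as a whole.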

    A feedback model of perceptual learning and categorisation

    Top-down (feedback) influences are known to have significant effects on visual information processing. Such influences are also likely to affect perceptual learning. This article employs a computational model of the cortical region interactions underlying visual perception to investigate possible influences of top-down information on learning. The results suggest that feedback could bias the way in which perceptual stimuli are categorised and could also facilitate the learning of sub-ordinate level representations suitable for object identification and perceptual expertise.

    Dendritic inhibition enhances neural coding properties

    The presence of a large number of inhibitory contacts at the soma and axon initial segment of cortical pyramidal cells has inspired a large and influential class of neural network models which use post-integration lateral inhibition as a mechanism for competition between nodes. However, inhibitory synapses also target the dendrites of pyramidal cells. The role of this dendritic inhibition in competition between neurons has not previously been addressed. We demonstrate, using a simple computational model, that such pre-integration lateral inhibition provides networks of neurons with useful representational and computational properties which are not provided by post-integration inhibition.