
    Rapid Visual Categorization is not Guided by Early Salience-Based Selection

    The current dominant visual processing paradigm in both human and machine research is the feedforward, layered hierarchy of neural-like processing elements. Within this paradigm, visual saliency is seen by many to have a specific role, namely that of early selection. Early selection is thought to enable very fast visual performance by limiting processing to only the most salient candidate portions of an image. This strategy has led to a plethora of saliency algorithms that have indeed improved processing-time efficiency in machine algorithms, which in turn has strengthened the suggestion that human vision employs a similar early selection strategy. However, at least one set of critical tests of this idea has never been performed with respect to the role of early selection in human vision. How would the best of the current saliency models perform on the stimuli used by the experimentalists who first provided evidence for this visual processing paradigm? Would the algorithms really provide correct candidate sub-images to enable fast categorization on those same images? Do humans really need this early selection for their impressive performance? Here, we report on a new series of tests of these questions whose results suggest that it is quite unlikely that such an early selection process has any role in human rapid visual categorization. Comment: 22 pages, 9 figures.
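As an illustration of the early-selection strategy discussed above, the following sketch computes a simple bottom-up saliency map (a spectral-residual map in the style of Hou and Zhang; this particular algorithm is an assumption chosen for illustration and is not necessarily one of the models evaluated in the paper) and then picks the most salient candidate sub-image, i.e. the patch an early-selection pipeline would forward for categorization:

```python
import numpy as np

def spectral_residual_saliency(image):
    """Bottom-up saliency map via the spectral residual of the log-amplitude
    spectrum (Hou & Zhang style sketch). `image` is a 2-D grayscale array;
    returns a map of the same shape, normalized to [0, 1]."""
    f = np.fft.fft2(image)
    log_amp = np.log(np.abs(f) + 1e-8)
    phase = np.angle(f)
    # Spectral residual: log amplitude minus its 3x3 local average.
    padded = np.pad(log_amp, 1, mode="edge")
    h, w = log_amp.shape
    local_avg = sum(padded[i:i + h, j:j + w]
                    for i in range(3) for j in range(3)) / 9.0
    residual = log_amp - local_avg
    saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return saliency / saliency.max()

def top_salient_patch(saliency, patch=8):
    """Location (row, col) of the window with highest mean saliency: the
    'candidate sub-image' an early-selection pipeline would pass downstream."""
    h, w = saliency.shape
    best, best_rc = -1.0, (0, 0)
    for r in range(h - patch + 1):
        for c in range(w - patch + 1):
            m = saliency[r:r + patch, c:c + patch].mean()
            if m > best:
                best, best_rc = m, (r, c)
    return best_rc
```

The paper's question is whether the patch returned by such a selection step actually contains the information humans use for rapid categorization on the classic experimental stimuli.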

    Intelligent systems in the context of surrounding environment

    We investigate the behavioral patterns of a population of agents, each controlled by a simple biologically motivated neural network model, when they are set in competition against each other in the Minority Model of Challet and Zhang. We explore the effects of changing agent characteristics, demonstrating that crowding behavior takes place among agents of similar memory, and show how this allows unique 'rogue' agents with higher memory values to take advantage of a majority population. We also show that agents' analytic capability is largely determined by the size of the intermediary layer of neurons. In the context of these results, we discuss the general nature of natural and artificial intelligence systems, and suggest that intelligence exists only in the context of the surrounding environment (embodiment). Source code for the programs used can be found at http://neuro.webdrake.net/
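For context, the underlying game is easy to simulate. The sketch below implements the classic table-based Minority Game of Challet and Zhang, in which each agent maps a finite memory of past minority outcomes to an action via fixed random lookup tables and always plays its best-scoring table. The table-based agents are a deliberate simplification, since the work above replaces them with small neural networks; all parameter values are illustrative:

```python
import random

def play_minority_game(n_agents=101, memory=3, n_strategies=2, rounds=200, seed=0):
    """Classic Minority Game: an odd number of agents repeatedly choose 0 or 1,
    and those on the minority side win the round. Each agent owns a few fixed
    random lookup tables mapping the last `memory` minority outcomes to an
    action, and plays its currently best-scoring table. Returns per-agent wins."""
    rng = random.Random(seed)
    n_hist = 2 ** memory
    # strategies[a][s][h]: action of agent a's strategy s given history index h
    strategies = [[[rng.randrange(2) for _ in range(n_hist)]
                   for _ in range(n_strategies)] for _ in range(n_agents)]
    scores = [[0] * n_strategies for _ in range(n_agents)]  # virtual points
    wins = [0] * n_agents
    history = 0  # last `memory` minority outcomes packed into an integer
    for _ in range(rounds):
        acts = [agent[max(range(n_strategies), key=lambda s: scores[a][s])][history]
                for a, agent in enumerate(strategies)]
        ones = sum(acts)
        minority = 1 if ones < n_agents - ones else 0
        for a in range(n_agents):
            wins[a] += acts[a] == minority
            for s in range(n_strategies):
                # strategies are scored "virtually", whether played or not
                scores[a][s] += strategies[a][s][history] == minority
        history = ((history << 1) | minority) % n_hist
    return wins
```

Crowding arises because agents with the same memory length condition on the same history and therefore tend to act alike, which is exactly what a longer-memory 'rogue' agent can exploit.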

    Bidirectional Learning in Recurrent Neural Networks Using Equilibrium Propagation

    Neurobiologically plausible learning algorithms for recurrent neural networks that can perform supervised learning are a neglected area of study. Equilibrium propagation is a recent synthesis of several ideas in biological and artificial neural network research that uses a continuous-time, energy-based neural model with a local learning rule. However, despite dealing with recurrent networks, equilibrium propagation has only been applied to discriminative categorization tasks. This thesis generalizes equilibrium propagation to bidirectional learning with asymmetric weights. By simultaneously learning the discriminative and generative transformations for a set of data points and their corresponding category labels, bidirectional equilibrium propagation uses recurrence and weight asymmetry to share related but non-identical representations within the network. Experiments on an artificial dataset demonstrate the ability to learn both transformations, as well as the ability of asymmetric-weight networks to generalize their discriminative training to the untrained generative task.
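A minimal sketch of standard (unidirectional, symmetric-feedback) equilibrium propagation may help make the two-phase, local learning rule concrete. The network sizes, learning rate, and nudging strength below are illustrative assumptions, and the bidirectional asymmetric variant developed in the thesis is not shown:

```python
import numpy as np

def rho(s):
    """Sigmoid nonlinearity applied to neuron states."""
    return 1.0 / (1.0 + np.exp(-s))

def settle(x, y, W1, W2, beta=0.0, steps=100, dt=0.2):
    """Relax hidden and output states by gradient descent on the EP energy.
    beta=0 is the free phase; beta>0 weakly clamps outputs toward target y."""
    h = np.zeros(W1.shape[1])
    o = np.zeros(W2.shape[1])
    for _ in range(steps):
        rp_h = rho(h) * (1 - rho(h))  # rho'(h)
        rp_o = rho(o) * (1 - rho(o))  # rho'(o)
        dh = -h + rp_h * (rho(x) @ W1 + rho(o) @ W2.T)
        do = -o + rp_o * (rho(h) @ W2) + beta * (y - o)
        h += dt * dh
        o += dt * do
    return h, o

def ep_update(x, y, W1, W2, beta=1.0, lr=0.5):
    """One equilibrium-propagation step: contrast the free and nudged fixed
    points, then apply the local contrastive Hebbian weight update."""
    h0, o0 = settle(x, y, W1, W2, beta=0.0)
    h1, o1 = settle(x, y, W1, W2, beta=beta)
    W1 += (lr / beta) * np.outer(rho(x), rho(h1) - rho(h0))
    W2 += (lr / beta) * (np.outer(rho(h1), rho(o1)) - np.outer(rho(h0), rho(o0)))
    return W1, W2
```

Crucially, each weight change depends only on the activities of the two neurons it connects, measured in the two phases; no separate backward pass is needed, which is what makes the rule local in the sense used above.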

    Precision and neuronal dynamics in the human posterior parietal cortex during evidence accumulation

    Primate studies show slow ramping activity in posterior parietal cortex (PPC) neurons during perceptual decision-making. These findings have inspired a rich theoretical literature to account for this activity. These accounts are largely unrelated to Bayesian theories of perception and predictive coding, a related formulation of perceptual inference in the cortical hierarchy. Here, we tested a key prediction of such hierarchical inference, namely that the estimated precision (reliability) of information ascending the cortical hierarchy plays a key role in determining both the speed of decision-making and the rate of increase of PPC activity. Using dynamic causal modelling of magnetoencephalographic (MEG) evoked responses, recorded during a simple perceptual decision-making task, we recovered ramping activity from an anatomically and functionally plausible network of regions, including early visual cortex, the middle temporal area (MT), and the PPC. Precision, as reflected by the gain on pyramidal cell activity, was strongly correlated with both the speed of decision-making and the slope of PPC ramping activity. Our findings indicate that the dynamics of neuronal activity in the human PPC during perceptual decision-making recapitulate those observed in the macaque, and in so doing we link observations from primate electrophysiology and human choice behaviour. Moreover, the synaptic gain control modulating these dynamics is consistent with predictive coding formulations of evidence accumulation.
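The reported relationship between precision and decision speed can be caricatured with a toy accumulator in which precision acts as a multiplicative gain on incoming evidence. This sketch is purely illustrative of that correlation and is not the dynamic causal model used in the study; all parameter values are assumptions:

```python
import numpy as np

def decision_time(precision, drift=0.05, noise=0.5, threshold=1.0,
                  seed=7, max_t=10000):
    """Toy evidence accumulator: noisy evidence samples are weighted by the
    estimated precision before being summed, so higher precision produces a
    steeper ramp and an earlier threshold crossing (a faster decision)."""
    rng = np.random.default_rng(seed)
    x = 0.0
    for t in range(1, max_t + 1):
        evidence = drift + noise * rng.standard_normal()
        x += precision * evidence  # precision-weighted accumulation
        if x >= threshold:
            return t               # time of the decision
    return max_t
```

With the same evidence stream, a high-precision accumulator reaches threshold no later than a low-precision one, mirroring the correlation between gain, ramping slope, and decision speed described above.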

    Predictive Coding Can Do Exact Backpropagation on Any Neural Network

    The intersection of neuroscience and deep learning has brought benefits and developments to both fields for several decades, helping both to understand how learning works in the brain and to achieve state-of-the-art performance on various AI benchmarks. Backpropagation (BP) is the most widely adopted method for training artificial neural networks, but it is often criticized for its biological implausibility (e.g., the lack of local update rules for the parameters). Therefore, biologically plausible learning methods that rely on predictive coding (a framework for describing information processing in the brain), such as inference learning (IL), are increasingly studied. Recent works prove that IL can approximate BP up to a certain margin on multilayer perceptrons (MLPs), and asymptotically on any other complex model, and that zero-divergence inference learning (Z-IL), a variant of IL, can exactly implement BP on MLPs. However, the recent literature also shows that no biologically plausible method yet exactly replicates the weight update of BP on complex models. To fill this gap, in this paper we generalize IL and Z-IL by directly defining them on computational graphs. To our knowledge, this is the first biologically plausible algorithm shown to be equivalent to BP in its parameter updates on any neural network, making it a significant advance for the interdisciplinary study of neuroscience and deep learning. Comment: 15 pages, 9 figures.
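A compact sketch of plain inference learning on a small linear network may clarify the mechanism being generalized: value nodes are first set by a feedforward pass, then relaxed to reduce local prediction errors, and each weight is updated from purely local quantities. The layer sizes and step sizes below are illustrative assumptions, and this shows basic IL rather than the paper's Z-IL-on-computational-graphs algorithm:

```python
import numpy as np

def il_step(x, y, Ws, T=20, gamma=0.1, lr=0.05):
    """One inference-learning (predictive-coding) training step on a linear
    multilayer network. Ws is a list of weight matrices; x is the input and
    y the target. Each layer l carries a local prediction error
    eps[l] = mu[l+1] - mu[l] @ Ws[l], and weight updates use only the
    presynaptic value node and the postsynaptic error (a local rule)."""
    L = len(Ws)
    # Feedforward initialisation of the value nodes mu[0..L].
    mu = [x]
    for W in Ws:
        mu.append(mu[-1] @ W)
    mu[L] = y  # clamp the output layer to the target
    # Iterative inference: relax hidden value nodes to reduce local errors.
    for _ in range(T):
        eps = [mu[l + 1] - mu[l] @ Ws[l] for l in range(L)]
        for l in range(1, L):
            mu[l] = mu[l] + gamma * (-eps[l - 1] + eps[l] @ Ws[l].T)
    eps = [mu[l + 1] - mu[l] @ Ws[l] for l in range(L)]
    # Local, Hebbian-like weight update from pre-activity and error.
    for l in range(L):
        Ws[l] += lr * np.outer(mu[l], eps[l])
    return Ws
```

After the inference phase has converged, the residual errors play the role that backpropagated gradients play in BP; the Z-IL result cited above concerns the conditions under which this correspondence becomes exact.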