
    The role of architectural and learning constraints in neural network models: A case study on visual space coding

    The recent “deep learning revolution” in artificial neural networks has had a strong impact and seen widespread deployment in engineering applications, but the use of deep learning for neurocomputational modeling has so far been limited. In this article we argue that unsupervised deep learning represents an important step forward for improving neurocomputational models of perception and cognition, because it emphasizes the role of generative learning as opposed to discriminative (supervised) learning. As a case study, we present a series of simulations investigating the emergence of neural coding of visual space for sensorimotor transformations. We compare different network architectures commonly used as building blocks for unsupervised deep learning by systematically testing the type of receptive fields and gain modulation developed by the hidden neurons. In particular, we compare Restricted Boltzmann Machines (RBMs), which are stochastic, generative networks with bidirectional connections trained using contrastive divergence, with autoencoders, which are deterministic networks trained using error backpropagation. For both learning architectures we also explore the role of sparse coding, which has been identified as a fundamental principle of neural computation. The unsupervised models are then compared with supervised, feed-forward networks that learn an explicit mapping between different spatial reference frames. Our simulations show that both architectural and learning constraints strongly influenced the emergent coding of visual space in terms of the distribution of tuning functions at the level of single neurons. Unsupervised models, and particularly RBMs, were found to adhere more closely to neurophysiological data from single-cell recordings in the primate parietal cortex. These results provide new insights into how basic properties of artificial neural networks might be relevant for modeling neural information processing in biological systems.
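The abstract contrasts RBMs, trained with contrastive divergence, against autoencoders trained by backpropagation. As a rough, non-authoritative sketch of what a single contrastive-divergence (CD-1) update looks like for a binary RBM (the function name, learning rate, and binary-unit assumption are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b_vis, b_hid, lr=0.05):
    """One CD-1 update for a binary RBM.
    v0: batch of visible vectors, shape (n, n_vis); W: (n_vis, n_hid)."""
    # Positive phase: hidden probabilities driven by the data.
    p_h0 = sigmoid(v0 @ W + b_hid)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)  # stochastic sample
    # Negative phase: one step of alternating Gibbs sampling.
    p_v1 = sigmoid(h0 @ W.T + b_vis)   # "reconstruction" of the input
    p_h1 = sigmoid(p_v1 @ W + b_hid)
    # Update from the difference of data-driven and model-driven correlations.
    W += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / len(v0)
    b_vis += lr * (v0 - p_v1).mean(axis=0)
    b_hid += lr * (p_h0 - p_h1).mean(axis=0)
    return W, b_vis, b_hid
```

Unlike backpropagation in an autoencoder, this update needs no explicit error signal propagated through layers; it only contrasts correlations measured in the positive and negative phases.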

    Space coding for sensorimotor transformations can emerge through unsupervised learning

    The posterior parietal cortex (PPC) is fundamental for sensorimotor transformations because it combines multiple sensory inputs and posture signals into different spatial reference frames that drive motor programming. Here, we present a computational model mimicking the sensorimotor transformations occurring in the PPC. A recurrent neural network with one layer of hidden neurons (restricted Boltzmann machine) learned a stochastic generative model of the sensory data without supervision. After the unsupervised learning phase, the activity of the hidden neurons was used to compute a motor program (a population code on a bidimensional map) through a simple linear projection and delta rule learning. The average motor error, calculated as the difference between the expected and the computed output, was less than 3°. Importantly, analyses of the hidden neurons revealed gain-modulated visual receptive fields, thereby showing that space coding for sensorimotor transformations similar to that observed in the PPC can emerge through unsupervised learning. These results suggest that gain modulation is an efficient coding strategy to integrate visual and postural information toward the generation of motor commands.
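The readout stage described above, a linear projection from hidden activity to a motor output trained with the delta rule, can be sketched in a few lines. This is a minimal illustration under assumed shapes and hyperparameters, not the paper's implementation:

```python
import numpy as np

def delta_rule_readout(H, T, lr=0.1, epochs=200):
    """Train a linear map from hidden activity H (n, n_hid)
    to target output codes T (n, n_out) with the delta rule."""
    rng = np.random.default_rng(1)
    W = rng.normal(scale=0.01, size=(H.shape[1], T.shape[1]))
    for _ in range(epochs):
        Y = H @ W                          # linear projection
        W += lr * H.T @ (T - Y) / len(H)   # delta rule: step down the error
    return W
```

The delta rule only needs the local difference between expected and computed output, which is consistent with the abstract's emphasis on keeping the supervised component of the model as simple as possible.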

    Mechanisms of top-down visual spatial attention: computational and behavioral investigations

    This thesis examines the mechanisms underlying visual spatial attention. In particular I focused on top-down or voluntary attention, namely the ability to select relevant information and discard the irrelevant according to our goals. Given the limited processing resources of the human brain, which cannot process all the available information to the same degree, the ability to correctly allocate processing resources is fundamental for the accomplishment of most everyday tasks. The cost of misdirected attention is that we could miss some relevant information, with potentially serious consequences. In the first study (chapter 2) I will address the issue of the neural substrates of visual spatial attention: what are the neural mechanisms that allow the deployment of visual spatial attention? According to the premotor theory, orienting attention to a location in space is equivalent to planning an eye movement to the same location, an idea strongly supported by neuroimaging and neurophysiological evidence. Accordingly, in this study I will present a model that can account for several attentional effects without requiring additional mechanisms separate from the circuits that perform sensorimotor transformations for eye movements. Moreover, it includes a mechanism that makes it possible, within the framework of the premotor theory, to explain dissociations between attention and eye movements that may be invoked to disprove it. In the second model presented (chapter 3) I will further investigate the computational mechanisms underlying sensorimotor transformations. Specifically I will show that a representation in which the amplitude of visual responses is modulated by postural signals is both efficient and plausible, emerging also in a neural network model trained through unsupervised learning (i.e., using only signals locally available at the neuron level). Ultimately this result gives additional support to the approach adopted in the first model.
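The gain-modulated representation discussed in chapter 3, a visual response whose amplitude is scaled by a postural (eye-position) signal, can be illustrated with a toy one-dimensional tuning function. The Gaussian retinotopic tuning and the linear "gain field" below are a common simplification of parietal responses; all parameter values are hypothetical:

```python
import numpy as np

def gain_modulated_response(x, eye, pref_x=0.0, sigma=10.0, slope=0.01):
    """Toy gain-field neuron: retinotopic Gaussian tuning to stimulus
    position x (deg), multiplicatively scaled by eye position (deg)."""
    visual = np.exp(-(x - pref_x) ** 2 / (2 * sigma ** 2))  # receptive field
    gain = 1.0 + slope * eye                                # linear gain field
    return gain * visual
```

Because the gain multiplies rather than shifts the receptive field, a downstream linear readout can recover head-centered position from a population of such units, which is why this coding scheme is considered efficient for sensorimotor transformations.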
Next, I will present a series of behavioral studies: in the first (chapter 4) I will show that spatial constancy of attention (i.e., the ability to sustain attention at a spatial location across eye movements) is dependent on some properties of the image, namely the presence of continuous visual landmarks at the attended locations. Importantly, this finding helps reconcile several recent, apparently conflicting results. In the second behavioral study (chapter 5), I will investigate an often neglected aspect of spatial cueing paradigms, probably the most widely used technique in studies of covert attention: the role of cue predictivity (i.e., the extent to which the spatial cue correctly indicates the location where the target stimulus will appear). Results show that, independently of participants' awareness, changes in predictivity result in changes in spatial validity effects, and that reliable shifts of attention can also take place in the absence of a predictive cue. In sum, the results question the appropriateness of using predictive cues for delineating pure voluntary shifts of spatial attention. Finally, in the last study I will use a psychophysiological measure, the diameter of the eye's pupil, to investigate intensive aspects of attention. Event-related pupil dilations accurately mirrored changes in visuospatial awareness induced by a dual-task manipulation that consumed attentional resources. Moreover, results of the primary spatial monitoring task revealed a significant rightward bias, indicated by a greater proportion of missed targets in the left hemifield. Interestingly, this result mimics the extinction to double simultaneous stimulation (i.e., the failure to respond to a stimulus when it is presented simultaneously with another stimulus) which is often found in patients with unilateral brain damage.
Overall, these studies present an emerging picture of attention as a complex mechanism that, even in its volitional aspects, is modulated by other non-volitional factors, both external and internal to the individual.