
    How Laminar Frontal Cortex and Basal Ganglia Circuits Interact to Control Planned and Reactive Saccades

    The basal ganglia and frontal cortex together allow animals to learn adaptive responses that acquire rewards when prepotent reflexive responses are insufficient. Anatomical studies show a rich pattern of interactions between the basal ganglia and distinct frontal cortical layers. Analysis of the laminar circuitry of the frontal cortex, together with its interactions with the basal ganglia, motor thalamus, superior colliculus, and inferotemporal and parietal cortices, provides new insight into how these brain regions interact to learn and perform complexly conditioned behaviors. A neural model whose cortical component represents the frontal eye fields captures these interacting circuits. Simulations of the neural model illustrate how it provides a functional explanation of the dynamics of 17 physiologically identified cell types found in these areas. The model predicts how action planning or priming (in cortical layers III and VI) is dissociated from execution (in layer V), how a cue may serve either as a movement target or as a discriminative cue to move elsewhere, and how the basal ganglia help choose among competing actions. The model simulates neurophysiological, anatomical, and behavioral data about how monkeys perform saccadic eye movement tasks, including fixation; single saccade, overlap, gap, and memory-guided saccades; anti-saccades; and parallel search among distractors. Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-95-l-0409, N00014-92-J-1309, N00014-95-1-0657); National Science Foundation (IRI-97-20333)

    A feedback model of perceptual learning and categorisation

    Top-down (feedback) influences are known to have significant effects on visual information processing. Such influences are also likely to affect perceptual learning. This article employs a computational model of the cortical region interactions underlying visual perception to investigate possible influences of top-down information on learning. The results suggest that feedback could bias the way in which perceptual stimuli are categorised and could also facilitate the learning of subordinate-level representations suitable for object identification and perceptual expertise

    Learning viewpoint invariant perceptual representations from cluttered images

    In order to perform object recognition, it is necessary to form perceptual representations that are sufficiently specific to distinguish between objects, but that are also sufficiently flexible to generalize across changes in location, rotation, and scale. A standard method for learning perceptual representations that are invariant to viewpoint is to form temporal associations across image sequences showing object transformations. However, this method requires that individual stimuli be presented in isolation and is therefore unlikely to succeed in real-world applications where multiple objects can co-occur in the visual input. This paper proposes a simple modification to the learning method that can overcome this limitation and results in more robust learning of invariant representations
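The temporal-association method described above is commonly implemented as a trace rule, in which a decaying trace of recent output activity gates a Hebbian update so that temporally adjacent views become wired to the same unit. The sketch below is illustrative only: the function name, winner-take-all readout, and all constants are assumptions, not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

def trace_learning(inputs, n_out=4, eta=0.05, trace_decay=0.8, epochs=20):
    """Foldiak-style trace rule: a decaying trace of output activity
    gates a Hebbian update, so inputs that follow each other in time
    become associated with the same output unit. Illustrative sketch."""
    n_in = inputs.shape[1]
    w = rng.normal(0.0, 0.1, (n_out, n_in))
    for _ in range(epochs):
        trace = np.zeros(n_out)
        for x in inputs:
            act = np.zeros(n_out)
            act[np.argmax(w @ x)] = 1.0          # winner-take-all response
            trace = trace_decay * trace + (1 - trace_decay) * act
            w += eta * np.outer(trace, x)        # Hebbian update gated by the trace
            w /= np.linalg.norm(w, axis=1, keepdims=True)
    return w
```

Because the trace carries activity across frames, views presented in temporal succession drive the same unit's weights, which is what yields viewpoint tolerance; as the abstract notes, this only works cleanly when stimuli appear in isolation.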

    Learning complex cell invariance from natural videos: A plausibility proof

    One of the most striking features of the cortex is its ability to wire itself. Understanding how the visual cortex wires up through development and how visual experience refines connections into adulthood is a key question for neuroscience. While computational models of the visual cortex are becoming increasingly detailed, the question of how such architecture could self-organize through visual experience is often overlooked. Here we focus on the class of hierarchical feedforward models of the ventral stream of the visual cortex, which extend the classical simple-to-complex cells model by Hubel and Wiesel (1962) to extra-striate areas, and have been shown to account for a host of experimental data. Such models assume two functional classes of simple and complex cells with specific predictions about their respective wiring and resulting functionalities. In these networks, the issue of learning, especially for complex cells, is perhaps the least well understood. In fact, in most of these models, the connectivity between simple and complex cells is not learned but rather hard-wired. Several algorithms have been proposed for learning invariances at the complex cell level based on a trace rule to exploit the temporal continuity of sequences of natural images, but very few can learn from natural cluttered image sequences. Here we propose a new variant of the trace rule that only reinforces the synapses between the most active cells, and can therefore handle cluttered environments. The algorithm has so far been developed and tested at the level of V1-like simple and complex cells: we verified that Gabor-like simple cell selectivity could emerge from competitive Hebbian learning.
In addition, we show how the modified trace rule allows the subsequent complex cells to learn to selectively pool over simple cells with the same preferred orientation but slightly different positions, thus increasing their tolerance to the precise position of the stimulus within their receptive fields
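One way to read "only reinforces the synapses between the most active cells" is a winner-restricted trace update: cells whose trace is weak (for example, those driven only by clutter) receive no weight change. The step below is a hedged sketch of that idea; the function name, the top-k restriction, and all constants are assumptions, not the authors' exact rule.

```python
import numpy as np

rng = np.random.default_rng(1)

def clutter_tolerant_trace_step(w, x, trace, k=1, eta=0.1, decay=0.9):
    """One step of a trace-rule variant where only the k cells with the
    strongest trace are updated, so weakly driven responses to clutter
    leave the weights untouched. Names and constants are illustrative."""
    trace = decay * trace + (1 - decay) * (w @ x)
    mask = np.zeros_like(trace)
    mask[np.argsort(trace)[-k:]] = 1.0           # keep only the most active cells
    w = w + eta * np.outer(mask * trace, x)      # winner-restricted Hebbian update
    return w / np.linalg.norm(w, axis=1, keepdims=True), trace
```

The mask is the only difference from a plain trace rule: without it, every cell's trace, including clutter-driven activity, would leak into the weights.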

    Artificial ontogenesis: a connectionist model of development

    This thesis suggests that ontogenetic adaptive processes are important for generating intelligent behaviour. It is thus proposed that such processes, as they occur in nature, need to be modelled and that such a model could be used for generating artificial intelligence, and specifically robotic intelligence. Hence, this thesis focuses on how mechanisms of intelligence are specified. A major problem in robotics is the need to predefine the behaviour to be followed by the robot. This makes design intractable for all but the simplest tasks and results in controllers that are specific to that particular task and are brittle when faced with unforeseen circumstances. These problems can be resolved by providing the robot with the ability to adapt the rules it follows and to autonomously create new rules for controlling behaviour. This solution thus depends on the predefinition of how rules to control behaviour are to be learnt rather than the predefinition of rules for behaviour themselves. Learning new rules for behaviour occurs during the developmental process in biology. Changes in the structure of the cerebral cortex underlie behavioural and cognitive development throughout infancy and beyond. The uniformity of the neocortex suggests that there is significant computational uniformity across the cortex resulting from uniform mechanisms of development, and holds out the possibility of a general model of development. Development is an interactive process between genetic predefinition and environmental influences. This interactive process is constructive: qualitatively new behaviours are learnt by using simple abilities as a basis for learning more complex ones. The progressive increase in competence, provided by development, may be essential to make tractable the process of acquiring higher-level abilities. While simple behaviours can be triggered by direct sensory cues, more complex behaviours require the use of more abstract representations.
There is thus a need to find representations at the correct level of abstraction appropriate to controlling each ability. In addition, finding the correct level of abstraction makes tractable the task of associating sensory representations with motor actions. Hence, finding appropriate representations is important both for learning behaviours and for controlling behaviours. Representations can be found by recording regularities in the world or by discovering re-occurring patterns through repeated sensory-motor interactions. By recording regularities within the representations thus formed, more abstract representations can be found. Simple, non-abstract, representations thus provide the basis for learning more complex, abstract, representations. A modular neural network architecture is presented as a basis for a model of development. The pattern of activity of the neurons in an individual network constitutes a representation of the input to that network. This representation is formed through a novel, unsupervised, learning algorithm which adjusts the synaptic weights to improve the representation of the input data. Representations are formed by neurons learning to respond to correlated sets of inputs. Neurons thus become feature detectors or pattern recognisers. Because the nodes respond to patterns of inputs, they encode more abstract features of the input than are explicitly encoded in the input data itself. In this way simple representations provide the basis for learning more complex representations.
The algorithm allows both more abstract representations to be formed by associating correlated, coincident, features together, and invariant representations to be formed by associating correlated, sequential, features together. The algorithm robustly learns accurate and stable representations, in a format most appropriate to the structure of the input data received: it can represent both single and multiple input features in both the discrete and continuous domains, using either topologically or non-topologically organised nodes. The output of one neural network is used to provide inputs for other networks. The robustness of the algorithm enables each neural network to be implemented using an identical algorithm. This allows a modular 'assembly' of neural networks to be used for learning more complex abilities: the output activations of a network can be used as the input to other networks, which can then find representations of more abstract information within the same input data; and, by defining the output activations of neurons in certain networks to have behavioural consequences, it is possible to learn sensory-motor associations, enabling sensory representations to be used to control behaviour
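The modular scheme described above, identical unsupervised modules whose outputs feed further modules, might be sketched as follows. This is only a rough illustration of the architecture, not the thesis's actual learning algorithm: the winner-take-all readout, the class name, and all parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

class CompetitiveNet:
    """Minimal unsupervised module: winner-take-all competition plus a
    Hebbian update that pulls the winner's weights toward its input.
    A sketch of the modular-assembly idea, not the thesis's algorithm."""
    def __init__(self, n_in, n_out):
        self.w = rng.normal(0.0, 0.1, (n_out, n_in))
        self.w /= np.linalg.norm(self.w, axis=1, keepdims=True)

    def forward(self, x):
        y = np.zeros(len(self.w))
        y[np.argmax(self.w @ x)] = 1.0           # activity pattern = representation
        return y

    def learn(self, x, eta=0.1):
        i = np.argmax(self.w @ x)
        self.w[i] += eta * (x - self.w[i])       # move winner toward correlated inputs
        self.w[i] /= np.linalg.norm(self.w[i])
        return self.forward(x)

# Modular assembly: the output of one network is the input of the next,
# so the second network can find more abstract regularities.
low, high = CompetitiveNet(8, 4), CompetitiveNet(4, 2)
for x in rng.random((50, 8)):
    high.learn(low.learn(x))
```

Because every module runs the identical algorithm, assembling deeper hierarchies only requires wiring one module's output activations into another's input, which mirrors the thesis's argument for algorithmic uniformity.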

    Self-Organization of Spiking Neural Networks for Visual Object Recognition

    On one hand, the visual system has the ability to differentiate between very similar objects. On the other hand, we can also recognize the same object in images that vary drastically, due to different viewing angle, distance, or illumination. The ability to recognize the same object under different viewing conditions is called invariant object recognition. Such object recognition capabilities are not immediately available after birth, but are acquired through learning by experience in the visual world. In many viewing situations different views of the same object are seen in a temporal sequence, e.g. when we are moving an object in our hands while watching it. This creates temporal correlations between successive retinal projections that can be used to associate different views of the same object. Theorists have therefore proposed a synaptic plasticity rule with a built-in memory trace (trace rule). In this dissertation I present spiking neural network models that offer possible explanations for learning of invariant object representations. These models are based on the following hypotheses: 1. Instead of a synaptic trace rule, persistent firing of recurrently connected groups of neurons can serve as a memory trace for invariance learning. 2. Short-range excitatory lateral connections enable learning of self-organizing topographic maps that represent temporal as well as spatial correlations. 3. When trained with sequences of object views, such a network can learn representations that enable invariant object recognition by clustering different views of the same object within a local neighborhood. 4. Learning of representations for very similar stimuli can be enabled by adaptive inhibitory feedback connections. The study presented in chapter 3.1 details an implementation of a spiking neural network to test the first three hypotheses.
This network was tested with stimulus sets that were designed in two feature dimensions to separate the impact of temporal and spatial correlations on learned topographic maps. The emerging topographic maps showed patterns that were dependent on the temporal order of object views during training. Our results show that pooling over local neighborhoods of the topographic map enables invariant recognition. Chapter 3.2 focuses on the fourth hypothesis. There we examine how adaptive feedback inhibition (AFI) can improve the ability of a network to discriminate between very similar patterns. The results show that with AFI learning is faster, and the network learns selective representations for stimuli with higher levels of overlap than without AFI. The results of chapter 3.1 suggest a functional role for the topographic object representations that are known to exist in the inferotemporal cortex, and suggest a mechanism for the development of such representations. The AFI model implements one aspect of predictive coding: subtraction of a prediction from the actual input of a system. The successful implementation in a biologically plausible network of spiking neurons shows that predictive coding can play a role in cortical circuits
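The predictive-coding aspect of AFI, subtracting a top-down prediction from the input and driving the output from the residual, can be sketched as a small rate-based loop. This is a hedged illustration only: the dissertation uses spiking neurons, and the iteration scheme, step size, and weight matrices below are assumptions.

```python
import numpy as np

def afi_response(x, w_ff, w_fb, steps=20, eta=0.5):
    """Rate-based sketch of adaptive feedback inhibition as predictive
    coding: the prediction fed back from the output layer is subtracted
    from the input, and the output integrates the feedforward drive on
    the residual. Iteration scheme and step size are assumptions."""
    y = np.zeros(w_ff.shape[0])
    e = x.copy()
    for _ in range(steps):
        e = x - w_fb.T @ y            # residual: input minus top-down prediction
        y = y + eta * (w_ff @ e)      # feedforward drive acts on the residual only
    return np.maximum(y, 0.0), e
```

When the feedback weights match the feedforward weights, a perfectly predicted input drives the residual toward zero, so only the unexplained part of overlapping stimuli remains to differentiate them, which is the discrimination benefit the abstract describes.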

    Models of learning in the visual system: dependence on retinal eccentricity

    In the primary visual cortex of primates, relatively more space is devoted to the representation of the central visual field than to the representation of the peripheral visual field. Experimentally testable theories about the factors and mechanisms which may have determined this inhomogeneous mapping may provide valuable insights into general processing principles in the visual system. I therefore investigated which visual situations this inhomogeneous representation of the visual field is well adapted to, and which mechanisms could support its refinement and stabilization during individual development. Furthermore, I studied possible functional consequences of the inhomogeneous representation for visual processing at central and peripheral locations of the visual field. Vision plays an important role during navigation, so visual processing should be well adapted to self-motion. I therefore assumed that spatially inhomogeneous retinal velocity distributions, caused by static objects during self-motion along the direction of gaze, are transformed on average into spatially homogeneous cortical velocity distributions. This would have the advantage that the cortical mechanisms concerned with the processing of self-motion can be identical in their spatial and temporal properties across the representation of the whole visual field. This is the case if the arrangement of objects relative to the observer corresponds to an ellipsoid with the observer at its center. I used the resulting flow field to train a network model of pulse-coding neurons with a Hebbian learning rule. The distribution of the learned receptive fields is in agreement with the inhomogeneous cortical representation of the visual field.
These results suggest that self-motion may have played an important role in the evolution of the visual system and that the inhomogeneous cortical representation of the visual field can be refined and stabilized by Hebbian learning mechanisms during ontogenesis under natural viewing conditions. In addition to the processing of self-motion, an important task of the visual system is the grouping and segregation of local features within a visual scene into coherent objects. I therefore asked how the corresponding mechanisms depend on the represented position of the visual field. Neuronal connections within the primary visual cortex are assumed to subserve this grouping process. These connections develop after eye-opening, depending on the visual input. How does the lateral connectivity depend on the represented position of the visual field? With increasing eccentricity, primary cortical receptive fields become larger and the cortical magnification of the visual field declines. I therefore investigated the spatial statistics of real-world scenes with respect to the spatial filter properties of cortical neurons at different locations of the visual field. I show that correlations between collinearly arranged filters of the same size and orientation increase with increasing filter size. However, at distances measured relative to filter size, collinear correlations decline more steeply with increasing distance for larger filters. This provides evidence against a homogeneous cortical connectivity across the whole visual field with respect to the coding of spatial object properties. Two major retino-cortical pathways are the magnocellular (M) and the parvocellular (P) pathways. While neurons along the M-pathway display temporal bandpass characteristics, neurons along the P-pathway show temporal lowpass characteristics. The ratio of P- to M-cells is not constant across the whole visual field, but declines with increasing retinal eccentricity.
Therefore, I investigated how the different temporal response properties of neurons of the M- and P-pathways influence self-organization in the visual cortex, and discussed possible consequences for the coding of visual objects at different locations of the visual field. Specifically, I studied the influence of stimulus motion on the self-organization of lateral connections in a network model of spiking neurons with Hebbian learning. Low stimulus velocities lead to horizontal connections well adapted to the coding of the spatial structure within the visual input, while higher stimulus velocities lead to connections which subserve the coding of the stimulus movement direction. This suggests that the temporal lowpass properties of P-neurons subserve the coding of spatial stimulus attributes (form) in the visual cortex, while the temporal bandpass properties of M-neurons support the coding of spatio-temporal stimulus attributes (movement direction). Hence, the central representation of the visual field may be well adapted to the encoding of spatial object properties due to the strong contribution of P-neurons, while the peripheral representation may be better adapted to the processing of motion
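The contrast between P-like lowpass and M-like bandpass temporal characteristics can be illustrated with simple leaky integrators. Modelling the bandpass response as a difference of two lowpass filters, and the specific time constants, are assumptions for illustration, not the models used in the thesis.

```python
import numpy as np

def lowpass(signal, alpha=0.1):
    """P-like temporal lowpass: leaky integration of the input,
    so sustained (form) information is preserved."""
    out, y = [], 0.0
    for s in signal:
        y += alpha * (s - y)          # leaky integration toward the current input
        out.append(y)
    return np.array(out)

def bandpass(signal, alpha_fast=0.5, alpha_slow=0.05):
    """M-like temporal bandpass, modelled as the difference of a fast
    and a slow lowpass filter: sustained input cancels out, so only
    changes (motion-related transients) pass through."""
    return lowpass(signal, alpha_fast) - lowpass(signal, alpha_slow)
```

A constant stimulus drives the lowpass output toward the stimulus value while the bandpass output decays toward zero, which is the basic reason P-like responses favour spatial structure and M-like responses favour temporal change.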

    Structure of receptive fields in a computational model of area 3b of primary sensory cortex

    In a previous work, we introduced a computational model of area 3b which is built upon neural field theory and receives input from a simplified model of the index distal finger pad populated by a random set of touch receptors (Merkel cells). This model has been shown to be able to self-organize following random stimulation of the finger pad model and to cope, to some extent, with cortical or skin lesions. The main hypothesis of the model is that learning of skin representations occurs at the thalamo-cortical level, while cortico-cortical connections serve a stereotyped competition mechanism that shapes the receptive fields. To further assess this hypothesis and the validity of the model, we reproduce in this article the exact experimental protocol of DiCarlo et al. that has been used to examine the structure of receptive fields in area 3b of the primary somatosensory cortex. Using the same analysis toolset, the model yields consistent results: most receptive fields contain a single region of excitation and one to several regions of inhibition. We then extended our study with a dynamic competition that strongly influences the formation of the receptive fields. We hypothesized that this dynamic competition corresponds to some form of somatosensory attention that may help to precisely shape the receptive fields. To test this hypothesis, we designed a protocol where an arbitrary region of interest is delineated on the index distal finger pad, and we either (1) explicitly instructed the model to attend to this region (simulating an attentional signal), (2) preferentially trained the model on this region, or (3) combined the two aforementioned protocols simultaneously. Results tend to confirm that dynamic competition leads to shrunken receptive fields, and that its joint interaction with intensive training promotes massive receptive field migration and shrinkage

    The reentry hypothesis: The putative interaction of the frontal eye field, ventrolateral prefrontal cortex, and areas V4, IT for attention and eye movement

    Attention is known to play a key role in perception, including action selection, object recognition and memory. Despite findings revealing competitive interactions among cell populations, attention remains difficult to explain. The central purpose of this paper is to link up a large number of findings in a single computational approach. Our simulation results suggest that attention can be well explained on a network level involving many areas of the brain. We argue that attention is an emergent phenomenon that arises from reentry and competitive interactions. We hypothesize that guided visual search requires the use of an object-specific template in prefrontal cortex to sensitize V4 and IT cells whose preferred stimuli match the target template. This induces a feature-specific bias and provides guidance for eye movements. Prior to an eye movement, a spatially organized reentry from oculomotor centers, specifically the movement cells of the frontal eye field, occurs and modulates the gain of V4 and IT cells. The processes involved are elucidated by quantitatively comparing the time course of simulated neural activity with experimental data. Using visual search tasks as an example, we provide clear and empirically testable predictions for the participation of IT, V4 and the frontal eye field in attention. Finally, we explain a possible physiological mechanism that can lead to non-flat search slopes as the result of a slow, parallel discrimination process
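The template-driven sensitization and competitive interaction described above can be sketched as feature-specific gain followed by divisive competition. The normalization scheme, parameter names, and values below are illustrative assumptions, not the paper's model.

```python
import numpy as np

def biased_competition(responses, template_similarity, gain=0.5, steps=10):
    """Sketch of biased competition: cells whose preferred feature
    matches the prefrontal target template receive a multiplicative
    gain boost, then iterated divisive competition sharpens the
    population toward the biased cells. Parameters are illustrative."""
    r = np.asarray(responses, dtype=float)
    r = r * (1.0 + gain * np.asarray(template_similarity))  # template-driven bias
    for _ in range(steps):
        r = r**2 / (r**2).sum()       # divisive competition among the population
    return r
```

Even a modest template-driven gain difference is amplified by the competition, so the matching population comes to dominate, which is the feature-specific bias that the model uses to guide eye movements.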