
    Unmixing Binocular Signals

    Incompatible images presented to the two eyes lead to perceptual oscillations in which one image at a time is visible. Early models portrayed this binocular rivalry as reciprocal inhibition between monocular representations of the images, occurring at an early visual stage prior to binocular mixing. However, psychophysical experiments found conditions under which rivalry could also occur at a higher, more abstract level of representation. In those cases, the rivalry was between image representations dissociated from eye-of-origin information, rather than between monocular representations from the two eyes. Moreover, neurophysiological recordings found the strongest rivalry correlate in inferotemporal cortex, a high-level, predominantly binocular visual area involved in object recognition, rather than in early visual structures. An unresolved issue is how the separate identities of the two images can be maintained after binocular mixing so that rivalry remains possible at higher levels. Here we demonstrate that after the two images are mixed, they can be unmixed at any subsequent stage using a physiologically plausible non-linear signal-processing algorithm, non-negative matrix factorization, previously proposed for parsing object parts during object recognition. The possibility that unmixed left and right images can be regenerated at late stages within the visual system provides a mechanism for creating various binocular representations and interactions de novo in different cortical areas for different purposes, rather than inheriting them from early areas. This is a clear example of how non-linear algorithms can lead to highly non-intuitive behavior in neural information processing.
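    The unmixing idea can be sketched numerically. Below is a minimal illustration, not the paper's implementation: two hypothetical non-negative "monocular" source signals are summed by a population of binocular units with random weights, and standard multiplicative-update NMF (Lee & Seung) factorizes the mixed population response back into two non-negative components. All data and dimensions here are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two non-negative "monocular" source images (flattened); hypothetical data.
left = rng.random(100)
right = rng.random(100)
S = np.vstack([left, right])            # 2 sources x 100 pixels

# Binocular mixing: each model cell sums the two eyes with random weights.
W_true = rng.random((50, 2))            # 50 binocular units
V = W_true @ S                          # mixed population response, 50 x 100

def nmf(V, k, iters=1000, eps=1e-9):
    """Multiplicative-update NMF: V ~ W @ H with all factors non-negative."""
    rng = np.random.default_rng(1)
    W = rng.random((V.shape[0], k))
    H = rng.random((k, V.shape[1]))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

W, H = nmf(V, k=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(err)  # small: the rank-2 non-negative mixture is recovered
```

    The rows of H are the "unmixed" components, recoverable (up to scale and permutation) at any stage downstream of mixing, which is the point the abstract makes.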

    No binocular rivalry in the LGN of alert macaque monkeys

    Orthogonal drifting gratings were presented binocularly to alert macaque monkeys in an attempt to find neural correlates of binocular rivalry. Gratings were centered over lateral geniculate nucleus (LGN) receptive fields and the corresponding points for the opposite eye. The only task of the monkey was to fixate. We found no difference between the responses of LGN neurons under rivalrous and nonrivalrous conditions, as determined by examining the ratios of their respective power spectra. There was, however, a curious “temporal afterimage” effect in which cell responses continued to be modulated at the drift frequency of the grating for several seconds after the grating disappeared.
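    The power-spectrum-ratio comparison can be illustrated on synthetic data. This is a sketch of the analysis logic only, with invented sampling rate, drift frequency, and noise levels: if LGN firing is modulated at the grating drift frequency equally in both conditions, the ratio of power at that frequency stays near 1.

```python
import numpy as np

fs = 1000.0                      # sampling rate (Hz), assumed
t = np.arange(0, 4, 1 / fs)      # 4 s of simulated response
f_drift = 4.0                    # grating drift frequency (Hz), assumed

rng = np.random.default_rng(0)

def modulated_rate(depth):
    # Firing rate modulated at the drift frequency, plus noise.
    return 20 + depth * np.sin(2 * np.pi * f_drift * t) + rng.normal(0, 1, t.size)

def power_at(signal, f):
    # Power spectrum via FFT; pick the bin nearest frequency f.
    spec = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
    freqs = np.fft.rfftfreq(signal.size, 1 / fs)
    return spec[np.argmin(np.abs(freqs - f))]

rivalrous = modulated_rate(depth=5.0)
nonrivalrous = modulated_rate(depth=5.0)   # same modulation: the LGN result
ratio = power_at(rivalrous, f_drift) / power_at(nonrivalrous, f_drift)
print(ratio)   # near 1: no rivalry-related change at the drift frequency
```

    A rivalry correlate would show up as this ratio deviating systematically from 1 during perceptual suppression of the driven eye.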

    Population Coding of Visual Space: Comparison of Spatial Representations in Dorsal and Ventral Pathways

    Although the representation of space is as fundamental to visual processing as the representation of shape, it has received relatively little attention from neurophysiological investigations. In this study we characterize representations of space within visual cortex, and examine how they differ in a first direct comparison between dorsal and ventral subdivisions of the visual pathways. Neural activities were recorded in anterior inferotemporal cortex (AIT) and lateral intraparietal cortex (LIP) of awake behaving monkeys, structures associated with the ventral and dorsal visual pathways respectively, as a stimulus was presented at different locations within the visual field. In spatially selective cells, we find greater modulation of cell responses in LIP with changes in stimulus position. Further, using a novel population-based statistical approach (namely, multidimensional scaling), we recover the spatial map implicit within the activities of neural populations, allowing us to quantitatively compare the geometry of neural space with physical space. We show that a population of spatially selective LIP neurons, despite having large receptive fields, is able to almost perfectly reconstruct stimulus locations within a low-dimensional representation. In contrast, a population of AIT neurons, despite each cell being spatially selective, provides less accurate low-dimensional reconstructions of stimulus locations. Instead, it produces only a topologically (categorically) correct rendition of space, which nevertheless might be critical for object and scene recognition. Furthermore, we found that the spatial representation recovered from population activity shows greater translation invariance in LIP than in AIT. We suggest that LIP spatial representations may be dimensionally isomorphic with 3D physical space, while AIT spatial representations may reflect a more categorical representation of space (e.g., “next to” or “above”).
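    The decoding step can be sketched as follows. This is an illustrative reconstruction on simulated data, not the study's analysis pipeline: responses of a hypothetical population of broad Gaussian receptive fields are compared across stimulus locations, and classical (Torgerson) multidimensional scaling recovers the spatial map implicit in the pairwise response dissimilarities.

```python
import numpy as np

def classical_mds(D, dims=2):
    """Classical (Torgerson) MDS: embed points from a distance matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:dims]      # top eigenvalues
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0))

# Hypothetical stimulus locations on a 5 x 5 grid.
grid = np.array([[x, y] for x in range(5) for y in range(5)], float)

# Simulated population of broad Gaussian receptive fields, random centers.
rng = np.random.default_rng(0)
centers = rng.uniform(-2, 6, size=(100, 2))
sigma = 3.0
rates = np.exp(-((grid[:, None, :] - centers[None]) ** 2).sum(-1)
               / (2 * sigma ** 2))             # 25 locations x 100 cells

# Dissimilarity between population responses to each pair of locations.
D = np.linalg.norm(rates[:, None] - rates[None], axis=-1)
recovered = classical_mds(D, dims=2)
# 'recovered' is the implicit spatial map, up to rotation/reflection/scale.
```

    Note that the cell labels (receptive-field centers) are never used in the reconstruction; only the pattern of firing rates enters, which is what makes the comparison between areas possible.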

    Population Coding of Visual Space: Modeling

    We examine how the representation of space is affected by receptive field (RF) characteristics of the encoding population. Spatial responses were defined by overlapping Gaussian RFs. These responses were analyzed using multidimensional scaling to extract the representation of global space implicit in population activity. Spatial representations were based purely on firing rates, which were not labeled with RF characteristics (tuning curve peak location, for example), differentiating this approach from many other population coding models. Because responses were unlabeled, this model represents space using intrinsic coding, extracting relative positions amongst stimuli, rather than extrinsic coding where known RF characteristics provide a reference frame for extracting absolute positions. Two parameters were particularly important: RF diameter and RF dispersion, where dispersion indicates how broadly RF centers are spread out from the fovea. For large RFs, the model was able to form metrically accurate representations of physical space on low-dimensional manifolds embedded within the high-dimensional neural population response space, suggesting that in some cases the neural representation of space may be dimensionally isomorphic with 3D physical space. Smaller RF sizes degraded and distorted the spatial representation, with the smallest RF sizes (present in early visual areas) being unable to recover even a topologically consistent rendition of space on low-dimensional manifolds. Finally, although positional invariance of stimulus responses has long been associated with large RFs in object recognition models, we found RF dispersion rather than RF diameter to be the critical parameter. In fact, at a population level, the modeling suggests that higher ventral stream areas with highly restricted RF dispersion would be unable to achieve positionally-invariant representations beyond this narrow region around fixation.
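    The RF-diameter effect can be reproduced in a few lines. This is a toy version under assumed parameters (grid extent, number of cells, RF widths), not the paper's model: a Gaussian-RF population encodes a grid of locations, classical MDS extracts a 2D map from the unlabeled rates, and recovery quality is scored by how well recovered pairwise distances track physical ones.

```python
import numpy as np

rng = np.random.default_rng(0)
grid = np.array([[x, y] for x in range(6) for y in range(6)], float)
centers = rng.uniform(0, 5, size=(200, 2))    # RF centers, assumed dispersion

def embedding_quality(sigma, dims=2):
    # Gaussian-RF population response to each grid location.
    rates = np.exp(-((grid[:, None] - centers[None]) ** 2).sum(-1)
                   / (2 * sigma ** 2))
    D = np.linalg.norm(rates[:, None] - rates[None], axis=-1)
    # Classical MDS on the response dissimilarity matrix.
    n = D.shape[0]
    J = np.eye(n) - 1 / n                     # centering matrix I - (1/n) * ones
    B = -0.5 * J @ (D ** 2) @ J
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:dims]
    X = vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))
    # Quality: correlation between recovered and physical pairwise distances.
    iu = np.triu_indices(n, 1)
    phys = np.linalg.norm(grid[:, None] - grid[None], axis=-1)[iu]
    rec = np.linalg.norm(X[:, None] - X[None], axis=-1)[iu]
    return np.corrcoef(phys, rec)[0, 1]

print(embedding_quality(sigma=4.0))   # large RFs: near-metric recovery
print(embedding_quality(sigma=0.3))   # small RFs: degraded, distorted map
```

    With small RFs, responses to well-separated locations are nearly orthogonal, so all pairwise response distances saturate and the low-dimensional embedding loses the metric structure, which is the degradation the abstract describes.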

    Characteristics of Eye-Position Gain Field Populations Determine Geometry of Visual Space

    We have previously demonstrated differences in eye-position spatial maps between anterior inferotemporal cortex (AIT) in the ventral stream and lateral intraparietal cortex (LIP) in the dorsal stream, based on population decoding of gaze-angle modulations of neural visual responses (i.e., eye-position gain fields). Here we explore the basis of such spatial encoding differences through modeling of gain field characteristics. We created a population of model neurons, each having a different eye-position gain field. This population was used to reconstruct eye-position visual space using multidimensional scaling. As gain field shapes have never been well established experimentally, we examined different functions, including planar, sigmoidal, elliptical, hyperbolic, and mixtures of those functions. All functions successfully recovered positions, indicating weak constraints on allowable gain field shapes. We then used a genetic algorithm to modify the characteristics of model gain field populations until the recovered spatial maps closely matched those derived from monkey neurophysiological data in AIT and LIP. The primary difference found between model AIT and LIP gain fields was that AIT gain fields were more foveally dominated: gain fields in AIT operated on smaller spatial scales and with smaller dispersions than in LIP. Thus we show that the geometry of eye-position visual space depends on the population characteristics of gain fields, and that differences in gain field characteristics for different cortical areas may underlie differences in the representation of space.
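    A minimal version of the reconstruction step, under invented parameters (gaze range, cell count, gain-field slopes): a population of planar gain fields scales each cell's response by g(e) = 1 + w·e for gaze angle e, and classical MDS applied to the resulting population responses recovers the eye-position map. Planar is just one of the shapes the study tested.

```python
import numpy as np

rng = np.random.default_rng(0)

# Gaze angles sampled on a 5 x 5 grid (degrees); hypothetical setup.
eye = np.array([[x, y] for x in np.linspace(-20, 20, 5)
                       for y in np.linspace(-20, 20, 5)])

# Population of planar eye-position gain fields: gain_i(e) = 1 + w_i . e,
# clipped at zero so rates stay non-negative.
W = rng.normal(0, 0.01, size=(150, 2))
G = np.clip(1 + eye @ W.T, 0, None)           # 25 eye positions x 150 cells

# Recover the implicit eye-position map with classical MDS.
D = np.linalg.norm(G[:, None] - G[None], axis=-1)
n = D.shape[0]
J = np.eye(n) - 1 / n                         # centering matrix
B = -0.5 * J @ (D ** 2) @ J
vals, vecs = np.linalg.eigh(B)
idx = np.argsort(vals)[::-1][:2]
X = vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))

# X matches the true gaze-angle grid up to rotation/reflection/scale.
iu = np.triu_indices(n, 1)
phys = np.linalg.norm(eye[:, None] - eye[None], axis=-1)[iu]
rec = np.linalg.norm(X[:, None] - X[None], axis=-1)[iu]
r = np.corrcoef(phys, rec)[0, 1]
print(r)   # close to 1 for this planar population
```

    Shrinking the spatial scale or dispersion of the gain fields (the AIT-like regime in the abstract) distorts the recovered geometry in the same way small RFs do for stimulus position.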

    Towards building a more complex view of the lateral geniculate nucleus: Recent advances in understanding its role

    The lateral geniculate nucleus (LGN) has often been treated in the past as a linear filter that adds little to retinal processing of visual inputs. Here we review anatomical, neurophysiological, brain imaging, and modeling studies that have in recent years built up a much more complex view of the LGN. These include effects related to nonlinear dendritic processing, cortical feedback, synchrony and oscillations across LGN populations, as well as involvement of the LGN in higher-level cognitive processing. Although recent studies have provided valuable insights into early visual processing, including the role of the LGN, a unified model of LGN responses to real-world objects has not yet been developed. In the light of recent data, we suggest that the role of the LGN deserves more careful consideration in developing models of high-level visual processing.

    A low-dimensional model of binocular rivalry using winnerless competition

    Copyright © 2010 Elsevier. NOTICE: this is the author’s version of a work that was accepted for publication in Physica D: Nonlinear Phenomena. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Physica D: Nonlinear Phenomena Vol. 239 (2010), DOI: 10.1016/j.physd.2009.06.018

    Notes: The article presents a novel biologically inspired mathematical model of perceptual instability in binocular rivalry. I took part in developing the model in relation to extant models of binocular rivalry, and wrote the introduction and discussion sections of the paper. Peter Ashwin ran the simulations and wrote the sections of the paper presenting the model in mathematical formalism, the simulation results, and the related mathematical proofs.

    We discuss a novel minimal model for binocular rivalry (and, more generally, perceptual dominance) effects. The model has only three state variables, but nonetheless exhibits a wide range of input- and noise-dependent switching. It has two reciprocally inhibiting input variables that represent perceptual processes active during the recognition of one of the two possible states, and a third variable that represents the perceived output. Sensory inputs affect only the input variables. For rivalry-inducing inputs, we observe the appearance of winnerless competition in the perceptual system. This gives rise to behaviour that conforms to well-known principles describing binocular rivalry (the Levelt propositions, in particular proposition IV: monotonic response of residence time as a function of image contrast) down to very low levels of stimulus intensity.
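    The flavor of such low-dimensional rivalry dynamics can be shown with a generic mutual-inhibition-plus-adaptation rate model. To be clear, this is a textbook-style sketch with invented parameters, not the paper's winnerless-competition equations: two reciprocally inhibiting input variables adapt slowly, and dominance alternates between them.

```python
import numpy as np

# Minimal mutual-inhibition rivalry oscillator (illustrative sketch only;
# NOT the paper's three-variable winnerless-competition model).
dt, T = 0.001, 20.0               # Euler step (s) and total time (s)
steps = int(T / dt)
beta, phi, tau = 2.0, 3.0, 1.0    # inhibition, adaptation strength, adaptation time
I = 1.2                           # equal "contrast" input to both variables

f = lambda x: np.maximum(x, 0)    # threshold-linear gain

a = np.array([0.1, 0.0])          # activities of the two input variables
h = np.zeros(2)                   # slow adaptation variables
dominant = []
for _ in range(steps):
    drive = I - beta * a[::-1] - phi * h    # each unit inhibited by the other
    a += dt * 10 * (-a + f(drive))          # fast activity dynamics
    h += dt * (a - h) / tau                 # slow adaptation
    dominant.append(int(a[1] > a[0]))       # which "percept" currently wins

switches = np.count_nonzero(np.diff(dominant))
print(switches)   # several alternations over the 20 s run
```

    Adaptation slowly erodes the dominant unit's advantage until the suppressed unit escapes, producing the alternating dominance that the full model reproduces, including its contrast dependence (Levelt's proposition IV).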

    YORP and Yarkovsky effects in asteroids (1685) Toro, (2100) Ra-Shalom, (3103) Eger, and (161989) Cacus

    The rotation states of small asteroids are affected by a net torque arising from anisotropic sunlight reflection and thermal radiation from the asteroids' surfaces. On long timescales, this so-called YORP effect can change asteroid spin directions and rotation periods. We analyzed lightcurves of four selected near-Earth asteroids with the aim of detecting secular changes in their rotation rates caused by YORP. We used the lightcurve inversion method to model the observed lightcurves, including the change in rotation rate dω/dt as a free parameter of the optimization. We collected more than 70 new lightcurves. For asteroids Toro and Cacus, we used thermal infrared data from the WISE spacecraft and estimated their sizes and thermal inertias. We also used the currently available optical and radar astrometry of Toro, Ra-Shalom, and Cacus to infer the Yarkovsky effect. We detected a YORP acceleration of dω/dt = (1.9 ± 0.3) × 10⁻⁸ rad d⁻² for asteroid Cacus. For Toro, we have a tentative (2σ) detection of YORP from a significant improvement of the lightcurve fit for a nonzero value of dω/dt = 3.0 × 10⁻⁹ rad d⁻². For asteroid Eger, we confirmed the previously published YORP detection with more data and updated the YORP value to (1.1 ± 0.5) × 10⁻⁸ rad d⁻². We also updated the shape model of asteroid Ra-Shalom and put an upper limit on the change of its rotation rate at |dω/dt| ≲ 1.5 × 10⁻⁸ rad d⁻². Ra-Shalom has a greater than 3σ Yarkovsky detection, with a theoretical value consistent with observations assuming its size and/or density is slightly larger than the nominally expected values.
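    Why such tiny accelerations are detectable can be seen from a worked example: with a constant spin-rate drift ν = dω/dt, the rotation phase is φ(t) = φ₀ + ωt + ½νt², so the phase offset relative to a constant-rate model grows quadratically with the observation span. Using the Cacus value reported above:

```python
import numpy as np

# Accumulated rotation-phase offset caused by a constant YORP acceleration,
# relative to a constant-rate spin model: dphi = 0.5 * nu * t**2.
nu = 1.9e-8                                # rad d^-2, the Cacus detection above

for years in (1, 5, 10):
    t = years * 365.25                     # elapsed time in days
    dphi = 0.5 * nu * t ** 2               # extra rotation phase in radians
    print(f"{years:2d} yr: phase offset = {np.degrees(dphi):5.2f} deg")
```

    Over a decade the offset reaches several degrees of rotation phase, enough to shift lightcurve features measurably, which is why long observation baselines make dω/dt fittable in lightcurve inversion.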

    LAMAlice: A Mini Mobile Robot for Planetary Exploration
