
    Spatio-Temporal Interpolation Is Accomplished by Binocular Form and Motion Mechanisms

    Spatio-temporal interpolation describes the ability of the visual system to perceive shapes as whole figures (Gestalts), even if they are moving behind narrow apertures, so that only thin slices of them meet the eye at any given point in time. The interpolation process requires registration of the form slices, as well as perception of the shape's global motion, in order to reassemble the slices in the correct order. The commonly proposed mechanism is a spatio-temporal motion detector with a receptive field for which spatial distance and temporal delays are interchangeable, and which has generally been regarded as monocular. Here we investigate separately the nature of the motion and the form detection involved in spatio-temporal interpolation, using dichoptic masking and interocular presentation tasks. The results clearly demonstrate that the associated mechanisms for both motion and form are binocular rather than monocular. Hence, we question the traditional view, according to which spatio-temporal interpolation is achieved by monocular first-order motion-energy detectors, in favour of models featuring binocular motion and form detection.

    Smooth Pursuit Eye Movements Improve Temporal Resolution for Color Perception

    Human observers see a single mixed color (yellow) when different colors (red and green) rapidly alternate. Accumulating evidence suggests that the critical temporal frequency beyond which chromatic fusion occurs does not simply reflect the temporal limit of peripheral encoding. However, it remains poorly understood how central processing controls the fusion frequency. Here we show that the fusion frequency can be elevated by extra-retinal signals during smooth pursuit. This eye movement can keep the image of a moving target in the fovea, but it also introduces a backward retinal sweep of the stationary background pattern. We found that the fusion frequency was higher when retinal color changes were generated by pursuit-induced background motions than when the same retinal color changes were generated by object motions during eye fixation. This temporal improvement cannot be ascribed to a general increase in contrast gain of specific neural mechanisms during pursuit, since the improvement was not observed with a pattern flickering without changing position on the retina or with a pattern moving in the direction opposite to the background motion during pursuit. Our findings indicate that chromatic fusion is controlled by a cortical mechanism that suppresses motion blur. A plausible mechanism is that eye-movement signals change the spatiotemporal trajectories along which color signals are integrated, so as to reduce the chromatic integration at fixed retinal locations (i.e., along stationary trajectories on the retina) that normally causes retinal blur during fixation.

    Cortical Contributions to Saccadic Suppression

    The stability of visual perception is partly maintained by saccadic suppression: the selective reduction of visual sensitivity that accompanies rapid eye movements. The neural mechanisms responsible for this reduced perisaccadic visibility remain unknown, but the Lateral Geniculate Nucleus (LGN) has been proposed as a likely site. Our data show, however, that the saccadic suppression of a target flashed in the right visual hemifield increased with an increase in background luminance in the left visual hemifield. Because each LGN only receives retinal input from a single hemifield, this hemifield interaction cannot be explained solely on the basis of neural mechanisms operating in the LGN. Instead, it suggests that saccadic suppression must involve processing in higher-level cortical areas that have access to a considerable part of the ipsilateral hemifield.

    Postdictive Modulation of Visual Orientation

    The present study investigated how visual orientation is modulated by subsequent orientation inputs. Observers were presented with a near-vertical Gabor patch as a target, followed by a left- or right-tilted second Gabor patch as a distracter in the spatial vicinity of the target. The task of the observers was to judge whether the target was right- or left-tilted (Experiment 1) or whether the target was vertical or not (Supplementary experiment). The judgment was biased toward the orientation of the distracter (the postdictive modulation of visual orientation). The judgment bias peaked when the target and distracter were temporally separated by 100 ms, indicating a specific temporal mechanism for this phenomenon. However, when the visibility of the distracter was reduced via backward masking, the judgment bias disappeared. On the other hand, the low-visibility distracter could still cause a simultaneous orientation contrast, indicating that the distracter orientation was still processed in the visual system (Experiment 2). Our results suggest that the postdictive modulation of visual orientation stems from spatiotemporal integration of visual orientation on the basis of a slow feature-matching process.

    Integration across time determines path deviation discrimination for moving objects.

    Background: Human vision is vital in determining our interaction with the outside world. In this study we characterize our ability to judge changes in the direction of motion of objects, a common task which can allow us either to intercept moving objects, or else avoid them if they pose a threat. Methodology/Principal Findings: Observers were presented with objects which moved across a computer monitor on a linear path until the midline, at which point they changed their direction of motion, and observers were required to judge the direction of change. In keeping with the variety of objects we encounter in the real world, we varied characteristics of the moving stimuli such as velocity, extent of motion path and object size. Furthermore, we compared performance for moving objects with the ability of observers to detect a deviation in a line which formed the static trace of the motion path, since it has been suggested that a form of static memory trace may form the basis for these types of judgment. The static line judgments were well described by a 'scale invariant' model in which any two stimuli which possess the same two-dimensional geometry (length/width) result in the same level of performance. Performance for the moving objects was entirely different. Irrespective of the path length, object size or velocity of motion, path deviation thresholds depended simply upon the duration of the motion path in seconds. Conclusions/Significance: Human vision has long been known to integrate information across space in order to solve spatial tasks such as judgment of orientation or position. Here we demonstrate an intriguing mechanism which integrates direction information across time in order to optimize the judgment of path deviation for moving objects.

    The Effect of Viewing Eccentricity on Enumeration

    Visual acuity and contrast sensitivity progressively diminish with increasing viewing eccentricity. Here we evaluated how visual enumeration is affected by viewing eccentricity, and whether subitizing capacity, the accurate enumeration of a small number (∼3) of items, decreases with more eccentric viewing. Participants enumerated gratings whose size was either (1) constant across eccentricity or (2) scaled by a cortical magnification factor across eccentricity. While we found that enumeration accuracy and precision decreased with increasing eccentricity, cortical magnification scaling of size neutralized the deleterious effects of increasing eccentricity. We found that size scaling did not affect subitizing capacities, which were nearly constant across all eccentricities. We also found that size scaling modulated the variation coefficients, a normalized metric of enumeration precision, defined as the standard deviation divided by the mean response. Our results show that the inaccuracy and imprecision associated with increasing viewing eccentricity are due to limitations in spatial resolution. Moreover, our results also support the notion that the precise number system is restricted to small numerosities (represented by the subitizing limit), while the approximate number system extends across both small and large numerosities (indexed by variation coefficients) at large eccentricities.
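    The variation coefficient used in this abstract is simply the standard deviation of the enumeration responses divided by their mean. A minimal sketch of that computation (the response values below are hypothetical, not data from the study):

    ```python
    import statistics

    def variation_coefficient(responses):
        """Variation coefficient: SD of responses divided by mean response."""
        return statistics.stdev(responses) / statistics.mean(responses)

    # Hypothetical enumeration responses for a display of 8 items
    responses = [7, 8, 8, 9, 8, 7, 9, 8]
    print(round(variation_coefficient(responses), 3))  # ≈ 0.094
    ```

    Because it is normalized by the mean, the variation coefficient lets precision be compared across displays with different numerosities.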

    Haptic subitizing across the fingers

    Numerosity judgments of small sets of items (≤ 3) are generally fast and error-free, while response times and error rates increase rapidly for larger numbers of items. We investigated an efficient process used for judging small numbers of items (known as subitizing) in active touch. We hypothesized that this efficient process for numerosity judgment might be related to stimulus properties that allow for efficient (parallel) search. Our results showed that subitizing was not possible for raised lines among flat surfaces, whereas this type of stimulus could be detected in parallel over the fingers. However, subitizing was possible when the number of fingers touching a surface had to be judged while the other fingers were lowered in mid-air. In the latter case, the lack of tactile input is essential, since subitizing was not enabled by differences in proprioceptive information from the fingers. Our results show that subitizing using haptic information from the fingers is possible only when some fingers receive tactile information while other fingers do not.

    MIRO: A robot “Mammal” with a biomimetic brain-based control system

    We describe the design of a novel commercial biomimetic brain-based robot, MIRO, developed as a prototype robot companion. The MIRO robot is animal-like in several aspects of its appearance; however, it is also biomimetic in a more significant way, in that its control architecture mimics some of the key principles underlying the design of the mammalian brain as revealed by neuroscience. Specifically, MIRO builds on decades of previous work in developing robots with brain-based control systems, using a layered control architecture alongside centralized mechanisms for integration and action selection. MIRO's control system operates across three core processors, P1-P3, that mimic aspects of spinal cord, brainstem, and forebrain functionality respectively. Whilst designed as a versatile prototype for next-generation companion robots, MIRO also provides developers and researchers with a new platform for investigating the potential advantages of brain-based control.
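    The layered-control idea described above can be caricatured as each layer proposing candidate behaviours while a central selector arbitrates between them. The following is only an illustrative sketch of that arbitration pattern (all layer names, actions, and salience values are hypothetical, not MIRO's actual software):

    ```python
    def select_action(candidates):
        """Centralised action selection: pick the candidate behaviour
        with the highest salience (a caricature of winner-take-all
        arbitration, not MIRO's actual selection mechanism)."""
        return max(candidates, key=lambda c: c["salience"])

    # Each control layer proposes an action with a salience value
    proposals = [
        {"layer": "spinal",    "action": "withdraw", "salience": 0.2},
        {"layer": "brainstem", "action": "orient",   "salience": 0.7},
        {"layer": "forebrain", "action": "approach", "salience": 0.5},
    ]
    print(select_action(proposals)["action"])  # orient
    ```

    The point of central arbitration is that lower layers can keep running fast reflex loops while only one behaviour at a time gains control of the motor plant.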

    Visual adaptation alters the apparent speed of real-world actions

    The apparent physical speed of an object in the field of view remains constant despite variations in retinal velocity due to viewing conditions (velocity constancy). For example, people and cars appear to move across the field of view at the same objective speed regardless of distance. In this study a series of experiments investigated the visual processes underpinning judgements of objective speed, using an adaptation paradigm and video recordings of natural human locomotion. Viewing a video played in slow motion for 30 seconds caused participants to perceive subsequently viewed clips played at standard speed as too fast, so playback had to be slowed down in order for it to appear natural; conversely, after viewing fast-forward videos for 30 seconds, playback had to be speeded up in order to appear natural. The perceived speed of locomotion shifted towards the speed depicted in the adapting video ('re-normalisation'). Results were qualitatively different from those obtained in previously reported studies of retinal velocity adaptation. Adapting videos that were scrambled to remove recognizable human figures or coherent motion caused significant, though smaller, shifts in apparent locomotion speed, indicating that both low-level and high-level visual properties of the adapting stimulus contributed to the changes in apparent speed.
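    The re-normalisation result can be caricatured as the adaptor pulling the internal "norm" for natural playback speed toward the adapted rate, with subsequent speeds judged relative to the shifted norm. A minimal sketch under that assumption (the linear update rule and the gain `k` are hypothetical, not parameters from the study):

    ```python
    def renormalised_speed(test_speed, adaptor_speed, norm=1.0, k=0.3):
        """After adaptation, the internal norm for 'natural' playback
        speed shifts part-way toward the adaptor; perceived speed is
        the test speed relative to the shifted norm. The gain k is a
        hypothetical free parameter, not an estimate from the study."""
        shifted_norm = norm + k * (adaptor_speed - norm)
        return test_speed / shifted_norm

    # Slow-motion adaptor (0.5x): standard playback (1.0x) looks too fast
    print(renormalised_speed(1.0, 0.5))  # > 1.0

    # Fast-forward adaptor (2.0x): standard playback looks too slow
    print(renormalised_speed(1.0, 2.0))  # < 1.0
    ```

    A norm-based account of this kind contrasts with classic repulsive retinal-velocity adaptation, where perceived speed shifts away from the adaptor rather than toward it.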