
    Spatial Modulation of Primate Inferotemporal Responses by Eye Position

    Background: A key aspect of representations for object recognition and scene analysis in the ventral visual stream is the spatial frame of reference, be it a viewer-centered, object-centered, or scene-based coordinate system. Coordinate transforms from retinocentric space to other reference frames involve combining neural visual responses with extraretinal postural information. Methodology/Principal Findings: We examined whether such spatial information is available to anterior inferotemporal (AIT) neurons in the macaque monkey by measuring the effect of eye position on responses to a set of simple 2D shapes. We report, for the first time, a significant eye position effect in over 40% of recorded neurons with small gaze angle shifts from central fixation. Although eye position modulates responses, it does not change shape selectivity. Conclusions/Significance: These data demonstrate that spatial information is available in AIT for the representation of objects and scenes within a non-retinocentric frame of reference. More generally, the availability of spatial information in AIT calls into question the classic dichotomy in visual processing that associates object shape processing with ventral structures and object spatial information with dorsal structures.
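
    One way to picture this finding (an illustration, not the study's own analysis) is a multiplicative "gain field": eye position scales a neuron's firing rates without altering its relative preferences across shapes. A minimal Python sketch with invented firing rates shows why a purely multiplicative gain leaves shape selectivity unchanged:

        import numpy as np

        # Hypothetical firing rates (spikes/s) of one AIT neuron to four
        # shapes at central fixation. All values are invented for illustration.
        shape_tuning = np.array([30.0, 12.0, 5.0, 20.0])

        # Hypothetical multiplicative gains for three eye positions.
        eye_position_gain = {"left": 0.7, "center": 1.0, "right": 1.3}

        for position, gain in eye_position_gain.items():
            response = gain * shape_tuning
            # Dividing out the peak removes the gain: the normalized tuning
            # profile (and hence shape selectivity) is identical at every
            # eye position, even though absolute rates differ.
            print(position, response, response / response.max())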

    Scene and not seen: Noticing changes in complex environments

    Purpose. To investigate our ability to analyse dynamic, natural scenes with regard to the attributes of their constituent objects. Methods. A street was videoed 12 times from a car travelling at 30 km/h. The videos differed only by changes made in the location, colour, orientation or presence of 3 of 6 objects. Thirty new videos were generated by mixing two of the 12 videos. Mixing involved recording 8 frames from video A, 8 frames of a uniform grey image, 8 frames from video B, a further 8 grey images, and then back to video A, until 400 frames had been processed. Frames were selected from each video sequence so as to avoid abrupt changes at the splice points. During testing, subjects saw the new videos and had to report any changes they noticed in the scene. Results. Six naive subjects attempted to detect the changes. Subjects' ability to detect changes varied both by the object changed and by the type of change. Changes to one object were never noticed, whereas changes to another object at a similar distance from the roadside were almost always noticed (98%). If noticed, changes were always ascribed to the correct object, but were often misclassified as to the type of change (up to 30% for one of the objects). The time delay between the detection of each change was remarkably consistent (≈ 3.5 s). The removal of objects was significantly more noticeable than changes in object colour, location or orientation. Comparable results have also been obtained in a virtual model, this environment permitting better stimulus control and subject interaction. Conclusions. By using the video mixing technique described above, it has been possible to analyse the speed and accuracy of the analysis of a dynamic real-world scene. These results demonstrate that the instantaneous, full and detailed perception of a scene which we experience is simply illusory, confirming other work on static scenes. This work also demonstrates that including multiple changes may cause changes to be noticed but falsely categorised, and that the delay between each report of a change is remarkably consistent, suggesting a time-windowed shifting of attention.
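
    The mixing schedule in the Methods is easy to state precisely; it is essentially a flicker-paradigm sequence applied to moving footage. A short Python sketch of one plausible implementation, using the frame counts given in the abstract (the labels stand in for actual frame I/O):

        # Build the 400-frame presentation order: 8 frames of video A,
        # 8 uniform grey frames, 8 frames of video B, 8 grey frames,
        # then back to A, repeating until 400 frames are emitted.
        BLOCK = 8      # frames per segment (from the abstract)
        TOTAL = 400    # frames per mixed video (from the abstract)

        def mixing_schedule(block=BLOCK, total=TOTAL):
            """Return a list of 'A' / 'grey' / 'B' labels, one per frame.

            A real implementation would pull frame n of the source video
            for output frame n, keeping A and B time-aligned."""
            pattern = ["A", "grey", "B", "grey"]
            schedule = []
            i = 0
            while len(schedule) < total:
                schedule.extend([pattern[i % len(pattern)]] * block)
                i += 1
            return schedule[:total]

        seq = mixing_schedule()
        print(seq[:32])    # ['A'] * 8 + ['grey'] * 8 + ['B'] * 8 + ['grey'] * 8
        print(len(seq))    # 400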

    Expressive Image Generator for an Emotion Extraction Engine


    The effect of field of view and surface texture on driver steering performance

    In the present study we investigated steering accuracy in terms of our ability to keep to the middle of a lane in a fixed-base driving simulator. In particular, we studied the dependence of steering accuracy on the visibility of different road sections, on the assumption that performance reflects the importance of different road sections in guiding steering. Other influences on steering accuracy - including the presence of textural cues, in the form of a textured road surface, and the horizontal field of view - were also investigated. We found that textural cues can improve accuracy in lateral lane control, presumably by providing strong optical flow, and that driving accuracy is little affected by increasing the horizontal field of view from 40 degrees to a full field of 180 degrees
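
    Accuracy in keeping to the middle of a lane is conventionally summarised as deviation from the lane centre. A minimal sketch of two common indices, using an invented log of lateral positions (the sampling scheme and values are assumptions, not the study's data):

        import numpy as np

        # Hypothetical lateral positions (m) relative to the lane centre,
        # sampled at fixed intervals during a simulated drive.
        lateral_position = np.array([0.05, -0.12, 0.20, 0.08, -0.03, -0.15, 0.11])

        mean_abs_dev = np.mean(np.abs(lateral_position))     # mean |deviation|
        rms_dev = np.sqrt(np.mean(lateral_position ** 2))    # RMS deviation
        print(f"mean |dev| = {mean_abs_dev:.3f} m, RMS = {rms_dev:.3f} m")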

    The influence of road markings and texture on steering accuracy in a driving simulator

    Purpose: To test the importance of geometric information (splay angle, splay rate; Beall & Loomis, Perception, 25: 481-494, 1996) and of the optical flow information provided by road surface texture for steering accuracy. Methods: Subjects drove along a simulated one-lane road using a force-feedback steering wheel. The road was defined by either (a) one continuous white line on a black background, (b) two continuous white lines as kerbs, or (c) two lines plus road surface texture. Turns in the road appeared in random order. The subjects drove at a constant velocity of 16.9 m/s (60.8 km/h). Lateral deviation from the center line, velocity and the frequency content of the steering maneuvers served as performance indices. Results: Most subjects reported finding the task easier under condition (b) than under condition (a). Despite this impression, the data showed a different, counter-intuitive pattern. Under condition (a) subjects performed more accurately (p < 0.01) than under condition (b), and steering on a textured road (c) was more accurate (p < 0.05) than on a road with no surface texture (a). Conclusions: The difference between conditions (a) and (b) may be due to the fact that apparent lateral shifts of the road markings (splay angle) decrease with distance from the road's center line. Our results support the view that optical flow obtained from road texture (c) enhances steering performance. We are currently increasing realism by moving this paradigm to a 180 deg projection screen.
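
    Splay angle has a simple ground-plane geometry: the image of an edge line parallel to the direction of travel, at lateral distance x from the eye and eye height h, makes an angle arctan(x/h) with the vertical. A brief Python sketch of this standard relation (eye height and line offsets are illustrative assumptions, not the simulator's values):

        import numpy as np

        def splay_angle(x, h):
            """Optical angle (rad) between the vertical and the image of a
            ground line parallel to the travel direction, at lateral
            offset x (m) from the eye, for eye height h (m)."""
            return np.arctan2(x, h)

        h = 1.2      # assumed eye height (m)

        # How much the splay angle shifts per metre of lateral drift,
        # for a line near the road centre vs. a kerb line further out.
        for x in (0.2, 1.8):                     # assumed lateral offsets (m)
            drift = 0.1                          # small lateral displacement (m)
            shift = splay_angle(x + drift, h) - splay_angle(x, h)
            print(f"line at {x:.1f} m: {np.degrees(shift) / drift:.1f} deg per metre of drift")

    With these assumed numbers the line near the centre shifts several times more per metre of drift than the kerb line, the same asymmetry the authors invoke to explain the single-line advantage.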

    Learning influences the encoding of static and dynamic faces and their recognition across different spatial frequencies

    Studies on face recognition have shown that observers are faster and more accurate at recognizing faces learned from dynamic sequences than those learned from static snapshots. Here, we investigated whether different learning procedures mediate the advantage for dynamic faces across different spatial frequencies. Observers learned two faces—one dynamic and one static—either in depth (Experiment 1) or using a more superficial learning procedure (Experiment 2). They had to search for the target faces in a subsequent visual search task. We used high-spatial frequency (HSF) and low-spatial frequency (LSF) filtered static faces during visual search to investigate whether the behavioural difference is based on encoding of different visual information for dynamically and statically learned faces. Such encoding differences may mediate the recognition of target faces in different spatial frequencies, as HSF may mediate featural face processing whereas LSF mediates configural processing. Our results show that the nature of the learning procedure alters how observers encode dynamic and static faces, and how they recognize those learned faces across different spatial frequencies. That is, these results point to a flexible usage of spatial frequencies tuned to the recognition task
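
    High- and low-spatial-frequency versions of face images are typically produced with Gaussian filters in the Fourier domain. A minimal Python sketch of that standard approach (the cutoffs, viewing geometry and the random "image" are placeholders, not the study's parameters):

        import numpy as np

        def sf_filter(image, cutoff_cpd, pixels_per_degree, highpass=False):
            """Gaussian low-pass (or high-pass) filter in the Fourier domain.
            cutoff_cpd is in cycles/degree; pixels_per_degree converts it to
            cycles/pixel for the display geometry in use."""
            fy = np.fft.fftfreq(image.shape[0])
            fx = np.fft.fftfreq(image.shape[1])
            radius = np.sqrt(fx[None, :] ** 2 + fy[:, None] ** 2)  # cycles/pixel
            cutoff = cutoff_cpd / pixels_per_degree                # cycles/pixel
            lowpass = np.exp(-(radius ** 2) / (2 * cutoff ** 2))
            gain = 1.0 - lowpass if highpass else lowpass
            return np.real(np.fft.ifft2(np.fft.fft2(image) * gain))

        face = np.random.rand(256, 256)    # stand-in for a greyscale face photo
        lsf = sf_filter(face, cutoff_cpd=2.0, pixels_per_degree=40)
        hsf = sf_filter(face, cutoff_cpd=8.0, pixels_per_degree=40, highpass=True)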

    The role of binocular cues in scaling the retinal velocities of objects moving in space

    The retinal velocity of an object moving in space depends on its distance from us. Thus, to interpret retinal motions the visual system must estimate an object's distance. Which sources of information are used? Here we consider the use of horizontal binocular disparity and vergence cues to distance. Specifically, we investigated whether disparity and vergence cues provide a depth distance estimate required to judge the physical velocity of objects moving at different distances (velocity constancy). Observers (n=6) viewed computer-rendered objects (either wire-frame spheres or small points) translating in the fronto-parallel plane. A trial consisted of two objects presented sequentially; observers judged whether the first or second moved faster. A staircase procedure was used to adjust the velocity of the second object to obtain the point of subjective equality between the two presented motions. Trials for objects moving with different velocities, directions and displacements were randomly interleaved. Velocity judgments were made for objects presented at different distances defined by disparity, vergence angle and changing size cues. Judgments of perceived velocity were systematically affected by the depth distance between the objects, with velocity matches close to those expected for perfect velocity constancy. This was true even for small points, suggesting that, in contrast to a previous report (McKee & Welch, Vision Research, 29, 553), disparity-defined depth can provide a sufficient distance cue for judgments of object velocity. However, differing states of static eye vergence had little effect on velocity matches. These results support a constancy mechanism for velocity that takes disparity-defined depth as an input, but that is little affected by static vergence posture.
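
    The geometry behind velocity constancy is compact: an object translating fronto-parallel at physical speed v and distance d produces a retinal angular speed of roughly omega = v/d (small-angle approximation), so recovering v requires scaling omega by a distance estimate, which the study suggests can be supplied by disparity-defined depth. A brief Python illustration with invented numbers:

        # Small-angle approximation: retinal angular speed (rad/s) of an
        # object moving fronto-parallel at speed v (m/s) at distance d (m).
        def retinal_speed(v, d):
            return v / d

        v = 0.5                      # same physical speed for both objects (m/s)
        near, far = 0.5, 2.0         # viewing distances (m)
        w_near = retinal_speed(v, near)
        w_far = retinal_speed(v, far)
        print(w_near / w_far)        # 4.0: the far object is 4x slower on the retina

        # Velocity constancy: multiply each retinal speed by the estimated
        # distance; both recover the common physical speed of 0.5 m/s.
        print(w_near * near, w_far * far)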