
    Physical self-motion facilitates object recognition, but does not enable view-independence

    It is well known that people have difficulties in recognizing an object from novel views as compared to learned views, resulting in increased response times and/or errors. This so-called view-dependency has been confirmed by many studies. In the natural environment, however, there are two ways of changing views of an object: one is to rotate an object in front of a stationary observer (object-movement), the other is for the observer to move around a stationary object (observer-movement). Note that almost all previous studies are based on the former procedure. Simons et al. [2002] criticized previous studies in this regard and examined the difference between object- and observer-movement directly. As a result, Simons et al. [2002] reported the elimination of this view-dependency when novel views resulted from observer-movement, instead of object-movement. They suggest the contribution of extra-retinal (vestibular and proprioceptive) information to object recognition. Recently, however, Zhao et al. [2007] reported that the observer's movement from one view to another only decreased view-dependency without fully eliminating it. Furthermore, even this effect vanished for rotations of 90° instead of 50°. Larger rotations were not tested. The aim of the present study was to clarify the underlying mechanism of this phenomenon and to investigate larger angles of view change (45°-180°, in 45° steps).

    Physical self-motion facilitates object recognition, but does not enable view-independence

    It is well known that people have difficulties recognizing an object from novel views as compared to learned views, resulting in increased response times and errors. Simons et al. (2002, Perception & Psychophysics, 64, 521-530) reported, however, the elimination of this viewpoint dependence when novel views resulted from viewer movement instead of object movement. They suggest the contribution of extra-retinal information to object recognition. The aim of the present study was to clarify the underlying mechanism of this phenomenon and to investigate larger turning angles (45°-180°, in 45° steps). Observers performed sequential-matching tasks with 5 original versus mirror-reversed objects (experiment 1) and with 10 different objects (experiment 2). Test views of the objects were manipulated either by viewer or object movement. Both experiments showed a significant overall advantage for viewer movements. Note, however, that performance was still viewpoint-dependent. Object recognition performance was also highly correlated with general mental spatial abilities assessed by a paper-and-pencil test. These results suggest an involvement of advantageous and cost-effective transformation mechanisms, but not a complete automatic spatial-updating mechanism, when observers move.

    Physical Self-Motion Facilitates Object Recognition, but Does Not Enable View-Independence

    It is well known that people have difficulties in recognizing an object from novel views as compared to learned views, resulting in increased response times and/or errors. This so-called view-dependency has been confirmed by many studies. In the natural environment, however, there are two ways of changing views of an object: one is to rotate an object in front of a stationary observer (object-movement), the other is for the observer to move around a stationary object (observer-movement). Simons et al. [1] criticized previous studies in this regard and examined the difference between object- and observer-movement directly. As a result, Simons et al. reported the elimination of this view-dependency when novel views resulted from observer-movement, instead of object-movement. They suggest the contribution of extra-retinal (vestibular and proprioceptive) information to object recognition. Recently, however, Zhao et al. [2] reported that the observer's movement from one view to another only decreased view-dependency without fully eliminating it. Furthermore, even this effect vanished for rotations of 90° instead of 50°. The aim of the present study was to confirm the phenomenon in our virtual reality environment and to clarify the underlying mechanism further by using larger angles of view change (45°-180°, in 45° steps). Two experiments were conducted using an eMagin Z800 3D Visor head-mounted display that was tracked by 16 Vicon MX 13 motion capture cameras. Observers performed sequential-matching tasks. Five novel objects and five mirror-reversed versions of these objects were created by smoothing the edges of Shepard-Metzler's objects. A mirror-reflected version of the learned object was used as a distractor in Experiment 1 (N=13), whereas one of the other (i.e., not mirror-reversed) objects was randomly selected on each trial as a distractor in Experiment 2 (N=15). Test views of the objects were manipulated either by viewer or object movement. Both experiments showed a significant overall advantage of viewer movements over object movements. Note, however, that performance was still viewpoint-dependent. These results suggest an involvement of partially advantageous and cost-effective transformation mechanisms, but not a complete automatic spatial-updating mechanism as proposed by Simons et al. [1], when observers move.