
    Conditions for Viewpoint Dependent Face Recognition

    Poggio and Vetter (1992) showed that learning one view of a bilaterally symmetric object could be sufficient for its recognition, provided this view allows the computation of a symmetric, "virtual," view. Faces are roughly bilaterally symmetric objects. Learning a side view, which always has a symmetric counterpart, should therefore allow better generalization than learning the frontal view. Two psychophysical experiments tested these predictions. Stimuli were views of shaded 3D models of laser-scanned faces. The first experiment tested whether a particular view of a face was canonical. The second experiment tested which single views of a face give rise to the best generalization performance. The results were compatible with the symmetry hypothesis: learning a side view allowed better generalization than learning the frontal view.
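
    The construction behind the symmetry hypothesis can be illustrated with a short sketch (an illustration under assumed conventions, not the authors' code): given labeled feature points of one view of a bilaterally symmetric object under orthographic projection, the "virtual" view is obtained by taking each point's symmetric partner and reflecting it about the vertical image axis. A frontal view only reproduces itself, whereas a side view yields a genuinely new view.

    import numpy as np

    def virtual_view(points_2d: np.ndarray, mirror: np.ndarray) -> np.ndarray:
        """points_2d: (N, 2) image coordinates of labeled features in the learned view.
        mirror: length-N index array mapping each feature to its bilaterally symmetric partner.
        Returns the (N, 2) coordinates of the same features in the virtual view."""
        virtual = points_2d[mirror].copy()   # take each symmetric partner's position ...
        virtual[:, 0] *= -1.0                # ... and reflect it about the vertical image axis
        return virtual
        # For a frontal (already symmetric) view, the virtual view coincides with the original,
        # which is why learning a frontal view provides no extra generalization benefit.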

    Effects of anxiety and cognitive load on instrument scanning behavior in a flight simulation

    Previous research has rarely examined the combined influence of anxiety and cognitive load on gaze behavior and performance during complex perceptual-motor tasks. In the current study, participants performed an aviation instrument landing task in neutral and anxiety conditions, while performing a low or high cognitive load auditory n-back task. Both self-reported anxiety and heart rate increased from the neutral to the anxiety condition, indicating that anxiety was successfully manipulated. Response accuracy and reaction time for the auditory task indicated that cognitive load was also successfully manipulated. Cognitive load negatively impacted flight performance and the frequency of gaze transitions between areas of interest. Performance was maintained in anxious conditions, with a concomitant decrease in n-back reaction time suggesting that this was due to an increase in mental effort. Analyses of individual responses to the anxiety manipulation revealed that changes in anxiety levels from neutral to anxiety conditions were positively correlated with changes in visual scanning entropy, a measure of the randomness of gaze behavior, but only when cognitive load was high. This finding lends support to an interactive effect of cognitive anxiety and cognitive load on attentional control.
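
    Visual scanning entropy is commonly computed as the entropy of gaze transitions between areas of interest (AOIs). The sketch below shows one generic formulation of this measure (an assumption for illustration, not necessarily the exact metric used in the study).

    import numpy as np

    def gaze_transition_entropy(aoi_sequence, n_aois):
        """aoi_sequence: AOI indices (0..n_aois-1) in fixation order."""
        counts = np.zeros((n_aois, n_aois))
        for a, b in zip(aoi_sequence[:-1], aoi_sequence[1:]):
            if a != b:                       # count only transitions between distinct AOIs
                counts[a, b] += 1
        if counts.sum() == 0:
            return 0.0
        row_sums = counts.sum(axis=1, keepdims=True)
        p_source = row_sums.ravel() / counts.sum()               # P(source AOI)
        with np.errstate(divide="ignore", invalid="ignore"):
            p_trans = np.where(row_sums > 0, counts / row_sums, 0)   # P(destination | source)
            logs = np.where(p_trans > 0, np.log2(p_trans), 0)
        return float(-(p_source[:, None] * p_trans * logs).sum())

    # A stereotyped scan path gives low entropy; erratic switching gives higher entropy.
    print(gaze_transition_entropy([0, 1, 0, 1, 0, 1], n_aois=2))      # 0.0, fully predictable
    print(gaze_transition_entropy([0, 1, 2, 0, 2, 1, 0], n_aois=3))   # higher entropy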

    Phenomenal competition for poses of the human head

    We show a cylindrical projection of the human head. This projection is ambiguous with respect to head pose, and viewing it produces perceptual competition among a few discrete views. A number of studies suggest that the brain may represent head pose in terms of a discrete set of preferred views. Exactly what these views are and how their representations enable visual face recognition and pose estimation is not entirely clear. On the one hand, it is easier to find neurons in the primate inferotemporal cortex that are more selective for head-on, profile, or back views than for other angles (Perrett et al. 1991). On the other hand, psychophysical studies have shown that human face recognition generalizes better from a learned view near 45° about the vertical axis than from other views (Bruce and Valentine 1987; Troje and Bülthoff, in press). This latter observation is consistent with theoretical predictions based on virtual views for symmetric objects (Vetter et al. 1993). In either case, one might expect that if an image of a human head is presented in such a way as to make pose assignment ambiguous, we might visually experience a competition for preferred poses.

    Methods for Multiloop Identification of Visual and Neuromuscular Pilot Responses

    In this paper, identification methods are proposed to estimate the neuromuscular and visual responses of a multiloop pilot model. A conventional and widely used technique for simultaneous identification of the neuromuscular and visual systems makes use of cross-spectral density estimates. This paper shows that this technique requires a specific noninterference hypothesis, often implicitly assumed, that may be difficult to meet in actual experimental designs. A mathematical justification of the necessity of the noninterference hypothesis is given. Furthermore, two methods are proposed that do not share this limitation. The first method is based on autoregressive models with exogenous inputs (ARX), whereas the second combines cross-spectral estimators with interpolation in the frequency domain. The two identification methods are validated by offline simulations and contrasted with the classic method. The results reveal that the classic method fails when the noninterference hypothesis is not fulfilled, whereas the two proposed techniques give reliable estimates. Finally, the three identification methods are applied to experimental data from a closed-loop control task with pilots. The two proposed techniques give comparable estimates, different from those obtained by the classic method, and the differences match those found in the simulations. Thus, the two identification methods provide a good alternative to the classic method and make it possible to simultaneously estimate the human's neuromuscular and visual responses in cases where the classic method fails.
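
    For orientation, the classic cross-spectral estimator that the paper contrasts against can be sketched as follows. The toy signals and names are assumptions for illustration; this is not the proposed ARX or frequency-domain interpolation method.

    import numpy as np
    from scipy.signal import csd, welch

    fs = 100.0                                   # sampling rate [Hz]
    t = np.arange(0, 120, 1 / fs)
    u = np.sin(2 * np.pi * 0.5 * t) + 0.3 * np.sin(2 * np.pi * 2.0 * t)   # multisine forcing function
    kernel = np.exp(-np.arange(200) / 30.0)                               # toy "pilot + vehicle" dynamics
    y = np.convolve(u, kernel, mode="same") + 0.05 * np.random.randn(t.size)   # noisy measured response

    f, S_uy = csd(u, y, fs=fs, nperseg=4096)     # cross-spectral density S_uy(f)
    _, S_uu = welch(u, fs=fs, nperseg=4096)      # auto-spectral density S_uu(f)
    H = S_uy / S_uu                              # estimated frequency response H(f)

    # With two uncorrelated forcing functions (one injected on the visual error, one on the
    # controlled element), the same idea yields separate visual and neuromuscular describing
    # functions -- but only under the noninterference hypothesis the paper examines.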

    Decoding visual roughness perception: an fMRI study

    The neural substrates of tactile roughness perception have been investigated by many neuroimaging studies, while relatively little effort has been devoted to the neural representations of visually perceived roughness. In this human fMRI study, we looked for neural activity patterns that could be attributed to five different roughness intensity levels when the stimuli were perceived visually, i.e., in the absence of any tactile sensation. During functional image acquisition, participants viewed video clips displaying a right index fingertip actively exploring the sandpapers that had been used in the behavioural experiment. A whole-brain multivariate pattern analysis found four brain regions in which visual roughness intensities could be decoded: the bilateral posterior parietal cortex (PPC), the primary somatosensory cortex (S1) extending to the primary motor cortex (M1) in the right hemisphere, and the inferior occipital gyrus (IOG). In a follow-up analysis, we tested for correlations between the decoding accuracies and the tactile roughness discriminability obtained from a preceding behavioural experiment. We found no such correlation, even though, during scanning, participants were asked to recall the tactilely perceived roughness of the sandpapers. We presume that a better paradigm is needed to reveal any potential visuo-tactile convergence. However, the present study identified brain regions that may subserve the discrimination of different intensities of visual roughness. This finding may help elucidate the neural mechanisms related to visual roughness perception in the human brain. © 2018 The Author(s). Published by Informa UK Limited, trading as Taylor & Francis Group
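
    The decoding step in such analyses typically amounts to cross-validated multivariate pattern classification. Below is a generic sketch with placeholder data (an assumption for illustration, not the authors' exact pipeline): `X` would hold trial-wise voxel patterns and `y` the roughness level.

    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import cross_val_score, StratifiedKFold

    rng = np.random.default_rng(0)
    n_trials, n_voxels, n_levels = 100, 200, 5
    X = rng.standard_normal((n_trials, n_voxels))               # placeholder voxel patterns
    y = np.repeat(np.arange(n_levels), n_trials // n_levels)    # roughness labels 0..4

    clf = make_pipeline(StandardScaler(), LinearSVC())
    acc = cross_val_score(clf, X, y, cv=StratifiedKFold(5, shuffle=True, random_state=0))
    print(f"decoding accuracy: {acc.mean():.2f} (chance = {1 / n_levels:.2f})")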

    Orientation dependence in the recognition of familiar and novel views of three-dimensional objects

    We report four experiments that investigated the representation of novel three-dimensional (3D) objects by the human visual system. In the first experiment, canonical views were demonstrated for novel objects seen equally often from all test viewpoints. The next two experiments showed that the canonical views persisted under repeated testing, and in the presence of a variety of depth cues, including binocular stereo. The fourth experiment probed the ability of subjects to generalize recognition to unfamiliar views of objects previously seen at a limited range of attitudes. Both mono and stereo conditions yielded the same increase in the error rate with misorientation relative to the training attitude. Taken together, these results support the notion that 3D objects are represented by multiple specific views, possibly augmented by partial viewer-centered 3D information.
    Keywords: 3D object recognition, canonical views, novel views, stereo

    Viewpoint-Specific Representations in Three-Dimensional Object Recognition

    We report a series of psychophysical experiments that explore different aspects of the problem of object representation and recognition in human vision. Contrary to the paradigmatic view, which holds that object representations are three-dimensional and object-centered, the results consistently support the notion of view-specific representations that include at most partial depth information. In simulated experiments involving the same stimuli shown to the human subjects, computational models built around two-dimensional multiple-view representations replicated our main psychophysical results, including patterns of generalization errors and the time course of perceptual learning.
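
    A caricature of a two-dimensional multiple-view representation is sketched below (assumptions: orthographic projection, feature-point views, nearest-view matching; this is an illustration, not the authors' model). The match cost to the nearest stored view, and with it the likelihood of a recognition error, grows as the test view rotates away from the trained views.

    import numpy as np

    def project(points_3d, theta):
        """Orthographic projection after rotation by theta about the vertical axis."""
        c, s = np.cos(theta), np.sin(theta)
        rot = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
        return (points_3d @ rot.T)[:, :2]

    rng = np.random.default_rng(1)
    obj = rng.standard_normal((12, 3))              # a random 3D "wire-frame" object
    train_angles = np.deg2rad([0, 30])              # the only familiar (trained) views
    stored_views = [project(obj, a) for a in train_angles]

    # Match cost of a test view = distance to the nearest stored 2D view.
    for test_deg in [0, 15, 30, 45, 60, 90]:
        test_view = project(obj, np.deg2rad(test_deg))
        cost = min(np.linalg.norm(test_view - v) for v in stored_views)
        print(f"test view at {test_deg:3d} deg -> match cost {cost:.2f}")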

    Interaction of Different Modules in Depth Perception: Stereo and Shading

    A method has been developed to measure the perceived depth of computer-generated images of simple solid objects. Computer graphic techniques allow independent control of different depth cues (stereo, shading, and texture), thereby enabling the investigator to study the interaction of depth-perception modules psychophysically. We found accumulation of information from shading and stereo, and vetoing of depth from shading by edge information. Cooperativity and other types of interactions are discussed. If intensity edges are missing, as in a smooth-shaded surface, the image intensities themselves could be used for stereo matching. The results are compared with computer vision algorithms for both single modules and their integration for 3D vision.
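
    The two kinds of interaction reported, accumulation and veto, can be sketched in a few lines (placeholder depth maps and weights; an illustration, not the paper's model).

    import numpy as np

    depth_stereo  = np.array([1.0, 1.2, 1.4, 1.6])   # depth estimate from the stereo module
    depth_shading = np.array([1.1, 1.1, 1.2, 1.3])   # depth estimate from the shading module
    w_stereo, w_shading = 0.7, 0.3                   # assumed reliability-based weights

    # Accumulation: a weighted combination of the two modules.
    depth_combined = w_stereo * depth_stereo + w_shading * depth_shading

    # Veto: where a reliable intensity edge is present, depth-from-shading is overridden
    # and the estimate falls back on the stereo/edge-based module alone.
    edge_present = np.array([False, True, False, True])
    depth_final = np.where(edge_present, depth_stereo, depth_combined)
    print(depth_final)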