62 research outputs found

    Behavior-oriented Vision for Biomimetic Flight Control

    Most flying insects extract information about their spatial orientation and self-motion from visual cues such as global patterns of light intensity or optic flow. We present an insect-inspired neuronal filter model and show how optimal receptive fields for the detection of flight-relevant input patterns can be derived directly from the local receptor signals during typical flight behavior. Using a least-squares principle, the receptive fields are optimally adapted to all behaviorally relevant, invariant properties of the agent and the environment. In closed-loop simulations in a highly realistic virtual environment, we show that four independent, purely reactive mechanisms based on optimized receptive fields for attitude control, course stabilization, obstacle avoidance, and altitude control are sufficient for fully autonomous and robust flight stabilization in all six degrees of freedom.

    Grasp effects of the Ebbinghaus illusion: Obstacle-avoidance is not the explanation.

    The perception-versus-action hypothesis states that visual information is processed in two different streams, one for visual awareness (or perception) and one for motor performance. Previous reports that the Ebbinghaus illusion deceives perception but not grasping seemed to indicate that this dichotomy between perception and action is fundamental enough to be reflected in the overt behavior of non-neurological, healthy humans. Contrary to this view, we show that the Ebbinghaus illusion affects grasping to the same extent as perception. We also show that the grasp effects cannot be accounted for by non-perceptual obstacle-avoidance mechanisms, as has recently been suggested. Instead, even subtle variations of the Ebbinghaus illusion affect grasping in the same way as they affect perception. Our results suggest that the same signals are responsible for the perceptual and motor effects of the Ebbinghaus illusion. This casts doubt on one line of evidence that used to strongly favor the perception-versus-action hypothesis.

    Computational Modeling of Face Recognition Based on Psychophysical Experiments

    Recent results from psychophysical studies clearly show that face processing is not only holistic: humans encode face parts (component information) in addition to information about the spatial interrelationship of facial features (global configural information). Based on these findings, we propose a computational architecture for face recognition that implements a component route and a configural route for encoding and recognizing faces. Modeling results showed a striking similarity between human psychophysical data and the computational model. In addition, our framework achieves good recognition performance even under large view rotations. Our study is thus an example of how an interdisciplinary approach can provide a deeper understanding of cognitive processes and lead to further insights in both human psychophysics and computer vision.
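A two-route matching scheme in the spirit of this architecture can be sketched as follows. The feature representations, the distance-based configural measure, and the combination rule are all illustrative assumptions, not the paper's model.

```python
import numpy as np

# Hypothetical two-route face matcher: a "component" route compares local
# feature vectors (e.g. eyes, nose, mouth), and a "configural" route compares
# the spatial layout via pairwise distances between feature positions.
# The data structures and weights below are illustrative assumptions.

def component_score(parts_a, parts_b):
    """Mean cosine similarity of corresponding face parts (1 = identical)."""
    sims = [float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
            for a, b in zip(parts_a, parts_b)]
    return sum(sims) / len(sims)

def configural_score(pos_a, pos_b):
    """Similarity of pairwise inter-feature distances (1 = identical layout)."""
    def dists(p):
        return np.array([np.linalg.norm(p[i] - p[j])
                         for i in range(len(p)) for j in range(i + 1, len(p))])
    return 1.0 / (1.0 + np.linalg.norm(dists(pos_a) - dists(pos_b)))

def match(face_a, face_b, w_comp=0.5, w_conf=0.5):
    """Combine both routes into one identity score (higher = more similar)."""
    (parts_a, pos_a), (parts_b, pos_b) = face_a, face_b
    return w_comp * component_score(parts_a, parts_b) + \
           w_conf * configural_score(pos_a, pos_b)
```

Because configural information depends only on relative distances, a route split of this kind is one way such a model could stay robust under view changes that preserve layout.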

    The use of facial motion and facial form during the processing of identity.

    Previous research has shown that facial motion can carry information about age, gender, emotion and, at least to some extent, identity. By combining recent computer-animation techniques with psychophysical methods, we show that during the computation of identity the human face recognition system integrates both types of information: individual non-rigid facial motion and individual facial form. This has important implications for cognitive and neural models of face perception, which currently emphasize a separation between the processing of invariant aspects (facial form) and changeable aspects (facial motion) of faces.

    Extrinsic cues aid shape recognition from novel viewpoints

    It has been shown previously that visual shape recognition is susceptible to mismatch between the retinal input and its representation in long-term memory, especially when this mismatch arises from rotations in depth. One possibility is that the visual recognition system deals with such mismatch by transforming either the input or the representation, thereby bringing the two into alignment for comparison. In either case, knowing what transformation has taken place should facilitate recognition. In natural circumstances, objects do not disappear and reappear in different orientations inexplicably, and an observer usually knows what to expect from the context. This context includes the environment and the history of the observer's movements, which together specify the transient relationship between the object, the environment, and the observer. We used interactive computer graphics to study the effects of providing observers with either implicit or explicit indications of their view transformations on the recognition of a class of shapes previously found to be highly view-dependent. Results show that these cues aid recognition to varying degrees, but mostly for oblique views and primarily in terms of accuracy rather than response times. These results provide evidence for egocentric encoding of shape and suggest that knowing one's transformation in view helps to reduce the problem space involved in matching a shape percept with a mental representation.
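The alignment idea in this abstract (transform the input to match the stored representation when the view transformation is known) can be sketched geometrically. The point-set shape, the single-axis rotation, and the error measure are illustrative assumptions, not the study's stimuli or analysis.

```python
import numpy as np

# Hypothetical sketch of alignment-by-known-transformation: if the observer
# knows the rotation that produced the current view, one inverse rotation
# brings the input back into register with the stored shape, instead of
# searching over all candidate orientations. Names are illustrative.

def rotation_z(theta):
    """3-D rotation matrix about the z axis by angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def match_error(view, template):
    """Residual mismatch between a viewed shape and the stored template."""
    return float(np.linalg.norm(view - template))

template = np.random.default_rng(1).normal(size=(10, 3))  # stored 3-D shape
theta = 0.7                                               # known view rotation
view = template @ rotation_z(theta).T                     # rotated retinal input

# Knowing theta, a single inverse rotation aligns input and memory:
aligned = view @ rotation_z(-theta).T
```

Without the transformation cue, a matcher would instead have to try many candidate rotations, which is the larger "problem space" the abstract refers to.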

    Perceptual organization of local elements into global shapes in the human visual cortex

    The question of how local image features on the retina are integrated into perceived global shapes is central to our understanding of human visual perception. Psychophysical investigations have suggested that the emergence of a coherent visual percept, or a "good Gestalt", is mediated by the perceptual organization of local features based on their similarity. However, the neural mechanisms that mediate unified shape perception in the human brain remain largely unknown. Using human fMRI, we demonstrate that not only higher occipitotemporal but also early retinotopic areas are involved in the perceptual organization and detection of global shapes. Specifically, these areas showed stronger fMRI responses to global contours consisting of collinear elements than to patterns of randomly oriented local elements. More importantly, decreased detection performance and fMRI activations were observed when misalignment of the contour elements disturbed the perceptual coherence of the contours. However, grouping the misaligned contour elements by disparity resulted in increased performance and fMRI activations, suggesting that similar neural mechanisms may underlie the grouping of local elements into global shapes by different visual features (orientation or disparity). These findings thus provide novel evidence for the role of both early feature-integration processes and higher stages of visual analysis in coherent visual perception.

    A Communication Task in HMD Virtual Environments: Speaker and Listener Movement Improves Communication

    In this paper we present an experiment that investigates the influence of animated real-time self-avatars in immersive virtual environments on a communication task. We further investigate the influence of 1st- and 3rd-person perspectives and of tracking the speaker and the listener. We find that people perform best in our communication task when both the speaker and the listener have an animated self-avatar and when the speaker is in the 3rd-person perspective. The more people move, the better they perform in the communication task. These results suggest that when two people in a virtual environment are animated, they do use gestures to communicate.