
    Picture recognition in animals and in humans : a review

    The question of object–picture recognition has received relatively little attention in both human and comparative psychology; a paradoxical situation given the extensive use of image technology (e.g. slides, digitised pictures) made by neuroscientists in their experimental investigation of visual cognition. The present review examines the relevant literature pertaining to the question of the correspondence between and/or equivalence of real objects and their pictorial representations in animals and humans. Two classes of reactions towards pictures will be considered in turn: acquired responses in picture recognition experiments and spontaneous responses to pictures of biologically relevant objects (e.g. prey or conspecifics). Our survey leads to the conclusion that humans show evidence of picture recognition from an early age; this recognition is, however, facilitated by prior exposure to pictures. The same exposure or training effect also appears to be necessary in nonhuman primates as well as in other mammals and in birds. Other factors are also identified as playing a role in acquired responses to pictures: familiarity with and the nature of the stimulus objects, presence of motion in the image, etc. Spontaneous and adapted reactions to pictures are a widespread phenomenon present in different phyla, including invertebrates, but in most instances this phenomenon is more likely to express confusion between objects and pictures than discrimination and active correspondence between the two. Finally, given the nature of a picture (e.g. bi-dimensionality, reduction of cues related to depth), it is suggested that object–picture recognition be envisioned at various levels, with true equivalence being a limiting case, rarely observed in the behaviour of animals and even humans.

    Appearance-based localization for mobile robots using digital zoom and visual compass

    This paper describes a localization system for mobile robots moving in dynamic indoor environments, which uses probabilistic integration of visual appearance and odometry information. The approach is based on a novel image matching algorithm for appearance-based place recognition that integrates digital zooming, to extend the area of application, and a visual compass. Ambiguous information used for recognizing places is resolved with multiple hypothesis tracking and a selection procedure inspired by Markov localization. This enables the system to deal with perceptual aliasing or the absence of reliable sensor data. It has been implemented on a robot operating in an office scenario, and the robustness of the approach is demonstrated experimentally.
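    The Markov-localization-style selection procedure mentioned in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes a discrete set of known places, a transition model derived from odometry, and per-place appearance-matching likelihoods, and shows one predict–update cycle over the belief distribution.

```python
# Minimal sketch (assumed details, not the paper's code): discrete Markov
# localization over a set of known places, fusing appearance-matching
# likelihoods with odometry-driven transition probabilities.

def markov_localization_step(belief, transition, likelihood):
    """One predict-update cycle.

    belief:     dict place -> prior probability of being at that place
    transition: dict (from_place, to_place) -> probability (from odometry)
    likelihood: dict place -> P(current observation | place), e.g. an
                image-matching score normalised to act as a likelihood
    """
    # Predict: propagate the belief through the motion model.
    predicted = {p: 0.0 for p in belief}
    for a, pa in belief.items():
        for b in belief:
            predicted[b] += pa * transition.get((a, b), 0.0)
    # Update: weight each place by its appearance likelihood, renormalise.
    posterior = {p: predicted[p] * likelihood.get(p, 0.0) for p in belief}
    total = sum(posterior.values()) or 1.0
    return {p: v / total for p, v in posterior.items()}
```

    In a full system the place with the highest posterior (or a set of tracked hypotheses above a threshold) would be selected; perceptual aliasing shows up as several places retaining comparable probability until further observations disambiguate them.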

    Encoding natural movement as an agent-based system: an investigation into human pedestrian behaviour in the built environment

    Gibson's ecological theory of perception has received considerable attention within psychology literature, as well as in computer vision and robotics. However, few have applied Gibson's approach to agent-based models of human movement, because the ecological theory requires that individuals have a vision-based mental model of the world, and for large numbers of agents this becomes extremely expensive computationally. Thus, within current pedestrian models, path evaluation is based on calibration from observed data or on sophisticated but deterministic route-choice mechanisms; there is little open-ended behavioural modelling of human-movement patterns. One solution which allows individuals rapid concurrent access to the visual information within an environment is an "exosomatic visual architecture", where the connections between mutually visible locations within a configuration are prestored in a lookup table. Here we demonstrate that, with the aid of an exosomatic visual architecture, it is possible to develop behavioural models in which movement rules originating from Gibson's principle of affordance are utilised. We apply large numbers of agents programmed with these rules to a built-environment example and show that, by varying parameters such as destination selection, field of view, and steps taken between decision points, it is possible to generate aggregate movement levels very similar to those found in an actual building context.
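    The "exosomatic visual architecture" idea (prestoring mutual visibility in a lookup table so many agents can query it concurrently) can be sketched as follows. This is an illustrative toy, not the authors' code: visibility is a simple grid line-of-sight test, and the movement rule is a deliberately crude stand-in for affordance-based choice.

```python
# Illustrative sketch (assumed details): precompute a visibility lookup
# table for a grid environment, then let agents choose moves from the
# set of cells they can currently see.
import random

def build_visibility_table(cells, walls):
    """Precompute, for every cell, the set of cells visible from it.
    Two cells see each other if no wall cell lies on the sampled
    straight line between them (a toy line-of-sight test)."""
    def line_clear(a, b):
        (ax, ay), (bx, by) = a, b
        steps = max(abs(bx - ax), abs(by - ay), 1)
        for i in range(steps + 1):
            x = round(ax + (bx - ax) * i / steps)
            y = round(ay + (by - ay) * i / steps)
            if (x, y) in walls and (x, y) not in (a, b):
                return False
        return True
    return {c: {d for d in cells if d != c and line_clear(c, d)}
            for c in cells}

def agent_step(pos, table, rng):
    """Crude affordance-style rule: move to a randomly chosen visible
    cell -- the agent walks towards something it can see. Real models
    would weight choices by destination, field of view, etc."""
    options = table[pos]
    return rng.choice(sorted(options)) if options else pos
```

    The expensive visibility computation is done once, offline; each of the many agents then needs only a constant-time table lookup per decision point, which is what makes large populations tractable.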