
    Generating Sequence of Eye Fixations Using Decision-theoretic Attention Model

    Human eyes scan images through a series of fixations. We proposed a novel attention selectivity model for the automatic generation of eye fixations on 2D static scenes. An activation map was first computed by extracting primary visual features and detecting meaningful objects in the scene. An adaptable retinal filter was then applied to this map to generate Regions of Interest (ROIs), whose locations corresponded to the activation peaks and whose sizes were estimated by an iterative adjustment algorithm. The focus of attention was moved serially over the detected ROIs by a decision-theoretic mechanism: the sequence of eye fixations was determined by a perceptual benefit function based on perceptual costs and rewards, while the time spent on different ROIs was estimated by a memory learning and decay model. Finally, to demonstrate the effectiveness of the proposed attention model, the simulated fixation shifts were compared with gaze tracking results from different human subjects.
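
    The fixation-selection step can be pictured with a small sketch. The Python snippet below picks each next fixation by maximizing a perceptual benefit of reward minus saccade cost and lets the memory of attended ROIs decay over time; the specific benefit form, the distance-based cost term, and the decay constant are illustrative assumptions, not the authors' exact formulation.

    # Illustrative sketch: at each step pick the ROI with the highest
    # perceptual benefit (reward minus cost), then decay the memory of
    # attended ROIs. Benefit form and constants are assumptions.
    import math

    def fixation_sequence(rois, start=(0.0, 0.0), steps=5,
                          cost_weight=0.01, decay=0.5):
        """rois: list of dicts with 'pos' (x, y) and 'activation'."""
        memory = [0.0] * len(rois)      # how recently each ROI was attended
        gaze = start
        sequence = []
        for _ in range(steps):
            benefits = []
            for i, roi in enumerate(rois):
                dist = math.dist(gaze, roi['pos'])        # saccade cost grows with distance
                reward = roi['activation'] * (1.0 - memory[i])
                benefits.append(reward - cost_weight * dist)
            best = max(range(len(rois)), key=lambda i: benefits[i])
            sequence.append(best)
            gaze = rois[best]['pos']
            memory[best] = 1.0                            # suppress immediate revisits
            memory = [m * decay for m in memory]          # memory decays each step
        return sequence

    rois = [{'pos': (10, 20), 'activation': 0.9},
            {'pos': (200, 50), 'activation': 0.7},
            {'pos': (120, 180), 'activation': 0.8}]
    print(fixation_sequence(rois))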

    Finding any Waldo: zero-shot invariant and efficient visual search

    Searching for a target object in a cluttered scene is a fundamental challenge in daily vision. Visual search must be selective enough to discriminate the target from distractors, invariant to changes in the target's appearance, efficient enough to avoid exhaustive exploration of the image, and able to generalize to novel target objects with zero-shot training. Previous work has focused on searching for perfect matches of a target after extensive category-specific training. Here we show for the first time that humans can efficiently and invariantly search for natural objects in complex scenes. To gain insight into the mechanisms that guide visual search, we propose a biologically inspired computational model that can locate targets without exhaustive sampling and generalize to novel objects. The model provides an approximation to the mechanisms integrating bottom-up and top-down signals during search in natural scenes.
    Comment: 6 figures, 1 supplementary figure
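
    As a rough illustration of how bottom-up and top-down signals might be integrated, the Python sketch below blends a bottom-up saliency map with a top-down map given by the similarity between a target feature vector and the features at each location, then selects the most promising location; the feature representation, the cosine similarity, and the fixed weighting are assumptions made for this sketch, not the model's actual architecture.

    # Minimal sketch: weight a top-down target-similarity map against a
    # bottom-up saliency map and take the peak as the first fixation.
    # Feature maps, similarity measure, and weighting are assumptions.
    import numpy as np

    def attention_map(scene_feats, target_feat, saliency, top_down_weight=0.7):
        """scene_feats: (H, W, C) feature map; target_feat: (C,) vector;
        saliency: (H, W) bottom-up saliency map."""
        # Top-down signal: cosine similarity between the target and each location.
        norms = np.linalg.norm(scene_feats, axis=-1) * np.linalg.norm(target_feat) + 1e-8
        top_down = (scene_feats @ target_feat) / norms
        # Blend with bottom-up saliency and pick the most promising location.
        combined = top_down_weight * top_down + (1.0 - top_down_weight) * saliency
        peak = np.unravel_index(np.argmax(combined), combined.shape)
        return combined, peak

    H, W, C = 32, 32, 8
    scene = np.random.rand(H, W, C)
    target = np.random.rand(C)
    sal = np.random.rand(H, W)
    amap, first_fixation = attention_map(scene, target, sal)
    print(first_fixation)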