
    What does the amygdala contribute to social cognition?

    The amygdala has received intense recent attention from neuroscientists investigating its function at the molecular, cellular, systems, cognitive, and clinical levels. It clearly contributes to processing emotionally and socially relevant information, yet a unifying description and computational account have been lacking. The difficulty of tying together the various studies stems in part from the sheer diversity of approaches and species studied, in part from the amygdala's inherent heterogeneity in terms of its component nuclei, and in part because different investigators have simply been interested in different topics. Yet a synthesis now seems close at hand, combining new results from social neuroscience with data from neuroeconomics and reward learning. The amygdala processes a psychological stimulus dimension related to saliency or relevance; mechanisms have been identified that link it to processing unpredictability; and insights from reward learning have situated it within a network of structures, including the prefrontal cortex and the ventral striatum, that processes the current value of stimuli. These aspects help to clarify the amygdala's contributions to recognizing emotion from faces, to social behavior toward conspecifics, and to reward learning and instrumental behavior.
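    The link the abstract draws between unpredictability and a saliency/relevance dimension is often formalized in the reward-learning literature as associability: a per-stimulus weight driven by recent surprise. The sketch below is a minimal Pearce-Hall-style learner offered as a rough illustration, not the authors' model; the learning-rate parameters `kappa` and `eta` and their values are illustrative assumptions.

    ```python
    def pearce_hall_update(v, alpha, reward, kappa=0.3, eta=0.1):
        """One trial of a Pearce-Hall-style hybrid learner.

        v      -- current predicted value of the stimulus
        alpha  -- associability: the stimulus's current relevance weight
        reward -- outcome observed on this trial
        kappa  -- value learning rate (illustrative)
        eta    -- associability update rate (illustrative)
        """
        delta = reward - v                             # prediction error
        v = v + kappa * alpha * delta                  # value update gated by associability
        alpha = (1 - eta) * alpha + eta * abs(delta)   # surprise drives relevance
        return v, alpha
    ```

    On this toy account, unexpected outcomes (large |delta|) raise a stimulus's associability, so unpredictable stimuli stay "relevant" and keep driving learning.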

    Fixation prediction with a combined model of bottom-up saliency and vanishing point

    By predicting where humans look in natural scenes, we can understand how they perceive complex natural scenes and prioritize information for further high-level visual processing. Several models have been proposed for this purpose, yet there is a gap between the best existing saliency models and human performance. While many researchers have developed purely computational models for fixation prediction, fewer attempts have been made to discover the cognitive factors that guide gaze. Here, we study the effect of a particular type of scene structural information, the vanishing point, and show that human gaze is attracted to vanishing-point regions. We record eye movements of 10 observers over 532 images, of which 319 have vanishing points. We then construct a combined model of traditional saliency and a vanishing point channel and show that our model outperforms state-of-the-art saliency models using three scores on our dataset.
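    The abstract does not spell out how the two channels are fused; the following is a minimal sketch, assuming the combination is a pixel-wise convex blend of a normalized bottom-up saliency map with a Gaussian channel centered on the detected vanishing point. The blend weight `lam` and spread `sigma` are illustrative, not the paper's fitted values.

    ```python
    import numpy as np

    def gaussian_channel(shape, vp_xy, sigma):
        """Gaussian vanishing-point map centered at vp_xy = (x, y)."""
        h, w = shape
        ys, xs = np.mgrid[0:h, 0:w]
        d2 = (xs - vp_xy[0]) ** 2 + (ys - vp_xy[1]) ** 2
        return np.exp(-d2 / (2.0 * sigma ** 2))

    def normalize(m):
        """Rescale a map to [0, 1]."""
        m = m - m.min()
        return m / (m.max() + 1e-12)

    def combined_saliency(bottom_up, vp_xy, sigma=60.0, lam=0.5):
        """Blend any existing bottom-up saliency map (H x W array)
        with a vanishing-point channel; returns the combined map."""
        s = normalize(bottom_up)
        vp = gaussian_channel(bottom_up.shape, vp_xy, sigma)
        return lam * s + (1.0 - lam) * vp
    ```

    With this formulation, images without a detected vanishing point simply fall back to the bottom-up map by setting `lam = 1.0`.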

    Objects predict fixations better than early saliency

    Humans move their eyes while looking at scenes and pictures. Eye movements correlate with shifts in attention and are thought to be a consequence of optimal resource allocation for high-level tasks such as visual recognition. Models of attention, such as “saliency maps,” are often built on the assumption that “early” features (color, contrast, orientation, motion, and so forth) drive attention directly. We explore an alternative hypothesis: observers attend to “interesting” objects. To test this hypothesis, we measure the eye position of human observers while they inspect photographs of common natural scenes. Our observers perform different tasks: artistic evaluation, analysis of content, and search. Immediately after each presentation, our observers are asked to name the objects they saw. Weighted by recall frequency, these objects predict fixations in individual images better than early saliency, irrespective of task. Also, saliency combined with object positions predicts which objects are frequently named. This suggests that early saliency has only an indirect effect on attention, acting through recognized objects. Consequently, rather than treating attention as a mere preprocessing step for object recognition, models of both need to be integrated.
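    As a rough illustration of the object-based account (not the authors' exact pipeline), one can turn named objects into a fixation-prediction map by summing object masks weighted by how often observers recalled each object, then smoothing to absorb fixation scatter. The data format and smoothing width below are assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def object_fixation_map(shape, objects, sigma=30.0):
        """Predict fixations from recalled objects rather than early features.

        shape   -- (H, W) of the image
        objects -- list of (mask, recall_freq) pairs: `mask` is a boolean
                   H x W array marking an object's extent, `recall_freq`
                   is how often observers named that object (assumed format)
        sigma   -- smoothing width in pixels (illustrative)
        """
        pred = np.zeros(shape, dtype=float)
        for mask, recall_freq in objects:
            pred += recall_freq * mask.astype(float)
        pred = gaussian_filter(pred, sigma)  # absorb fixation scatter
        return pred / (pred.max() + 1e-12)
    ```

    Such a map can then be scored against recorded fixations with the same metrics (e.g., ROC area) used for early-saliency maps, which is how a comparison like the one reported here would be run.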