
    On the dissociation between compound and present/absent tasks in visual search: Intertrial priming is ambiguity-driven.

    Visual search is speeded when the target-defining property (a feature or dimension difference relative to the distractors) is repeated relative to when it changes. It is thought that automatic and implicit intertrial priming mechanisms underlie this effect. However, intertrial priming has been found to be less robust in compound search tasks (in which the response property is unrelated to the target-defining property) than in present/absent search tasks (in which the response is directly related to the presence of a target-defining property). This study explored the hypothesis that intertrial priming depends on the level of ambiguity in a task, with the present/absent task being inherently more ambiguous than the compound search task. The first three of five experiments further established the dissociation between the tasks and excluded alternative explanations. Intertrial priming was strong in present/absent and go/no-go tasks, but absent in compound and compound/absent tasks. The last two experiments supported the ambiguity hypothesis by introducing more uncertainty into the compound task, after which intertrial priming returned. © 2006 Psychology Press Ltd

    Transfer of information into working memory during attentional capture

    Previous research has shown that task-irrelevant onsets can capture spatial attention even when attending to the onset is inconsistent with our intentions. The present study investigated whether information acquired during attentional capture is transferred into working memory. To measure whether this is the case, 25% of visual search trials were followed by a distractor recognition task. The results showed that the onset letter was recognized more often than a non-onset letter. In addition, the magnitude of attentional capture was positively correlated with the onset-letter recognition advantage. The results suggest that attentional capture results in transfer of information into working memory.

    Complex Visual Imagery and Cognition During Near-Death Experiences

    Near-death experiences (NDEs) entail complex and structured conscious experience during conditions known to coincide with rapid loss of consciousness, often associated with decline or disruption of the neurological correlates currently held to be causative factors of visual imagery and cognition. In this study, 653 NDE reports of cardiac and/or respiratory arrest patients were analyzed for unprompted, spontaneous references to the quality of conscious visual imagery and mentation during an NDE. Results indicate that in a majority of NDEs, both figurative and abstract mentation are either preserved or markedly improved during unconsciousness and unresponsiveness in the context of respiratory and cardiac arrests. These findings underscore the call to further study the mechanisms behind the ‘outliving’ of a conscious sense of selfhood and of complex, structured visual imagery and cognition during severely deteriorating physiological function, and perhaps especially during clinical death.

    Neural blackboard architectures of combinatorial structures in cognition

    Human cognition is unique in the way in which it relies on combinatorial (or compositional) structures. Language provides ample evidence for the existence of combinatorial structures, but they can also be found in visual cognition. To understand the neural basis of human cognition, it is therefore essential to understand how combinatorial structures can be instantiated in neural terms. In his recent book on the foundations of language, Jackendoff described four fundamental problems for a neural instantiation of combinatorial structures: the massiveness of the binding problem, the problem of 2, the problem of variables, and the transformation of combinatorial structures from working memory to long-term memory. This paper aims to show that these problems can be solved by means of neural ‘blackboard’ architectures. For this purpose, a neural blackboard architecture for sentence structure is presented. In this architecture, neural structures that encode words are temporarily bound in a manner that preserves the structure of the sentence. It is shown that the architecture solves the four problems presented by Jackendoff. The ability of the architecture to instantiate sentence structures is illustrated with examples of sentence complexity observed in human language performance. Similarities exist between the architecture for sentence structure and blackboard architectures for combinatorial structures in visual cognition, derived from the structure of the visual cortex. These architectures are briefly discussed, together with an example of a combinatorial structure in which the blackboard architectures for language and vision are combined. In this way, the architecture for language is grounded in perception.

    Contextual modulation of primary visual cortex by auditory signals

    Early visual cortex receives non-feedforward input from lateral and top-down connections (Muckli & Petro 2013 Curr. Opin. Neurobiol. 23, 195–201. (doi:10.1016/j.conb.2013.01.020)), including long-range projections from auditory areas. Early visual cortex can code for high-level auditory information, with neural patterns representing natural sound stimulation (Vetter et al. 2014 Curr. Biol. 24, 1256–1262. (doi:10.1016/j.cub.2014.04.020)). We discuss a number of questions arising from these findings. What is the adaptive function of bimodal representations in visual cortex? What type of information projects from auditory to visual cortex? What are the anatomical constraints on auditory information in V1, for example, periphery versus fovea, or superficial versus deep cortical layers? Is there a putative neural mechanism we can infer from human neuroimaging data and recent theoretical accounts of cortex? We also present data showing that we can read out high-level auditory information from the activation patterns of early visual cortex even when visual cortex receives simple visual stimulation, suggesting independent channels for visual and auditory signals in V1. We speculate on which cellular mechanisms allow V1 to be contextually modulated by auditory input to facilitate perception, cognition and behaviour. Beyond cortical feedback that facilitates perception, we argue that there is also feedback serving counterfactual processing during imagery, dreaming and mind wandering, which is not relevant for immediate perception but for behaviour and cognition over a longer time frame. This article is part of the themed issue ‘Auditory and visual scene analysis’.

    Development of a head-mounted, eye-tracking system for dogs

    Growing interest in canine cognition and visual perception has promoted research into the allocation of visual attention during free-viewing tasks in the dog. The techniques currently available to study this (i.e. preferential looking) have, however, lacked spatial accuracy, permitting only gross judgements of the location of the dog’s point of gaze, and have been limited to a laboratory setting. Here we describe a mobile, head-mounted, video-based eye-tracking system and a procedure for achieving standardised calibration, allowing an output with an accuracy of 2–3°. The setup allows free movement of dogs; in addition, the procedure does not require extensive training and is completely non-invasive. This apparatus has the potential to allow the study of gaze patterns in a variety of research applications and could enhance the study of areas such as canine vision, cognition and social interactions.