
    A feedback model of perceptual learning and categorisation

    Top-down (feedback) influences are known to have significant effects on visual information processing, and such influences are also likely to affect perceptual learning. This article employs a computational model of the cortical region interactions underlying visual perception to investigate possible influences of top-down information on learning. The results suggest that feedback could bias the way in which perceptual stimuli are categorised and could also facilitate the learning of subordinate-level representations suitable for object identification and perceptual expertise.
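    The abstract does not specify the model's architecture. As an illustration only (with hypothetical names, not the article's model), the sketch below shows the general idea of top-down feedback biasing categorisation in a simple competitive learner: a bias added to the category units changes which unit wins for an ambiguous stimulus, and therefore which representation gets updated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sketch only (not the article's model): a single competitive
# layer whose category units receive bottom-up input plus a top-down bias.
# The bias can tip which category wins for an ambiguous stimulus, and hence
# which weights are updated, so feedback biases both categorisation and learning.
n_inputs, n_categories = 10, 2
W = rng.uniform(0.0, 0.1, size=(n_categories, n_inputs))  # feed-forward weights

def categorise(stimulus, top_down_bias, lr=0.05):
    """Pick the winning category given a top-down bias, then learn (Hebbian-style)."""
    activation = W @ stimulus + top_down_bias       # bottom-up evidence + feedback
    winner = int(np.argmax(activation))
    W[winner] += lr * (stimulus - W[winner])        # move the winner's weights toward the input
    return winner

# An ambiguous stimulus with roughly equal evidence for both categories.
stimulus = rng.uniform(0.4, 0.6, size=n_inputs)

print(categorise(stimulus, top_down_bias=np.array([0.0, 0.0])))  # unbiased decision
print(categorise(stimulus, top_down_bias=np.array([0.0, 0.3])))  # feedback favouring the second category
```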

    Improving language mapping in clinical fMRI through assessment of grammar.

    Introduction: Brain surgery in the language-dominant hemisphere remains challenging because of unintended post-surgical language deficits, despite the use of pre-surgical functional magnetic resonance imaging (fMRI) and intraoperative cortical stimulation. Moreover, patients are often advised not to undergo surgery if the accompanying risk to language appears too high. While standard fMRI language mapping protocols may have relatively good predictive value at the group level, they remain sub-optimal at the individual level. The standard tests typically assess lexico-semantic aspects of language and do not accurately reflect the complexity of language, in either comprehension or production, at the sentence level. Among patients with left-hemisphere language dominance, we assessed which tests are best at activating language areas in the brain.
    Method: We compared grammar tests (items testing word order in actives and passives, wh-subject and object questions, relativized subject and object clauses, and past tense marking) with standard tests (object naming, auditory and visual responsive naming), using pre-operative fMRI. Twenty-five surgical candidates (13 females) participated in this study; sixteen presented with a brain tumor and nine with epilepsy. All participants underwent two pre-operative fMRI protocols: one with the CYCLE-N grammar tests and a second with the standard fMRI tests. fMRI activations during performance of both protocols were compared at the group level as well as in individual candidates.
    Results: The grammar tests generated a larger volume of activation in the left hemisphere (left/right angular gyrus, right anterior/posterior superior temporal gyrus) and identified additional language regions not shown by the standard tests (e.g., left anterior/posterior supramarginal gyrus). The standard tests produced more activation in left BA 47. Ten participants had more robust activations in the left hemisphere with the grammar tests and two with the standard tests. The grammar tests also elicited substantial activations in the right hemisphere and thus proved superior at identifying both right- and left-hemisphere contributions to language processing.
    Conclusion: The grammar tests may be an important addition to standard pre-operative fMRI testing.
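    The abstract does not state how activation volumes were compared. One common approach, sketched below purely as an illustration (not the study's pipeline), is to count supra-threshold voxels per hemisphere in each protocol's statistical map and compute a lateralization index LI = (L - R) / (L + R); the file names and threshold are hypothetical.

```python
import numpy as np
import nibabel as nib
from nibabel.affines import apply_affine

def lateralization_index(zmap_path, z_thresh=3.1):
    """Supra-threshold voxel counts per hemisphere and LI = (L - R) / (L + R).

    Illustrative only: assumes a 3D z-statistic NIfTI map in a standard RAS+
    space and splits hemispheres at world x = 0; clinical pipelines would
    normally use anatomical masks and carefully chosen thresholds.
    """
    img = nib.load(zmap_path)
    data = img.get_fdata()
    idx = np.argwhere(data > z_thresh)              # voxel indices above threshold
    if idx.size == 0:
        return 0, 0, float("nan")
    world = apply_affine(img.affine, idx)           # voxel -> world (mm) coordinates
    left = int(np.sum(world[:, 0] < 0))             # x < 0 is the left hemisphere in RAS+
    right = int(np.sum(world[:, 0] > 0))
    li = (left - right) / (left + right) if (left + right) else float("nan")
    return left, right, li

# Hypothetical file names for the two protocols' statistical maps.
for zmap in ("grammar_zmap.nii.gz", "standard_zmap.nii.gz"):
    print(zmap, lateralization_index(zmap))
```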

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation [Landman et al., 2003, Vision Research 43, 149–164]. Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial positions of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it lends further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.
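    The radial displacement manipulation is easy to make concrete. The sketch below (illustrative only, not the authors' stimulus code) shifts each rectangle's centre by ±1 degree of visual angle along the imaginary spoke joining it to central fixation.

```python
import numpy as np

def radial_shift(positions_deg, shift_deg=1.0, rng=None):
    """Shift stimulus centres along imaginary spokes from central fixation.

    `positions_deg` holds (x, y) centres in degrees of visual angle relative
    to fixation at the origin. Each item moves +/- `shift_deg` along its own
    radial spoke, with the sign chosen at random. This is an illustrative
    reconstruction of the manipulation described in the abstract.
    """
    rng = rng or np.random.default_rng()
    positions = np.asarray(positions_deg, dtype=float)
    radii = np.linalg.norm(positions, axis=1)
    unit_spokes = positions / radii[:, None]               # direction of each spoke
    signs = rng.choice([-1.0, 1.0], size=len(positions))   # move inward or outward
    return positions + (signs * shift_deg)[:, None] * unit_spokes

# Eight rectangle centres on a ring 5 degrees from fixation.
angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
ring = np.column_stack([5 * np.cos(angles), 5 * np.sin(angles)])
print(radial_shift(ring, shift_deg=1.0))
```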

    What does semantic tiling of the cortex tell us about semantics?

    Recent use of voxel-wise modeling in cognitive neuroscience suggests that semantic maps tile the cortex. Although this impressive research establishes distributed cortical areas active during the conceptual processing that underlies semantics, it tells us little about the nature of this processing. While mapping concepts between Marr's computational and implementation levels to support neural encoding and decoding, this approach ignores Marr's algorithmic level, which is central for understanding the mechanisms that implement cognition in general and conceptual processing in particular. Following decades of research in cognitive science and neuroscience, what do we know so far about the representation and processing mechanisms that implement conceptual abilities? Most basically, much is known about the mechanisms associated with: (1) features and frame representations, (2) grounded, abstract, and linguistic representations, (3) knowledge-based inference, (4) concept composition, and (5) conceptual flexibility. Rather than explaining these fundamental representation and processing mechanisms, semantic tiles simply provide a trace of their activity over a relatively short time period within a specific learning context. Establishing the mechanisms that implement conceptual processing in the brain will require more than mapping it to cortical (and sub-cortical) activity, with process models from cognitive science likely to play central roles in specifying the intervening mechanisms. More generally, neuroscience will not achieve its basic goals until it establishes algorithmic-level mechanisms that contribute essential explanations of how the brain works, going beyond simply establishing the brain areas that respond to various task conditions.
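    For readers unfamiliar with the voxel-wise modeling being critiqued: such studies typically fit a regularized linear encoding model that predicts each voxel's response from semantic features of the stimuli and then map prediction accuracy across cortex. The sketch below, with simulated data and hypothetical dimensions, illustrates that general approach; it is not the cited studies' pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Minimal sketch of voxel-wise encoding (simulated data, hypothetical sizes):
# predict each voxel's response from semantic features of the stimuli with a
# ridge model, then summarise held-out prediction accuracy per voxel.
rng = np.random.default_rng(0)
n_stimuli, n_features, n_voxels = 200, 50, 1000

X = rng.standard_normal((n_stimuli, n_features))                # semantic features per stimulus
true_W = rng.standard_normal((n_features, n_voxels))
Y = X @ true_W + rng.standard_normal((n_stimuli, n_voxels))     # simulated voxel responses

train, test = slice(0, 150), slice(150, None)
model = Ridge(alpha=10.0).fit(X[train], Y[train])               # one linear model per voxel, fit jointly
pred = model.predict(X[test])

# Per-voxel prediction accuracy: correlation between predicted and held-out responses.
accuracy = np.array([np.corrcoef(pred[:, v], Y[test, v])[0, 1] for v in range(n_voxels)])
print("median voxel-wise correlation:", np.round(np.median(accuracy), 3))
```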

    Visual selective behavior can be triggered by a feed-forward process

    The ventral visual pathway implements object recognition and categorization in a hierarchy of processing areas with neuronal selectivities of increasing complexity. The presence of massive feedback connections within this hierarchy raises the possibility that normal visual processing relies on the use of computational loops. It is not known, however, whether object recognition can be performed at all without such loops (i.e., in a purely feed-forward mode). By analyzing the time course of reaction times in a masked natural scene categorization paradigm, we show that the human visual system can generate selective motor responses based on a single feed-forward pass. We confirm these results using a more constrained letter discrimination task, in which the rapid succession of a target and mask is actually perceived as a distractor. We show that a masked stimulus presented for only 26 msec (and often not consciously perceived) can fully determine the earliest selective motor responses: the neural representations of the stimulus and mask are thus kept separate during a short period corresponding to the feed-forward "sweep." Therefore, feedback loops do not appear to be "mandatory" for visual processing. Rather, we found that such loops allow the masked stimulus to reverberate in the visual system and affect behavior for nearly 150 msec after the feed-forward sweep.
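    The claim rests on analysing the time course of reaction times. One standard way to estimate the earliest selective responses is a "minimal reaction time" analysis: bin trials by reaction time and find the first bin in which accuracy reliably exceeds chance. The sketch below, on simulated trials, illustrates this generic analysis rather than the study's exact procedure.

```python
import numpy as np
from scipy.stats import binomtest

def minimal_rt(rts_ms, correct, bin_ms=10, alpha=0.05):
    """Earliest RT bin where choice accuracy reliably exceeds chance (50%).

    A generic 'minimal RT' analysis, sketched for illustration: it is one
    common way to estimate when the first stimulus-selective responses occur,
    not necessarily the cited study's exact procedure.
    """
    edges = np.arange(rts_ms.min(), rts_ms.max() + bin_ms, bin_ms)
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (rts_ms >= lo) & (rts_ms < hi)
        n = int(in_bin.sum())
        if n < 10:                                   # skip sparsely populated bins
            continue
        k = int(correct[in_bin].sum())
        if binomtest(k, n, 0.5, alternative="greater").pvalue < alpha:
            return lo
    return None

# Simulated trials: responses faster than ~250 ms are at chance, later ones mostly correct.
rng = np.random.default_rng(1)
rts = rng.uniform(150, 600, size=2000)
acc = np.where(rts < 250, rng.random(2000) < 0.5, rng.random(2000) < 0.85)
print("estimated minimal RT (ms):", minimal_rt(rts, acc))
```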

    Longer fixation duration while viewing face images

    The spatio-temporal properties of saccadic eye movements can be influenced by cognitive demand and by the characteristics of the observed scene. It is argued that, probably because of its crucial role in social communication, face perception may involve different cognitive processes than non-face object or scene perception. In this study, we investigated whether and how face and natural scene images influence patterns of visuomotor activity. We recorded monkeys' saccadic eye movements as they freely viewed monkey face and natural scene images. The face and natural scene images attracted a similar number of fixations, but viewing of faces was accompanied by longer fixations than viewing of natural scenes. These longer fixations depended on the context of facial features: the duration of fixations directed at facial contours decreased when the face images were scrambled, and increased at the later stage of normal face viewing. The results suggest that face and natural scene images can generate different patterns of visuomotor activity. The extra fixation duration on faces may be correlated with the detailed analysis of facial features.
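    The abstract does not describe how fixations were parsed from the eye-movement record. A standard choice is a dispersion-threshold (I-DT) detector followed by a comparison of fixation durations across image categories; the sketch below is a generic illustration with hypothetical thresholds, not the authors' analysis code.

```python
import numpy as np

def detect_fixations(x_deg, y_deg, t_ms, max_dispersion=1.0, min_duration=100):
    """Dispersion-threshold (I-DT) fixation detection, sketched for illustration.

    Returns a list of fixation durations (ms). Samples are grouped into a
    fixation while their spatial dispersion stays below `max_dispersion`
    degrees and the group lasts at least `min_duration` ms. This is a generic
    algorithm with hypothetical parameters, not the authors' analysis code.
    """
    durations, start, n = [], 0, len(t_ms)
    while start < n:
        end = start + 1
        while end < n:
            xs, ys = x_deg[start:end + 1], y_deg[start:end + 1]
            dispersion = (xs.max() - xs.min()) + (ys.max() - ys.min())
            if dispersion > max_dispersion:
                break
            end += 1
        duration = t_ms[end - 1] - t_ms[start]
        if duration >= min_duration:
            durations.append(duration)
            start = end                              # accept the fixation, move past it
        else:
            start += 1                               # too short: slide the window forward
    return durations

# Simulated 1 kHz gaze trace: two stable periods separated by a saccade.
t = np.arange(0, 600)
x = np.where(t < 300, 0.0, 5.0) + np.random.default_rng(2).normal(0, 0.1, 600)
y = np.zeros(600)
print(detect_fixations(x, y, t))
```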