55 research outputs found

    How does visual language affect crossmodal plasticity and cochlear implant success?

    Cochlear implants (CI) are the most successful intervention for ameliorating hearing loss in severely or profoundly deaf children. Despite this, the educational performance of children with CI continues to lag behind that of their hearing peers. On the basis of animal models and human neuroimaging studies, it has been proposed that the integrative functions of auditory cortex are compromised by crossmodal plasticity, and this has been argued to result partly from the use of a visual language. Here we argue that 'cochlear implant sensitive periods' comprise both auditory and language sensitive periods, and thus cannot be fully described with animal models. Despite prevailing assumptions, there is no evidence linking the use of a visual language to poorer CI outcome. Crossmodal reorganisation of auditory cortex occurs regardless of the compensatory strategies, such as sign language, used by the deaf person. In contrast, language deprivation during early sensitive periods has been repeatedly linked to poor language outcomes. Language sensitive periods have largely been ignored when considering variation in CI outcome, leading to ill-founded recommendations concerning visual language in CI habilitation.

    Facilitating Memory for Novel Characters by Reducing Neural Repetition Suppression in the Left Fusiform Cortex

    Gui Xue is with Beijing Normal University and University of Southern California; Leilei Mei is with Beijing Normal University and University of California Irvine; Chuansheng Chen is with University of California Irvine; Zhong-Lin Lu is with University of Southern California; Russell A. Poldrack is with UT Austin; Qi Dong is with Beijing Normal University.
    Background -- The left midfusiform and adjacent regions have been implicated in processing and memorizing familiar words, yet their role in memorizing novel characters is not well understood.
    Methodology/Principal Findings -- Using functional MRI, the present study examined the hypothesis that the left midfusiform is also involved in memorizing novel characters and that spaced learning could enhance this memory by increasing left midfusiform activity during learning. Nineteen native Chinese readers were scanned while memorizing the visual form of 120 Korean characters that were novel to the subjects. Each character was repeated four times during learning. Repetition suppression was manipulated by using two repetition schedules, massed learning and spaced learning, pseudo-randomly mixed within the same scanning session. Under the massed learning condition, the four repetitions were consecutive (with a jittered inter-repetition interval to improve design efficiency). Under the spaced learning condition, the four repetitions were interleaved with a minimal inter-repetition lag of 6 stimuli. Spaced learning significantly improved participants' performance on a recognition memory test administered one hour after the scan. Stronger left midfusiform and inferior temporal gyrus activity during learning (summed across the four repetitions) was associated with better memory for the characters, in both within- and cross-subject analyses. Compared to massed learning, spaced learning significantly reduced neural repetition suppression and increased overall activity in these regions, which was associated with better memory for the novel characters.
    Conclusions/Significance -- These results demonstrate a strong link between cortical activity in the left midfusiform and memory for novel characters, and thus challenge the visual word form area (VWFA) hypothesis. They also shed light on the neural mechanisms of the spacing effect in memorizing novel characters.
    This study was supported by the Program for New Century Excellent Talents in University, the National Science Foundation (grant numbers BCS 0823624 and BCS 0823495), the National Institutes of Health (grant number HD057884-01A2), and the 111 Project of China (B07008). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
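
    The massed vs. spaced manipulation above boils down to a scheduling constraint: every character appears four times, with massed characters shown on consecutive trials and spaced characters separated by at least six intervening stimuli. The sketch below is a hypothetical illustration of one way to build such a trial list (the function name build_schedule and the item labels are invented here, not taken from the study; the fMRI timing jitter is omitted).

```python
# Hypothetical sketch of a massed/spaced presentation schedule as described
# above; not the authors' code. Spaced items are cycled in a fixed random
# order, so successive repetitions of the same item are always separated by
# len(spaced_items) - 1 >= min_lag other trials; massed items are inserted as
# intact blocks of consecutive repetitions, which can only lengthen those lags.
import random

def build_schedule(massed_items, spaced_items, repetitions=4, min_lag=6, seed=0):
    if len(spaced_items) <= min_lag:
        raise ValueError("need more than min_lag spaced items to guarantee the lag")
    rng = random.Random(seed)

    # Spaced condition: one fixed shuffled order, repeated once per pass.
    order = list(spaced_items)
    rng.shuffle(order)
    trials = order * repetitions

    # Massed condition: pick insertion points in the spaced-only list, then
    # insert from the back so earlier insertions are never split apart.
    positions = sorted((rng.randint(0, len(trials)) for _ in massed_items),
                       reverse=True)
    for item, pos in zip(massed_items, positions):
        trials[pos:pos] = [item] * repetitions
    return trials

# Toy example with placeholder labels standing in for the Korean characters.
print(build_schedule(["m1", "m2", "m3"],
                     ["s1", "s2", "s3", "s4", "s5", "s6", "s7"]))
```

    Laying out the spaced sequence first and then inserting whole massed blocks satisfies both constraints by construction, rather than relying on rejection sampling of random trial orders.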

    Memory Influences Visual Cognition across Multiple Functional States of Interactive Cortical Dynamics

    Memory supports a wide range of abilities, from categorical perception to goal-directed behaviors such as decision-making and episodic recognition. Memory is activated quickly and surprisingly accurately, even when information is ambiguous or impoverished (i.e., showing object constancy). This paper proposes the multiple-state interactive (MUSI) account of object cognition, which attempts to explain how sensory stimulation activates memory across multiple functional states of neural dynamics, including automatic and strategic mental simulation mechanisms that can ground cognition in modal information processing. A key novel postulate of this account is ‘multiple-function regional activity’: the same neuronal population can contribute to multiple brain states, depending upon the dominant set of inputs at that time. In state 1, the initial fast bottom-up pass through posterior neocortex occurs between 95 ms and ~200 ms, with knowledge supporting categorical perception by 120 ms. In state 2, starting around 200 ms, a sustained state of iterative activation of object-sensitive cortex involves bottom-up, recurrent, and feedback interactions with frontoparietal cortex. This supports higher cognitive functions associated with decision-making even under ambiguous or impoverished conditions, phenomenological consciousness, and automatic mental simulation. In the latest state so far identified, state M, starting around 300 to 500 ms, large-scale cortical network interactions, including between multiple networks (e.g., control, salience, and especially default mode), further modulate posterior cortex. This supports elaborated cognition based on earlier processing, including episodic memory, strategic mental simulation, decision evaluation, creativity, and access consciousness. Convergent evidence from the cognitive neuroscience of object cognition, decision-making, memory, and mental imagery is reviewed that supports this account and defines the brain regions and time course of these brain dynamics.

    Crohnology


    Almost the right word


    Motion velocity thresholds in deaf signers: changes in lateralization but not in overall sensitivity

    In a series of three experiments, we tested whether deaf native signers process motion velocity information differently from hearing nonsigners. In Experiment 1, participants watched radially moving dots and were asked to detect the quadrant in which the velocity of the dots had changed. Similar 79%-correct thresholds were observed in the two populations. In Experiments 2 and 3, peripheral and central thresholds were assessed separately, as previous studies suggest that early deafness leads mainly to changes in the processing of peripheral visual information. Neither condition produced an overall population difference. These negative results were not due to a lack of sensitivity in our experiments. Indeed, as has been previously reported, deaf native signers exhibited better thresholds in the right than in the left visual field, whereas the opposite pattern was observed in the hearing participants. This effect appears to be triggered by experience with American Sign Language (ASL) rather than by deafness per se. Overall, this study confirms that early deafness does not enhance motion processing, and suggests that most of the changes previously described in the literature are instead attributable to changes in attention, and possibly to specific alterations of attention-to-motion processes.
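
    The 79% figure above refers to the performance level the threshold procedure converges on; the abstract does not spell out the exact method, but a 3-down/1-up adaptive staircase is one standard way to target roughly 79% correct. The sketch below (with an invented simulated observer; the function names and parameter values are illustrative assumptions, not taken from the study) shows how such a staircase would adjust the size of a velocity change in a four-alternative quadrant task.

```python
# Hypothetical illustration of a 3-down / 1-up staircase converging on ~79%
# correct; the paper's actual psychophysical procedure is not described in the
# abstract, and the observer model below is invented for demonstration.
import math
import random

def p_correct(delta_v, threshold=0.5, slope=8.0, guess=0.25):
    """Toy psychometric function for a 4-alternative quadrant judgement."""
    return guess + (1.0 - guess) / (1.0 + math.exp(-slope * (delta_v - threshold)))

def staircase(start=2.0, step=0.1, n_reversals=12, seed=1):
    rng = random.Random(seed)
    delta_v, direction = start, -1           # -1: currently stepping down
    streak, reversals = 0, []
    while len(reversals) < n_reversals:
        correct = rng.random() < p_correct(delta_v)
        if correct:
            streak += 1
            if streak == 3:                  # three correct in a row -> harder
                streak = 0
                if direction == +1:
                    reversals.append(delta_v)
                direction = -1
                delta_v = max(0.0, delta_v - step)
        else:                                # any error -> easier
            streak = 0
            if direction == -1:
                reversals.append(delta_v)
            direction = +1
            delta_v += step
    return sum(reversals[-8:]) / 8           # mean of the last 8 reversals

print(round(staircase(), 3))                 # rough velocity-change threshold
```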

    Impact of early deafness and early exposure to sign language on the cerebral organization for motion processing

    This functional magnetic resonance imaging study investigated the impact of early auditory deprivation and/or use of a visuospatial language [American Sign Language (ASL)] on the organization of neural systems important in visual motion processing, by comparing hearing controls with deaf and hearing native signers. Participants monitored moving flowfields under different conditions of spatial and featural attention. Recruitment of the motion-selective area MT-MST in hearing controls was greater when attention was directed centrally and when the task was to detect motion features, confirming previous reports that the motion network is selectively modulated by different aspects of attention. More importantly, we observed marked differences in the recruitment of motion-related areas as a function of early experience. First, the lateralization of MT-MST was found to shift toward the left hemisphere in early signers, suggesting that early exposure to ASL leads to a greater reliance on the left MT-MST. Second, whereas the two hearing populations displayed more MT-MST activation under central than peripheral attention, the opposite pattern was observed in deaf signers, indicating enhanced recruitment of MT-MST during peripheral attention after early deafness. Third, deaf signers, but neither of the hearing populations, displayed increased activation of the posterior parietal cortex, supporting the view that parietal functions are modified after early auditory deprivation. Finally, only in deaf signers did attention to motion result in enhanced recruitment of the posterior superior temporal sulcus, establishing for the first time in humans that this polymodal area is modified after early sensory deprivation. Together, these results highlight the functional and regional specificity of neuroplasticity in humans.