17,771 research outputs found

    Visual and auditory vocabulary acquisition in learning Chinese as a second language: the impact of modality-specific working memory training

    The global aim of this thesis was to investigate the working memory processes and neural correlates underlying visual and auditory vocabulary acquisition in Chinese. As an additional question and precondition for addressing this main goal, I examined whether visual working memory can be trained separately from auditory working memory, and whether intra-modal training effects on visual working memory can be distinguished from across-modal effects at the behavioural and neural levels. The Working Memory Training Study was designed to test whether visual working memory processes can be trained specifically at the behavioural and neural levels and whether such effects can be separated from across-modal training effects. Training gains on a visual 2-back task were markedly larger after visual working memory training than after auditory or no training. These gains were accompanied by specific training-related activation decreases in the right middle frontal gyrus that arose from visual training only. Likewise, both visual and auditory training led to decreased activation in the superior portion of the right middle frontal gyrus and the right posterior parietal lobule. I infer that this combination of effects resulted from increased neural efficiency of intra-modal (visual) processes on the one hand and of across-modal (general control) processes on the other. Visual working memory processes can therefore be trained specifically, and these effects can be functionally dissociated from alterations in the general control processes common to both working memory trainings. These results provided a good starting point for applying the training paradigm in the Language Training Study: as exemplified for the visual modality, the paradigm successfully trained a modality-specific process and was therefore suitable for investigating differential transfer effects of visual and auditory working memory training on visual and auditory vocabulary learning in Chinese.
    The Language Training Study investigated whether visual working memory training exerts a unique influence on learning Chinese visual words (orthographic learning), owing to the greater complexity of the Chinese writing system, and, conversely, whether auditory working memory training has a specific impact on learning Chinese auditory words (phonological learning). In addition, training-induced modulations of language-related brain networks were examined with fMRI in a pretest-training-posttest design. Both working memory trainings led to positive transfer effects on orthographic learning compared with no training, whereas no transfer effects were obtained for phonological learning. Differential activation changes after visual and auditory working memory training were found in areas engaged in visual and auditory word processing: in the orthographic task, activation was sustained or decreased in the left mid-fusiform gyrus after intra-modal (visual) training; similarly, in the phonological task, activation decreased in the anterior insula after intra-modal (auditory) training. These findings are consistent with the view that working memory training in the equivalent modality enhances the efficiency of perceptual encoding in the orthographic task and of incorporating novel sound patterns into long-term phonological representations in the phonological task.
    Surprisingly, activation increases after across-modal training emerged in both tasks within the same brain regions: activation increased in the mid-fusiform gyrus after auditory training in the orthographic task and, likewise, in the anterior insula after visual training in the phonological task. This suggests that working memory training in the complementary modality promotes selective attention to the respective task, presumably driven by modality-unspecific improvements in executive components of working memory. Moreover, visual training led to additional recruitment of brain regions in the orthographic task, namely the right precuneus, presumably mirroring the generation of a mental visual image of the to-be-retrieved character.
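    As an aside for readers unfamiliar with n-back paradigms, the visual 2-back task used in the Working Memory Training Study has a simple trial logic: a trial is a target whenever the current stimulus matches the one shown two trials earlier. The Python sketch below illustrates that logic, assuming a generic stream of stimulus identifiers and a yes/no response per trial; the actual stimuli and scoring conventions are not specified in the abstract, so every name here is illustrative.

# Minimal sketch of 2-back target detection and scoring (illustrative only).

def two_back_targets(stream, n=2):
    """Mark each trial as a target if the stimulus matches the one n trials back."""
    return [i >= n and stream[i] == stream[i - n] for i in range(len(stream))]

def score_responses(stream, responses, n=2):
    """Compare per-trial yes/no responses against the n-back target structure."""
    targets = two_back_targets(stream, n)
    hits = sum(t and r for t, r in zip(targets, responses))
    false_alarms = sum((not t) and r for t, r in zip(targets, responses))
    return {"hits": hits, "false_alarms": false_alarms,
            "targets": sum(targets), "non_targets": len(stream) - sum(targets)}

if __name__ == "__main__":
    stream = ["A", "B", "A", "C", "A", "D", "C"]                  # hypothetical visual stimuli
    responses = [False, False, True, False, True, False, False]   # hypothetical button presses
    print(two_back_targets(stream))   # [False, False, True, False, True, False, False]
    print(score_responses(stream, responses))   # {'hits': 2, 'false_alarms': 0, ...}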

    A psychology literature study on modality related issues for multimodal presentation in crisis management

    The motivation of this psychology literature study is to obtain modality-related guidelines for real-time information presentation in a crisis management environment. The crisis management task is usually accompanied by time urgency, risk, uncertainty, and high information density. Decision makers (crisis managers) might experience cognitive overload and tend to show biases in their performance. Therefore, the ongoing crisis event needs to be presented in a manner that enhances perception, assists diagnosis, and prevents cognitive overload. To this end, this study looked into modality effects on perception, cognitive load, working memory, learning, and attention. Selected topics include working memory, dual-coding theory, cognitive load theory, multimedia learning, and attention. The findings are several modality-usage guidelines that may lead to more efficient use of the user’s cognitive capacity and enhance information perception

    Unimodal and cross-modal prediction is enhanced in musicians

    Musical training involves exposure to complex auditory and visual stimuli, memorization of elaborate sequences, and extensive motor rehearsal. It has been hypothesized that such multifaceted training may be associated with differences in basic cognitive functions, such as prediction, potentially translating into a facilitation in expert musicians. Moreover, such differences might generalize to non-auditory stimuli. This study was designed to test both hypotheses. We implemented a cross-modal attentional cueing task with auditory and visual stimuli, in which a target was preceded by compatible or incompatible cues within mainly compatible (80% compatible, predictable) or random (50% compatible, unpredictable) blocks. This allowed us to test prediction skills in musicians and controls. Musicians showed increased sensitivity to the statistical structure of the block, expressed as an advantage for compatible trials (and a disadvantage for incompatible trials), but only in the mainly compatible (predictable) blocks. Controls did not show this pattern. The effect held within modalities (auditory, visual), across modalities, and when controlling for short-term memory capacity. These results reveal a striking enhancement of cross-modal prediction in musicians in a very basic cognitive task
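    For readers unfamiliar with this kind of probabilistic cueing design, the sketch below shows how trial lists for "mainly compatible" (80% compatible) versus "random" (50% compatible) blocks might be generated. The 80%/50% proportions come from the abstract; the per-trial variation of cue and target modality mirrors the cross-modal design, but trial counts, seeds, and function names are illustrative assumptions.

import random

# Hypothetical generator for cue-target compatibility sequences: in a
# "mainly compatible" block 80% of trials are compatible, in a "random"
# block only 50% are, so only the former is statistically predictable.
def make_block(n_trials, p_compatible, modalities=("auditory", "visual"), seed=None):
    rng = random.Random(seed)
    trials = []
    for _ in range(n_trials):
        trials.append({
            "cue_modality": rng.choice(modalities),     # cue and target modality vary
            "target_modality": rng.choice(modalities),  # independently (cross-modal design)
            "compatible": rng.random() < p_compatible,  # does the cue predict the target?
        })
    return trials

mainly_compatible_block = make_block(100, p_compatible=0.8, seed=1)
random_block = make_block(100, p_compatible=0.5, seed=2)
print(sum(t["compatible"] for t in mainly_compatible_block))   # close to 80
print(sum(t["compatible"] for t in random_block))              # close to 50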

    Integrating visual and tactile information in the perirhinal cortex

    By virtue of its widespread afferent projections, perirhinal cortex is thought to bind polymodal information into abstract object-level representations. Consistent with this proposal, deficits in cross-modal integration have been reported after perirhinal lesions in nonhuman primates. It is therefore surprising that imaging studies of humans have not observed perirhinal activation during visual–tactile object matching. Critically, however, these studies did not differentiate between congruent and incongruent trials. This is important because successful integration can only occur when polymodal information indicates a single object (congruent) rather than different objects (incongruent). We scanned neurologically intact individuals using functional magnetic resonance imaging (fMRI) while they matched shapes. We found higher perirhinal activation bilaterally for cross-modal (visual–tactile) than unimodal (visual–visual or tactile–tactile) matching, but only when visual and tactile attributes were congruent. Our results demonstrate that the human perirhinal cortex is involved in cross-modal (visual–tactile) integration and thus indicate a functional homology between human and monkey perirhinal cortices
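    The key manipulation in this design is factorial (matching type x congruence), which the short sketch below makes explicit; the condition labels follow the abstract, while the contrast weights are a purely hypothetical illustration of how the reported comparison (cross-modal congruent versus unimodal congruent matching) could be coded.

from itertools import product

# Illustrative condition grid for the shape-matching design described above.
MATCH_TYPES = ("visual-visual", "tactile-tactile", "visual-tactile")
CONGRUENCE = ("congruent", "incongruent")

conditions = [{"match": m, "congruence": c} for m, c in product(MATCH_TYPES, CONGRUENCE)]

def contrast_weight(cond):
    """Hypothetical contrast: cross-modal congruent vs. unimodal congruent matching."""
    if cond["congruence"] != "congruent":
        return 0.0
    return 1.0 if cond["match"] == "visual-tactile" else -0.5

for cond in conditions:
    print(cond, contrast_weight(cond))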

    N270 sensitivity to conflict strength and working memory: A combined ERP and sLORETA study

    The event-related potential N270 component is known to be an electrophysiological marker of supramodal conflict processing. However, little is known about the factors that may modulate its amplitude. In particular, the studies that have investigated the N270 have exercised little or no control over conflict strength and working memory load, leaving a gap in the understanding of this component. We designed a spatial audiovisual conflict task with a simultaneous target and a cross-modal distractor to evaluate the sensitivity of the N270 to conflict strength (i.e., visual target with auditory distractor or auditory target with visual distractor) and to working memory load (maintenance of the task goal with frequent changes in the target modality). In a first session, participants had to focus on one modality and consider the target position (left or right), while the distractor could be on the same side (compatible) or the opposite side (incompatible). In a second session, we used the same set of stimuli with an additional, distinct auditory signal that cued participants to switch frequently between auditory and visual targets. We found that (1) reaction times and N270 amplitudes for conflicting situations were larger in the auditory target condition than in the visual one, (2) the increase in target maintenance effort led to an equivalent increase in both reaction times and N270 amplitudes across all conditions, and (3) current density in the right dorsolateral prefrontal cortex was higher both for conflicting situations and for active maintenance of the target. These results provide new evidence that the N270 component is an electrophysiological marker of supramodal conflict processing that is sensitive to conflict strength, and that conflict processing and active maintenance of the task goal are two functions of a common executive attention system
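    To make the two manipulations concrete, the sketch below enumerates the trial types implied by the abstract: target modality (with the distractor in the other modality), target side, and distractor compatibility, plus a flag for the switch cues of the second session. All names, and the idea of reducing the switching manipulation to a per-trial flag, are illustrative assumptions rather than details taken from the paper.

from itertools import product

# Illustrative trial taxonomy for the spatial audiovisual conflict task:
# one modality carries the lateralized target, the other the distractor,
# which appears on the same side (compatible) or the opposite side (incompatible).
TARGET_MODALITIES = ("visual", "auditory")
SIDES = ("left", "right")
COMPATIBILITY = ("compatible", "incompatible")

def build_trials(session_two=False):
    trials = []
    for target_modality, side, compatibility in product(TARGET_MODALITIES, SIDES, COMPATIBILITY):
        distractor_side = side if compatibility == "compatible" else ("right" if side == "left" else "left")
        trials.append({
            "target_modality": target_modality,
            "distractor_modality": "auditory" if target_modality == "visual" else "visual",
            "target_side": side,
            "distractor_side": distractor_side,
            "compatibility": compatibility,
            # In the second session an extra auditory signal cues frequent switches
            # of the attended (target) modality; here this is reduced to a flag.
            "switch_cues_present": session_two,
        })
    return trials

print(len(build_trials()))                  # 8 basic trial types
print(build_trials(session_two=True)[0])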

    Telephone conversation impairs sustained visual attention via a central bottleneck

    Recent research has shown that holding telephone conversations disrupts one's driving ability. We asked whether this effect could be attributed to a visual attention impairment. In Experiment 1, participants conversed on a telephone or listened to a narrative while engaged in multiple object tracking (MOT), a task requiring sustained visual attention. We found that MOT was disrupted in the telephone conversation condition, relative to single-task MOT performance, but that listening to a narrative had no effect. In Experiment 2, we asked which component of conversation might be interfering with MOT performance. We replicated the conversation and single-task conditions of Experiment 1 and added two conditions in which participants heard a sequence of words over a telephone. In the shadowing condition, participants simply repeated each word in the sequence. In the generation condition, participants were asked to generate a new word based on each word in the sequence. Word generation interfered with MOT performance, but shadowing did not. The data indicate that telephone conversation disrupts attention at a central stage, the act of generating verbal stimuli, rather than at a peripheral stage, such as listening or speaking

    Modality effects in implicit artificial grammar learning: An EEG study

    Recently, it has been proposed that sequence learning engages a combination of modality-specific operating networks and modality-independent computational principles. In the present study, we compared the behavioural and EEG outcomes of implicit artificial grammar learning in the visual vs. auditory modality. We controlled for the influence of surface characteristics of sequences (Associative Chunk Strength), thus focusing on the strictly structural aspects of sequence learning, and we adapted the paradigms to compensate for known frailties of the visual modality compared to audition (temporal presentation, fast presentation rate). The behavioural outcomes were similar across modalities. Favouring the idea of modality-specificity, ERPs in response to grammar violations differed in topography and latency (earlier and more anterior component in the visual modality), and ERPs in response to surface features emerged only in the auditory modality. In favour of modality-independence, we observed three common functional properties in the late ERPs of the two grammars: both were free of interactions between structural and surface influences, both were more extended in a grammaticality classification test than in a preference classification test, and both correlated positively and strongly with theta event-related-synchronization during baseline testing. Our findings support the idea of modality-specificity combined with modality-independence, and suggest that memory for visual vs. auditory sequences may largely contribute to cross-modal differences.
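    Associative Chunk Strength (ACS) is a standard control measure in artificial grammar learning. The sketch below shows one common way to compute it, as the mean training frequency of a test string's bigrams and trigrams; whether this exact variant was used in the study is an assumption, since the abstract does not spell out the formula, and the training strings shown are invented for illustration.

from collections import Counter

# One common definition of Associative Chunk Strength: the average frequency
# with which a test string's chunks (bigrams and trigrams) occurred in training.
def chunks(string, sizes=(2, 3)):
    return [string[i:i + n] for n in sizes for i in range(len(string) - n + 1)]

def chunk_frequencies(training_strings):
    freq = Counter()
    for s in training_strings:
        freq.update(chunks(s))
    return freq

def associative_chunk_strength(test_string, freq):
    test_chunks = chunks(test_string)
    return sum(freq[c] for c in test_chunks) / len(test_chunks)

training = ["MTVRX", "MTTVT", "VXVRX"]              # invented example strings
freq = chunk_frequencies(training)
print(associative_chunk_strength("MTVRX", freq))    # higher ACS: familiar chunks
print(associative_chunk_strength("XRMTV", freq))    # lower ACS: mostly novel chunks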

    Hemispheric specialization in selective attention and short-term memory: a fine-coarse model of left- and right-ear disadvantages.

    Serial short-term memory is impaired by irrelevant sound, particularly when the sound changes acoustically. This acoustic effect is larger when the sound is presented to the left compared to the right ear (a left-ear disadvantage). Serial memory appears relatively insensitive to distraction from the semantic properties of a background sound. In contrast, short-term free recall of semantic-category exemplars is impaired by the semantic properties of background speech and is relatively insensitive to the sound’s acoustic properties. This semantic effect is larger when the sound is presented to the right compared to the left ear (a right-ear disadvantage). In this paper, we outline a speculative neurocognitive fine-coarse model of these hemispheric differences in relation to short-term memory and selective attention, and explicate empirical directions in which this model can be critically evaluated