    Conflict Resolution Ability in Late Bilinguals Improves With Increased Second-Language Proficiency: ANT Evidence

    Experimental data supporting the claim that bilingual speakers have superior cognitive control abilities are often questioned with respect to certain methodological limitations. One such limitation is the use of a between-group design, which potentially confounds bilingual status with other factors (e.g., socioeconomic status). Here, we used a homogeneous sample of 57 young adult Russian–English late unbalanced bilinguals who were administered the Attention Network Task (ANT) together with an L2 proficiency task. We tested the correlation of L2 vocabulary performance with conflict and alertness measures and with overall reaction times in ANT performance. Overall, participants demonstrated better conflict resolution as their second-language competence increased, with 8% of the variance in conflict resolution explained by L2 proficiency. Our results support the notion of a regular correspondence between bilingualism and cognitive control.
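
    As a rough illustration of the reported effect size, 8% of variance explained corresponds to a Pearson correlation of about r = 0.28 between L2 proficiency and the conflict score. The sketch below uses simulated, hypothetical data (not the study's dataset) to show how such a correlation and the variance-explained figure can be computed.

        # Minimal sketch with simulated data (not the study's own):
        # a correlation of r ~ 0.28 corresponds to R^2 ~ 0.08.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        n = 57                               # sample size reported in the abstract
        l2_proficiency = rng.normal(size=n)  # hypothetical z-scored L2 vocabulary scores
        noise = rng.normal(size=n)
        # Build a conflict score sharing ~8% of its variance with proficiency
        conflict_effect = -0.28 * l2_proficiency + np.sqrt(1 - 0.28 ** 2) * noise

        r, p = stats.pearsonr(l2_proficiency, conflict_effect)
        print(f"r = {r:.2f}, R^2 = {r ** 2:.2f}, p = {p:.3f}")  # R^2 near 0.08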

    Motor (but not auditory) attention affects syntactic choice

    Understanding the determinants of syntactic choice in sentence production is a salient topic in psycholinguistics. Existing evidence suggests that syntactic choice results from an interplay between linguistic and non-linguistic factors, and a speaker’s attention to the elements of a described event represents one such factor. Whereas multimodal accounts of attention suggest a role for different modalities in this process, existing studies examining attention effects on syntactic choice are primarily based on visual cueing paradigms. Hence, it remains unclear whether attentional effects on syntactic choice are limited to the visual modality or are indeed more general. The current study addresses this issue. Native speakers of English viewed and described line drawings of simple transitive events while their attention was directed to the location of the agent or the patient of the depicted event by means of either an auditory (monaural beep) or a motor (unilateral key press) lateral cue. Our results show an effect of cue location, with participants producing more passive-voice descriptions in the patient-cued conditions. Crucially, this cue-location effect emerged in the motor-cue but not (or substantially less so) in the auditory-cue condition, as confirmed by a reliable interaction between cue location (agent vs. patient) and cue type (auditory vs. motor). Our data suggest that attentional effects on the speaker’s syntactic choices are modality-specific, limited to the visual and motor domains but not the auditory domain.
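
    The key statistical claim is an interaction between cue location and cue type on the odds of producing a passive. The abstract does not specify the model, so the sketch below is only an assumption of how such an interaction could be tested, using a plain logistic regression on simulated data; a published analysis would more likely use a mixed-effects model with random effects for participants and items.

        # Hypothetical sketch: testing a cue location x cue type interaction
        # on the probability of a passive-voice description (simulated data).
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(1)
        n = 800
        cue_location = rng.choice(["agent", "patient"], size=n)
        cue_type = rng.choice(["auditory", "motor"], size=n)
        # Assume more passives when the patient is cued, mainly for motor cues
        p_passive = 0.10 + 0.15 * ((cue_location == "patient") & (cue_type == "motor"))
        passive = rng.binomial(1, p_passive)

        df = pd.DataFrame({"passive": passive,
                           "cue_location": cue_location,
                           "cue_type": cue_type})
        model = smf.logit("passive ~ C(cue_location) * C(cue_type)", data=df).fit()
        print(model.summary())  # the interaction term carries the modality-specific effect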

    Effects of Visual Priming and Event Orientation on Word Order Choice in Russian Sentence Production

    Existing research shows that the distribution of the speaker’s attention among an event’s protagonists affects syntactic choice during sentence production. One of the debated issues concerns the extent of the attentional contribution to syntactic choice in languages that put stronger emphasis on word order arrangement than on the choice of the overall syntactic frame. To address this, the current study used a sentence production task in which Russian native speakers were asked to verbally describe visually perceived transitive events. Prior to describing the target event, a visual cue directed the participants’ attention to the location of either the agent or the patient of the subsequently presented visual event. In addition, we also manipulated event orientation (agent-left vs. agent-right) as another potential contributor to syntactic choice. The number of patient-initial sentences was the dependent variable compared between conditions. First, the obtained results replicated the effect of visual cueing on word order in Russian: more patient-initial sentences were produced in the patient-cued condition. Second, we registered a novel effect of event orientation: Russian native speakers produced more patient-initial sentences after seeing events developing from right to left as opposed to left-to-right events. Our study provides new evidence about the role of the speaker’s attention and event orientation in syntactic choice in a language with flexible word order.
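
    Since the dependent measure is the proportion of patient-initial descriptions in each cell of the 2 x 2 design (cued referent: agent vs. patient; orientation: agent-left vs. agent-right), a minimal sketch of how those cell proportions could be tabulated is given below. The data and effect sizes are hypothetical, chosen only to mirror the direction of the reported effects.

        # Hypothetical sketch: proportion of patient-initial descriptions per cell
        # of a 2 x 2 design (cued referent x event orientation); data are simulated.
        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(2)
        n = 600
        cued = rng.choice(["agent", "patient"], size=n)
        orientation = rng.choice(["agent-left", "agent-right"], size=n)
        # Assume more patient-initial orders after patient cues and right-to-left events
        p = 0.15 + 0.20 * (cued == "patient") + 0.10 * (orientation == "agent-right")
        patient_initial = rng.binomial(1, p)

        df = pd.DataFrame({"cued": cued, "orientation": orientation,
                           "patient_initial": patient_initial})
        print(df.pivot_table(values="patient_initial",
                             index="cued", columns="orientation"))  # cell proportions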

    Effects of Attention on What Is Known and What Is Not: MEG Evidence for Functionally Discrete Memory Circuits

    Recent results obtained with a neural-network model of the language cortex suggest that the memory circuits developing for words are both distributed and functionally discrete. This model makes testable predictions about brain responses to words and pseudowords under variable availability of attentional resources. In particular, due to their strong internal connections, the action-perception circuits for words that the network spontaneously developed exhibit functionally discrete activation dynamics, which are only marginally affected by attentional variations. At the same time, network responses to unfamiliar items – pseudowords – that have not been previously learned (and, therefore, lack corresponding memory representations) exhibit (and predict) strong attention dependence, explained by the different amounts of attentional resources available and, therefore, different degrees of competition between multiple memory circuits partially activated by items lacking lexical traces. We tested these predictions in a novel magnetoencephalography experiment, presenting subjects with familiar words and matched unfamiliar pseudowords during attention-demanding tasks and under distraction. The magnetic mismatch negativity (MMN) response to words showed relative immunity to attention variations, whereas the MMN to pseudowords exhibited profound variability: when subjects attended to the stimuli, the brain response to pseudowords was larger than that to words (as typically observed in the N400); when attention was withdrawn, the opposite pattern emerged, with the response to pseudowords reduced below the response to words. The main cortical sources of these activations were localized to the superior temporal cortex. These results confirm the model’s predictions and provide evidence in support of the hypothesis that words are represented in the brain as action-perception circuits that are both discrete and distributed.

    Special Theme of the Issue. Neurocognitive Aspects of Language Function and Use [Editorial]

    Hierarchical structure priming from mathematics to two- and three-site relative clause attachment

    A number of recent studies have found evidence for shared structural representations across different cognitive domains such as mathematics, music, and language. For instance, Scheepers et al. (2011) showed that English speakers’ choices of relative clause (RC) attachments in partial sentences like The tourist guide mentioned the bells of the church that … can be influenced by the structure of previously solved prime equations such as 80–(9 + 1) × 5 (making high RC-attachments more likely) versus 80–9 + 1 × 5 (making low RC-attachments more likely). Using the same sentence completion task, Experiment 1 of the present paper fully replicated this cross-domain structural priming effect in Russian, a morphologically rich language. More interestingly, Experiment 2 extended this finding to more complex three-site attachment configurations and showed that, relative to a structurally neutral baseline prime condition, N1-, N2-, and N3-attachments of RCs in Russian were equally susceptible to structural priming from mathematical equations such as 18+(7+(3 + 11)) × 2, 18 + 7+(3 + 11) × 2, and 18 + 7 + 3 + 11 × 2, respectively. The latter suggests that cross-domain structural priming from mathematics to language must rely on detailed, domain-general representations of hierarchical structure.
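
    To make the structural contrast concrete: the bracketing of a prime equation determines which operands the final multiplication attaches to, just as RC attachment height determines which noun the relative clause modifies. Evaluating the expressions cited in the abstract (a simple worked example, not part of the original study) makes the difference visible.

        # Worked example: the same numbers give different results depending on the
        # hierarchical (bracketing) structure, mirroring high vs. low RC attachment.
        print(80 - (9 + 1) * 5)         # 30 -> the x5 attaches "high", to (9 + 1)
        print(80 - 9 + 1 * 5)           # 76 -> the x5 attaches "low", to 1 only

        # Three-site analogues from Experiment 2 (paired with N1-, N2-, N3-attachments
        # in the abstract, respectively):
        print(18 + (7 + (3 + 11)) * 2)  # 60 -> x2 scopes over 7 + (3 + 11)
        print(18 + 7 + (3 + 11) * 2)    # 53 -> x2 scopes over 3 + 11
        print(18 + 7 + 3 + 11 * 2)      # 50 -> x2 scopes over 11 only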

    Automatic Lexical Access in Visual Modality: Eye-Tracking Evidence

    Language processing has been suggested to be partially automatic, with some studies suggesting full automaticity and attention independence of at least the early neural stages of language comprehension, in particular lexical access. Existing neurophysiological evidence has demonstrated early lexically specific brain responses (enhanced activation for real words) to orthographic stimuli presented parafoveally, even under the condition of withdrawn attention. These studies, however, did not control participants’ eye movements, leaving open the possibility that participants foveated the stimuli, leading to overt processing. To address this caveat, we recorded eye movements to words, pseudowords, and non-words presented parafoveally for a short duration while participants performed a dual non-linguistic feature detection task (color combination) foveally, in the focus of their visual attention. Our results revealed very few saccades to the orthographic stimuli or even to their previous locations. However, analysis of post-experimental recall and recognition performance showed above-chance memory performance for the linguistic stimuli. These results suggest that partial lexical access may indeed take place in the presence of an unrelated demanding task and in the absence of overt attention to the linguistic stimuli. As such, our data further inform automatic and largely attention-independent theories of lexical access.

    Real-Time Functional Architecture of Visual Word Recognition

    Despite a century of research into visual word recognition, basic questions remain unresolved about the functional architecture of the process that maps visual inputs from orthographic analysis onto lexical form and meaning, and about the units of analysis in terms of which these processes are conducted. Here we use magnetoencephalography, supported by a masked priming behavioral study, to address these questions using contrasting sets of simple (walk), complex (swimmer), and pseudo-complex (corner) forms. Early analyses of orthographic structure, detectable in bilateral posterior temporal regions within a 150-230 msec time frame, are shown to segment the visual input into linguistic substrings (words and morphemes) that trigger lexical access in left middle temporal locations from 300 msec. These are primarily feedforward processes and are not initially constrained by lexical-level variables. Lexical constraints become significant from 390 msec, in both simple and complex words, with increased processing of pseudowords and pseudo-complex forms. These results, consistent with morpho-orthographic models based on masked priming data, map out the real-time functional architecture of visual word recognition, establishing basic feedforward processing relationships between orthographic form, morphological structure, and lexical meaning. This is the final version of the article; it first appeared from MIT Press via http://dx.doi.org/10.1162/jocn_a_0069

    Judgments of Learning for Words in Vertical Space

    A close relationship between physical space and internal knowledge representations has received ample support in the literature. For example, the location of visually perceived information in vertical space has been shown to affect different numerical judgments. In addition, physical dimensions such as weight or font size have been shown to affect judgments of learning (JOLs, an estimation of the likelihood that an item will be remembered later, or its perceived memorability). In two experiments we tested the hypothesis that differences in positioning words in vertical space may affect their perceived memorability, i.e., their JOLs. In both experiments, the words were presented in lower or upper screen locations. In Experiment 1, JOLs were collected in the centre of the screen following word presentation. In Experiment 2, JOLs were collected at the point of word presentation, in the same location as the word. In both experiments participants then completed a free recall test. JOLs were compared between the vertically displaced presentation locations. In general, Bayesian analyses showed evidence in support of a null effect of vertical location on JOLs. We interpret our results as indicating that the effects of physical dimensions on JOLs are mediated by subjective importance, information that vertical location alone fails to convey.
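
    The abstract reports Bayesian evidence for the null, i.e., a Bayes factor favouring a model without vertical location over one that includes it. The exact analysis is not specified, so the sketch below is only an illustrative approximation on simulated data, using the BIC-based Bayes factor approximation BF01 = exp((BIC_H1 - BIC_H0) / 2) (cf. Wagenmakers, 2007) rather than the study's own method.

        # Illustrative sketch (simulated data): BIC-approximated Bayes factor comparing
        # a JOL model with vs. without vertical screen location as a predictor.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(3)
        n = 400
        location = rng.choice(["upper", "lower"], size=n)
        jol = rng.normal(loc=60, scale=15, size=n)        # no true effect of location

        df = pd.DataFrame({"jol": jol, "location": location})
        h0 = smf.ols("jol ~ 1", data=df).fit()            # null model: intercept only
        h1 = smf.ols("jol ~ C(location)", data=df).fit()  # alternative: location effect

        bf01 = np.exp((h1.bic - h0.bic) / 2)              # BIC approximation to BF01
        print(f"BF01 ~ {bf01:.2f} (values > 1 favour the null)")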