39 research outputs found

    Native-likeness in second language lexical categorization reflects individual language history and linguistic community norms

    Second language learners face a dual challenge in vocabulary learning: first, they must learn new names for the hundreds of common objects that they encounter every day; second, after some time, they discover that these names do not generalize according to the same rules used in their first language. Lexical categories frequently differ between languages (Malt et al., 1999), and successful language learning requires that bilinguals learn not just new words but new patterns for labeling objects. In the present study, Chinese learners of English with varying language histories, residing in two different language settings (Beijing, China and State College, PA, USA), named 67 photographs of common serving dishes (e.g., cups, plates, and bowls) in both Chinese and English. Participants’ response patterns were quantified in terms of their similarity to the responses of functionally monolingual native speakers of Chinese and English, and they showed the cross-language convergence previously observed in simultaneous bilinguals (Ameel et al., 2005). For English, bilinguals’ names for each individual stimulus were also compared to the dominant name generated by the native speakers for that object. Using two statistical models, we disentangle the effects of several highly interactive variables from bilinguals’ language histories and the naming norms of the native-speaker community to predict inter-personal and inter-item variation in L2 (English) native-likeness. We find only a modest effect of age of earliest exposure on L2 category native-likeness but, importantly, we find that classroom instruction in L2 negatively impacts L2 category native-likeness, even after significant immersion experience. We also identify a significant role for both L1 and L2 norms in bilinguals’ L2 picture naming responses.
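    The per-item comparison described in the abstract (a bilingual's name for each object versus the dominant native-speaker name) can be illustrated with a minimal sketch. The data layout, function names, and toy responses below are hypothetical illustrations, not the study's actual materials or statistical models:

```python
from collections import Counter

def dominant_names(native_responses):
    """For each stimulus, return the name produced most often by native speakers.

    native_responses: dict mapping stimulus id -> list of names from native speakers.
    """
    return {stim: Counter(names).most_common(1)[0][0]
            for stim, names in native_responses.items()}

def native_likeness(bilingual_responses, native_responses):
    """Proportion of stimuli for which the bilingual's name matches the
    dominant native-speaker name (a simple per-item agreement score)."""
    dominant = dominant_names(native_responses)
    matches = sum(bilingual_responses[s] == dominant[s] for s in dominant)
    return matches / len(dominant)

# Toy data: three serving-dish stimuli named by native speakers and one learner.
natives = {
    "item1": ["cup", "cup", "mug"],
    "item2": ["bowl", "bowl", "bowl"],
    "item3": ["plate", "dish", "plate"],
}
learner = {"item1": "cup", "item2": "dish", "item3": "plate"}
print(round(native_likeness(learner, natives), 2))  # matches 2 of 3 items
```

    In practice the study's models predict this kind of agreement score from language-history variables; the sketch only shows how the raw per-item match could be computed.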

    Time-resolved multivariate pattern analysis of infant EEG data: A practical tutorial

    Time-resolved multivariate pattern analysis (MVPA), a popular technique for analyzing magneto- and electro-encephalography (M/EEG) neuroimaging data, quantifies the extent and time course with which neural representations support the discrimination of relevant stimulus dimensions. As EEG is widely used for infant neuroimaging, time-resolved MVPA of infant EEG data is a particularly promising tool for infant cognitive neuroscience, and MVPA has recently been applied to common infant imaging methods such as EEG and fNIRS. In this tutorial, we provide and describe code to implement time-resolved, within-subject MVPA with infant EEG data. An example implementation of time-resolved MVPA based on linear SVM classification is described, with accompanying code in MATLAB and Python. Results from a test dataset indicated that in both infants and adults this method reliably produced above-chance accuracy for classifying stimulus images. Extensions of the classification analysis are presented, including both geometric- and accuracy-based representational similarity analysis, implemented in Python. Common implementation choices are presented and discussed. Because infants contribute less artifact-free EEG data per participant than children and adults, we also explore and discuss the impact of varying participant-level inclusion thresholds on the resulting MVPA findings.
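    The core logic of time-resolved decoding (train and test a classifier separately at each time point, then inspect accuracy over time) can be sketched as follows. This is a simplified illustration, not the tutorial's code: it uses a nearest-centroid classifier as a stand-in for the linear SVM, and the simulated "EEG" data are entirely hypothetical:

```python
import random

def nearest_centroid_accuracy(train_X, train_y, test_X, test_y):
    # Compute per-class mean channel patterns, then classify each test trial
    # by Euclidean distance to the nearest class centroid (a simple stand-in
    # for the linear SVM used in the tutorial).
    classes = sorted(set(train_y))
    centroids = {}
    for c in classes:
        rows = [x for x, y in zip(train_X, train_y) if y == c]
        centroids[c] = [sum(col) / len(rows) for col in zip(*rows)]
    correct = 0
    for x, y in zip(test_X, test_y):
        pred = min(classes,
                   key=lambda c: sum((a - b) ** 2 for a, b in zip(x, centroids[c])))
        correct += pred == y
    return correct / len(test_y)

def time_resolved_decoding(epochs, labels, n_folds=4):
    """epochs: trials x time points x channels (nested lists).
    Returns one cross-validated decoding accuracy per time point."""
    n_trials, n_times = len(epochs), len(epochs[0])
    accs = []
    for t in range(n_times):
        X = [trial[t] for trial in epochs]  # channel pattern at this time point
        fold_accs = []
        for f in range(n_folds):
            test_idx = set(range(f, n_trials, n_folds))
            tr = [(x, y) for i, (x, y) in enumerate(zip(X, labels)) if i not in test_idx]
            te = [(x, y) for i, (x, y) in enumerate(zip(X, labels)) if i in test_idx]
            fold_accs.append(nearest_centroid_accuracy(*zip(*tr), *zip(*te)))
        accs.append(sum(fold_accs) / n_folds)
    return accs

# Simulate 20 trials, 2 conditions, 6 time points, 3 channels; the conditions
# differ only from time point 3 onward.
random.seed(0)
epochs, labels = [], []
for i in range(20):
    cond = i % 2
    trial = []
    for t in range(6):
        signal = cond * 2.0 if t >= 3 else 0.0  # condition info emerges at t = 3
        trial.append([signal + random.gauss(0, 0.5) for _ in range(3)])
    epochs.append(trial)
    labels.append(cond)

acc = time_resolved_decoding(epochs, labels)
# Accuracy is expected near chance (0.5) before t = 3 and well above it after.
```

    Real analyses would add preprocessing, proper epoch extraction, and statistics across participants; the sketch only shows the per-time-point train/test loop that gives the method its name.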

    Data collection instruments


    Task Data


    MultiChannel Pattern Analysis: Correlation-Based Decoding with fNIRS

    This repository implements multichannel pattern analysis (MCPA) with fNIRS, à la Emberson, Zinszer, Raizada and Aslin. If you want to be alerted to updates to the code, please email Benjamin Zinszer [email protected]; you will only receive emails when significant changes are made to the code. Alternatively, revisit this GitHub repository, because changes will be pushed to this location.

    Two datasets were used in the paper. Dataset #1 has been published in Emberson, Richards & Aslin (2015, control group) and is available in full as part of this repository. Dataset #2 has yet to be published, so the original data are not yet available in this repository. The output of this code (i.e., decoding accuracy) is available for both datasets, and the code used to implement the mixed-effects models reported in Emberson, Zinszer, Raizada and Aslin is available in a complementary repository: https://github.com/laurenemberson/EmbersonZinszerMCPA_analysesFromPaper. If you have questions, please email either Lauren Emberson [email protected] or Benjamin Zinszer [email protected].
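    The correlation-based decoding at the heart of MCPA can be sketched in a few lines: compare held-out channel patterns to reference patterns by Pearson correlation and prefer the condition assignment with the higher matching correlations. This is a simplified illustration of the general idea, not the repository's actual implementation; the condition names and toy patterns are hypothetical:

```python
def pearson(a, b):
    """Pearson correlation between two equal-length channel patterns."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

def correlation_decode(test_patterns, reference_patterns):
    """Correlation-based decoding for two conditions.

    Each argument maps condition name -> list of per-channel values.
    Returns True when the matching assignment (test A ~ reference A,
    test B ~ reference B) correlates better than the swapped assignment.
    """
    a, b = list(test_patterns)
    same = (pearson(test_patterns[a], reference_patterns[a])
            + pearson(test_patterns[b], reference_patterns[b]))
    swapped = (pearson(test_patterns[a], reference_patterns[b])
               + pearson(test_patterns[b], reference_patterns[a]))
    return same > swapped

# Toy patterns over 4 channels: the held-out "subject" resembles the
# reference patterns from the rest of the group, so decoding succeeds.
reference = {"audio": [1.0, 0.2, -0.5, 0.8], "visual": [-0.6, 0.9, 0.4, -0.2]}
held_out = {"audio": [0.9, 0.1, -0.4, 0.7], "visual": [-0.5, 1.0, 0.5, -0.1]}
print(correlation_decode(held_out, reference))  # True: matching beats swapped
```

    The published analyses additionally aggregate such decisions across cross-validation folds and fit mixed-effects models to the resulting accuracies; this sketch covers only the single-comparison step.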