Native-likeness in second language lexical categorization reflects individual language history and linguistic community norms
Second language learners face a dual challenge in vocabulary learning: First, they must learn new names for the hundreds of common objects that they encounter every day. Second, after some time, they discover that these names do not generalize according to the same rules used in their first language. Lexical categories frequently differ between languages (Malt et al., 1999), and successful language learning requires that bilinguals learn not just new words but new patterns for labeling objects. In the present study, Chinese learners of English with varying language histories and resident in two different language settings (Beijing, China and State College, PA, USA) named 67 photographs of common serving dishes (e.g., cups, plates, and bowls) in both Chinese and English. Participants’ response patterns were quantified in terms of similarity to the responses of functionally monolingual native speakers of Chinese and English and showed the cross-language convergence previously observed in simultaneous bilinguals (Ameel et al., 2005). For English, bilinguals’ names for each individual stimulus were also compared to the dominant name generated by the native speakers for the object. Using two statistical models, we disentangle the effects of several highly interactive variables from bilinguals' language histories and the naming norms of the native speaker community to predict inter-personal and inter-item variation in L2 (English) native-likeness. We find only a modest age of earliest exposure effect on L2 category native-likeness, but importantly, we find that classroom instruction in L2 negatively impacts L2 category native-likeness, even after significant immersion experience. We also identify a significant role of both L1 and L2 norms in bilinguals’ L2 picture naming responses
Chinese and English speakers’ neural representations of word meaning offer a different picture of cross-language semantics than corpus and behavioral measures
Speakers of Chinese and English share decodable neural semantic representations, which can be elicited by words in each language. We explore various common models of semantic representation and their correspondences to each other and to these neural representations. Despite very strong cross-language similarity in the neural data, we find that two versions of a corpus-based semantic model do not show the same strong correlation between languages. Behavior-based models better approximate cross-language similarity, but these models also fail to explain the similarities observed in the neural data. Although none of the examined models explain cross-language neural similarity, we explore how they might provide additional information over and above cross-language neural similarity. We find that native speakers’ ratings of noun-noun similarity and one of the corpus models do further correlate with neural data after accounting for cross-language similarities.
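One way to make the kind of model-to-neural comparison described above concrete is a second-order similarity analysis: compute word-pair similarities within each representation and correlate the resulting similarity structures. The Python sketch below is a generic illustration with simulated data; the variable names, dimensions, and inputs are assumptions, not the study's actual models or measurements.

    # Second-order (similarity-structure) comparison of two word representations.
    # All data here are simulated placeholders for illustration only.
    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    def similarity_correlation(rep_a, rep_b):
        """Spearman correlation between the pairwise-similarity structures of two
        word-by-feature representations (rows must be the same words, same order)."""
        sims_a = 1 - pdist(rep_a, metric="cosine")   # pairwise cosine similarities
        sims_b = 1 - pdist(rep_b, metric="cosine")
        rho, _ = spearmanr(sims_a, sims_b)
        return rho

    # Example: 50 words in a 300-d corpus model and a 100-feature neural space
    rng = np.random.default_rng(0)
    corpus_model = rng.normal(size=(50, 300))
    neural_data = rng.normal(size=(50, 100))
    print(similarity_correlation(corpus_model, neural_data))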
Time-resolved multivariate pattern analysis of infant EEG data: A practical tutorial
Time-resolved multivariate pattern analysis (MVPA), a popular technique for analyzing magneto- and electro-encephalography (M/EEG) neuroimaging data, quantifies the extent and time-course over which neural representations support the discrimination of relevant stimulus dimensions. As EEG is widely used for infant neuroimaging, time-resolved MVPA of infant EEG data is a particularly promising tool for infant cognitive neuroscience. MVPA has recently been applied to common infant imaging methods such as EEG and fNIRS. In this tutorial, we provide and describe code to implement time-resolved, within-subject MVPA with infant EEG data. An example implementation of time-resolved MVPA based on linear SVM classification is described, with accompanying code in Matlab and Python. Results from a test dataset indicated that in both infants and adults this method reliably produced above-chance accuracy for classifying stimulus images. Extensions of the classification analysis are presented, including both geometric- and accuracy-based representational similarity analysis, implemented in Python. Common implementation choices are presented and discussed. As the amount of artifact-free EEG data contributed by each participant is lower in studies of infants than in studies of children and adults, we also explore and discuss the impact of varying participant-level inclusion thresholds on the resulting MVPA findings in these datasets.
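As a rough illustration of the time-resolved decoding approach described in this tutorial abstract, the Python sketch below trains a linear SVM on the multi-channel pattern at each timepoint and scores it with cross-validation. It is not the tutorial's accompanying code; the data shapes, variable names, and simulated epochs are assumptions for demonstration only.

    # Time-resolved, within-subject decoding with a linear SVM (illustrative sketch).
    # Assumes epoched EEG data of shape (n_trials, n_channels, n_timepoints) and
    # integer condition labels; these are simulated, not real infant data.
    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    def time_resolved_decoding(epochs, labels, n_folds=5):
        """Return mean cross-validated accuracy at each timepoint."""
        n_trials, n_channels, n_timepoints = epochs.shape
        accuracy = np.zeros(n_timepoints)
        clf = make_pipeline(StandardScaler(), LinearSVC())
        for t in range(n_timepoints):
            X_t = epochs[:, :, t]  # channel pattern at this timepoint
            accuracy[t] = cross_val_score(clf, X_t, labels, cv=n_folds).mean()
        return accuracy

    # Example with simulated data: 80 trials, 32 channels, 100 timepoints, 2 classes
    rng = np.random.default_rng(0)
    epochs = rng.normal(size=(80, 32, 100))
    labels = np.repeat([0, 1], 40)
    print(time_resolved_decoding(epochs, labels).mean())  # near chance (0.5) here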
Lexical Categorization Variables for English and Mandarin Quantify Ambiguity in Object Naming
This study focuses on naming and rating ambiguously categorized objects to more accurately represent the real-life naming scenarios that language users encounter. We examined 407 images of common object categories from a previous study collected in 2013 with monolingual undergraduate students in the United States and China. Of those images, we selected 150 that both (1) balanced the mean and range of naming variables between languages and (2) had low between-language correlations on the same variables. We elicited object names and typicality ratings for these 150 images from a more age-, education-, and geographically-diverse sample of English monolinguals across the US. Naming measures were significantly correlated between the current and previous English samples, suggesting high within-language consistency of these object categories, especially for typicality ratings when conditioned on the object name, which indicates that typicality is not a strictly conceptual measure. Cross-language correlations were low, illustrating the unique categorization patterns between languages.
Deep in the Trenches: First language performance predicts primacy in statistical learning of two structures
While statistical learning is a well-established language learning mechanism, its usefulness in multiple-language contexts is less well understood. A phenomenon known as entrenchment has been proposed, in which learning one language prevents the acquisition of a second language in the same speech stream. The observed L1 advantage, or primacy effect, has previously been mitigated with various cues to the presence of a second structure (L2). The present study manipulates the number of transitions between L1 and L2 to influence entrenchment. One condition was designed to replicate previous findings of entrenchment and the other was designed to overcome entrenchment. We find that adding more transitions between languages did not increase L2 learning, and that second language learning depends more on the first learned language than on manipulations of the transitions between languages.
Language Models Show Within- and Cross-language Similarities in Concrete Noun Meaning, but not Differences Between L1 and L2 English Speakers
Monolingual and bilingual speakers of the same languages derive unique meanings for words, partly based on between-language differences in the meaning of dictionary-translated words. Do language models also capture this variability between speakers? We compared several models of lexical semantic representation and their correspondences to a word-word meaning similarity rating task completed by both L1 and L2 English speakers. We found that most language models do not correlate differently with L1 vs. L2 English speakers. Further, these models exhibit more cross-language similarity between Mandarin and English representations than is supported by psycholinguistic research. Only GloVe and OpenAI’s Davinci models correlated more strongly with L1 speakers than with L2 speakers, but individual participants’ similarity to these models did not relate to the language history variables that might otherwise predict bilingual lexical semantic native-likeness. We conclude that language models are not yet reliable references for tracking lexical semantic learning and discuss future directions for computational linguistics and psycholinguistics.
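The comparison described above can be approximated by correlating a model's word-pair similarities with each speaker group's ratings and contrasting the two correlations. The Python sketch below uses simulated ratings and placeholder names; it does not reproduce the study's models or data.

    # Correlate model word-pair similarities with L1 and L2 raters (simulated data).
    import numpy as np
    from scipy.stats import spearmanr

    def model_vs_raters(model_sims, l1_ratings, l2_ratings):
        """Return the model's Spearman correlation with each rater group."""
        rho_l1, _ = spearmanr(model_sims, l1_ratings)
        rho_l2, _ = spearmanr(model_sims, l2_ratings)
        return rho_l1, rho_l2

    # Example: 200 word pairs; L1 ratings are simulated to track the model more closely
    rng = np.random.default_rng(0)
    model_sims = rng.uniform(0, 1, size=200)
    l1_ratings = model_sims + rng.normal(scale=0.3, size=200)
    l2_ratings = model_sims + rng.normal(scale=0.6, size=200)
    print(model_vs_raters(model_sims, l1_ratings, l2_ratings))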
Anticipation Effect after Implicit Distributional Learning
Distributional learning research has established that humans can track the frequencies of sequentially presented stimuli in order to infer the probabilities of upcoming events (e.g., Hasher & Zacks, 1984). Here, we set out to explore anticipation of a stimulus after implicit distributional learning. We hypothesize that as people learn the category frequency information implicitly, response times will scale according to the relative frequency of the stimulus category. Twelve adult participants viewed photographs of faces, tools, and buildings while performing a simple classification task. We found that response times significantly decreased with greater frequencies in the distribution of stimulus categories. This result suggested that distributional information about the internal representations of the stimuli could be learned, and indicated the possibility that participants anticipated the stimuli proportionally to the probability of the category appearing and thereby reduced response times for the more frequent categories.
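A minimal way to check the hypothesized relationship is to regress response times on the relative frequency of the stimulus category and look for a negative slope. The Python sketch below does this with simulated data; the frequencies, trial counts, and response-time parameters are assumptions, not values from the study.

    # Regress response times on category frequency (simulated anticipation effect).
    import numpy as np

    rng = np.random.default_rng(0)
    category_freq = np.array([0.6, 0.3, 0.1])           # e.g., faces, tools, buildings
    trials_per_cat = (category_freq * 600).astype(int)
    rts, freqs = [], []
    for f, n in zip(category_freq, trials_per_cat):
        # Simulate faster responses for more frequent categories
        rts.append(rng.normal(loc=700 - 200 * f, scale=60, size=n))
        freqs.append(np.full(n, f))
    rts, freqs = np.concatenate(rts), np.concatenate(freqs)

    slope, intercept = np.polyfit(freqs, rts, deg=1)
    print(f"RT change per unit of relative frequency: {slope:.1f} ms")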
A computational model of bilingual semantic convergence
Patterns of object naming often differ between languages, but bilingual speakers develop convergent naming patterns in their two languages that are distinct from those of monolingual speakers of each language. This convergence appears to reflect dynamic interactions between lexical representations for the two languages. In this study, we present a self-organizing neural network model to simulate semantic convergence in the bilingual lexicon and investigate mechanisms underlying semantic convergence. Our results demonstrate that connections between two languages can be established through the simultaneous activations of related words in both languages, and these connections between two languages pull the two lexicons toward each other. These results suggest that connections between words in the bilingual lexicon play a major role in bilinguals’ semantic convergence. The model provides a foundation for exploring how various input variables will affect bilingual patterns of output.
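The Python toy sketch below illustrates only the general mechanism described above, not the authors' self-organizing network: when translation equivalents are activated together, a small Hebbian-style update pulls their representations toward each other, so the two lexicons converge over training. All sizes, rates, and representations are arbitrary assumptions.

    # Toy demonstration of cross-language convergence via co-activation updates.
    # Not the authors' model; purely illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    n_words, dim = 30, 10
    lex_l1 = rng.normal(size=(n_words, dim))  # word representations in language 1
    lex_l2 = rng.normal(size=(n_words, dim))  # translation equivalents in language 2
    eta = 0.05                                # learning rate for the convergence step

    def mean_distance(a, b):
        return np.linalg.norm(a - b, axis=1).mean()

    print("before:", mean_distance(lex_l1, lex_l2))
    for _ in range(2000):
        # Simultaneous activation of a translation pair nudges each representation
        # toward the other, so the two lexicons are pulled together.
        i = rng.integers(n_words)
        delta = lex_l2[i] - lex_l1[i]
        lex_l1[i] += eta * delta
        lex_l2[i] -= eta * delta
    print("after:", mean_distance(lex_l1, lex_l2))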
Decoding the infant mind: Multivariate pattern analysis (MVPA) using fNIRS.
The MRI environment restricts the types of populations and tasks that can be studied by cognitive neuroscientists (e.g., young infants, face-to-face communication). fNIRS is a neuroimaging modality that records the same physiological signal as fMRI but without the constraints of MRI, and with better spatial localization than EEG. However, research in the fNIRS community largely lacks the analytic sophistication of analogous fMRI work, restricting the application of this imaging technology. The current paper presents a method of multivariate pattern analysis for fNIRS that allows decoding of the infant mind (infants being a key fNIRS population). Specifically, multivariate pattern analysis (MVPA) employs a correlation-based decoding method in which a group model is constructed for all infants except one; both average patterns (i.e., infant-level) and single-trial patterns (i.e., trial-level) of activation are decoded. Between-subjects decoding is a particularly difficult task, because each infant has their own somewhat idiosyncratic patterns of neural activation. The fact that our method succeeds at across-subject decoding demonstrates the presence of group-level multi-channel regularities across infants. The code for implementing these analyses has been made readily available online to facilitate the quick adoption of this method and to advance the methodological tools available to the fNIRS researcher.
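The Python sketch below illustrates, in simplified form, the kind of correlation-based, leave-one-participant-out decoding described above: a group model is averaged over all infants except one, and the held-out infant's condition patterns are labeled according to which group pattern they correlate with more strongly. The data structure, names, and simulated patterns are assumptions, not the authors' released code.

    # Correlation-based, leave-one-participant-out decoding (illustrative sketch).
    # `patterns` maps participant -> {condition: mean multi-channel activation vector};
    # the simulated data below stand in for real fNIRS channel patterns.
    import numpy as np

    def decode_leave_one_out(patterns, cond_a, cond_b):
        """Return decoding accuracy across held-out participants."""
        subjects = list(patterns.keys())
        correct = 0
        for held_out in subjects:
            train = [s for s in subjects if s != held_out]
            # Group model: average each condition's pattern over training participants
            group_a = np.mean([patterns[s][cond_a] for s in train], axis=0)
            group_b = np.mean([patterns[s][cond_b] for s in train], axis=0)
            # Correlate the held-out participant's patterns with each group pattern
            test_a, test_b = patterns[held_out][cond_a], patterns[held_out][cond_b]
            same = np.corrcoef(test_a, group_a)[0, 1] + np.corrcoef(test_b, group_b)[0, 1]
            swapped = np.corrcoef(test_a, group_b)[0, 1] + np.corrcoef(test_b, group_a)[0, 1]
            correct += same > swapped
        return correct / len(subjects)

    # Example: 10 simulated infants with 20-channel patterns for two conditions
    rng = np.random.default_rng(0)
    patterns = {s: {"cond_a": rng.normal(size=20), "cond_b": rng.normal(size=20)}
                for s in range(10)}
    print(decode_leave_one_out(patterns, "cond_a", "cond_b"))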
Impact of depression on speech perception in noise.
Effective speech communication is critical to everyday quality of life and social well-being. In addition to the well-studied deficits in cognitive and motor function, depression also impacts communication. Here, we examined speech perception in individuals who were clinically diagnosed with major depressive disorder (MDD) relative to neurotypical controls. Forty-two normal-hearing (NH) individuals with MDD and 41 NH neurotypical controls performed sentence recognition tasks across three conditions with maskers varying in the extent of linguistic content (high, low, and none): 1-talker masker (1T), reversed 1-talker masker (1T_tr), and speech-shaped noise (SSN). Individuals with MDD, relative to neurotypical controls, demonstrated lower recognition accuracy in the 1T condition but not in the 1T_tr or SSN condition. To examine the nature of the listening condition-specific speech perception deficit, we analyzed speech recognition errors. Errors as a result of interference from masker sentences were higher for individuals with MDD (vs. neurotypical controls) in the 1T condition. This depression-related listening condition-specific pattern in recognition errors was not observed for other error types. We posit that this depression-related listening condition-specific deficit in speech perception may be related to heightened distractibility due to linguistic interference from background talkers