
    The Timecourse of Generalization in Phonotactic Learning

    There is considerable evidence that speakers show sensitivity to the phonotactic patterns of their language. These patterns can involve specific sound sequences (e.g. the consonant combination b-b) or more general classes of sequences (e.g. two identical consonants). In some models of phonotactic learning, generalizations can only be formed once some of their specific instantiations have been acquired (the specific-before-general assumption). To test this assumption, we designed an artificial language with both general and specific phonotactic patterns, and gave participants different amounts of exposure to the language. Contrary to the predictions of specific-before-general models, the general pattern required less exposure to be learned than did its specific instantiations. These results are most straightforwardly predicted by learning models that learn general and specific patterns simultaneously. We discuss the importance of modeling learners' sensitivity to the amount of evidence supporting each phonotactic generalization, and show how specific-before-general models can be adapted to accommodate the results.

    Reverse production effect: Children recognize novel words better when they are heard rather than produced

    This is the peer-reviewed version of the following article: Tania S. Zamuner, Stephanie Strahm, Elizabeth Morin-Lessard, and Michael P. A. Page, 'Reverse production effect: children recognize novel words better when they are heard rather than produced', Developmental Science, published in final form at DOI 10.1111/desc.12636. This research investigates the effect of production on 4.5- to 6-year-old children’s recognition of newly learned words. In Experiment 1, children were taught four novel words in a produced or heard training condition during a brief training phase. In Experiment 2, children were taught eight novel words, and training condition was presented in a blocked design. Immediately after training, children were tested on their recognition of the trained novel words using a preferential looking paradigm. In both experiments, children recognized novel words that were produced and heard during training, but demonstrated better recognition for items that were heard. These findings are opposite to previous results reported in the literature with adults and children. Our results show that the benefits of speech production for word learning depend on factors such as task complexity and the developmental stage of the learner.

    A visual M170 effect of morphological complexity

    Recent masked priming studies on visual word recognition have suggested that morphological decomposition is performed prelexically, purely on the basis of the orthographic properties of the word form. Given this, one might expect morphological complexity to modulate early visual evoked activity in electromagnetic measures. We investigated the neural bases of morphological decomposition with magnetoencephalography (MEG). In two experiments, we manipulated morphological complexity in single word lexical decision without priming, once using suffixed words and once using prefixed words. We found that morphologically complex forms display larger amplitudes in the M170, the same component that has been implicated for letterstring and face effects in previous MEG studies. Although letterstring effects have been reported to be left-lateral, we found a right-lateral effect of morphological complexity, suggesting that both hemispheres may be involved in early analysis of word forms.

    11-month-olds' knowledge of how familiar words sound

    During the first year of life, infants' perception of speech becomes tuned to the phonology of the native language, as revealed in laboratory discrimination and categorization tasks using syllable stimuli. However, the implications of these results for the development of the early vocabulary remain controversial, with some results suggesting that infants retain only vague, sketchy phonological representations of words. Five experiments using a preferential listening procedure tested Dutch 11-month-olds' responses to word, nonword and mispronounced-word stimuli. Infants listened longer to words than nonwords, but did not exhibit this response when words were mispronounced at onset or at offset. In addition, infants preferred correct pronunciations to onset mispronunciations. The results suggest that infants' encoding of familiar words includes substantial phonological detail.

    The Influence of the Phonological Neighborhood Clustering Coefficient on Spoken Word Recognition

    Clustering coefficient, a measure derived from the new science of networks, refers to the proportion of phonological neighbors of a target word that are also neighbors of each other. Consider the words bat, hat, and can, all of which are neighbors of the word cat; the words bat and hat are also neighbors of each other. In a perceptual identification task, words with a low clustering coefficient (i.e., few neighbors are neighbors of each other) were more accurately identified than words with a high clustering coefficient (i.e., many neighbors are neighbors of each other). In a lexical decision task, words with a low clustering coefficient were responded to more quickly than words with a high clustering coefficient. These findings suggest that the structure of the lexicon, that is, the similarity relationships among neighbors of the target word measured by the clustering coefficient, influences lexical access in spoken word recognition. Simulations of the TRACE and Shortlist models of spoken word recognition failed to account for the present findings. A framework for a new model of spoken word recognition is proposed.
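
    The following is a minimal illustrative sketch, not code from the article: it computes a clustering coefficient over a toy lexicon, assuming words are given as phoneme strings and that two words count as neighbors if they differ by a single phoneme substitution, addition, or deletion. It reproduces the bat/hat/can example from the abstract.

```python
# Illustrative sketch under the assumptions stated above; not the authors' implementation.
from itertools import combinations

def is_neighbor(w1: str, w2: str) -> bool:
    """True if w1 and w2 differ by one substitution, addition, or deletion."""
    if w1 == w2:
        return False
    if len(w1) == len(w2):
        return sum(a != b for a, b in zip(w1, w2)) == 1
    if abs(len(w1) - len(w2)) == 1:
        shorter, longer = sorted((w1, w2), key=len)
        return any(longer[:i] + longer[i + 1:] == shorter for i in range(len(longer)))
    return False

def clustering_coefficient(target: str, lexicon: list[str]) -> float:
    """Proportion of the target's neighbors that are also neighbors of each other."""
    neighbors = [w for w in lexicon if is_neighbor(target, w)]
    pairs = list(combinations(neighbors, 2))
    if not pairs:
        return 0.0
    return sum(is_neighbor(a, b) for a, b in pairs) / len(pairs)

# bat, hat, and can are all neighbors of cat; only bat and hat are neighbors
# of each other, so the clustering coefficient of cat here is 1/3.
print(clustering_coefficient("cat", ["bat", "hat", "can"]))
```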

    The differential roles of lexical and sublexical processing during spoken-word recognition in the clear and in noise

    Successful spoken-word recognition relies on an interplay between lexical and sublexical processing. Previous research demonstrated that listeners readily shift between more lexically-biased and more sublexically-biased modes of processing in response to the situational context in which language comprehension takes place. Recognizing words in the presence of background noise reduces the perceptual evidence for the speech signal and – compared to the clear – results in greater uncertainty. It has been proposed that, when dealing with greater uncertainty, listeners rely more strongly on sublexical processing. The present study tested this proposal using behavioral and electroencephalography (EEG) measures. We reasoned that such an adjustment would be reflected in changes in the effects of variables predicting recognition performance with loci at lexical and sublexical levels, respectively. We presented native speakers of Dutch with words featuring substantial variability in (1) word frequency (locus at lexical level), (2) phonological neighborhood density (loci at lexical and sublexical levels) and (3) phonotactic probability (locus at sublexical level). Each participant heard each word in noise (presented at one of three signal-to-noise ratios) and in the clear and performed a two-stage lexical decision and transcription task while EEG was recorded. Using linear mixed-effects analyses, we observed behavioral evidence that listeners relied more strongly on sublexical processing when speech quality decreased. Mixed-effects modelling of the EEG signal in the clear condition showed that sublexical effects were reflected in early modulations of ERP components (e.g., within the first 300 ms post word onset). In noise, EEG effects occurred later and involved multiple regions activated in parallel. Taken together, we found evidence – especially in the behavioral data – supporting previous accounts that the presence of background noise induces a stronger reliance on sublexical processing.
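
    As a purely hypothetical illustration of the kind of linear mixed-effects analysis named above (not the authors' code, data, or model specification), the sketch below fits a model of response times with frequency, neighborhood density, and phonotactic probability as fixed effects and random intercepts for participants; the column names, simulated values, and simplified random-effects structure are all assumptions made for the example.

```python
# Hypothetical sketch of a linear mixed-effects analysis; data and predictor
# names are simulated, and the random-effects structure is simplified to
# participant intercepts only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for participant in range(20):
    p_offset = rng.normal(0, 50)              # per-participant random intercept
    for _ in range(60):
        freq = rng.normal(0, 1)               # standardized word frequency
        dens = rng.normal(0, 1)               # phonological neighborhood density
        prob = rng.normal(0, 1)               # phonotactic probability
        rt = 800 - 40 * freq + 25 * dens - 15 * prob + p_offset + rng.normal(0, 80)
        rows.append(dict(participant=participant, frequency=freq,
                         density=dens, probability=prob, rt=rt))
df = pd.DataFrame(rows)

# Fixed effects for the three predictors, random intercepts grouped by participant.
model = smf.mixedlm("rt ~ frequency + density + probability",
                    data=df, groups=df["participant"])
print(model.fit().summary())
```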

    Learning consequences of derived-environment effects

    In constraint-based phonological models, it is hypothesized that learning phonotactics first should facilitate the learning of phonological alternations. In this paper, we investigate whether alternation learning is impeded if static phonotactic generalizations and dynamic generalizations about alternations mismatch, as in derived-environment patterns. English speakers were trained on one of two artificial languages, one in which static and dynamic generalizations matched (Across-the-board) and one in which they did not (Derived-environment). In both languages, there was an alternation that palatalized [ti] and [di] to [ʧi] and [ʤi], respectively, across a morpheme boundary. In the Across-the-board language, the constraint motivating this (*Ti) was true across the board, whereas words with such sequences within stems were attested in the Derived-environment language. Results indicate that alternation learning in both languages was comparable. Interestingly, learners in the Across-the-board language failed to infer the *Ti constraint despite never hearing words with such sequences in training. Overall, our results suggest that alternation learning is not hindered by a static phonotactic mismatch in this type of experimental paradigm and that learners do not readily extend a generalization about legal heteromorphemic sequences to analogous sequences within a morpheme.

    Phonological deficits in dyslexia impede lexical processing of spoken words: Linking behavioural and MEG data

    Acknowledgements: We thank Nicola Molinaro for sharing the data and input on the conceptualisation of the study, Olaf Hauk for assistance with the ERRC methods, Efthymia Kapnoula for help with the auditory stimuli, Mirjana Bozic and Brechtje Post for feedback on the manuscript, and Manuel Carreiras, María Paz Suárez Coalla and Fernando Cuetos for the recruitment of participants.