210 research outputs found
Lexicality and frequency in specific language impairment: accuracy and error data from two nonword repetition tests
Purpose: Deficits in phonological working memory and deficits in phonological processing have both been considered potential explanatory factors in Specific Language Impairment (SLI). Manipulating the lexicality and phonotactic frequency of nonwords allows contrasting predictions to be derived from these hypotheses. Method: Eighteen typically developing (TD) children and 18 children with SLI completed an assessment battery that included tests of language ability, non-verbal intelligence, and two nonword repetition tests that varied in lexicality and frequency. Results: Repetition accuracy showed that children with SLI were unimpaired for short, simple, high-lexicality nonwords, whereas clear impairments emerged for all low-lexicality nonwords. Among low-lexicality nonwords, repetition accuracy was greater for those constructed from high- rather than low-frequency phoneme sequences. Children with SLI made the same proportion of errors that substituted a nonsense syllable for a lexical item as TD children, and this proportion was stable across nonword length. Conclusions: The data support a phonological processing deficit in children with SLI, in which long-term lexical and sub-lexical phonological knowledge mediates the interpretation of nonwords. However, the data also suggest that while phonological processing may provide a key explanation of SLI, a full account is likely to be multi-faceted.
Methods for Minimizing the Confounding Effects of Word Length in the Analysis of Phonotactic Probability and Neighborhood Density
This is the author's accepted manuscript. The original is available at http://jslhr.pubs.asha.org/article.aspx?articleid=1781521&resultClick=3
Recent research suggests that phonotactic probability (the likelihood of occurrence of a sound sequence) and neighborhood density (the number of words phonologically similar to a given word) influence spoken language processing and acquisition across the lifespan in both normal and clinical populations. The majority of research in this area has focused on controlled laboratory studies rather than naturalistic data such as spontaneous speech samples or elicited probes. One difficulty in applying current measures of phonotactic probability and neighborhood density to more naturalistic samples is the significant correlation between these variables and word length. This study examines several alternative transformations of phonotactic probability and neighborhood density as a means of reducing or eliminating this correlation with word length. Computational analyses of the words in a large database and reanalysis of archival data supported the use of z scores for the analysis of phonotactic probability as a continuous variable and the use of median transformation scores for the analysis of phonotactic probability as a dichotomous variable. Neighborhood density results were less clear; analysis of neighborhood density as a continuous variable warrants further investigation to differentiate the utility of z scores from that of median transformation scores. Furthermore, balanced dichotomous coding of neighborhood density was difficult to achieve, suggesting that analysis of neighborhood density as a dichotomous variable should be approached with caution. Recommendations for future application and analyses are discussed.
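The z-score recommendation above can be sketched as follows: compute each word's phonotactic probability z score within its own word-length group, so the resulting scores are centered and scaled per length and no longer track length. The lexicon and probability values below are hypothetical, and grouping by character count is a simplification of grouping by phoneme length.

```python
# Sketch of within-length z scoring to decorrelate phonotactic probability
# from word length. Hypothetical data; not the study's stimuli or corpus.
from statistics import mean, stdev

def length_zscores(pairs):
    """Map each (word, probability) pair to a z score computed within the
    group of words sharing that word's length."""
    groups = {}
    for word, prob in pairs:
        groups.setdefault(len(word), []).append(prob)
    out = {}
    for word, prob in pairs:
        ps = groups[len(word)]
        if len(ps) < 2:
            continue  # z score is undefined for a singleton length group
        mu, sd = mean(ps), stdev(ps)
        out[word] = (prob - mu) / sd
    return out

# Hypothetical probabilities: longer words tend to have lower raw values,
# but within each length group the z scores are comparable.
zs = length_zscores([("cat", 0.02), ("dog", 0.04), ("pig", 0.06),
                     ("horse", 0.001), ("zebra", 0.003), ("tiger", 0.005)])
```

Dichotomizing on the within-group median (the study's "median transformation") would instead label each word high or low relative to same-length words.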
Primitive computations in speech processing
Previous research suggests that artificial-language learners exposed to quasi-continuous speech can learn that the first and last syllables of words must belong to distinct classes (e.g., Endress & Bonatti, 2007; Peña, Bonatti, Nespor, & Mehler, 2002). The mechanisms behind these generalizations, however, are debated. Here we show that participants learn such generalizations only when the crucial syllables are in edge positions (i.e., the first and the last), but not when they are in medial positions (i.e., the second and the fourth in pentasyllabic items). In contrast to the generalizations, participants readily perform statistical analyses in word middles as well. By analogy to sequential memory, we suggest that participants extract the generalizations using a simple but specific mechanism that encodes the positions of syllables occurring at edges. Simultaneously, they use another mechanism to track the syllable distribution in the speech streams. In contrast to previous accounts, this model explains why the generalizations are faster than the statistical computations, why they require additional cues and break down under different conditions, and why they can be performed at all. We also show that similar edge-based mechanisms may explain many results in artificial-grammar learning as well as various linguistic observations.
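The two mechanisms contrasted above can be sketched computationally: a positional code that records only which syllables occur at word edges, and a statistical tracker that pools syllable-to-syllable transitional probabilities over all positions. The toy syllable "words" below are hypothetical stimuli, not the original materials.

```python
# Sketch of the two proposed mechanisms, on hypothetical syllabified words.
from collections import Counter

def edge_code(words):
    """Positional code: the sets of syllables seen in first and last position."""
    first = {w[0] for w in words}
    last = {w[-1] for w in words}
    return first, last

def transitional_probabilities(words):
    """Statistical tracker: P(next syllable | current syllable), pooled over
    every adjacent syllable pair regardless of position."""
    pair_counts = Counter()
    syll_counts = Counter()
    for w in words:
        for a, b in zip(w, w[1:]):
            pair_counts[(a, b)] += 1
            syll_counts[a] += 1
    return {pair: n / syll_counts[pair[0]] for pair, n in pair_counts.items()}

stream = [("pu", "li", "ki"), ("be", "li", "ga")]
first, last = edge_code(stream)            # first/last-position syllable classes
tps = transitional_probabilities(stream)   # position-blind co-occurrence stats
```

The edge code discards medial syllables entirely, which is why it supports the first/last-class generalization but fails for medial positions, whereas the transitional-probability tracker is position-blind and works in word middles too.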
Time-Warp–Invariant Neuronal Processing
A biophysical mechanism acting in auditory neurons allows the brain to process the high variability of speaking rates in natural speech in a time-warp-invariant manner.
The evolution of language: a comparative review
For many years the evolution of language has been seen as a disreputable topic, mired in fanciful "just so stories" about language origins. However, in the last decade a new synthesis of modern linguistics, cognitive neuroscience and neo-Darwinian evolutionary theory has begun to make important contributions to our understanding of the biology and evolution of language. I review some of this recent progress, focusing on the value of the comparative method, which uses data from animal species to draw inferences about language evolution. Discussing speech first, I show how data concerning a wide variety of species, from monkeys to birds, can increase our understanding of the anatomical and neural mechanisms underlying human spoken language, and how bird and whale song provide insights into the ultimate evolutionary function of language. I discuss the "descended larynx" of humans, a peculiar adaptation for speech that has received much attention in the past, which despite earlier claims is not uniquely human. I then turn to the neural mechanisms underlying spoken language, pointing out the difficulties animals apparently experience in perceiving hierarchical structure in sounds, and stressing the importance of vocal imitation in the evolution of a spoken language. Turning to ultimate function, I suggest that communication among kin (especially between parents and offspring) played a crucial but neglected role in driving language evolution. Finally, I briefly discuss phylogeny, considering hypotheses that offer plausible routes to human language from a non-linguistic chimp-like ancestor. I conclude that comparative data from living animals will be key to developing a richer, more interdisciplinary understanding of our most distinctively human trait: language.
- …