
    Methods for Minimizing the Confounding Effects of Word Length in the Analysis of Phonotactic Probability and Neighborhood Density

    This is the author's accepted manuscript; the original is available at http://jslhr.pubs.asha.org/article.aspx?articleid=1781521&resultClick=3
    Recent research suggests that phonotactic probability (the likelihood of occurrence of a sound sequence) and neighborhood density (the number of words phonologically similar to a given word) influence spoken language processing and acquisition across the lifespan in both normal and clinical populations. The majority of research in this area has tended to focus on controlled laboratory studies rather than naturalistic data such as spontaneous speech samples or elicited probes. One difficulty in applying current measures of phonotactic probability and neighborhood density to more naturalistic samples is the significant correlation between these variables and word length. This study examines several alternative transformations of phonotactic probability and neighborhood density as a means of reducing or eliminating this correlation with word length. Computational analyses of the words in a large database and reanalysis of archival data supported the use of z scores for the analysis of phonotactic probability as a continuous variable and the use of median transformation scores for the analysis of phonotactic probability as a dichotomous variable. Neighborhood density results were less clear; analysis of neighborhood density as a continuous variable warrants further investigation to differentiate the utility of z scores from that of median transformation scores. Furthermore, balanced dichotomous coding of neighborhood density was difficult to achieve, suggesting that analysis of neighborhood density as a dichotomous variable should be approached with caution. Recommendations for future application and analyses are discussed.
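    The transformations discussed above lend themselves to a brief illustration. The sketch below shows one way to compute length-conditioned z scores and length-conditioned median splits for phonotactic probability; the column names, data values, and pandas-based approach are illustrative assumptions, not the authors' implementation.

    ```python
    # Hedged sketch: length-conditioned z scores and median splits for
    # phonotactic probability. Column names and data are hypothetical;
    # this is not the authors' implementation.
    import pandas as pd

    words = pd.DataFrame({
        "word":   ["cat", "dog", "basket", "rabbit"],
        "length": [3, 3, 6, 6],                 # number of phonemes
        "pp":     [0.052, 0.031, 0.018, 0.027]  # raw phonotactic probability
    })

    grouped = words.groupby("length")["pp"]

    # Continuous analysis: z score of phonotactic probability computed
    # within words of the same length, so the transformed score is no
    # longer confounded with length.
    words["pp_z"] = (words["pp"] - grouped.transform("mean")) / grouped.transform("std")

    # Dichotomous analysis: median split computed within each length bin,
    # yielding a high/low phonotactic-probability code per word.
    words["pp_high"] = words["pp"] > grouped.transform("median")

    print(words)
    ```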

    Primitive computations in speech processing

    Previous research suggests that artificial-language learners exposed to quasi-continuous speech can learn that the first and the last syllables of words have to belong to distinct classes (e.g., Endress & Bonatti, 2007; Peña, Bonatti, Nespor, & Mehler, 2002). The mechanisms of these generalizations, however, are debated. Here we show that participants learn such generalizations only when the crucial syllables are in edge positions (i.e., the first and the last), but not when they are in medial positions (i.e., the second and the fourth in pentasyllabic items). In contrast to the generalizations, participants readily perform statistical analyses in word middles as well. In analogy to sequential memory, we suggest that participants extract the generalizations using a simple but specific mechanism that encodes the positions of syllables that occur in edges. Simultaneously, they use another mechanism to track the syllable distribution in the speech streams. In contrast to previous accounts, this model explains why the generalizations are faster than the statistical computations, require additional cues, and break down under different conditions, and why they can be performed at all. We also show that similar edge-based mechanisms may explain many results in artificial-grammar learning and also various linguistic observations.
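    The statistical computations contrasted here with the edge-based generalizations are usually described as tracking transitional probabilities between adjacent syllables. The sketch below illustrates that computation; the syllable stream and variable names are invented for illustration and do not come from the study's materials.

    ```python
    # Hedged sketch: tracking syllable transitional probabilities in a
    # quasi-continuous stream, the kind of statistical computation the
    # abstract contrasts with edge-based positional generalizations.
    # The stream and syllable inventory are made up for illustration.
    from collections import Counter

    stream = ["pu", "ki", "mo", "ta", "lu", "pu", "ki", "mo", "ta", "lu", "pu", "ki"]

    pair_counts = Counter(zip(stream, stream[1:]))   # counts of adjacent syllable pairs
    first_counts = Counter(stream[:-1])              # counts of each syllable as the first of a pair

    # Transitional probability P(next | current): how predictive one
    # syllable is of the following one.
    tp = {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

    for (a, b), p in sorted(tp.items()):
        print(f"P({b} | {a}) = {p:.2f}")
    ```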

    Evidence for multiple rhythmic skills

    Rhythms, or patterns in time, play a vital role in both speech and music. Proficiency in a number of rhythm skills has been linked to language ability, suggesting that certain rhythmic processes in music and language rely on overlapping resources. However, a lack of understanding about how rhythm skills relate to each other has impeded progress in understanding how language relies on rhythm processing. In particular, it is unknown whether all rhythm skills are linked together, forming a single broad rhythmic competence, or whether there are multiple dissociable rhythm skills. We hypothesized that beat tapping and rhythm memory/sequencing form two separate clusters of rhythm skills. This hypothesis was tested with a battery of two beat tapping and two rhythm memory tests. Here we show that tapping to a metronome and the ability to adjust to a changing tempo while tapping to a metronome are related skills. The ability to remember rhythms and to drum along to repeating rhythmic sequences are also related. However, we found no relationship between beat tapping skills and rhythm memory skills. Thus, beat tapping and rhythm memory are dissociable rhythmic aptitudes. This discovery may inform future research disambiguating how distinct rhythm competencies track with specific language functions.
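    As a rough illustration of the dissociation described above, the sketch below computes pairwise correlations among four simulated test scores, two loading on a beat-tapping factor and two on a rhythm-memory factor; the test names, simulated data, and correlation-based approach are assumptions for illustration, not the study's dataset or analysis.

    ```python
    # Hedged sketch: checking whether two beat-tapping scores and two
    # rhythm-memory scores form separate clusters by inspecting their
    # pairwise correlations. Test names and data are hypothetical.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    n = 60
    tapping_factor = rng.normal(size=n)
    memory_factor = rng.normal(size=n)

    scores = pd.DataFrame({
        "metronome_tapping":     tapping_factor + 0.5 * rng.normal(size=n),
        "tempo_adaptation":      tapping_factor + 0.5 * rng.normal(size=n),
        "rhythm_memory":         memory_factor + 0.5 * rng.normal(size=n),
        "drumming_to_sequences": memory_factor + 0.5 * rng.normal(size=n),
    })

    # Within-cluster correlations should be high and cross-cluster ones
    # near zero if beat tapping and rhythm memory are dissociable skills.
    print(scores.corr().round(2))
    ```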