
    Implicit learning and individual differences in speech recognition: an exploratory study

    Individual differences in speech recognition in challenging listening environments are pronounced. Studies suggest that implicit learning is one variable that may contribute to this variability. Here, we explored the unique contributions of three indices of implicit learning to individual differences in the recognition of challenging speech. To this end, we assessed three indices of implicit learning (perceptual, statistical, and incidental), three types of challenging speech (natural-fast, vocoded, and speech in noise), and cognitive factors associated with speech recognition (vocabulary, working memory, and attention) in a group of 51 young adults. Speech recognition was modeled as a function of the cognitive factors and learning, and the unique contribution of each index of learning was statistically isolated. The three indices of learning were uncorrelated. Whereas all indices of learning had unique contributions to the recognition of natural-fast speech, only statistical learning had a unique contribution to the recognition of speech in noise and vocoded speech. These data suggest that although implicit learning may contribute to the recognition of challenging speech, the contribution may depend on the type of speech challenge and on the learning task.
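
    The "unique contribution" described above can be isolated with a hierarchical regression: fit the full model, drop one learning index at a time, and take the resulting drop in R² as that index's unique contribution. The Python sketch below illustrates the idea on simulated data; all variable names, coefficients, and data are hypothetical placeholders, not the authors' actual analysis.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 51  # sample size reported in the abstract

        # Hypothetical standardized scores (placeholders, not study data).
        cognitive = rng.normal(size=(n, 3))   # vocabulary, working memory, attention
        learning = rng.normal(size=(n, 3))    # perceptual, statistical, incidental
        speech = cognitive @ [0.3, 0.2, 0.1] + learning @ [0.1, 0.4, 0.0] + rng.normal(size=n)

        full_X = sm.add_constant(np.column_stack([cognitive, learning]))
        full_r2 = sm.OLS(speech, full_X).fit().rsquared

        for i, label in enumerate(["perceptual", "statistical", "incidental"]):
            # Refit without one learning index; the drop in R^2 is its unique contribution.
            reduced = np.delete(np.column_stack([cognitive, learning]), 3 + i, axis=1)
            reduced_r2 = sm.OLS(speech, sm.add_constant(reduced)).fit().rsquared
            print(f"unique R^2 of {label} learning: {full_r2 - reduced_r2:.3f}")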

    Perceptual Anchoring in Preschool Children: Not Adultlike, but There

    BACKGROUND: Recent studies suggest that human auditory perception follows a prolonged developmental trajectory, sometimes continuing well into adolescence. Whereas both sensory and cognitive accounts have been proposed, the development of the ability to base current perceptual decisions on prior information, an ability that strongly benefits adult perception, has not been directly explored. Here we ask whether the auditory frequency discrimination of preschool children also improves when they are given the opportunity to use previously presented standard stimuli as perceptual anchors, and whether the magnitude of this anchoring effect undergoes developmental changes. METHODOLOGY/PRINCIPAL FINDINGS: Frequency discrimination was tested using two adaptive same/different protocols. In one protocol (with-reference), a 1-kHz standard tone was presented repeatedly across trials. In the other (no-reference), no such repetitions occurred. Verbal memory and early reading skills were also evaluated to determine whether the pattern of correlations between frequency discrimination, memory, and literacy is similar to that previously reported in older children and adults. Preschool children were significantly more sensitive in the with-reference than in the no-reference condition, but the magnitude of this anchoring effect was smaller than that observed in adults. The pattern of correlations among discrimination thresholds, memory, and literacy replicated previous reports in older children. CONCLUSIONS/SIGNIFICANCE: The processes allowing the use of context to form perceptual anchors are already functional among preschool children, albeit to a lesser extent than in adults. Nevertheless, immature anchoring cannot fully account for the poorer frequency discrimination abilities of young children. That anchoring is present among the majority of typically developing preschool children suggests that the anchoring deficits observed among individuals with dyslexia represent a true deficit rather than a developmental delay.
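
    One way to picture the two protocols is as adaptive frequency-discrimination tracks that differ only in whether the standard repeats across trials. The Python sketch below is an illustrative assumption, not the published procedure: a simple 2-down/1-up staircase with either a fixed 1-kHz standard (with-reference) or a standard that roves from trial to trial (no-reference), run against a toy simulated listener.

        import random

        def run_track(with_reference: bool, n_trials: int = 60) -> float:
            delta = 100.0                       # starting frequency difference (Hz)
            streak = 0
            for _ in range(n_trials):
                standard = 1000.0 if with_reference else random.uniform(800.0, 1200.0)
                comparison = standard + delta   # tones that would be synthesized and played
                # Toy listener: larger differences are more likely to be detected.
                correct = random.random() < min(0.99, 0.5 + delta / 200.0)
                if correct:
                    streak += 1
                    if streak == 2:             # 2-down/1-up tracks ~70.7% correct
                        delta = max(1.0, delta * 0.8)
                        streak = 0
                else:
                    delta *= 1.25
                    streak = 0
            return delta                        # rough threshold estimate

        print("with-reference threshold:", run_track(True))
        print("no-reference threshold:", run_track(False))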

    Musical experience, auditory perception and reading-related skills in children.

    BACKGROUND: The relationships between auditory processing and reading-related skills remain poorly understood despite intensive research. Here we focus on the potential role of musical experience as a confounding factor. Specifically, we ask whether the pattern of correlations between auditory and reading-related skills differs between children with different amounts of musical experience. METHODOLOGY/PRINCIPAL FINDINGS: Third-grade children with various degrees of musical experience were tested on a battery of auditory processing and reading-related tasks. Very poor auditory thresholds and poor memory skills were abundant only among children with no musical education. In this population, indices of auditory processing (frequency and interval discrimination thresholds) were significantly correlated with and accounted for up to 13% of the variance in reading-related skills. Among children with more than one year of musical training, auditory processing indices were better, yet reading-related skills were not correlated with them. A potential interpretation for the reduction in the correlations might be that auditory and reading-related skills improve at different rates as a function of musical training. CONCLUSIONS/SIGNIFICANCE: Participants' previous musical training, which is typically ignored in studies assessing the relations between auditory and reading-related skills, should be considered. Very poor auditory and memory skills are rare among children with even a short period of musical training, suggesting that musical training could have an impact on both. The lack of correlation in the musically trained population suggests that a short period of musical training does not enhance the reading-related skills of individuals with within-normal auditory processing skills. Further studies are required to determine whether the associations between musical training, auditory processing, and memory are indeed causal, or whether children with poor auditory and memory skills are less likely to study music and, if so, why this is the case.
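
    The "variance accounted for" figure quoted above is simply the squared correlation between an auditory measure and a reading-related score. A minimal Python sketch, using simulated placeholder data rather than the study's measurements:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        n = 60  # hypothetical sample size
        freq_threshold = rng.normal(size=n)          # frequency-discrimination thresholds (z-scores)
        reading_score = -0.36 * freq_threshold + rng.normal(size=n)

        r, p = stats.pearsonr(freq_threshold, reading_score)
        print(f"r = {r:.2f}, p = {p:.3f}, variance explained = {100 * r**2:.1f}%")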

    Perceptual learning of time-compressed speech: more than rapid adaptation.

    BACKGROUND: Time-compressed speech, a form of rapidly presented speech, is harder to comprehend than natural speech, especially for non-native speakers. Although it is possible to adapt to time-compressed speech after a brief exposure, it is not known whether additional perceptual learning occurs with further practice. Here, we ask whether multiday training on time-compressed speech yields more learning than that observed during the initial adaptation phase and whether the pattern of generalization following successful learning differs from that observed with initial adaptation only. METHODOLOGY/PRINCIPAL FINDINGS: Two groups of non-native Hebrew speakers were tested on five different conditions of time-compressed speech identification in two assessments conducted 10-14 days apart. Between those assessments, one group of listeners received five practice sessions on one of the time-compressed conditions. Between the two assessments, trained listeners improved significantly more than untrained listeners on the trained condition. Furthermore, the trained group generalized its learning to two untrained conditions in which different talkers presented the trained speech materials. In addition, when the performance of the non-native speakers was compared to that of a group of naïve native Hebrew speakers, performance of the trained group was equivalent to that of the native speakers on all conditions on which learning occurred, whereas performance of the untrained non-native listeners was substantially poorer. CONCLUSIONS/SIGNIFICANCE: Multiday training on time-compressed speech results in significantly more perceptual learning than brief adaptation. Compared to previous studies of adaptation, the training-induced learning is more stimulus-specific. Taken together, the perceptual learning of time-compressed speech appears to progress from an initial, rapid adaptation phase to a subsequent, prolonged, and more stimulus-specific phase. These findings are consistent with the predictions of the Reverse Hierarchy Theory of perceptual learning and suggest constraints on the use of perceptual-learning regimens during second language acquisition.
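
    For illustration, time-compressed speech can be generated with an off-the-shelf time-stretching routine that shortens a recording without shifting its pitch. The sketch below uses librosa's phase-vocoder-based time_stretch; the file name and compression rate are assumptions for illustration, and the published study used its own compression procedure.

        import librosa
        import soundfile as sf

        y, sr = librosa.load("sentence.wav", sr=None)   # hypothetical input recording
        rate = 1.67                                     # compress to ~60% of the original duration
        y_fast = librosa.effects.time_stretch(y, rate=rate)
        sf.write("sentence_compressed.wav", y_fast, sr)
        print(f"original: {len(y)/sr:.2f} s, compressed: {len(y_fast)/sr:.2f} s")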

    Stimulus uncertainty in auditory perceptual learning

    Stimulus uncertainty, produced by variations in a target stimulus to be detected or discriminated, impedes perceptual learning under some, but not all, experimental conditions. To account for these discrepancies, it has been proposed that uncertainty is detrimental to learning when the interleaved stimuli or tasks are similar to each other but not when they are sufficiently distinct, or when it obstructs the downstream search required to gain access to fine-grained sensory information, as suggested by the Reverse Hierarchy Theory (RHT). The focus of the current review is on the effects of uncertainty on the perceptual learning of speech and non-speech auditory signals. Taken together, the findings from the auditory modality suggest that, in addition to the accounts already described, uncertainty may contribute to learning when categorization of stimuli into phonological or acoustic categories is involved. Therefore, it appears that the differences reported between the learning of non-speech and speech-related parameters are not an outcome of inherent differences between those two domains, but rather stem from the nature of the tasks often associated with those different stimuli.
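
    The uncertainty manipulations discussed in this review come down to how training trials are scheduled. A minimal sketch of the contrast, with illustrative condition labels and trial counts not taken from any specific study:

        import random

        conditions = ["1 kHz", "2 kHz", "4 kHz"]
        trials_per_condition = 4

        # Blocked schedule: the stimulus is fixed within a block (low uncertainty).
        blocked = [c for c in conditions for _ in range(trials_per_condition)]

        # Interleaved (roving) schedule: stimulus identity is unpredictable trial to trial.
        interleaved = blocked.copy()
        random.shuffle(interleaved)

        print("blocked:    ", blocked)
        print("interleaved:", interleaved)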

    Age, Hearing, and the Perceptual Learning of Rapid Speech

    The effects of aging and age-related hearing loss on the ability to learn degraded speech are not well understood. This study was designed to compare the perceptual learning of time-compressed speech and its generalization to natural-fast speech across young adults with normal hearing, older adults with normal hearing, and older adults with age-related hearing loss. Early learning (following brief exposure to time-compressed speech) and later learning (following further training) were compared across groups. Age and age-related hearing loss were both associated with declines in early learning. Although the two groups of older adults improved during the training session, when compared to untrained control groups (matched for age and hearing), learning was weaker in older than in young adults. In particular, the transfer of learning to untrained time-compressed sentences was reduced in both groups of older adults. Transfer of learning to natural-fast speech occurred regardless of age and hearing, but it was limited to sentences encountered during training. Findings are discussed within the framework of dynamic models of speech perception and learning. Based on this framework, we tentatively suggest that age-related declines in learning may stem from age differences in the use of high- and low-level speech cues. These age differences result in weaker early learning in older adults, which may further contribute to this population's difficulty perceiving speech in daily conversational settings.

    Learning curves.

    Left: average verification thresholds. Right: performance consistency. Individual listeners' data are marked with dashed lines; group mean data are marked with a black line. Error bars are ±1 SD. The data of one listener, whose mean session-one threshold was 0.85, are not shown in the figure because they obscure the remaining learning curves. These data are included in all statistical analyses, but removing them had no influence on any of the outcomes.

    Normalized threshold differences (NTD) in children and adults.

    Negative values indicate an anchoring effect (see text for details). Boxes represent group data (see Figure 1 for details). Individual data are marked with gray circles.