
    Development of audiovisual comprehension skills in prelingually deaf children with cochlear implants

Objective: The present study investigated the development of audiovisual comprehension skills in prelingually deaf children who received cochlear implants. Design: We analyzed results obtained with the Common Phrases test of sentence comprehension (Robbins et al., 1995) from 80 prelingually deaf children with cochlear implants who were enrolled in a longitudinal study, from pre-implantation to 5 years after implantation. Results: Prelingually deaf children with cochlear implants performed better under audiovisual (AV) presentation than under auditory-alone (A-alone) or visual-alone (V-alone) conditions. AV sentence comprehension skills were strongly correlated with several clinical outcome measures of speech perception, speech intelligibility, and language. Finally, pre-implantation V-alone performance on the Common Phrases test was strongly correlated with 3-year post-implantation performance on clinical outcome measures of speech perception, speech intelligibility, and language skills. Conclusions: The results suggest that lipreading skills and AV speech perception reflect a common source of variance, associated with the development of phonological processing skills, that is shared among a wide range of speech and language outcome measures.
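The correlational analyses described here amount to relating a pre-implantation predictor to later outcome scores across children. A minimal sketch of that kind of analysis in Python, assuming a hypothetical dataset with one row per child (the file and column names are illustrative, not from the study):

```python
# Hypothetical sketch: correlate pre-implant visual-alone Common Phrases
# scores with 3-year post-implantation outcomes, one correlation per
# outcome measure. File and column names are illustrative.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("common_phrases_longitudinal.csv")  # one row per child

outcomes = ["speech_perception_3yr", "speech_intelligibility_3yr", "language_3yr"]
for outcome in outcomes:
    sub = df[["v_alone_preimplant", outcome]].dropna()
    r, p = pearsonr(sub["v_alone_preimplant"], sub[outcome])
    print(f"V-alone pre-implant vs {outcome}: r = {r:.2f}, p = {p:.3f}")
```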

    The Effect of Hearing Impairment on Word Processing of Infant- and Adult-Directed Speech

Objective. Little is known about how children with hearing loss (CHL) process words. The Emergent Coalition Model (ECM) of early word learning proposes that multiple cues (e.g., perceptual, social, linguistic) are used to facilitate word learning. Because hearing loss influences speech perception, different word learning patterns may emerge in CHL relative to children with normal hearing (CNH). One perceptual cue young children use to access word learning is infant-directed speech (IDS). Specifically, 21-month-olds can learn words in IDS but not in adult-directed speech (ADS); by 27 months, however, children can learn words in ADS. Currently, it is unknown how CHL process words in IDS and ADS. This study examined how CHL and CNH process familiar and novel words in IDS and ADS, using a Looking-While-Listening paradigm. We predicted that: 1) CNH would show faster reaction times (RT) and higher accuracy than CHL, 2) word processing might show different patterns for familiar versus novel words, and 3) vocabulary size would be correlated with word processing skills. Methods. Eleven children with bilateral sensorineural hearing loss (M = 32.48 months) using hearing aids or cochlear implants, and 11 CNH matched for age, gender, and SES, participated. Each child was tested in IDS and ADS on different days. At each visit, children were trained to map two novel labels to objects, counterbalanced across visits. Following training, accuracy and RT were assessed for both novel and familiar words. Vocabulary size was assessed using the MacArthur-Bates Communicative Development Inventory. Results. In the familiar word condition, accuracy for the CHL was significantly better in IDS than ADS, and RT was faster in IDS than ADS (though not significantly). For CNH, accuracy did not differ between IDS and ADS, but RT was significantly faster in ADS than IDS. A significant speech type by group interaction was found (p < .05) for both accuracy and RT. Follow-up tests showed that CNH had higher accuracy and faster RTs than CHL. The familiar-word results suggest that while IDS may lead to more efficient speech processing for CHL, CNH are more efficient at processing ADS. In the novel word condition, only 10 CHL completed the task. For CHL, accuracy was marginally better in IDS than ADS, but no significant difference was observed in RT. For CNH, no differences were seen in accuracy or RT between IDS and ADS. An analysis of variance for RT showed that CNH had significantly faster RTs than CHL for novel word processing. For CHL, vocabulary size was negatively correlated with RT to familiar words in IDS and ADS, suggesting that children with larger vocabularies processed familiar words faster than children with smaller vocabularies. For CNH, vocabulary size was marginally correlated with accuracy and RT to novel words in ADS. Conclusions. This study demonstrates 1) the facilitative effect of IDS on word processing for young CHL, and 2) the relationship between word processing and expressive vocabulary in young children, suggesting that children with larger vocabularies are faster and more efficient at word processing tasks. The present findings suggest that CHL do not perform as well as their normal-hearing peers on word processing tasks in ADS. These findings provide empirical evidence that childhood hearing loss affects processing of IDS and ADS differently than for CNH.
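The speech type by group interaction reported above is the kind of effect a two-way ANOVA tests. A minimal sketch, assuming a hypothetical long-format dataset with one row per child per condition; a full analysis would treat speech type as a within-subject factor, which this simplified model ignores:

```python
# Illustrative sketch of the speech type by group interaction test.
# Column names and the CSV are hypothetical; each row is one child in
# one condition. A complete analysis would model speech type as a
# within-subject (repeated) factor, which this simplified ANOVA omits.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("lwl_familiar_words.csv")  # columns: group, speech, rt

model = smf.ols("rt ~ C(group) * C(speech)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # interaction row: C(group):C(speech)
```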

    Musical Meter: Examining Hierarchical Temporal Perception in Complex Musical Stimuli Across Human Development, Sensory Modalities, and Expertise

Performing, listening, and moving to music are universal human behaviors. Most music in the world is organized temporally, with faster periodicities nested within slower periodicities, creating a perceptual hierarchy of repeating stronger (downbeat) and weaker (upbeat) events. This perceptual organization is theorized to aid our ability to synchronize our behaviors with music and with other individuals, but there is scant empirical evidence that listeners actively perceive these multiple levels of temporal periodicity simultaneously. Furthermore, there is conflicting evidence about when, and how, the ability to perceive the beat in music emerges during development. It is also unclear whether this hierarchical organization of musical time is unique to, or heavily reliant upon, the precise timing capabilities of the auditory system, or whether it is found in other sensory systems. Across three series of experiments, I investigated whether listeners perceive multiple levels of structure simultaneously, how experience and expertise influence this ability, the emergence of meter perception in development, and how strong the auditory advantage for beat and meter perception is over visual meter perception. In Chapter 1, I demonstrated that older, but not younger, infants showed evidence of the beginnings of beat perception in their ability to distinguish between synchronous and asynchronous audiovisual displays of dancers moving to music. In Chapter 2, I demonstrated that adults, but not children, showed evidence of perceiving multiple levels of metrical structure simultaneously in complex, human-performed music, and that this ability was not greatly dependent upon formal musical training. Older children were more sensitive to the beat than younger children, suggesting that beat and meter perception develop gradually throughout childhood into adolescence. However, perception of multiple levels of meter was not evident in younger children and likely does not emerge until late adolescence. Formal musical training was associated with enhanced meter perception in adults and enhanced beat perception in children. In Chapter 3, both adults and children demonstrated an auditory advantage for beat perception over visual beat perception. However, adults did not show an auditory advantage for perception of slower beat levels (the measure) or for perception of multiple beat levels simultaneously. Children did not show evidence of measure-level perception in either modality, but their ability to perceive the beat in both auditory and visual metronomes improved with age. Overall, the results of the three series of experiments demonstrate that beat and meter perception develop gradually throughout childhood, that they rely on lifelong acquisition of musical knowledge, and that there is a distinct auditory advantage for the perception of beat.
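The nested periodicities described above can be made concrete with a toy stimulus: an isochronous click track whose every fourth onset is accented, so a slow measure-level cycle is nested inside the faster beat-level one. A minimal sketch with illustrative parameters (not the dissertation's actual stimuli, which were complex, human-performed music):

```python
# Toy metrical stimulus: clicks at 120 BPM with every fourth onset
# (the "downbeat") louder than the intervening upbeats, nesting a
# measure-level periodicity inside the beat-level one. All parameters
# are illustrative.
import numpy as np
from scipy.io import wavfile

sr = 44100                      # sample rate (Hz)
beat_interval = 0.5             # seconds per beat (120 BPM)
n_beats = 16
click = np.sin(2 * np.pi * 1000 * np.arange(int(0.03 * sr)) / sr)  # 30 ms, 1 kHz

signal = np.zeros(int(n_beats * beat_interval * sr))
for b in range(n_beats):
    start = int(b * beat_interval * sr)
    accent = 1.0 if b % 4 == 0 else 0.4   # downbeat vs. weaker upbeats
    signal[start:start + click.size] += accent * click

wavfile.write("metrical_clicks.wav", sr, (signal * 32767).astype(np.int16))
```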

When eye meets ear: an investigation of audiovisual speech and non-speech perception in younger and older adults

This dissertation addressed important questions regarding audiovisual (AV) perception. Study 1 revealed that AV speech perception modulated auditory processes, whereas AV non-speech perception affected visual processes. Interestingly, stimulus identification improved, yet fewer neural resources, as reflected in smaller event-related potentials, were recruited, indicating that AV perception led to multisensory efficiency. AV interaction effects were also observed at early and late stages, demonstrating that multisensory integration involves a neural network. Study 1 showed that multisensory efficiency is a common principle in AV speech and non-speech stimulus recognition, yet it is reflected in different modalities, possibly due to the sensory dominance of a given task. Study 2 extended our understanding of multisensory interaction by investigating electrophysiological processes of AV speech perception in noise and whether those differ between younger and older adults. Both groups revealed multisensory efficiency: behavioural performance improved while the auditory N1 amplitude was reduced during AV relative to unisensory speech perception. This amplitude reduction could be due to visual speech cues providing complementary information, thereby reducing processing demands on the auditory system. AV speech stimuli also led to an N1 latency shift, suggesting that auditory processing was faster during AV than during unisensory trials. This shift was more pronounced in older than in younger adults, indicating that older adults made more effective use of visual speech. Finally, auditory functioning predicted the degree of the N1 latency shift, consistent with the inverse effectiveness hypothesis, which holds that the less effective unisensory perception is, the larger the benefit derived from AV speech cues. These results suggest that older adults were better "lip/speech" integrators than younger adults, possibly to compensate for age-related sensory deficiencies. Multisensory efficiency was evident in younger and older adults, but it might be particularly relevant for older adults: if visual speech cues can alleviate sensory perceptual load, the remaining neural resources can be allocated to higher-level cognitive functions. This dissertation adds further support to the notion of multisensory interaction modulating sensory-specific processes, and it introduces the concept of multisensory efficiency as a potential principle underlying AV speech and non-speech perception.
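The N1 amplitude and latency comparisons summarized here reduce to extracting a component measure per subject per condition and running paired tests. A hedged sketch, assuming hypothetical subject-averaged ERP arrays and an assumed 80-150 ms N1 window:

```python
# Hedged sketch of the N1 comparison: mean amplitude and peak latency
# within an assumed 80-150 ms window, compared across AV and
# auditory-alone conditions with paired t-tests. The epoch arrays
# (subjects x timepoints, averaged over trials/electrodes) are hypothetical.
import numpy as np
from scipy.stats import ttest_rel

times = np.arange(-100, 500)          # one sample per ms, stimulus at 0 ms
win = (times >= 80) & (times <= 150)  # assumed N1 search window

av = np.load("erp_av.npy")            # shape: (n_subjects, n_timepoints)
a_alone = np.load("erp_a_alone.npy")

amp_av, amp_a = av[:, win].mean(axis=1), a_alone[:, win].mean(axis=1)
lat_av = times[win][av[:, win].argmin(axis=1)]   # N1 is a negative peak
lat_a = times[win][a_alone[:, win].argmin(axis=1)]

print("N1 amplitude:", ttest_rel(amp_av, amp_a))
print("N1 latency:  ", ttest_rel(lat_av, lat_a))
```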

    Recalibration of auditory phoneme perception by lipread and lexical information


    Attention Restraint, Working Memory Capacity, and Mind Wandering: Do Emotional Valence or Intentionality Matter?

Attention restraint appears to mediate the relationship between working memory capacity (WMC) and mind wandering (Kane et al., 2016). Prior work has identified two dimensions of mind wandering: emotional valence and intentionality. However, less is known about how WMC and attention restraint correlate with these dimensions. The current study examined the relationship between WMC, attention restraint, and mind wandering by emotional valence and intentionality. A confirmatory factor analysis demonstrated that WMC and attention restraint were strongly correlated, but only attention restraint was related to overall mind wandering, consistent with prior findings. However, when examining the emotional valence of mind wandering, attention restraint and WMC were related to negatively and positively valenced, but not neutral, mind wandering. Attention restraint was also related to intentional but not unintentional mind wandering. These results suggest that WMC and attention restraint predict some, but not all, types of mind wandering.
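The confirmatory factor analysis described here can be sketched as a latent-variable model in which WMC and attention restraint are correlated factors predicting a mind-wandering measure. A minimal illustration using semopy's lavaan-style syntax; the indicator tasks, file name, and single mind-wandering outcome are all hypothetical simplifications of the study's design:

```python
# Hypothetical latent-variable sketch: WMC and attention restraint as
# correlated factors, with a mind-wandering rate regressed on both.
# Indicator names and the dataset are illustrative, not from the study.
import pandas as pd
from semopy import Model

spec = """
WMC       =~ ospan + symspan + rotspan
Restraint =~ antisaccade + sart + stroop
MW_rate   ~ WMC + Restraint
WMC       ~~ Restraint
"""

df = pd.read_csv("mw_individual_differences.csv")  # hypothetical dataset
model = Model(spec)
model.fit(df)
print(model.inspect())  # loadings, regressions, and the WMC-Restraint covariance
```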