Domain general learning: Infants use social and non-social cues when learning object statistics.
Previous research has shown that infants can learn from social cues. But is a social cue more effective at directing learning than a non-social cue? This study investigated whether 9-month-old infants (N = 55) could learn a visual statistical regularity in the presence of a distracting visual sequence when attention was directed by either a social cue (a person) or a non-social cue (a rectangle). The results show that both social and non-social cues can guide infants' attention to a visual shape sequence (and away from a distracting sequence). The social cue more effectively directed attention than the non-social cue during the familiarization phase, but the social cue did not result in significantly stronger learning than the non-social cue. The findings suggest that domain-general attention mechanisms allow for the comparable learning seen in both conditions.
Children and adults produce distinct technology- and human-directed speech.
This study compares how English-speaking adults and children from the United States adapt their speech when talking to a real person and a smart speaker (Amazon Alexa) in a psycholinguistic experiment. Overall, participants produced more effortful speech when talking to a device (longer duration and higher pitch). These differences also varied by age: children produced even higher pitch in device-directed speech, suggesting a stronger expectation to be misunderstood by the system. In support of this, we see that after a staged recognition error by the device, children increased pitch even more. Furthermore, both adults and children displayed the same degree of variation in their responses for whether Alexa seems like a real person or not, further indicating that children's conceptualization of the system's competence shaped their register adjustments, rather than an increased anthropomorphism response. This work speaks to models on the mechanisms underlying speech production, and human-computer interaction frameworks, providing support for routinized theories of spoken interaction with technology.
Infant statistical-learning ability is related to real-time language processing
Infants are adept at learning statistical regularities in artificial language materials, suggesting that the ability to learn statistical structure may support language development. Indeed, infants who perform better on statistical learning tasks tend to be more advanced in parental reports of infants' language skills. Work with adults suggests that one way statistical learning ability affects language proficiency is by facilitating real-time language processing. Here we tested whether 15-month-olds' ability to learn sequential statistical structure in artificial language materials is related to their ability to encode and interpret native-language speech. Specifically, we tested their ability to learn sequential structure among syllables (Experiment 1) and words (Experiment 2), as well as their ability to encode familiar English words in sentences. The results suggest that infants' ability to learn sequential structure among syllables is related to their lexical-processing efficiency, providing continuity with findings from children and adults, though effects were modest.
Learning builds on learning: Infants’ use of native language sound patterns to learn words
The current research investigated how infants apply prior knowledge of environmental regularities to support new learning. The experiments tested whether infants could exploit experience with native language (English) phonotactic patterns to facilitate associating sounds with meanings during word learning. Infants (14-month-olds) heard fluent speech that contained cues for detecting target words; the target words were embedded in sequences that occur across word boundaries. A separate group heard the target words embedded without word boundary cues. Infants then participated in an object label learning task. With the opportunity to use native language patterns to segment the target words, infants subsequently learned the labels. Without this experience, infants failed. Novice word learners can take advantage of early learning about sounds to scaffold lexical development.
Learning across languages: bilingual experience supports dual language statistical word segmentation
Bilingual acquisition presents learning challenges beyond those found in monolingual environments, including the need to segment speech in two languages. Infants may use statistical cues, such as syllable-level transitional probabilities, to segment words from fluent speech. In the present study we assessed monolingual and bilingual 14-month-olds' abilities to segment two artificial languages using transitional probability cues. In Experiment 1, monolingual infants successfully segmented the speech streams when the languages were presented individually. However, monolinguals did not segment the same language stimuli when they were presented together in interleaved segments, mimicking the language switches inherent to bilingual speech. To assess the effect of real-world bilingual experience on dual language speech segmentation, Experiment 2 tested infants with regular exposure to two languages using the same interleaved language stimuli as Experiment 1. The bilingual infants in Experiment 2 successfully segmented the languages, indicating that early exposure to two languages supports infants' abilities to segment dual language speech using transitional probability cues. These findings support the notion that early bilingual exposure prepares infants to navigate challenging aspects of dual language environments as they begin to acquire two languages.