124 research outputs found
Word Learning in 6-16 Month Old Infants
Understanding words requires infants to not only isolate words from the speech around them and delineate concepts from their world experience, but also to establish which words signify which concepts, in all and only the right set of circumstances. Previous research places the onset of this ability around infants' first birthdays, at which point they have begun to solidify their native language phonology, and have learned a good deal about categories, objects, and people. In this dissertation, I present research that alters this accepted timeline. In Study 1, I find that by 6 months of age, infants demonstrate understanding of around a dozen words for foods and body parts. Around 13-14 months of age, performance increases significantly. In Study 2, I find that for a set of early non-nouns, e.g. 'uh-oh' and 'eat', infants do not show understanding until 10 months, but again show a big comprehension boost around 13-14 months. I discuss possible reasons for the onset of noun-comprehension at 6 months, the relative delay in non-noun comprehension, and the performance boost for both word-types around 13-14 months. In Study 3, I replicate and extend Study 1's findings, showing that around 6 months infants also understand food and body-part words when these words are spoken by a new person, but conversely, by 12 months, show poor word comprehension if a single vowel in the word is changed, even when the speaker is highly familiar. Taken together, these results suggest that word learning begins before infants have fully solidified their native language phonology, that certain generalizations about words are available to infants at the outset of word comprehension, and that infants are able to learn words for complex object and event categories before their first birthday. Implications for language acquisition and cognitive development more broadly are discussed.
Mothers' work status and 17-month-olds' productive vocabulary
Literature examining the effects of mothers' work status on infant language development is mixed, with little focus on varying work schedules and early vocabulary. We use naturalistic data to analyze the productive vocabulary of 44 17-month-olds in relation to mothers' work status (full time, part time, stay at home) at 6 and 18 months. Infants who experienced a combination of care from mothers and other caretakers had larger productive vocabularies than infants in solely full-time maternal or solely other-caretaker care. Our results draw from naturalistic data to suggest that this care combination may be particularly beneficial for early lexical development.
Analyzing the effect of sibling number on input and output in the first 18 months
Prior research suggests that across a wide range of cognitive, educational, and health-based measures, first-born children outperform their later-born peers. Expanding on this literature using naturalistic home-recorded data and parental vocabulary reports, we find that early language outcomes vary by number of siblings in a sample of 43 English-learning U.S. children from mid-to-high socioeconomic status homes. More specifically, we find that children in our sample with two or more (but not one) older siblings had smaller productive vocabularies at 18 months, and heard less input from caregivers across several measures than their peers with fewer than two siblings. We discuss implications regarding what infants experience and learn across a range of family sizes in infancy.
Developing a cross-cultural annotation system and metacorpus for studying infants' real world language experience
Recent issues around reproducibility, best practices, and cultural bias impact naturalistic observational approaches as much as experimental approaches, but there has been less focus on this area. Here, we present a new approach that leverages cross-laboratory collaborative, interdisciplinary efforts to examine important psychological questions. We illustrate this approach with a particular project that examines similarities and differences in children's early experiences with language. This project develops a comprehensive start-to-finish analysis pipeline by developing a flexible and systematic annotation system, and implementing this system across a sampling from a metacorpus of audio recordings of diverse language communities. This resource is publicly available for use, sensitive to cultural differences, and flexible to address a variety of research questions. It is also uniquely suited for use in the development of tools for automated analysis.
Quantifying child directed speech cross-culturally across development
Child-directed speech (CDS) influences language development (e.g., Golinkoff et al., 2015), but varies across cultural and demographic groups (Hoff, 2006). Recent work examining speech heard by North American English (NAE) infants found an increased proportion of CDS with age (Bergelson et al., 2018). Quantity of CDS remained relatively constant across age, while quantity of adult-directed speech (ADS) decreased. We replicate these findings using a different methodology, and expand them to include other language communities. Our data come from daylong audio recordings of 58 children ages 2–36 months from the ACLEW dataset (Bergelson et al., 2017; 30 children acquiring NAE, 10 UK English, 8 Argentinian Spanish, and 10 Tseltal/Mayan). Ten randomly selected 2-min segments (Tseltal: nine 5-min segments) from each child were annotated for speaker gender, age (child or adult), and addressee for each utterance. We calculated the minutes per hour of CDS, ADS, and all speech. Preliminary analyses find high variability in overall language input across individuals, age, and culture, and partially replicate the Bergelson et al. (2018) pattern of results. Ongoing annotation will permit finer-grained analyses of sub-group differences. Further analyses will examine the influence of factors such as speaker gender, number of speakers, and maternal education.
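As an illustration of the "minutes per hour" measure described above, the following sketch shows one way such rates could be computed from an annotated utterance table. This is not the project's actual pipeline; the table layout, column names, and values are hypothetical.

# Minimal sketch (not the authors' code) of a "minutes per hour" measure.
# Assumes a hypothetical utterance table (one row per annotated utterance)
# and a clip table (one row per child with total annotated minutes).
import pandas as pd

utterances = pd.DataFrame({
    "child_id":  ["c01", "c01", "c01", "c02"],
    "addressee": ["CDS", "ADS", "CDS", "ADS"],  # who the utterance was directed to
    "dur_sec":   [2.4, 5.1, 1.8, 3.0],          # utterance duration in seconds
})
clips = pd.DataFrame({
    "child_id":      ["c01", "c02"],
    "annotated_min": [20.0, 20.0],               # e.g., ten 2-min segments per child
})

# Total speech per child and addressee type, converted to minutes
speech_min = (utterances.groupby(["child_id", "addressee"])["dur_sec"]
              .sum().div(60).rename("speech_min").reset_index())

# Normalize by annotated time to get minutes of speech per hour of recording
rates = speech_min.merge(clips, on="child_id")
rates["min_per_hour"] = rates["speech_min"] / (rates["annotated_min"] / 60)
print(rates)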
A cross-linguistic examination of young children's everyday language experiences
We present an exploratory cross-linguistic analysis of the quantity of target-child-directed speech and adult-directed speech in North American English (US & Canadian), United Kingdom English, Argentinian Spanish, Tseltal (Tenejapa, Mayan), and Yélî Dnye (Rossel Island, Papuan), using annotations from 69 children aged 2–36 months. Using a novel methodological approach, our cross-linguistic and cross-cultural findings support prior work suggesting that target-child-directed speech quantities are stable across early development, while adult-directed speech decreases. A preponderance of speech from women was found to a similar degree across groups, with less target-child-directed speech from men and children in the North American samples than elsewhere. Consistently across groups, children also heard more adult-directed than target-child-directed speech. Finally, the numbers of talkers present in any given clip strongly impacted children's moment-to-moment input quantities. These findings illustrate how the structure of home life impacts patterns of early language exposure across diverse developmental contexts.
Brouhaha: multi-task training for voice activity detection, speech-to-noise ratio, and C50 room acoustics estimation
Most automatic speech processing systems are sensitive to the acoustic environment, with degraded performance when applied to noisy or reverberant speech. But how can one tell whether speech is noisy or reverberant? We propose Brouhaha, a pipeline to simulate audio segments recorded in noisy and reverberant conditions. We then use the simulated audio to jointly train the Brouhaha model for voice activity detection, signal-to-noise ratio estimation, and C50 room acoustics prediction. We show how the predicted SNR and C50 values can be used to investigate and help diagnose errors made by automatic speech processing tools (such as pyannote.audio for speaker diarization or OpenAI's Whisper for automatic speech recognition). Both our pipeline and a pretrained model are open source and shared with the speech community.
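To make the pretrained model concrete, here is a minimal usage sketch running Brouhaha through pyannote.audio's inference API. The checkpoint identifier and the exact per-frame output layout are assumptions based on the project's public documentation rather than this abstract; consult the Brouhaha repository for authoritative instructions.

# Minimal sketch, not from the paper: running a pretrained Brouhaha model
# with pyannote.audio. The checkpoint name "pyannote/brouhaha" and the
# (vad, snr, c50) frame layout are assumptions; see the project README.
from pyannote.audio import Model, Inference

model = Model.from_pretrained("pyannote/brouhaha")  # may require an auth token
inference = Inference(model)

output = inference("audio.wav")  # sliding-window predictions over the file
for frame, (vad, snr, c50) in output:
    # vad: speech probability; snr: estimated speech-to-noise ratio (dB);
    # c50: estimated room acoustics (dB). Layout assumed per the README.
    print(f"{frame.middle:8.3f}s  vad={vad:.2f}  snr={snr:.1f}  c50={c50:.1f}")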
Quantifying sources of variability in infancy research using the infant-directed-speech preference
Psychological scientists have become increasingly concerned with issues related to methodology and replicability, and infancy researchers in particular face specific challenges related to replicability: For example, high-powered studies are difficult to conduct, testing conditions vary across labs, and different labs have access to different infant populations. Addressing these concerns, we report on a large-scale, multisite study aimed at (a) assessing the overall replicability of a single theoretically important phenomenon and (b) examining methodological, cultural, and developmental moderators. We focus on infants' preference for infant-directed speech (IDS) over adult-directed speech (ADS). Stimuli of mothers speaking to their infants and to an adult in North American English were created using seminaturalistic laboratory-based audio recordings. Infants' relative preference for IDS and ADS was assessed across 67 laboratories in North America, Europe, Australia, and Asia using the three common methods for measuring infants' discrimination (head-turn preference, central fixation, and eye tracking). The overall meta-analytic effect size (Cohen's d) was 0.35, 95% confidence interval = [0.29, 0.42], which was reliably above zero but smaller than the meta-analytic mean computed from previous literature (0.67). The IDS preference was significantly stronger in older children, in those children for whom the stimuli matched their native language and dialect, and in data from labs using the head-turn preference procedure. Together, these findings replicate the IDS preference but suggest that its magnitude is modulated by development, native-language experience, and testing procedure. (This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 798658.)
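For readers unfamiliar with how a pooled estimate like the d = 0.35 [0.29, 0.42] reported above is obtained, the sketch below shows a standard random-effects aggregation (DerSimonian-Laird) over per-lab effect sizes. It is not the study's analysis code; the per-lab values are made-up placeholders.

# Illustrative sketch only: pooling per-lab effect sizes with a
# DerSimonian-Laird random-effects model. Values are hypothetical.
import numpy as np

d = np.array([0.42, 0.18, 0.55, 0.30, 0.27])  # per-lab Cohen's d (hypothetical)
v = np.array([0.02, 0.03, 0.05, 0.02, 0.04])  # per-lab sampling variances (hypothetical)

# Between-lab variance (tau^2) from the fixed-effects fit
w = 1 / v
d_fixed = np.sum(w * d) / np.sum(w)
Q = np.sum(w * (d - d_fixed) ** 2)
tau2 = max(0.0, (Q - (len(d) - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

# Random-effects weights, pooled estimate, and 95% confidence interval
w_star = 1 / (v + tau2)
d_pooled = np.sum(w_star * d) / np.sum(w_star)
se = np.sqrt(1 / np.sum(w_star))
print(f"pooled d = {d_pooled:.2f}, 95% CI = [{d_pooled - 1.96*se:.2f}, {d_pooled + 1.96*se:.2f}]")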
Workshop on Corpus Collection, (Semi)-Automated Analysis, and Modeling of Large-Scale Naturalistic Language Acquisition Data
The main goal of this full-day workshop is to bring together researchers from several distinct fields: behavioral psychologists studying language acquisition, speech technology researchers, linguists, and computational modelers of cognitive development. These groups are broadly interested in the same questions, i.e. what is the nature of speech and language, and how might a system learn to process it in supervised or unsupervised ways? Since the groups interested in these questions work on different analysis levels, cross-pollination has been sparse. Recent technological innovations have made collecting long naturalistic recordings of children's home environment far simpler than in the past. However, the raw output of such recordings is not immediately usable for most analyses. Simultaneously, speech technology (ST) and machine learning tools have improved immensely over the past decade, making it feasible to use such tools with increasingly diverse and noise-laden data. Relatedly, cognitively viable computational models have made recent strides in explaining learning and development, but few such models can be applied to novel data-sets without encountering many hurdles about translatability across frameworks. This workshop brings together experts from all of these areas, and seeks to build bridges across them, with insight from other similar interdisciplinary efforts in other areas of cognitive science. Talks will discuss the match between the theory-driven questions researchers would like to ask, and the answers the current state of the art allows. The program committee is part of a newly formed group called DARCLE (Daylong Audio Recordings of Children's Language Environment); with the help of an NSF grant, DARCLE has created a repository called HomeBank for raw data, metadata, and analysis/processing tools for long-form recordings of child language. This workshop is an opportunity to network with related efforts in Europe, and for a talk and demo of a related effort, the NSF-funded Speech Recognition Virtual Kitchen.
Young toddlers' word comprehension is flexible and efficient.
Much of what is known about word recognition in toddlers comes from eyetracking studies. Here we show that the speed and facility with which children recognize words, as revealed in such studies, cannot be attributed to a task-specific, closed-set strategy; rather, children's gaze to referents of spoken nouns reflects successful search of the lexicon. Toddlers' spoken word comprehension was examined in the context of pictures that had two possible names (such as a cup of juice which could be called "cup" or "juice") and pictures that had only one likely name for toddlers (such as "apple"), using a visual world eye-tracking task and a picture-labeling task (n = 77, mean age 21 months). Toddlers were just as fast and accurate in fixating named pictures with two likely names as pictures with one. If toddlers do name pictures to themselves, the name provides no apparent benefit in word recognition, because there is no cost to understanding an alternative lexical construal of the picture. In toddlers, as in adults, spoken words rapidly evoke their referents.