
    Word Learning in 6-16 Month Old Infants

    Understanding words requires infants to not only isolate words from the speech around them and delineate concepts from their world experience, but also to establish which words signify which concepts, in all and only the right set of circumstances. Previous research places the onset of this ability around infants' first birthdays, at which point they have begun to solidify their native language phonology, and have learned a good deal about categories, objects, and people. In this dissertation, I present research that alters this accepted timeline. In Study 1, I find that by 6 months of age, infants demonstrate understanding of around a dozen words for foods and body parts. Around 13-14 months of age, performance increases significantly. In Study 2, I find that for a set of early non-nouns, e.g. 'uh-oh' and 'eat', infants do not show understanding until 10 months, but again show a big comprehension boost around 13-14 months. I discuss possible reasons for the onset of noun comprehension at 6 months, the relative delay in non-noun comprehension, and the performance boost for both word types around 13-14 months. In Study 3, I replicate and extend Study 1's findings, showing that around 6 months infants also understand food and body-part words when these words are spoken by a new person, but conversely, by 12 months, show poor word comprehension if a single vowel in the word is changed, even when the speaker is highly familiar. Taken together, these results suggest that word learning begins before infants have fully solidified their native language phonology, that certain generalizations about words are available to infants at the outset of word comprehension, and that infants are able to learn words for complex object and event categories before their first birthday. Implications for language acquisition and cognitive development more broadly are discussed.

    Mothers’ work status and 17‐month‐olds’ productive vocabulary

    Literature examining the effects of mothers’ work status on infant language development is mixed, with little focus on varying work schedules and early vocabulary. We use naturalistic data to analyze the productive vocabulary of 44 17-month-olds in relation to mothers’ work status (full time, part time, stay at home) at 6 and 18 months. Infants who experienced a combination of care from mothers and other caretakers had larger productive vocabularies than infants in solely full-time maternal or solely other-caretaker care. These naturalistic data suggest that this care combination may be particularly beneficial for early lexical development.

    Analyzing the effect of sibling number on input and output in the first 18 months

    Prior research suggests that across a wide range of cognitive, educational, and health-based measures, first-born children outperform their later-born peers. Expanding on this literature using naturalistic home-recorded data and parental vocabulary reports, we find that early language outcomes vary by number of siblings in a sample of 43 English-learning U.S. children from mid-to-high socioeconomic status homes. More specifically, we find that children in our sample with two or more (but not one) older siblings had smaller productive vocabularies at 18 months, and heard less input from caregivers across several measures, than their peers with fewer than two older siblings. We discuss implications regarding what infants experience and learn across a range of family sizes in infancy.

    Developing a cross-cultural annotation system and metacorpus for studying infants' real world language experience

    Recent issues around reproducibility, best practices, and cultural bias impact naturalistic observational approaches as much as experimental approaches, but there has been less focus on this area. Here, we present a new approach that leverages cross-laboratory, collaborative, interdisciplinary efforts to examine important psychological questions. We illustrate this approach with a particular project that examines similarities and differences in children's early experiences with language. This project develops a comprehensive start-to-finish analysis pipeline by developing a flexible and systematic annotation system, and implementing this system across a sampling from a metacorpus of audio recordings of diverse language communities. This resource is publicly available for use, sensitive to cultural differences, and flexible to address a variety of research questions. It is also uniquely suited for use in the development of tools for automated analysis.
    Affiliations: Melanie Soderstrom (University of Manitoba, Canada); Marisa Casillas (University of Chicago, United States); Elika Bergelson (Duke University, United States); Celia Renata Rosemberg (CONICET, Centro Interdisciplinario de Investigaciones en Psicología Matemática y Experimental Dr. Horacio J. A. Rimoldi, Argentina); Florencia Alam (CONICET, Centro Interdisciplinario de Investigaciones en Psicología Matemática y Experimental Dr. Horacio J. A. Rimoldi, Argentina); Anne S. Warlaumont (University of California, Los Angeles, United States); John Bunce (California State University, United States)

    Quantifying child directed speech cross-culturally across development

    Child-directed speech (CDS) influences language development (e.g., Golinkoff et al., 2015), but varies across cultural and demographic groups (Hoff, 2006). Recent work examining speech heard by North American English (NAE) infants found an increased proportion of CDS with age (Bergelson et al., 2018). Quantity of CDS remained relatively constant across age, while quantity of adult-directed speech (ADS) decreased. We replicate these findings using a different methodology, and expand them to include other language communities. Our data come from daylong audio recordings of 58 children ages 2-36 months from the ACLEW dataset (Bergelson et al., 2017; 30 children acquiring NAE, 10 UK English, 8 Argentinian Spanish, and 10 Tseltal/Mayan). Ten randomly selected 2-min segments (Tseltal: nine 5-min segments) from each child were annotated for speaker gender, age (child or adult), and addressee for each utterance. We calculated the minutes per hour of CDS, ADS, and all speech. Preliminary analyses find high variability in overall language input across individuals, age, and culture, and partially replicate the Bergelson et al. (2018) pattern of results. Ongoing annotation will permit finer-grained analyses of sub-group differences. Further analyses will examine the influence of factors such as speaker gender, number of speakers, and maternal education.
    Affiliations: John Bunce (University of Manitoba, Canada); Marisa Casillas (Max Planck Institute for Psycholinguistics, Netherlands); Elika Bergelson (Duke University, United States); Alejandra Stein (CONICET, Centro Interdisciplinario de Investigaciones en Psicología Matemática y Experimental Dr. Horacio J. A. Rimoldi, Argentina); Anne Warlaumont (University of California, Los Angeles, United States); Celia Renata Rosemberg (CONICET, Centro Interdisciplinario de Investigaciones en Psicología Matemática y Experimental Dr. Horacio J. A. Rimoldi, Argentina); Jessica Kirby (University of Manitoba, Canada); Melanie Soderstrom (University of Manitoba, Canada)
    Presented at the 177th Meeting of the Acoustical Society of America, Louisville, United States. Acoustical Society of America.
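    The rate measure described above (minutes per hour of CDS and ADS) amounts to summing utterance durations by addressee within the annotated segments and normalizing by the total annotated time. The sketch below is a hypothetical illustration of that calculation only, not the project's actual pipeline; the data structure and field names ('addressee', 'duration_s') are assumptions.

        # Hypothetical illustration of the rate measure described above:
        # minutes per hour of child-directed (CDS) and adult-directed (ADS) speech,
        # computed from utterance-level annotations of sampled segments.
        def speech_minutes_per_hour(utterances, annotated_minutes):
            """utterances: dicts with 'addressee' ('child' or 'adult') and
            'duration_s' (seconds); annotated_minutes: total annotated time."""
            minutes = {"child": 0.0, "adult": 0.0}
            for u in utterances:
                if u["addressee"] in minutes:
                    minutes[u["addressee"]] += u["duration_s"] / 60.0
            hours = annotated_minutes / 60.0
            return {f"{k}_min_per_hr": m / hours for k, m in minutes.items()}

        # Example: ten 2-minute segments (20 annotated minutes) for one child
        sample = [
            {"addressee": "child", "duration_s": 3.2},
            {"addressee": "adult", "duration_s": 1.8},
            {"addressee": "child", "duration_s": 2.5},
        ]
        print(speech_minutes_per_hour(sample, annotated_minutes=20))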

    Brouhaha: multi-task training for voice activity detection, speech-to-noise ratio, and C50 room acoustics estimation

    Most automatic speech processing systems are sensitive to the acoustic environment, with degraded performance when applied to noisy or reverberant speech. But how can one tell whether speech is noisy or reverberant? We propose Brouhaha, a pipeline to simulate audio segments recorded in noisy and reverberant conditions. We then use the simulated audio to jointly train the Brouhaha model for voice activity detection, speech-to-noise ratio estimation, and C50 room acoustics prediction. We show how the predicted SNR and C50 values can be used to investigate and help diagnose errors made by automatic speech processing tools (such as pyannote.audio for speaker diarization or OpenAI's Whisper for automatic speech recognition). Both our pipeline and a pretrained model are open source and shared with the speech community.
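    For readers who want to try the pretrained model, a minimal sketch follows. It assumes the checkpoint is distributed through pyannote.audio's Model interface under a Hugging Face identifier such as "pyannote/brouhaha" (gated behind an access token); exact call signatures may differ across pyannote.audio versions, so treat this as an outline rather than the project's canonical usage.

        # Sketch: run the pretrained Brouhaha model over an audio file and print
        # per-frame voice activity, SNR, and C50 estimates.
        # Assumes pyannote.audio is installed and a Hugging Face token is available.
        from pyannote.audio import Model, Inference

        model = Model.from_pretrained("pyannote/brouhaha",
                                      use_auth_token="HF_TOKEN_GOES_HERE")
        inference = Inference(model)
        output = inference("audio.wav")  # sliding-window frame-level predictions

        for frame, (vad, snr, c50) in output:
            t = frame.middle  # frame midpoint, in seconds
            print(f"{t:8.3f}s  vad={100 * vad:.0f}%  snr={snr:.1f} dB  c50={c50:.1f} dB")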

    Quantifying sources of variability in infancy research using the infant-directed-speech preference

    Psychological scientists have become increasingly concerned with issues related to methodology and replicability, and infancy researchers in particular face specific challenges related to replicability: For example, high-powered studies are difficult to conduct, testing conditions vary across labs, and different labs have access to different infant populations. Addressing these concerns, we report on a large-scale, multisite study aimed at (a) assessing the overall replicability of a single theoretically important phenomenon and (b) examining methodological, cultural, and developmental moderators. We focus on infants’ preference for infant-directed speech (IDS) over adult-directed speech (ADS). Stimuli of mothers speaking to their infants and to an adult in North American English were created using seminaturalistic laboratory-based audio recordings. Infants’ relative preference for IDS and ADS was assessed across 67 laboratories in North America, Europe, Australia, and Asia using the three common methods for measuring infants’ discrimination (head-turn preference, central fixation, and eye tracking). The overall meta-analytic effect size (Cohen’s d) was 0.35, 95% confidence interval = [0.29, 0.42], which was reliably above zero but smaller than the meta-analytic mean computed from previous literature (0.67). The IDS preference was significantly stronger in older children, in those children for whom the stimuli matched their native language and dialect, and in data from labs using the head-turn preference procedure. Together, these findings replicate the IDS preference but suggest that its magnitude is modulated by development, native-language experience, and testing procedure. (This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 798658.)
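    To make the reported numbers concrete, the sketch below computes an inverse-variance-weighted (fixed-effect) mean effect size and its 95% confidence interval from per-lab Cohen's d values. The per-lab values here are made up for illustration, and the study itself used more elaborate meta-analytic and mixed-effects models, so this is only a schematic of how a pooled estimate like d = 0.35, 95% CI [0.29, 0.42] is formed.

        import math

        def pooled_effect(d_values, variances):
            """Fixed-effect meta-analytic mean of Cohen's d with a 95% CI."""
            weights = [1.0 / v for v in variances]           # inverse-variance weights
            mean_d = sum(w * d for w, d in zip(weights, d_values)) / sum(weights)
            se = math.sqrt(1.0 / sum(weights))               # standard error of the pooled mean
            return mean_d, (mean_d - 1.96 * se, mean_d + 1.96 * se)

        # Hypothetical per-lab effect sizes and sampling variances (illustration only)
        d_values = [0.41, 0.28, 0.36, 0.22, 0.47]
        variances = [0.010, 0.008, 0.012, 0.009, 0.011]

        mean_d, (lo, hi) = pooled_effect(d_values, variances)
        print(f"meta-analytic d = {mean_d:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")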

    Young toddlers' word comprehension is flexible and efficient.

    Much of what is known about word recognition in toddlers comes from eye-tracking studies. Here we show that the speed and facility with which children recognize words, as revealed in such studies, cannot be attributed to a task-specific, closed-set strategy; rather, children's gaze to referents of spoken nouns reflects successful search of the lexicon. Toddlers' spoken word comprehension was examined in the context of pictures that had two possible names (such as a cup of juice, which could be called "cup" or "juice") and pictures that had only one likely name for toddlers (such as "apple"), using a visual world eye-tracking task and a picture-labeling task (n = 77, mean age 21 months). Toddlers were just as fast and accurate in fixating named pictures with two likely names as pictures with one. If toddlers do name pictures to themselves, the name provides no apparent benefit in word recognition, because there is no cost to understanding an alternative lexical construal of the picture. In toddlers, as in adults, spoken words rapidly evoke their referents.