90 research outputs found

    Yoruba Vowel Elision and Compounding


    Contending with foreign accent variability in early lexical acquisition.

    By their second birthday, children are beginning to map meaning to form with relative ease. One challenge for these developing abilities is separating information relevant to word identity (i.e. phonemic information) from irrelevant information (e.g. voice and foreign accent). Nevertheless, little is known about toddlers’ abilities to ignore irrelevant phonetic detail when faced with the demanding task of word learning. In an experiment with English-learning toddlers, we examined the impact of foreign accent on word learning. Findings revealed that while toddlers aged 2;6 successfully generalized newly learned words spoken by a Spanish-accented speaker and a native English speaker, success of those aged 2;0 was restricted. Specifically, toddlers aged 2;0 failed to generalize words when trained by the native English speaker and tested by the Spanish-accented speaker. Data suggest that exposure to foreign accent in training may promote generalization of newly learned forms. These findings are considered in the context of developmental changes in early word representations.

    Learning classes of sounds in infancy

    Adults’ phonotactic learning is affected by perceptual biases. One such bias concerns learning of constraints affecting groups of sounds: all else being equal, learning constraints affecting a natural class (a set of sounds sharing some phonetic characteristic) is easier than learning a constraint affecting an arbitrary set of sounds. This perceptual bias could be a given, for example, the result of innately guided learning; alternatively, it could be due to human learners’ experience with sounds. Using artificial grammars, we investigated whether such a bias arises in development, or whether it is present as soon as infants can learn phonotactics. Seven-month-old English-learning infants fail to generalize a phonotactic pattern involving fricatives and nasals, which do not form a coherent phonetic group, but succeed with the natural class of oral and nasal stops. In this paper, we report an experiment that explored whether those results also hold in a cohort of 4-month-olds. Unlike the older infants, 4-month-olds were able to generalize both groups, suggesting that the perceptual bias that makes phonotactic constraints on natural classes easier to learn is likely the effect of experience.

    English-learning infants’ perception of word stress patterns

    Adult speakers of different free-stress languages (e.g., English, Spanish) differ both in their sensitivity to lexical stress and in their processing of suprasegmental and vowel quality cues to stress. In a head-turn preference experiment with a familiarization phase, both 8-month-old and 12-month-old English-learning infants discriminated between initial stress and final stress among lists of Spanish-spoken disyllabic nonwords that were segmentally varied (e.g. [ˈnila, ˈtuli] vs [luˈta, puˈki]). This is evidence that English-learning infants are sensitive to lexical stress patterns, instantiated primarily by suprasegmental cues, during the second half of the first year of life.

    CNN Based Touch Interaction Detection for Infant Speech Development

    In this paper, we investigate the detection of interaction in videos between two people, namely, a caregiver and an infant. We are interested in a particular type of human interaction known as touch, as touch is a key social and emotional signal used by caregivers when interacting with their children. We propose an automatic touch event recognition method to determine the potential time interval when the caregiver touches the infant. In addition to labeling the touch events, we also classify them into six touch types based on which body part of the infant has been touched. CNN-based human pose estimation and person segmentation are used to analyze the spatial relationship between the caregiver’s hands and the infant. We demonstrate promising results for touch detection and show great potential for reducing human effort in manually generating precise touch annotations.
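    The core spatial test the abstract describes, checking whether the caregiver’s hand keypoints (from a pose estimator) land on or near the infant’s segmented region, can be sketched as follows. This is a minimal illustration, not the paper’s method: the function name, the (x, y) keypoint convention, and the pixel threshold are assumptions.

    ```python
    import numpy as np

    def detect_touch(infant_mask, hand_points, radius=5):
        """Return True if any hand keypoint lies on, or within `radius` pixels
        of, the infant's binary segmentation mask.

        infant_mask : 2-D boolean array (the infant's segmented region)
        hand_points : iterable of (x, y) hand keypoints from a pose estimator
        """
        ys, xs = np.nonzero(infant_mask)
        if len(xs) == 0:          # no infant visible in this frame
            return False
        for hx, hy in hand_points:
            # distance from the keypoint to the nearest infant pixel
            d = np.min(np.hypot(xs - hx, ys - hy))
            if d <= radius:
                return True
        return False

    # Toy frame: a 10x10 "infant" region and two candidate hand positions.
    mask = np.zeros((20, 20), dtype=bool)
    mask[5:15, 5:15] = True
    touching = detect_touch(mask, [(10, 10)])   # keypoint inside the region
    apart = detect_touch(mask, [(0, 0)])        # keypoint far from the region
    ```

    Running this per frame and grouping consecutive positive frames would yield the candidate touch intervals the abstract mentions; classifying the six touch types would additionally require knowing which body-part region of the mask was contacted.
    
    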

    Building a Multimodal Lexicon: Lessons from Infants' Learning of Body Part Words

    Human children outperform artificial learners because the former quickly acquire a multimodal, syntactically informed, and ever-growing lexicon with little evidence. Most of this lexicon is unlabelled and processed with unsupervised mechanisms, leading to robust and generalizable knowledge. In this paper, we summarize results related to 4-month-olds’ learning of body part words. In addition to providing direct experimental evidence on some of the Workshop’s assumptions, we suggest several avenues of research that may be useful to those developing and testing artificial learners. A first set of studies using a controlled laboratory learning paradigm shows that human infants learn better from tactile-speech than visual-speech co-occurrences, suggesting that the signal/modality should be considered when designing and exploiting multimodal learning tasks. A series of observational studies documents the ways in which parents naturally structure the multimodal information they provide for infants, which probably happens in lexically specific ways. Finally, our results suggest that 4-month-olds can pick up on co-occurrences between words and specific touch locations (a prerequisite of learning an association between a body part word and the referent on the child’s own body) after very brief exposures, which we interpret as most compatible with unsupervised predictive models of learning.

    Bayesian Optimal Experimental Design for Constitutive Model Calibration

    Computational simulation is increasingly relied upon for high-consequence engineering decisions, and a foundational element of solid mechanics simulations, such as finite element analysis (FEA), is a credible constitutive or material model. Calibration of these complex models is an essential step; however, the selection, calibration, and validation of material models is often a discrete, multi-stage process that is decoupled from material characterization activities, which means the data collected does not always align with the data that is needed. To address this issue, an integrated workflow for delivering an enhanced characterization and calibration procedure, Interlaced Characterization and Calibration (ICC), is introduced. This framework leverages Bayesian optimal experimental design (BOED) to select the optimal load path for a cruciform specimen in order to collect the most informative data for model calibration. The critical first piece of algorithm development is to demonstrate the active experimental design for a fast model with simulated data. For this demonstration, a material point simulator that models a plane stress elastoplastic material subject to bi-axial loading was chosen. The ICC framework is demonstrated on two exemplar problems in which BOED is used to determine which load step to take, e.g., in which direction to increment the strain, at each iteration of the characterization and calibration cycle. Calibration results from data obtained by adaptively selecting the load path within the ICC algorithm are compared to results from data generated under two naive static load paths that were chosen a priori based on human intuition. In these exemplar problems, data generated in an adaptive setting resulted in calibrated model parameters with reduced measures of uncertainty compared to the static settings.
    Comment: 39 pages, 13 figures
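    The adaptive step the abstract describes, scoring candidate load steps by how much information they are expected to yield about the model parameters, can be sketched on a toy problem. This is a minimal illustration under stated assumptions, not the paper’s ICC implementation: the scalar linear simulator, the Gaussian prior, the candidate designs, and the nested Monte Carlo estimator of expected information gain are all choices made here for brevity.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy stand-in for the fast material point simulator: a scalar linear
    # response y = theta * d + noise, where d plays the role of a load step.
    def simulate(theta, d, noise=0.1):
        return theta * d + noise * rng.standard_normal()

    def expected_information_gain(d, prior_samples, noise=0.1,
                                  n_outer=200, n_inner=500):
        """Nested Monte Carlo estimate of the expected information gain
        (mutual information between parameter and observation) of design d."""
        outer = rng.choice(prior_samples, size=n_outer)
        inner = rng.choice(prior_samples, size=n_inner)
        eig = 0.0
        for theta in outer:
            y = simulate(theta, d, noise)
            # log-likelihood of y under the theta that generated it
            log_lik = -0.5 * ((y - theta * d) / noise) ** 2
            # log marginal likelihood, averaged over inner prior samples
            marg = np.mean(np.exp(-0.5 * ((y - inner * d) / noise) ** 2))
            eig += (log_lik - np.log(marg + 1e-300)) / n_outer
        return eig

    prior = rng.normal(1.0, 0.5, size=2000)   # prior belief about the parameter
    candidates = [0.1, 0.5, 1.0, 2.0]         # candidate load-step magnitudes
    eigs = [expected_information_gain(d, prior) for d in candidates]
    best = candidates[int(np.argmax(eigs))]   # design chosen for the next step
    ```

    In an interlaced loop like ICC, one would run the chosen step on the specimen, update the posterior with the measurement, and repeat with the posterior as the new prior; here the larger load step is more informative simply because the signal grows with d while the noise is fixed.
    
    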

    The Effect of Somatosensory Input on Word Recognition in Typical Children and Those With Speech Sound Disorder

    Purpose: Recent work suggests that speech perception is influenced by the somatosensory system and that oral sensorimotor disruption has specific effects on the perception of speech both in infants who have not yet begun to talk and in older children and adults with ample speech production experience; however, we do not know how such disruptions affect children with speech sound disorder (SSD). Response to disruption of would-be articulators during speech perception could reveal how sensorimotor linkages work for both typical and atypical speech and language development. Such linkages are crucial to advancing our knowledge on how both typically developing and atypically developing children produce and perceive speech. Method: Using a looking-while-listening task, we explored the impact of a sensorimotor restrictor on the recognition of words whose onsets involve late-developing sounds (s, ʃ) for both children with typical development (TD) and their peers with SSD. Results: Children with SSD showed a decrement in performance when they held a restrictor in their mouths during the task, but this was not the case for children with TD. This effect on performance was only observed for the specific speech sounds blocked by the would-be articulators. Conclusion: We argue that these findings provide evidence for altered perceptual motor pathways in children with SSD. Supplemental Material: https://doi.org/10.23641/asha.2180944