18 research outputs found

    Is Maternal Touch Used Referentially?

    Early social interactions are highly multimodal and include a wealth of cues (e.g., speech, facial expressions, motion, gestures, and touch). Infant-directed speech (IDS) by itself may aid in language development. Touch by itself has also been shown to play an important role in dyadic interactions, affecting both the infant and the caregiver. However, little is known about the impact of the combination of these two modes of communication on infant language development. In this thesis, I hypothesize that caregiver touch is provided in synchrony with speech, providing the language-learning infant with cues that may help her not only to find words in the continuous stream of speech, but also to map between words and their referents. I examined the naturalistic use of touch by having mothers read books to their 5-month-old infants. Results suggest that caregivers temporally align touches with the production of target words. Thus, the infant is provided with yet another cue to segment the speech stream and pull out the words produced by the caregiver. In addition, results suggest that caregivers tend to touch in locations congruent with their speech (e.g., touching the belly while saying the word belly). This might highlight the meaning of target words to infants through the use of touch. Thus, results suggest that caregiver touch may be useful to the language-learning infant for both segmentation and word learning.

    Do Children Use Multi-Word Information in Real-Time Sentence Comprehension?

    Meaning in language emerges from multiple words, and children are sensitive to multi-word frequency from infancy. While children successfully use cues from single words to generate linguistic predictions, it is less clear whether and how they use multi-word sequences to guide real-time language processing, and whether they form predictions on the basis of multi-word information or pairwise associations. We address these questions in two visual-world eye-tracking experiments with 5- to 8-year-old children. In Experiment 1, we asked whether children generate more robust predictions for the sentence-final object of highly frequent sequences (e.g., "Throw the ball"), compared to less frequent sequences (e.g., "Throw the book"). We further examined whether gaze patterns reflect event knowledge or phrasal frequency by comparing the processing of phrases that have the same event structure but differ in multi-word content (e.g., "Brush your teeth" vs. "Brush her teeth"). In the second study, we employed a training paradigm to ask whether children are capable of generating predictions from novel multi-word associations while controlling for the overall frequency of the sequences. While the results of Experiment 1 suggested that children primarily relied on event associations to generate real-time predictions, those of Experiment 2 showed that the same children were able to use recurring novel multi-word sequences to generate real-time linguistic predictions. Together, these findings suggest that children can draw on multi-word information to generate linguistic predictions, in a context-dependent fashion, and highlight the need to account for the influence of multi-word sequences in models of language processing.

    Building a Multimodal Lexicon: Lessons from Infants' Learning of Body Part Words

    Human children outperform artificial learners because the former quickly acquire a multimodal, syntactically informed, and ever-growing lexicon with little evidence. Most of this lexicon is unlabelled and processed with unsupervised mechanisms, leading to robust and generalizable knowledge. In this paper, we summarize results related to 4-month-olds’ learning of body part words. In addition to providing direct experimental evidence on some of the Workshop’s assumptions, we suggest several avenues of research that may be useful to those developing and testing artificial learners. A first set of studies using a controlled laboratory learning paradigm shows that human infants learn better from tactile-speech than visual-speech co-occurrences, suggesting that the signal/modality should be considered when designing and exploiting multimodal learning tasks. A series of observational studies documents the ways in which parents naturally structure the multimodal information they provide for infants, which probably happens in lexically specific ways. Finally, our results suggest that 4-month-olds can pick up on co-occurrences between words and specific touch locations (a prerequisite of learning an association between a body part word and the referent on the child’s own body) after very brief exposures, which we interpret as most compatible with unsupervised predictive models of learning.

    CNN Based Touch Interaction Detection for Infant Speech Development

    In this paper, we investigate the detection of interaction in videos between two people, namely, a caregiver and an infant. We are interested in a particular type of human interaction known as touch, as touch is a key social and emotional signal used by caregivers when interacting with their children. We propose an automatic touch event recognition method to determine the potential time interval when the caregiver touches the infant. In addition to labeling the touch events, we also classify them into six touch types based on which body part of the infant has been touched. CNN-based human pose estimation and person segmentation are used to analyze the spatial relationship between the caregiver's hands and the infant. We demonstrate promising results for touch detection and show great potential for reducing the human effort involved in manually generating precise touch annotations.
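
    The paper's pipeline is only described at a high level, but the core spatial test (does a caregiver's hand land on the infant?) is easy to illustrate. Below is a minimal sketch, assuming per-frame hand keypoints from some off-the-shelf pose estimator and a binary infant mask from a person-segmentation model; the function names, the pixel margin, and the minimum interval length are illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def touch_frames(hand_keypoints, infant_masks, margin=5):
    """Flag frames in which any caregiver hand keypoint lands on (or within
    `margin` pixels of) the infant's segmentation mask.

    hand_keypoints : list of Nx2 arrays of (x, y) hand keypoints, one per frame
    infant_masks   : list of HxW boolean arrays, one per frame
    margin         : pixel tolerance around the mask (illustrative value)
    """
    flags = []
    for kps, mask in zip(hand_keypoints, infant_masks):
        grown = binary_dilation(mask, iterations=margin)  # pad the infant region
        h, w = mask.shape
        hit = any(
            0 <= int(round(y)) < h and 0 <= int(round(x)) < w
            and grown[int(round(y)), int(round(x))]
            for x, y in kps
        )
        flags.append(hit)
    return flags

def frames_to_intervals(flags, min_len=3):
    """Merge runs of consecutive touch frames into candidate touch intervals."""
    intervals, start = [], None
    for i, f in enumerate(list(flags) + [False]):  # sentinel closes the last run
        if f and start is None:
            start = i
        elif not f and start is not None:
            if i - start >= min_len:
                intervals.append((start, i))
            start = None
    return intervals
```

    Grouping per-frame hits into intervals mirrors the paper's goal of recovering the time interval of each touch, though the smoothing actually used there may differ.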

    The Effect of Somatosensory Input on Word Recognition in Typical Children and Those With Speech Sound Disorder

    Purpose: Recent work suggests that speech perception is influenced by the somatosensory system and that oral sensorimotor disruption has specific effects on the perception of speech both in infants who have not yet begun to talk and in older children and adults with ample speech production experience; however, we do not know how such disruptions affect children with speech sound disorder (SSD). Response to disruption of would-be articulators during speech perception could reveal how sensorimotor linkages work for both typical and atypical speech and language development. Such linkages are crucial to advancing our knowledge of how both typically developing and atypically developing children produce and perceive speech. Method: Using a looking-while-listening task, we explored the impact of a sensorimotor restrictor on the recognition of words whose onsets involve late-developing sounds (s, ʃ) for both children with typical development (TD) and their peers with SSD. Results: Children with SSD showed a decrement in performance when they held a restrictor in their mouths during the task, but this was not the case for children with TD. This effect on performance was only observed for the specific speech sounds blocked by the would-be articulators. Conclusion: We argue that these findings provide evidence for altered perceptual motor pathways in children with SSD. Supplemental Material: https://doi.org/10.23641/asha.2180944

    Touch Event Recognition For Human Interaction

    This paper investigates the interaction between two people, namely, a caregiver and an infant. A particular type of action in human interaction known as “touch” is described. We propose a method to detect “touch events” that uses color and motion features to track the hand positions of the caregiver. Our approach addresses the problem of hand occlusions during tracking. We propose an event recognition method to determine the time when the caregiver touches the infant and label it as a “touch event” by analyzing the merging contours of the caregiver’s hands and the infant’s contour. The proposed method shows promising results compared to human-annotated data.
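
    Unlike the CNN-based approach above, this earlier method relies on color/motion features and contour merging. The sketch below reduces that idea to its simplest form with OpenCV (version 4 API assumed): a placeholder skin-color range isolates hand blobs, the infant's contour is assumed to be supplied separately (e.g., from background subtraction or annotation), and a touch frame is flagged when a hand blob's bounding box overlaps the infant's bounding box, a crude stand-in for the contour-merging test described in the paper. All thresholds are made up, and the motion-based hand tracking is omitted.

```python
import cv2
import numpy as np

# Placeholder HSV skin range; real systems calibrate this per video.
SKIN_LO = np.array([0, 40, 60], dtype=np.uint8)
SKIN_HI = np.array([25, 180, 255], dtype=np.uint8)

def hand_contours(frame_bgr, min_area=500):
    """Rough skin-color segmentation followed by contour extraction."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, SKIN_LO, SKIN_HI)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [c for c in contours if cv2.contourArea(c) > min_area]

def boxes_overlap(a, b):
    """True if the bounding boxes of two contours intersect."""
    ax, ay, aw, ah = cv2.boundingRect(a)
    bx, by, bw, bh = cv2.boundingRect(b)
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def is_touch_frame(frame_bgr, infant_contour):
    """Flag a candidate touch when any hand blob overlaps the infant's contour."""
    return any(boxes_overlap(hand, infant_contour)
               for hand in hand_contours(frame_bgr))
```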

    Dyadic Synchrony and Responsiveness in the First Year: Associations with Autism Risk

    In the first year of life, the ability to engage in sustained synchronous interactions develops as infants learn to match social partner behaviors and sequentially regulate their behaviors in response to others. Difficulties developing competence in these early social building blocks can impact later language skills, joint attention, and emotion regulation. For children at elevated risk for autism spectrum disorder (ASD), early dyadic synchrony and responsiveness difficulties may be indicative of emerging ASD and/or developmental concerns. As part of a prospective developmental monitoring study, infant siblings of children with ASD (high-risk group, n = 104) or typical development (low-risk group, n = 71) and their mothers completed a standardized play task when infants were 6, 9, and/or 12 months of age. These interactions were coded for the frequency and duration of infant and mother gaze, positive affect, and vocalizations. Using these codes, theory-driven composites were created to index dyadic synchrony and infant/maternal responsiveness. Multilevel models revealed significant risk group differences in dyadic synchrony and infant responsiveness by 12 months of age. In addition, high-risk infants with higher dyadic synchrony and infant responsiveness at 12 months received significantly higher receptive and expressive language scores at 36 months. The findings of the present study highlight that promoting dyadic synchrony and responsiveness may aid in advancing optimal development in children at elevated risk for autism. Lay Summary: In families raising children with an autism spectrum disorder (ASD), younger siblings are at elevated risk for social communication difficulties. The present study explored whether social-communication differences were evident during a parent–child play task at 6, 9, and 12 months of age. For infant siblings of children with ASD, social differences during play were observed by 12 months of age and may inform ongoing monitoring and intervention efforts.

    The Effect of Multimodal Infant-Directed Communication on Language Acquisition

    Human infants experience a rich complex of intertwined and patterned cues from their surrounding environment. For example, infant-directed speech is systematically combined with a variety of non-speech cues that provide valuable information concerning the location and meaning of linguistic units. Little work has examined the role that infant-directed touch plays when combined with infant-directed speech. This neglect of touch cues in the study of multimodal infant-directed communication is surprising given the prominence of infant-directed touch in early infancy. This dissertation addresses this gap by investigating how infant-directed touch is used with infant-directed speech and how infants may benefit from this combination of cues. Specifically, I examine the use of touch cues in combination with speech when access to auditory input is limited, and I examine whether attention plays a role in infants’ ability to track the frequency of audio-tactile events in continuous speech. In Chapter 1, I review the literature on the multimodality of infant-directed communication, focusing specifically on touch and speech. In Chapter 2, I show that when access to auditory input is limited, caregivers modify their multimodal infant-directed communication when interacting with their children who are deaf or hard of hearing. Specifically, findings show that compared to caregivers of children with normal hearing, caregivers of children who are deaf or hard of hearing are more variable in the frequency with which they use touch. Further, caregivers of children who are deaf or hard of hearing are more likely to align their touches with utterances directed to their children while maintaining fine temporal alignment between the two streams. In Chapter 3, I show that when infants are presented with audio-tactile events in a controlled experimental setting, they are able to use the cross-modal transitional probabilities of these events to segment the speech stream; yet attention, as measured through heart-rate deceleration, does not mediate this ability. Thus, these results show that audio-tactile events do not always trigger heart rate-defined sustained attention, and the proportion with which they do so is not related to how predictable these events are. Taken together, the results of these studies shed light on the role that touch may play in early language acquisition when it is presented within multimodal infant-directed input; yet they emphasize the need for further research and for more controlled experiments to directly examine the impact of touch on infants’ attention to multimodal input. These issues, along with ideas for future directions, are addressed in Chapter 4.
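
    Chapter 3's segmentation claim builds on standard transitional-probability (TP) computations. As a point of reference, the sketch below shows the unimodal version of that computation over a toy syllable stream: TPs are high within recurring "words" and dip at their edges, and a boundary is posited at each local dip. The dissertation's stimuli are audio-tactile and its analysis is not reproduced here; the toy lexicon, seed, and dip heuristic are all illustrative.

```python
import random
from collections import Counter

def transitional_probabilities(syllables):
    """Forward TP(y | x) = count(x followed by y) / count(x)."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    unit_counts = Counter(syllables[:-1])
    return {pair: n / unit_counts[pair[0]] for pair, n in pair_counts.items()}

def segment_at_tp_dips(syllables, tps):
    """Posit a word boundary wherever the TP between two adjacent syllables is a
    local minimum (lower than both neighbours); a common heuristic, not the
    dissertation's procedure."""
    seq = [tps[(a, b)] for a, b in zip(syllables, syllables[1:])]
    cuts = [i + 1 for i in range(1, len(seq) - 1)
            if seq[i] < seq[i - 1] and seq[i] < seq[i + 1]]
    pieces, start = [], 0
    for c in cuts:
        pieces.append(syllables[start:c])
        start = c
    pieces.append(syllables[start:])
    return pieces

# Toy stream: three made-up trisyllabic "words" concatenated in random order,
# so TPs are 1.0 within words and roughly 0.33 across word boundaries.
random.seed(1)
lexicon = [["tu", "pi", "ro"], ["go", "la", "bu"], ["da", "ko", "ti"]]
stream = [syll for word in random.choices(lexicon, k=50) for syll in word]
tps = transitional_probabilities(stream)
print(segment_at_tp_dips(stream, tps))
```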

    Does early unit size impact the formation of linguistic predictions? Grammatical gender as a case study

    Language learning and processing are impacted by linguistic predictions based on distributional information. Several learning models propose that the order in which distributional information is presented can impact the strength of the predictions learners form. Here, we use a visual-world paradigm to ask how exposure to unsegmented vs. segmented input can impact learning of the association between words by changing the size of early units. Specifically, we ask whether an increased reliance on multiword units (generated via exposure to unsegmented input) will lead to greater facilitation in processing. The prediction that initially treating the article-noun sequence as one unit will facilitate noun processing is made by the Starting Big approach (Arnon, 2010), but has not been tested. The study serves as a replication and extension of Siegelman and Arnon (2015), where adults showed better learning of article-noun pairings when exposed first to unsegmented input.

    Multimodal infant-directed communication: How caregivers combine tactile and linguistic cues

    This project examined the use of tactile cues with speech directed to infants in mother-infant book-reading interactions. Mothers in our sample read two books to their 5-month-old infants, one about body parts and the other about animals (each book included 4 target words). Our results revealed that touch is a frequent component of these interactions and that most mothers naturally employ touch when interacting with their infants in a linguistically rich interaction. Further, the majority of touch events co-occur with speech within a 250 ms window, pointing to the multimodal nature of infant-directed communication. Touch events that were accompanied by speech were longer in duration and had twice as many beats as touch events produced without speech. Moreover, words that were produced with touches were more exaggerated than words that were spoken without touches; i.e., the former had higher average and minimum fundamental frequency. Examining the temporal relationship between touch events and speech against a null distribution revealed that words and touches were well aligned in their onsets, and the variance of the onset alignment of words and touches was significantly lower in the observed distribution than in the null distribution. Moreover, a semantics-based analysis of the data revealed that the proportion of touches that were congruent with the words that accompanied them, in terms of the location of the touch and the meaning of the word, was significantly higher than would be expected by chance. Hence, our findings suggest that touch might not only serve as an accompanying cue to speech; it is also aligned with speech in a way that might be beneficial for the language-learning infant in identifying the edges of words as well as the meanings of some words, like body-part words, that can be produced in congruence with the location of the touch.
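
    The null-distribution analysis mentioned here can be made concrete with a small simulation. The sketch below shows one simple way to build such a null, assuming lists of word onsets and touch onsets in seconds: touches are re-placed uniformly at random within the session, and the observed variance of touch-to-nearest-word asynchronies is compared against the variance under random placement. The randomization scheme, variable names, and toy onsets are assumptions for illustration; the project's actual procedure may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def onset_asynchronies(word_onsets, touch_onsets):
    """For each touch, the signed lag to the nearest word onset (seconds)."""
    word_onsets = np.asarray(word_onsets)
    return np.array([t - word_onsets[np.argmin(np.abs(word_onsets - t))]
                     for t in touch_onsets])

def alignment_permutation_test(word_onsets, touch_onsets, session_end, n_perm=5000):
    """Compare the observed variance of touch-word asynchronies with a null
    distribution obtained by placing the same number of touches uniformly at
    random within the session (one simple choice of null)."""
    observed = onset_asynchronies(word_onsets, touch_onsets).var()
    null = np.empty(n_perm)
    for i in range(n_perm):
        fake_touches = rng.uniform(0.0, session_end, size=len(touch_onsets))
        null[i] = onset_asynchronies(word_onsets, fake_touches).var()
    # One-sided p: how often random placement yields variance as low as observed.
    p = (np.sum(null <= observed) + 1) / (n_perm + 1)
    return observed, p

# Toy usage with made-up onsets (seconds).
words = [1.2, 3.5, 6.0, 8.4]
touches = [1.25, 3.4, 6.1, 8.5]
print(alignment_permutation_test(words, touches, session_end=10.0))
```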