175 research outputs found

    Unusual Prosodic Descriptors in Young, Verbal Children with Autism Spectrum Disorders

    This study aimed to determine which prosodic descriptors best characterized the speech of children with autism spectrum disorders (ASD) and whether these descriptors (e.g., sing-song and monotone) are acoustically different. Two listeners' auditory perceptions of the speech of the children with ASD and the pitch of the speech samples were analyzed. The results suggest that individual children are characterized by a variety of prosodic descriptors. Some thought groups were described as both sing-song and monotone; however, most children appear to be either more monotone or more sing-song. Furthermore, the subjective and acoustic data suggest a strong relationship between atypical intonation and sing-song perceptions, as well as between atypical rhythm and monotone perceptions. Implications for an earlier diagnosis of ASD and for the development of therapy tasks to target these deficits are discussed.

    THE SEMANTIC AND ACOUSTIC VOICE FEATURES DIFFERENTIATING NEUTRAL AND TRAUMATIC NARRATIVES

    This dissertation is a quantitative and qualitative exploration of how one linguistically communicates emotions through an autobiographical narrative. Psycholinguistic research has affirmed that linguistic features of a narrative, including semantic and acoustic features, indicate a narrator's emotions and physiological state. This study investigated whether these linguistic features could help differentiate between trauma and neutral narratives and whether they can predict autobiographical narratives' subjective trauma ratings (STR). Qualitative analyses of the positive and negative evaluative statements were also conducted, which indicated the narrators' thought processes during recall. Twenty-two Spanish-English college students participated in this study and narrated both traumatic and neutral narratives. We measured the narratives' proportions of anger, fear, sadness, and joy emotion-related words and referential language. For acoustic analyses, we extracted the narratives' prosodic features, including pitch, jitter, speaking speed, and acoustic energy, and cepstral features (i.e., MFCCs). Positive and negative evaluative statements were reliably coded and extracted from the narratives. Student's t-tests showed that neutral and trauma narratives differed significantly in emotion-related semantic features and in MFCC-3. We tested the linguistic features' ability to predict participants' STR for both narrative types through separate Leave-One-Out Cross-Validation linear regressions, which can be used efficaciously on small sample sizes. Several semantic and acoustic features predicted the neutral narratives' STRs. In contrast, we could not produce a statistically viable model for predicting the trauma narratives' STR. Analyses of the evaluative statements suggest that the trauma narratives had a unique signature of negative and positive statements, in addition to the trauma statements having more negative evaluations.
The limitations of this dissertation suggest that future research should use a more regimented methodology when aiming to analyze acoustic features. Nevertheless, these results, although tentative due to the small sample size, reinforce the importance of psycholinguistic analyses of narratives and have implications for how to assess people's emotional states during psychotherapy. The dissertation finally encourages the broader use of narratives and linguistic analyses in clinical psychology to preserve, recognize, and ameliorate traumatic experiences.
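The Leave-One-Out Cross-Validation scheme the dissertation relies on for its small sample can be sketched as follows. This is an illustrative reconstruction on synthetic data, not the study's actual pipeline; the sample size matches the abstract (n = 22), but the feature stand-ins and the simulated ratings are assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)
n = 22                                # matches the study's sample size
X = rng.normal(size=(n, 3))           # stand-ins for e.g. pitch, jitter, MFCC-3
y = 2.0 * X[:, 0] + rng.normal(scale=0.5, size=n)   # synthetic trauma ratings

preds = np.empty(n)
for train_idx, test_idx in LeaveOneOut().split(X):
    # Refit on n-1 participants, predict the single held-out participant.
    model = LinearRegression().fit(X[train_idx], y[train_idx])
    preds[test_idx] = model.predict(X[test_idx])

# Out-of-sample R^2: every prediction comes from a model that never saw that row.
r2 = 1 - np.sum((y - preds) ** 2) / np.sum((y - np.mean(y)) ** 2)
print(f"LOOCV R^2 = {r2:.2f}")
```

Because each fold holds out exactly one participant, all n observations contribute to validation, which is what makes the scheme workable at this sample size.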

    Pragmatics, Prosody, and Social Skills of School-Age Children with Language-Learning Differences

    Social skills are an important aspect of child development whose influence continues into adolescence and adulthood (Hart, Olsen, Robinson, & Mandleco, 1997). Interacting in a social world requires the integration of many abilities, including social skills and emotional understanding of oneself and other persons. Children who have difficulty interpreting social cues (e.g., identifying basic emotions and responding to cues in speech) face immediate and progressive consequences in both academics and social living. Children with typical language skills successfully interact with peers and follow social rules for different environments (e.g., playing at school vs. playing at home). In contrast, children with language impairments struggle to use social skills, which results in negative experiences in peer interactions (Horowitz, Jansson, Ljungberg, & Hedenbro, 2006). This study explored the social profiles of second-grade children with a range of language abilities (i.e., children with low and high levels of language skill) as they interpreted emotions in speech and narrative tasks. Multiple informants (i.e., parents, teachers, speech-language pathologists, and peers) evaluated social skills from different perspectives. A multi-interactional approach explained children's social-emotional development from three theoretical perspectives: pragmatics, cognition, and emotional understanding. Forty-one second-grade children completed a battery of tests that evaluated cognitive measures, language ability, and social skills. Each participant completed three experimental tasks (perception, imitation, and narrative) that examined how children process emotional cues in speech and narratives. A sociometric classification profiled children's social skills and peer relationships. Results indicated that children with a range of language abilities (i.e., children with low and high levels of language skill) processed emotional cues in speech.
Four acoustic patterns were significantly related to how children differentiate emotions in speech. Additionally, language ability was a significant factor in the ability to infer emotions in narratives and to judge social skills. Children with high language scores were more liked by peers and received better ratings on the teacher questionnaires. This study provides preliminary evidence that children with low and high levels of language ability are able to interpret emotional cues in speech but differ in the ability to infer emotions in narratives.

    Motion-capture patterns of dynamic facial expressions in children and adolescents with and without ASD

    Research shows that neurotypical individuals struggle to interpret the emotional facial expressions of people with Autism Spectrum Disorder (ASD). The current study uses motion capture to objectively quantify differences between the movement patterns of emotional facial expressions of individuals with and without ASD. Participants volitionally mimicked emotional expressions while wearing facial markers. Recorded marker movement was grouped by expression valence and intensity. We used Growth Curve Analysis to test whether movement patterns were predictable by expression type and participant group. Results show significant interactions between expression type and group, and little effect of emotion valence on ASD expressions. Together, the results support perceptions that expressions of individuals with ASD are different from, and more ambiguous than, those of neurotypical individuals.

    Automatic vocal recognition of a child's perceived emotional state within the Speechome corpus

    Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2010. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 137-149). With over 230,000 hours of audio/video recordings of a child growing up in the home setting from birth to the age of three, the Human Speechome Project has pioneered a comprehensive, ecologically valid observational dataset that introduces far-reaching new possibilities for the study of child development. By offering in vivo observation of a child's daily life experience at ultra-dense, longitudinal time scales, the Speechome corpus holds great potential for discovering developmental insights that have thus far eluded observation. The work of this thesis aspires to enable the use of the Speechome corpus for empirical study of emotional factors in early child development. To fully harness the benefits of Speechome for this purpose, an automated mechanism must be created to perceive the child's emotional state within this medium. Due to the latent nature of emotion, we sought objective, directly measurable correlates of the child's perceived emotional state within the Speechome corpus, focusing exclusively on acoustic features of the child's vocalizations and surrounding caretaker speech. Using Partial Least Squares regression, we applied these features to build a model that simulates human perceptual heuristics for determining a child's emotional state. We evaluated the perceptual accuracy of models built across child-only, adult-only, and combined feature sets within the overall sampled dataset, as well as controlling for social situations, vocalization behaviors (e.g., crying, laughing, babble), individual caretakers, and developmental age between 9 and 24 months.
Child and combined models consistently demonstrated high perceptual accuracy, with overall adjusted R-squared values of 0.54 and 0.58, respectively, and averages of 0.59 and 0.67 per month. Comparative analysis across longitudinal and socio-behavioral contexts yielded several notable developmental and dyadic insights. In the process, we have developed a data mining and analysis methodology for modeling perceived child emotion and quantifying caretaker intersubjectivity that we hope to extend to future datasets across multiple children, as new deployments of the Speechome recording technology are established. Such large-scale comparative studies promise an unprecedented view into the nature of emotional processes in early childhood and potentially enlightening discoveries about autism and other developmental disorders. by Sophia Yuditskaya. S.M.

    Paralinguistic event detection in children's speech

    Paralinguistic events are useful indicators of the affective state of a speaker. In children's speech, these cues help form social bonds with caregivers. They have also been found useful for the very early detection of developmental disorders such as autism spectrum disorder (ASD). Prior work on children's speech has focused on a limited number of subjects whose vocalizations lack sufficient diversity. Moreover, the features necessary to understand the production of paralinguistic events are not fully understood. Because no off-the-shelf solution exists for detecting instances of laughter and crying in children's speech, the focus of this thesis is to investigate and develop signal-processing algorithms to extract acoustic features and to apply machine learning algorithms to various corpora. Results obtained using baseline features indicate that a combination of spectral, prosodic, and dysphonation-related features is needed to detect laughter and whining in toddlers' speech across different age groups and recording environments. The use of long-term features was found useful for capturing the periodic properties of laughter in adults' and children's speech, and it detected instances of laughter with a high degree of accuracy. Finally, the thesis focuses on the use of multi-modal information, combining acoustic features with computer-vision-based smile-related features, to detect instances of laughter and to reduce false positives in adults' and children's speech. The fusion of the features improved accuracy and recall rates over using either of the two modalities on its own. Ph.D.
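The kind of frame-level spectral and prosodic descriptors such detectors build on can be sketched with plain NumPy. The frame sizes, the specific descriptors, and the synthetic test tone below are illustrative assumptions, not the thesis's actual feature set.

```python
import numpy as np

def frame_features(signal, sr, frame_len=0.025, hop=0.010):
    """Per-frame energy, zero-crossing rate, and spectral centroid."""
    n, step = int(frame_len * sr), int(hop * sr)
    feats = []
    for start in range(0, len(signal) - n, step):
        frame = signal[start:start + n]
        energy = float(np.mean(frame ** 2))                     # loudness proxy
        zcr = float(np.mean(np.abs(np.diff(np.sign(frame)))) / 2)  # noisiness proxy
        spectrum = np.abs(np.fft.rfft(frame))
        freqs = np.fft.rfftfreq(n, d=1.0 / sr)
        # Spectral centroid: magnitude-weighted mean frequency of the frame.
        centroid = float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12))
        feats.append((energy, zcr, centroid))
    return np.array(feats)

sr = 16000
t = np.linspace(0, 1.0, sr, endpoint=False)
tone = 0.5 * np.sin(2 * np.pi * 440 * t)   # stand-in for a vocalization
features = frame_features(tone, sr)
print(features.shape)   # one (energy, zcr, centroid) row per 10 ms hop
```

Sequences of such frame vectors are what classifiers for laughter or whining typically consume, with longer-term statistics layered on top to capture the periodicity the abstract mentions.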

    Models and Analysis of Vocal Emissions for Biomedical Applications

    The proceedings of the biennial MAVEBA Workshop collect the scientific papers presented as oral and poster contributions during the conference. The main subjects are: the development of theoretical and mechanical models as an aid to the study of the main phonatory dysfunctions, as well as biomedical engineering methods for the analysis of voice signals and images, as a support to clinical diagnosis and the classification of vocal pathologies.

    Drumming To Communicate Emotion: Dual-Brain Imaging Informs An Intervention In A Carceral Setting

    Drumming is an ancient nonverbal communication modality for the expression of emotion. However, there has been limited exploration of its possible applications in clinical settings. Further, the underlying neural systems engaged during live communication through drumming have not been identified. We investigated the neural response to live, natural communication of emotion via drumming using a novel dual-brain neuroimaging paradigm to discover its unique neurophysiological mechanisms related to drum behavior and cross-brain coherence, as compared to talking. We then investigated the application of a drumming intervention in an incarcerated, halfway-house population to characterize intervention feasibility, elucidate the phenomenology of the social and emotional effects of group drumming, and identify its possible benefits for treatment engagement and community reintegration. For the neural investigation, hemodynamic signals were acquired using whole-head functional near-infrared spectroscopy. Thirty-six subjects, grouped into dyads, participated in two conditions, drumming and talking, alternating between “sending” (drumming or talking to a partner) and “receiving” (listening to the partner) in response to emotionally salient images from the International Affective Picture System. Results indicated that increased frequency and amplitude of drum strikes were behaviorally correlated with higher arousal and lower valence measures, and neurally correlated with temporoparietal junction (TPJ) activation in the listener. Contrast comparisons of drumming greater than talking also revealed neural activity in the right TPJ. For the interventional investigation, a group drumming program was implemented once a week for eight weeks for incarcerated participants in a halfway house. Twenty-eight participants were randomized to either the drum group or treatment as usual.
Interviews and a focus group were conducted to assess the experienced benefits of the group drumming intervention, and halfway-house retention rates were compared across groups. The retention rate was significantly higher in the drum group than in the treatment-as-usual group. Qualitative analysis elicited three themes: group drumming 1) functions therapeutically as a method of coping with difficulty, 2) offers an opportunity for connection through building relationships and experiencing communion in a setting where isolation is the norm, and 3) provides an environment for personal growth, particularly toward re-humanization and self-empowerment. The neural findings suggest that emotional content communicated by drumming engages right-TPJ mechanisms in an emotionally and behaviorally sensitive fashion; the interventional findings suggest significant therapeutic potential in social and emotional domains that can have a quantifiable impact on the recovery process. Together, the findings suggest that drumming may provide access to neural mechanisms with known sensitivity to social and emotional conditions, facilitating therapeutic aims. Informed by this research, drumming may provide novel, effective clinical approaches for treating social-emotional psychopathology.
