38 research outputs found

    Speech tested for Zipfian fit using rigorous statistical techniques

    Zipf’s law describes the relationship between the frequencies of words in a corpus and their rank. Its most basic form is a simple series, indicating that the frequency of a word is inversely proportional to its rank: 1/2, 1/3, 1/4, ... The past two decades have seen the emergence of usage-based and cognitive approaches to language study. A key observation of these approaches, along with the importance of frequency, is that speech differs in substantial and structural ways from writing. Yet, except for a few older analyses performed on very small corpora, most studies of Zipf’s law have been carried out on written corpora. Further, judgements of Zipfianness in much of this work are based on loose and informal criteria. In fact, sophisticated statistical techniques for curve fitting have been developed in recent years in the mathematics and physics literature. These include the use of the Kolmogorov-Smirnov statistic together with maximum likelihood estimation to generate p-values, and the use of the complementary error function for normal distributions. The latter helps determine whether a corpus that fails a Zipfian fit might be better described by another distribution. In this paper, we will: (1) show, using rigorous statistical techniques, that three corpora of recorded speech (Buckeye, Santa Barbara, and MiCase) follow a power-law distribution; (2) describe preliminary results suggesting that the techniques outlined here may be useful in the diagnosis of conditions that can involve disordered speech; (3) explain how to perform the analyses described in this paper; and (4) explain how to download and use the R/Python code we have written and packaged as the Zipf Tool Kit.
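The fitting procedure the abstract alludes to, maximum likelihood estimation of a power-law exponent combined with a Kolmogorov-Smirnov goodness-of-fit distance, can be sketched as follows. This is a minimal illustration on synthetic continuous data, not the Zipf Tool Kit itself; the function names and the Pareto test sample are our own assumptions.

```python
import numpy as np

def fit_power_law_mle(x, xmin=1.0):
    """Maximum-likelihood estimate of the power-law exponent alpha
    for continuous data x >= xmin: alpha = 1 + n / sum(log(x/xmin))."""
    x = np.asarray(x, dtype=float)
    x = x[x >= xmin]
    return 1.0 + len(x) / np.sum(np.log(x / xmin))

def ks_distance(x, alpha, xmin=1.0):
    """Kolmogorov-Smirnov distance between the empirical CDF of x and
    the fitted power-law CDF F(x) = 1 - (x/xmin)**(1 - alpha)."""
    x = np.sort(np.asarray(x, dtype=float))
    x = x[x >= xmin]
    n = len(x)
    empirical = np.arange(1, n + 1) / n
    fitted = 1.0 - (x / xmin) ** (1.0 - alpha)
    return float(np.max(np.abs(empirical - fitted)))

# Synthetic check: a Pareto sample whose true exponent is alpha = 2.5
# (numpy's pareto(a) shifted by 1 has density a * x**-(a+1), so alpha = a + 1).
rng = np.random.default_rng(0)
sample = rng.pareto(1.5, 20000) + 1.0
alpha_hat = fit_power_law_mle(sample)
D = ks_distance(sample, alpha_hat)
```

For real word-frequency data one would use the discrete variant of the estimator and bootstrap p-values from the KS distance, which is what the rigorous procedure described in the abstract amounts to.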

    LENA 4AFC human judge classification

    No full text
    Raw data for human classifications of ASR-tagged audio files; includes ReadMe_ke

    Fidelity of Automatic Speech Processing for Adult and Child Talker Classifications.

    No full text
    Automatic speech processing (ASP) has recently been applied to very large datasets of naturalistically collected, daylong recordings of child speech via an audio recorder worn by young children. The system developed by the LENA Research Foundation analyzes children's speech for research and clinical purposes, with special focus on identifying and tagging family speech dynamics and the at-home acoustic environment from the auditory perspective of the child. A primary issue for researchers, clinicians, and families using the Language ENvironment Analysis (LENA) system is to what degree the segment labels are valid. This classification study evaluates the performance of the computer ASP output against 23 trained human judges who made about 53,000 classification judgements on segments tagged by the LENA ASP. Results indicate performance consistent with modern ASP systems, such as those using HMM methods, with acoustic characteristics of fundamental frequency and segment duration most important for both human and machine classifications. Results are likely to be important for interpreting and improving ASP output.
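Machine-versus-human agreement of the kind evaluated here is commonly summarized with a chance-corrected statistic such as Cohen's kappa. A minimal sketch; the segment tags below are invented for illustration and are not LENA data:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: agreement between two raters, corrected for the
    agreement expected by chance from each rater's label frequencies."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    counts_a = Counter(labels_a)
    counts_b = Counter(labels_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2
    return (observed - expected) / (1.0 - expected)

# Hypothetical segment tags: machine output vs. one human judge.
machine = ["adult", "child", "adult", "tv", "child", "adult"]
human   = ["adult", "child", "adult", "child", "child", "adult"]
kappa = cohens_kappa(machine, human)
```

Kappa near 1 indicates agreement well beyond chance; values near 0 indicate agreement no better than the raters' marginal label frequencies would predict.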

    Use of the LENA Autism Screen with Children who are Deaf or Hard of Hearing

    No full text
    Background and Objectives: This systematic review reports the evidence from the literature concerning the potential for using an automated vocal analysis, the Language ENvironment Analysis (LENA, LENA Research Foundation, Boulder, CO, USA), in the screening process for children at risk for autism spectrum disorder (ASD) and deaf or hard of hearing (D/HH). ASD and D/HH have increased comorbidity, but current behavioral diagnostic and screening tools have limitations. The LENA Language Autism Screen (LLAS) may offer an additional tool to disambiguate ASD from D/HH in young children. Materials and Methods: We examine empirical reports that use automatic vocal analysis methods to differentiate disordered from typically developing children. Results: Consensus across the sampled scientific literature shows support for use of automatic methods for screening and disambiguation of children with ASD and D/HH. There is some evidence of vocal differentiation between ASD, D/HH, and typically developing children warranting use of the LLAS, but additional empirical evidence is needed to better understand the strengths and weaknesses of the tool. Conclusions: The findings reported here warrant further, more substantive, methodologically sound research that is fully powered to show a reliable difference. Findings may be useful for both clinicians and researchers in better identification and understanding of communication disorders.

    Acoustical Society of America - Oct 2014

    No full text

    Descriptive statistics of acoustic features of the stimuli.

    No full text

    Linguistic Input, Electronic Media, and Communication Outcomes of Toddlers with Hearing Loss

    No full text
    OBJECTIVES: The objectives of this study were to examine the quantity of adult words, adult-child conversational turns, and electronic media in the auditory environments of toddlers who are hard of hearing (HH) and to examine whether these variables contributed to variability in children’s communication outcomes. DESIGN: Participants were 28 children with mild to severe hearing loss. Full-day recordings of children’s auditory environments were collected within 6 months of their second birthdays by utilizing LENA (Language ENvironment Analysis) technology. The system analyzes full-day acoustic recordings, yielding estimates of the quantity of adult words, conversational turns, and electronic media exposure in the recordings. Children’s communication outcomes were assessed via the receptive and expressive scales of the Mullen Scales of Early Learning at 2 years of age and the Comprehensive Assessment of Spoken Language at 3 years of age. RESULTS: On average, the HH toddlers were exposed to approximately 1400 adult words per hour and participated in approximately 60 conversational turns per hour. An average of 8% of each recording was classified as electronic media. However, there was considerable within-group variability on all three measures. Frequency of conversational turns, but not adult words, was positively associated with children’s communication outcomes at 2 and 3 years of age. Amount of electronic media exposure was negatively associated with 2-year-old receptive language abilities; however, regression results indicate that the relationship was fully mediated by the quantity of conversational turns. CONCLUSIONS: HH toddlers who were engaged in more conversational turns demonstrated stronger linguistic outcomes than HH toddlers who were engaged in fewer conversational turns. The frequency of these interactions was found to be decreased in households with high rates of electronic media exposure. Optimal language-learning environments for HH toddlers include frequent linguistic interactions between parents and children. To support this goal, parents should be encouraged to reduce their children’s exposure to electronic media.
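The "fully mediated" finding corresponds to a standard regression check: the media-language association should vanish once conversational turns are controlled for. A minimal sketch on simulated data; all coefficients and sample values below are invented, loosely mirroring the reported pattern rather than the study's dataset:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
# Simulated pattern: media exposure lowers conversational turns,
# and turns (not media directly) drive language outcomes.
media = rng.normal(size=n)
turns = -0.6 * media + rng.normal(size=n)
language = 0.8 * turns + rng.normal(scale=0.5, size=n)

def slope(x, y):
    """OLS slope of y on x, with an intercept."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

total = slope(media, language)  # total effect of media on language (negative)

# Direct effect of media on language, controlling for turns: should be ~0
# under full mediation.
X = np.column_stack([np.ones(n), media, turns])
beta, *_ = np.linalg.lstsq(X, language, rcond=None)
direct = beta[1]
```

Here `total` is clearly negative while `direct` is near zero, which is the signature of full mediation described in the abstract.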

    Cross-classification Totals.

    No full text

    Quantity of Parental Language in the Home Environments of Hard-of-Hearing 2-Year-Olds

    No full text
    Automated analyses of full-day recordings were used to determine whether young children who are hard of hearing (HH) received similar levels of exposure to adult words and conversational interactions as age-matched peers with normal hearing (NH). Differences in adult input between children in this study and in a normative database were considered. Finally, factors were examined that may have contributed to individual differences in the input characteristics of families. Results indicated that the NH and HH groups were exposed to similar numbers of adult words and conversational turns. However, both the NH and HH groups were exposed to more adult words and engaged in more conversational turns than the NH children in the normative sample. Considering only the HH group, both quantity of adult words and conversational exchanges were correlated with children’s auditory characteristics. Children’s receptive language ability was correlated with conversational exchanges but not with adult word counts.

    Point vowel duration in children with hearing aids and cochlear implants at 4 and 5 years of age

    No full text
    This work investigates the developmental aspects of the duration of point vowels in children with normal hearing compared with those with hearing aids and cochlear implants at 4 and 5 years of age. Younger children produced longer vowels than older children, and children with hearing loss (HL) produced longer and more variable vowels than their normal-hearing peers. In this study, children with hearing aids and cochlear implants did not perform differently from each other. Test age and HL did not interact, indicating parallel but delayed development in children with HL compared with their typically developing peers. Variability was found to be concentrated among the high vowels but not the low vowels. The broad findings of this work are consistent with previous reports and contribute a detailed description of point vowel duration not previously available in the literature.