
    Non-adjacent dependency learning in infancy, and its link to language development

    To acquire language, infants must learn how to identify words and linguistic structure in speech. Statistical learning has been suggested to assist both of these tasks. However, infants' capacity to use statistics to discover words and structure together remains unclear. Further, it is not yet known how infants' statistical learning ability relates to their language development. We trained 17-month-old infants on an artificial language comprising non-adjacent dependencies, and examined their looking times on tasks assessing sensitivity to words and structure using an eye-tracked head-turn-preference paradigm. We measured infants' vocabulary size using a Communicative Development Inventory (CDI) concurrently and at 19, 21, 24, 25, 27, and 30 months to relate performance to language development. Infants could segment the words from speech, as demonstrated by a significant difference in looking times to words versus part-words. Infants' segmentation performance was significantly related to their vocabulary size (receptive and expressive) both concurrently and over time (receptive until 24 months, expressive until 30 months), but was not related to the rate of vocabulary growth. The data also suggest infants may have developed sensitivity to generalised structure, indicating that similar statistical learning mechanisms may contribute to the discovery of words and structure in speech, but this sensitivity was not related to vocabulary size.
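
    As an illustration of the statistic underlying such dependencies, the sketch below computes the non-adjacent transitional probability P(B|A) over A-X-B frames of the kind used in artificial-language studies. The frame tokens here are invented for illustration and are not the study's stimuli.

    ```python
    from collections import Counter

    # Toy A-X-B language: the first element predicts the last one,
    # while the middle element varies freely (tokens are hypothetical).
    frames = [
        ("pel", "wadim", "rud"), ("pel", "kicey", "rud"),
        ("vot", "wadim", "jic"), ("vot", "kicey", "jic"),
    ]

    first_counts = Counter(a for a, _, b in frames)
    pair_counts = Counter((a, b) for a, _, b in frames)

    def nonadjacent_tp(a, b):
        # P(B | A): how predictable the frame-final element is from the
        # frame-initial one, skipping the variable middle element.
        return pair_counts[(a, b)] / first_counts[a]

    print(nonadjacent_tp("pel", "rud"))  # 1.0: perfectly predictive dependency
    print(nonadjacent_tp("pel", "jic"))  # 0.0: these never co-occur
    ```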

    A toolbox for animal call recognition

    Monitoring the natural environment is increasingly important as habitat degradation and climate change reduce the world's biodiversity. We have developed software tools and applications to assist ecologists with the collection and analysis of acoustic data at large spatial and temporal scales. One of our key objectives is automated animal call recognition, and our approach has three novel attributes. First, we work with raw environmental audio, contaminated by noise and artefacts and containing calls that vary greatly in volume depending on the animal's proximity to the microphone. Second, initial experimentation suggested that no single recognizer could deal with the enormous variety of calls. Therefore, we developed a toolbox of generic recognizers to extract invariant features for each call type. Third, many species are cryptic and offer little data with which to train a recognizer. Many popular machine learning methods require large volumes of training and validation data and considerable time and expertise to prepare. Consequently, we adopt bootstrap techniques that can be initiated with little data and refined subsequently. In this paper, we describe our recognition tools and present results for real ecological problems.
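
    As a loose illustration of one way a generic recognizer might flag candidate calls in noisy field recordings, the sketch below marks time frames whose energy in an assumed call band rises well above a robust noise floor. The band limits, threshold, and function name are illustrative assumptions, not the toolbox's actual recognizers.

    ```python
    import numpy as np
    from scipy.signal import spectrogram

    def detect_band_events(audio, sr, f_lo=1500, f_hi=3500, k=3.0):
        """Return frame times where energy in the [f_lo, f_hi] Hz band
        exceeds median + k * MAD (a robust noise-floor estimate).
        Band limits and k are hypothetical, per-species tuning knobs."""
        freqs, times, sxx = spectrogram(audio, fs=sr, nperseg=512)
        # Sum spectral power over the call's characteristic frequency band.
        band = sxx[(freqs >= f_lo) & (freqs <= f_hi)].sum(axis=0)
        noise_floor = np.median(band)
        mad = np.median(np.abs(band - noise_floor)) + 1e-12
        return times[band > noise_floor + k * mad]
    ```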

    Encoding of phonology in a recurrent neural model of grounded speech

    We study the representation and encoding of phonemes in a recurrent neural network model of grounded speech. We use a model which processes images and their spoken descriptions, and projects the visual and auditory representations into the same semantic space. We perform a number of analyses on how information about individual phonemes is encoded in the MFCC features extracted from the speech signal, and the activations of the layers of the model. Via experiments with phoneme decoding and phoneme discrimination we show that phoneme representations are most salient in the lower layers of the model, where low-level signals are processed at a fine-grained level, although a large amount of phonological information is retained at the top recurrent layer. We further find that the attention mechanism following the top recurrent layer significantly attenuates encoding of phonology and makes the utterance embeddings much more invariant to synonymy. Moreover, a hierarchical clustering of phoneme representations learned by the network shows an organizational structure of phonemes similar to that proposed in linguistics. Comment: Accepted at CoNLL 2017.
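
    A minimal sketch of the kind of phoneme-decoding analysis described: train a linear probe to predict phoneme labels from a layer's activations, then compare accuracy across layers. It assumes frame-level activations aligned with phoneme labels are already extracted; array shapes and variable names are hypothetical.

    ```python
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    def probe_layer(activations, phoneme_labels):
        """activations: (n_frames, dim) array for one layer;
        phoneme_labels: (n_frames,) aligned phoneme ids.
        Returns held-out decoding accuracy: higher means more
        linearly accessible phoneme information in that layer."""
        X_tr, X_te, y_tr, y_te = train_test_split(
            activations, phoneme_labels, test_size=0.2, random_state=0)
        clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        return clf.score(X_te, y_te)

    # e.g. compare layers of the model (names hypothetical):
    # accuracies = {name: probe_layer(acts, labels)
    #               for name, acts in layer_activations.items()}
    ```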

    Connectionist modelling of lexical segmentation and vocabulary acquisition

    Adults typically hear sentences in their native language as a sequence of separate words, and we might therefore assume that words in speech are physically separated in the way that they are perceived. However, when listening to an unfamiliar language we no longer experience sequences of discrete words, but rather hear a continuous stream of speech with boundaries separating individual sentences or utterances. Theories of how adult listeners segment the speech stream into words emphasise the role that knowledge of individual words plays in the segmentation of speech. However, since words cannot be learnt until the speech stream can be segmented, it seems unlikely that infants will be able to use word recognition to segment connected speech. For this reason, researchers have proposed a variety of strategies and cues that infants could use to identify word boundaries without being able to recognise the words that these boundaries delimit. This chapter describes some computational simulations proposing ways in which these cues and strategies for the acquisition of lexical segmentation can be integrated with infants' acquisition of the meanings of words. The simulations reported here describe simple computational mechanisms and knowledge sources that may support these different aspects of language acquisition.
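
    As a concrete illustration of one such boundary cue, the sketch below segments a toy syllable stream at local minima of forward transitional probability between adjacent syllables. The syllables and the boundary rule are illustrative, not the chapter's actual simulations.

    ```python
    from collections import Counter

    def boundaries_from_tp(syllables):
        unigrams = Counter(syllables)
        bigrams = Counter(zip(syllables, syllables[1:]))
        # Forward TP for each adjacent syllable pair: P(next | current).
        tp = [bigrams[a, b] / unigrams[a]
              for a, b in zip(syllables, syllables[1:])]
        # Posit a boundary between syllables i and i+1 when the TP there
        # is a strict local minimum relative to neighbouring transitions.
        return [i + 1 for i in range(1, len(tp) - 1)
                if tp[i] < tp[i - 1] and tp[i] < tp[i + 1]]

    # A toy stream built from the invented "words" tupiro, golabu, padoti.
    stream = ["tu", "pi", "ro", "go", "la", "bu",
              "tu", "pi", "ro", "pa", "do", "ti"]
    print(boundaries_from_tp(stream))  # [3, 9]: boundaries before "go" and "pa"
    ```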

    Mispronunciation Detection in Children's Reading of Sentences

    This work proposes an approach to automatically parse children's reading of sentences by detecting word pronunciations and extra content, and to classify words as correctly or incorrectly pronounced. This approach can be directly helpful for automatic assessment of reading level or for automatic reading tutors, where a correct reading must be identified. We propose a first segmentation stage to locate candidate word pronunciations, based on allowing repetitions and false starts of a word's syllables. A decoding grammar based solely on syllables allows silence to appear during a word pronunciation. At a second stage, word candidates are classified as mispronounced or not. The feature that best classifies mispronunciations is found to be the log-likelihood ratio between a free phone loop and a word-spotting model in the very close vicinity of the candidate segmentation. Additional features are combined in multi-feature models to further improve classification, including normalizations of the log-likelihood ratio, derivations from phone likelihoods, and Levenshtein distances between the correct pronunciation and the phonemes recognized through two phoneme recognition approaches. Results show that most extra events were detected (a word error rate close to 2% was achieved) and that using automatic segmentation for mispronunciation classification approaches the performance of manual segmentation. Although the log-likelihood ratio from a spotting approach is already a good metric for classifying word pronunciations, the combination of additional features provides a relative reduction of the miss rate of 18% (from 34.03% to 27.79% using manual segmentation, and from 35.58% to 29.35% using automatic segmentation, at a constant 5% false-alarm rate).
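
    A minimal sketch of one of the combined features: the Levenshtein (edit) distance between a word's canonical phoneme sequence and the phonemes a recognizer produced over the candidate segment. The phoneme strings are illustrative.

    ```python
    def levenshtein(ref, hyp):
        """Edit distance between two phoneme sequences, computed with the
        standard dynamic-programming recurrence over a rolling row."""
        prev = list(range(len(hyp) + 1))
        for i, r in enumerate(ref, 1):
            curr = [i]
            for j, h in enumerate(hyp, 1):
                curr.append(min(prev[j] + 1,            # deletion
                                curr[j - 1] + 1,        # insertion
                                prev[j - 1] + (r != h)))  # substitution
            prev = curr
        return prev[-1]

    # Canonical "cat" vs. a recognized sequence with one vowel error:
    print(levenshtein("k ae t".split(), "k ah t".split()))  # 1
    ```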

    Automatic analysis of speech F0 contour for the characterization of mood changes in bipolar patients

    Bipolar disorders are characterized by mood swings ranging from mania to depression. A system that could monitor and eventually predict these changes would be useful to improve therapy and avoid dangerous events. Speech might convey relevant information about subjects' mood, and there is growing interest in studying its changes in the presence of mood disorders. In this work we present an automatic method to characterize fundamental frequency (F0) dynamics in the voiced parts of syllables. The method segments voiced sounds from running speech samples and estimates two categories of features. The first category is borrowed from Taylor's Tilt intonational model. However, the meaning of the proposed features differs from that of Taylor's, since the former are estimated from all voiced segments without performing any analysis of intonation. A second category of features takes into account the speed of change of F0. The proposed features are first estimated from an emotional speech database. Then, an analysis of speech samples acquired from eleven psychiatric patients experiencing different mood states and eighteen healthy control subjects is introduced. Subjects had to perform a text reading task and a picture commenting task. The results of the analysis on the emotional speech database indicate that the proposed features can discriminate between high- and low-arousal emotions; this was verified both at the single-subject and at the group level. An intra-subject analysis performed on bipolar patients highlighted significant changes of the features across mood states, although this was not observed for all subjects. The directions of the changes estimated for different patients experiencing the same mood swing were not coherent and were task-dependent. Interestingly, a single-subject analysis performed on healthy controls and on bipolar patients recorded twice with the same mood label resulted in a very small number of significant differences. In particular, very good specificity was observed for the Taylor-inspired features and for a subset of the second category of features, strengthening the significance of the results obtained with patients. Although the number of enrolled patients is small, this work suggests that the proposed features might contribute substantially to the demanding research field of speech-based mood classifiers. Moreover, the results presented here indicate that a model of speech changes in bipolar patients might be subject-specific and that a richer characterization of subject status could be necessary to explain the observed variability.
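
    A minimal sketch of Tilt-style features for a single voiced segment, assuming an F0 contour sampled at a fixed frame rate. It simplifies Taylor's model by splitting the contour at its peak and combining rise/fall amplitudes and durations into one tilt index; the input values and frame rate are hypothetical.

    ```python
    import numpy as np

    def tilt_features(f0, frame_s=0.01):
        """f0: F0 values (Hz), one per frame, for one voiced segment.
        Splits the contour at the F0 peak, measures the rise and fall,
        and averages amplitude tilt and duration tilt (a simplification
        of Taylor's Tilt parameters)."""
        peak = int(np.argmax(f0))
        a_rise = f0[peak] - f0[0]    # magnitude of the rise (Hz)
        a_fall = f0[peak] - f0[-1]   # magnitude of the fall (Hz)
        d_rise = peak * frame_s
        d_fall = (len(f0) - 1 - peak) * frame_s
        denom_a = abs(a_rise) + abs(a_fall) or 1.0   # avoid divide-by-zero
        denom_d = d_rise + d_fall or frame_s
        tilt = 0.5 * (abs(a_rise) - abs(a_fall)) / denom_a \
             + 0.5 * (d_rise - d_fall) / denom_d
        return {"tilt": tilt, "rise_Hz": a_rise, "fall_Hz": a_fall}

    # A hypothetical 50 ms voiced segment rising to a peak and falling:
    print(tilt_features(np.array([180.0, 200.0, 230.0, 210.0, 190.0])))
    ```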