24 research outputs found

    Entropy as an Indicator of Context Boundaries: An Experiment Using a Web Search Engine


    Interactions between statistical and semantic information in infant language development

    Infants can use statistical regularities to form rudimentary word categories (e.g. noun, verb), and to learn the meanings common to words from those categories. Using an artificial language methodology, we probed the mechanisms by which two types of statistical cues (distributional and phonological regularities) affect word learning. Because linking distributional cues versus phonological information to semantics makes different computational demands on learners, we also tested whether their use is related to language proficiency. We found that 22-month-old infants with smaller vocabularies generalized using phonological cues; however, infants with larger vocabularies showed the opposite pattern, generalizing based on distributional cues. These findings suggest that both phonological and distributional cues marking word categories promote early word learning. Moreover, while correlations between these cues are important to forming word categories, we found that infants' weighting of these cues in subsequent word-learning tasks changes over the course of early language development.

    Can Infants Map Meaning to Newly Segmented Words?


    Isolated words enhance statistical language learning in infancy

    Infants are adept at tracking statistical regularities to identify word boundaries in pause-free speech. However, researchers have questioned the relevance of statistical learning mechanisms to language acquisition, since previous studies have used simplified artificial languages that ignore the variability of real language input. The experiments reported here embraced a key dimension of variability in infant-directed speech. English-learning infants (8-10 months) listened briefly to natural Italian speech that contained either fluent speech only or a combination of fluent speech and single-word utterances. Listening times revealed successful learning of the statistical properties of target words only when words appeared both in fluent speech and in isolation; brief exposure to fluent speech alone was not sufficient to facilitate detection of the words' statistical properties. This investigation suggests that statistical learning mechanisms actually benefit from variability in utterance length, and provides the first evidence that isolated words and longer utterances act in concert to support infant word segmentation.

    Linking sounds to meanings: Infant statistical learning in a natural language.

    The processes of infant word segmentation and infant word learning have largely been studied separately. However, the ease with which potential word forms are segmented from fluent speech seems likely to influence subsequent mappings between words and their referents. To explore this process, we tested the link between the statistical coherence of sequences presented in fluent speech and infants’ subsequent use of those sequences as labels for novel objects. Notably, the materials were drawn from a natural language unfamiliar to the infants (Italian). The results of three experiments suggest that there is a close relationship between the statistics of the speech stream and subsequent mapping of labels to referents. Mapping was facilitated when the labels contained high transitional probabilities in the forward and/or backward direction (Experiment 1). When no transitional probability information was available (Experiment 2), or when the internal transitional probabilities of the labels were low in both directions (Experiment 3), infants failed to link the labels to their referents. Word learning appears to be strongly influenced by infants’ prior experience with the distribution of sounds that make up words in natural languages.
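    The forward and backward transitional probabilities manipulated across the three experiments can be computed directly from bigram counts. A minimal sketch (the syllable stream and names here are illustrative, not the study's Italian materials):

```python
from collections import Counter

def transitional_probabilities(syllables):
    """Compute forward and backward transitional probabilities
    for each adjacent syllable pair in a sequence.

    Forward  TP(x -> y) = P(y | x) = count(xy) / count(x)
    Backward TP(x -> y) = P(x | y) = count(xy) / count(y)
    """
    pair_counts = Counter(zip(syllables, syllables[1:]))
    unigram = Counter(syllables)
    forward = {(x, y): c / unigram[x] for (x, y), c in pair_counts.items()}
    backward = {(x, y): c / unigram[y] for (x, y), c in pair_counts.items()}
    return forward, backward

# Illustrative stream: "bidaku" repeated, then "padoti"
stream = ["bi", "da", "ku", "bi", "da", "ku", "pa", "do", "ti"]
fwd, bwd = transitional_probabilities(stream)
```

    Within-word pairs such as ("bi", "da") receive high forward TP, while pairs spanning a word boundary such as ("ku", "pa") receive lower values, which is the contrast infants are thought to exploit.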

    Melodic Grouping in Music Information Retrieval: New Methods and Applications

    We introduce the MIR task of segmenting melodies into phrases, summarise the musicological and psychological background to the task and review existing computational methods before presenting a new model, IDyOM, for melodic segmentation based on statistical learning and information-dynamic analysis. The performance of the model is compared to several existing algorithms in predicting the annotated phrase boundaries in a large corpus of folk music. The results indicate that four algorithms produce acceptable results: one of these is the IDyOM model, which performs much better than naive statistical models and approaches the performance of the best-performing rule-based models. Further slight performance improvement can be obtained by combining the output of the four algorithms in a hybrid model, although the performance of this model is moderate at best, leaving a great deal of room for improvement on this task.
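    The information-dynamic intuition shared by the entropy paper listed first and the IDyOM model is that segment boundaries tend to coincide with peaks in conditional uncertainty about the next symbol. A minimal bigram sketch of that idea (not IDyOM's actual implementation; names are hypothetical):

```python
import math
from collections import Counter, defaultdict

def boundary_entropies(sequence):
    """Conditional entropy H(next | current) at each position in
    the sequence: high entropy after a symbol suggests a context
    boundary, the core intuition of information-dynamic segmentation."""
    successors = defaultdict(Counter)
    for x, y in zip(sequence, sequence[1:]):
        successors[x][y] += 1
    entropy = {}
    for x, nexts in successors.items():
        total = sum(nexts.values())
        entropy[x] = -sum((c / total) * math.log2(c / total)
                          for c in nexts.values())
    # One value per position (except the last); peaks mark candidate boundaries.
    return [entropy[s] for s in sequence[:-1]]

peaks = boundary_entropies(["a", "b", "a", "c"])  # higher values after "a"
```

    A full model such as IDyOM additionally uses longer, variable-order contexts and combines multiple viewpoints; this bigram version only illustrates the boundary-entropy principle.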