
    Eye-tracking measurements of language processing: developmental differences in children at high risk for ASD

    To explore how being at high risk for autism spectrum disorder (ASD), based on having an older sibling diagnosed with ASD, affects word comprehension and language processing speed, 18-, 24-, and 36-month-old children at high and low risk for ASD were tested in a cross-sectional study on an eye gaze measure of receptive language that recorded how accurately and rapidly the children looked at named target images. There were no significant differences between the high-risk ASD group and the low-risk control group at 18 and 24 months. However, 36-month-olds in the high-risk group performed significantly worse on the accuracy measure, but not on the speed measure. We propose that the language processing efficiency of the high-risk group is not compromised, but that other vocabulary acquisition factors might have led the high-risk 36-month-olds to comprehend significantly fewer nouns on our measure. Funding: K01 DC013306 - NIDCD NIH HHS; R01 DC010290 - NIDCD NIH HHS; K01DC013306 - NIDCD NIH HHS; R01 DC 10290 - NIDCD NIH HHS
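    As a rough, hypothetical sketch of how such a looking-while-listening measure is typically scored per trial (accuracy of looks to the named target and latency of the first target look), consider the following Python snippet; the gaze labels, analysis window and sampling rate are illustrative assumptions, not the authors' actual pipeline.

```python
# Hypothetical sketch: scoring one looking-while-listening trial.
# Assumes gaze samples are labeled "target" / "distractor" / "away";
# the window and sampling rate below are illustrative, not from the paper.

def score_trial(gaze_samples, sample_rate_hz=60, window_s=(0.3, 1.8)):
    """Return (accuracy, latency_s) for one trial.

    gaze_samples: list of labels, one per sample, time-locked to noun onset.
    accuracy: proportion of target looks among target+distractor looks
              inside the analysis window.
    latency:  time of the first target look after noun onset, in seconds.
    """
    start = int(window_s[0] * sample_rate_hz)
    end = int(window_s[1] * sample_rate_hz)
    window = gaze_samples[start:end]

    target = sum(1 for s in window if s == "target")
    distractor = sum(1 for s in window if s == "distractor")
    accuracy = target / (target + distractor) if (target + distractor) else float("nan")

    latency = next((i / sample_rate_hz
                    for i, s in enumerate(gaze_samples) if s == "target"), float("nan"))
    return accuracy, latency


# Example: a child who shifts to the target about half a second after noun onset.
samples = ["away"] * 10 + ["distractor"] * 20 + ["target"] * 60 + ["distractor"] * 20
print(score_trial(samples))
```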

    Multilingual Speech Recognition With A Single End-To-End Model

    Training a conventional automatic speech recognition (ASR) system to support multiple languages is challenging because the sub-word unit, lexicon and word inventories are typically language specific. In contrast, sequence-to-sequence models are well suited for multilingual ASR because they encapsulate an acoustic, pronunciation and language model jointly in a single network. In this work we present a single sequence-to-sequence ASR model trained on 9 different Indian languages, which have very little overlap in their scripts. Specifically, we take a union of language-specific grapheme sets and train a grapheme-based sequence-to-sequence model jointly on data from all languages. We find that this model, which is not explicitly given any information about language identity, improves recognition performance by 21% relative compared to analogous sequence-to-sequence models trained on each language individually. By modifying the model to accept a language identifier as an additional input feature, we further improve performance by an additional 7% relative and eliminate confusion between different languages. Comment: Accepted in ICASSP 201
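    The two ideas described here, taking a union of language-specific grapheme sets and feeding a language identifier as an extra input, can be sketched in a few lines of Python. The token names, languages and data structures below are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of (1) a shared grapheme inventory built as the union of
# per-language grapheme sets, and (2) a language-ID token prepended to the
# encoded sequence as an extra input feature. Everything here is illustrative.

from typing import Dict, List

def build_union_graphemes(corpora: Dict[str, List[str]]) -> Dict[str, int]:
    """Map every grapheme seen in any language into one shared id space."""
    graphemes = sorted({ch for texts in corpora.values() for t in texts for ch in t})
    specials = ["<blank>", "<sos>", "<eos>"] + [f"<{lang}>" for lang in sorted(corpora)]
    return {sym: i for i, sym in enumerate(specials + graphemes)}

def encode(text: str, lang: str, vocab: Dict[str, int], use_lang_id: bool = True) -> List[int]:
    """Encode a transcript; optionally prepend the language-ID token."""
    ids = [vocab[f"<{lang}>"]] if use_lang_id else []
    return ids + [vocab[ch] for ch in text]

# Toy corpora for three languages with distinct scripts (the paper uses nine).
corpora = {"hi": ["नमस्ते"], "ta": ["வணக்கம்"], "bn": ["নমস্কার"]}
vocab = build_union_graphemes(corpora)
print(encode("வணக்கம்", "ta", vocab))
```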

    Recognition times for 54 thousand Dutch words: data from the Dutch Crowdsourcing Project

    We present a new database of Dutch word recognition times for a total of 54 thousand words, called the Dutch Crowdsourcing Project. The data were collected with an Internet vocabulary test, and the database is limited to native Dutch speakers. Participants were asked to indicate which words they knew. Their response times were registered, even though the participants were not asked to respond as fast as possible. Still, the response times correlate around .7 with the response times of the Dutch Lexicon Projects for shared words. Results of virtual experiments also indicate that the new response times are a valid addition to the Dutch Lexicon Projects. This not only means that we have useful response times for some 20 thousand extra words, but also that we now have data on differences in response latencies as a function of education and age. The new data correspond better to word use in the Netherlands.
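    A minimal sketch of the kind of validity check described, correlating the crowdsourced response times with Dutch Lexicon Project times over shared words, might look as follows; the words and values are made-up placeholders, not data from either project.

```python
# Hedged sketch: Pearson correlation between two response-time datasets
# restricted to the words they share. All values below are toy numbers.

import math

def pearson_r(xs, ys):
    """Plain Pearson correlation, no external dependencies."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# word -> mean response time (ms); toy values for illustration only.
crowd_rt = {"huis": 610, "fiets": 655, "verjaardag": 720, "sluier": 790}
lexicon_rt = {"huis": 590, "fiets": 640, "verjaardag": 700, "boom": 600}

shared = sorted(set(crowd_rt) & set(lexicon_rt))
r = pearson_r([crowd_rt[w] for w in shared], [lexicon_rt[w] for w in shared])
print(f"{len(shared)} shared words, r = {r:.2f}")
```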

    Integrated speech and morphological processing in a connectionist continuous speech understanding for Korean

    A new tightly coupled speech and natural language integration model is presented for a TDNN-based continuous, possibly large-vocabulary speech recognition system for Korean. Unlike popular n-best techniques developed for integrating mainly HMM-based speech recognition and natural language processing at the word level, which is clearly inadequate for morphologically complex agglutinative languages, our model constructs a spoken language system based on morpheme-level speech and language integration. With this integration scheme, the spoken Korean processing engine (SKOPE) is designed and implemented using a TDNN-based diphone recognition module integrated with Viterbi-based lexical decoding and symbolic phonological/morphological co-analysis. Our experimental results show that speaker-dependent continuous eojeol (Korean word) recognition and integrated morphological analysis can be achieved with a success rate of over 80.6% directly from speech inputs for middle-level vocabularies. Comment: LaTeX source with a4 style, 15 pages, to be published in the Computer Processing of Oriental Languages journal
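    The morpheme-level integration can be illustrated with a toy Viterbi-style decoder that combines per-segment acoustic scores with a morpheme bigram model; the morphemes, scores and probabilities below are invented for illustration and are not SKOPE's actual lexicon or recognizer.

```python
# Illustrative sketch (not SKOPE itself) of morpheme-level decoding:
# pick the best morpheme sequence for one eojeol by combining per-segment
# acoustic log-likelihoods with a morpheme bigram model via dynamic programming.

# acoustic: for each time segment, candidate morphemes with log-likelihoods (toy values)
acoustic = [
    {"hak-": -1.2, "pang-": -2.0},   # stem candidates
    {"-kyo": -0.8, "-kwa": -1.5},    # suffix candidates
    {"-e": -0.5, "-ul": -1.1},       # case-marker candidates
]
# bigram: log P(next morpheme | previous morpheme); unseen pairs get a floor
bigram = {("hak-", "-kyo"): -0.3, ("-kyo", "-e"): -0.4, ("pang-", "-kwa"): -0.6}
FLOOR = -5.0

def viterbi(acoustic, bigram):
    # best[m] = (best score of any path ending in morpheme m, that path)
    best = {m: (s, [m]) for m, s in acoustic[0].items()}
    for frame in acoustic[1:]:
        new_best = {}
        for m, ac in frame.items():
            score, path = max(
                (prev_score + bigram.get((prev, m), FLOOR) + ac, prev_path + [m])
                for prev, (prev_score, prev_path) in best.items()
            )
            new_best[m] = (score, path)
        best = new_best
    return max(best.values())

print(viterbi(acoustic, bigram))  # -> best-scoring morpheme sequence for the eojeol
```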