
    THE USE OF SEGMENTATION CUES IN SECOND LANGUAGE LEARNERS OF ENGLISH

    This dissertation project examined the influence of language typology on the use of segmentation cues by second language (L2) learners of English. Previous research has shown that in native language (L1) segmentation, native English speakers rely more on sentence context and lexical knowledge than on segmental cues (i.e., phonotactics or acoustic-phonetics) or prosodic cues (e.g., word stress). However, L2 learners may rely more on segmental and prosodic cues to identify word boundaries in L2 speech, since using lexical cues efficiently may require high lexical and syntactic proficiency. The goal of this dissertation was to provide empirical evidence for the Revised Framework for L2 Segmentation (RFL2), which describes the relative importance of different levels of segmentation cues. Four experiments were carried out to test the hypotheses made by RFL2. Participants comprised four language groups: native English speakers and L2 learners of English with Mandarin, Korean, or Spanish L1s. Experiment 1 compared the use of stress cues and lexical knowledge, while Experiment 2 compared the use of phonotactic cues and lexical knowledge. Experiment 3 compared the use of phonotactic cues and semantic cues, while Experiment 4 compared the use of stress cues and sentence context. Results showed that L2 learners rely more on segmental cues than on lexical knowledge or semantic cues. L2 learners showed cue interaction at both the lexical and sublexical levels, whereas native speakers appeared to use the cues independently. In general, L2 learners appeared to have acquired sensitivity to the segmentation cues used in the L2, although they still showed difficulty with specific aspects of each cue depending on L1 characteristics. The results provided partial support for RFL2, in which L2 learners' use of sublexical cues is influenced by L1 typology. The dissertation has important pedagogical implications, as the findings may help identify cues that can facilitate L2 speech segmentation and comprehension.
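    As a rough illustration of the kind of cue weighting RFL2 describes (not the framework's actual formulation), a boundary decision can be modeled as a weighted combination of cue indicators, where independent cue use corresponds to purely additive weights and cue interaction to a nonzero interaction term. All weights and names below are invented:

```python
import math

# Illustrative cue-combination sketch (not the RFL2 model itself):
# boundary evidence as a weighted sum of cue indicators in log-odds
# space, plus an interaction term. Independent cue use = zero
# interaction weight; cue interaction = nonzero interaction weight.
def p_boundary(stress, phonotactic, lexical, w_s, w_p, w_l, w_int=0.0):
    logit = (w_s * stress + w_p * phonotactic + w_l * lexical
             + w_int * (stress * phonotactic))
    return 1.0 / (1.0 + math.exp(-logit))

# Native-like weighting: lexical knowledge dominates, cues independent.
print(p_boundary(1, 1, 0, w_s=0.3, w_p=0.4, w_l=2.0))
# L2-like weighting: sublexical cues dominate and interact.
print(p_boundary(1, 1, 0, w_s=1.2, w_p=1.2, w_l=0.4, w_int=0.8))
```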

    Infants segment words from songs - an EEG study

    Children’s songs are omnipresent and highly attractive stimuli in infants’ input. Previous work suggests that infants process linguistic–phonetic information from simplified sung melodies. The present study investigated whether infants learn words from ecologically valid children’s songs. Testing 40 Dutch-learning 10-month-olds in a familiarization-then-test electroencephalography (EEG) paradigm, this study asked whether infants can segment repeated target words embedded in songs during familiarization and subsequently recognize those words in continuous speech in the test phase. To replicate previous speech work and compare segmentation across modalities, infants participated in both song and speech sessions. Results showed a positive event-related potential (ERP) familiarity effect for the final compared to the first target occurrences during both song and speech familiarization. No evidence was found for word recognition in the test phase following either song or speech. Comparisons across the stimuli of the present and a comparable previous study suggested that acoustic prominence and speech rate may have contributed to the polarity of the ERP familiarity effect and its absence in the test phase. Overall, the present study provides evidence that 10-month-old infants can segment words embedded in songs, and it raises questions about the acoustic and other factors that enable or hinder infant word segmentation from songs and speech.
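    A minimal sketch of how such an ERP familiarity effect might be quantified (array shapes, time window, and function names are assumptions, not the authors' pipeline): average the EEG epochs for the first and final target occurrences and take the mean amplitude of their difference in a post-stimulus window:

```python
import numpy as np

def familiarity_effect(first_epochs, final_epochs, times, window=(0.2, 0.5)):
    """Mean amplitude difference (final - first occurrences) per channel.

    first_epochs, final_epochs: (n_trials, n_channels, n_times) arrays.
    times: 1-D array of sample times in seconds.
    The window and the final-minus-first contrast are assumptions about
    how such an effect could be quantified, not the paper's exact method.
    """
    erp_first = first_epochs.mean(axis=0)   # average across trials
    erp_final = final_epochs.mean(axis=0)
    mask = (times >= window[0]) & (times <= window[1])
    return (erp_final - erp_first)[:, mask].mean(axis=1)

# Illustrative call with random data standing in for recorded EEG:
rng = np.random.default_rng(0)
times = np.linspace(-0.2, 0.8, 251)
first = rng.normal(size=(40, 32, 251))
final = rng.normal(size=(40, 32, 251))
print(familiarity_effect(first, final, times).shape)  # (32,)
```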

    The listening talker: A review of human and algorithmic context-induced modifications of speech

    Speech output technology is finding widespread application, including in scenarios where intelligibility might be compromised - at least for some listeners - by adverse conditions. Unlike most current algorithms, talkers continually adapt their speech patterns as a response to the immediate context of spoken communication, where the type of interlocutor and the environment are the dominant situational factors influencing speech production. Observations of talker behaviour can motivate the design of more robust speech output algorithms. Starting with a listener-oriented categorisation of possible goals for speech modification, this review article summarises the extensive set of behavioural findings related to human speech modification, identifies which factors appear to be beneficial, and goes on to examine previous computational attempts to improve intelligibility in noise. The review concludes by tabulating 46 speech modifications, many of which have yet to be perceptually or algorithmically evaluated. Consequently, the review provides a roadmap for future work in improving the robustness of speech output.
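    One family of modifications covered by such reviews is spectral shaping under an equal-energy constraint. The sketch below applies a simple first-order pre-emphasis filter and rescales to the input RMS; the filter and its coefficient are illustrative choices, not an algorithm taken from the review:

```python
import numpy as np

def preemphasize_equal_rms(x, alpha=0.95):
    """First-order pre-emphasis, H(z) = 1 - alpha * z^-1, followed by a
    gain that restores the input's overall RMS level. Redistributing
    energy toward higher frequencies at a fixed level is one family of
    intelligibility modifications studied in this literature; the
    specific filter and alpha here are illustrative assumptions."""
    y = np.empty_like(x)
    y[0] = x[0]
    y[1:] = x[1:] - alpha * x[:-1]
    rms_in = np.sqrt(np.mean(x ** 2))
    rms_out = np.sqrt(np.mean(y ** 2))
    return y * (rms_in / (rms_out + 1e-12))  # equal-RMS constraint

# Toy usage on a synthetic signal sampled at 16 kHz:
fs = 16000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 200 * t) + 0.1 * np.sin(2 * np.pi * 3000 * t)
y = preemphasize_equal_rms(x)
print(np.sqrt(np.mean(x**2)), np.sqrt(np.mean(y**2)))  # matched RMS
```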

    N200 and P300 Evoked by Stimuli Straddling Category Boundary in Lexical Context

    Background and Objectives: Event-related potentials (ERPs) such as the N200 and P300 have been reported to reflect the categorical perception of speech. The purpose of the present study was to explore whether these ERP components reflect the influence of lexical context on categorical perception. The findings may provide evidence for bottom-up or top-down processing of speech. Methods: On a seven-step series of the /bi/-/pi/ continuum, a two-alternative forced-choice labeling test was administered in two conditions: /bi/ context (e.g., bee sting) and /pi/ context (e.g., pea soup). From the labeling results, Stimulus 1, a prototypical /bi/, was selected as the standard stimulus, and Stimulus 4, which showed the greatest between-category context effect, was selected as the deviant in an active oddball paradigm commonly used to obtain the N200 and P300 ERPs. After subjects finished the labeling test, they participated in electrophysiological testing, pressing a response button whenever they heard the deviant stimulus. A total of 450 stimuli, composed of 369 standard stimuli (81%) and 81 deviant stimuli (19%), were presented in an active oddball paradigm for the /bi/ and /pi/ context word conditions, respectively. ERP responses were measured from 9 electrodes in 21 normal-hearing adults. Electrophysiological data (amplitude and latency of the N200 and P300) and behavioral data (labeling, discrimination response accuracy, and discrimination reaction time) were analyzed. Results: (1) The amplitude and latency of the N200 and P300 did not reflect the change in categorical perception in the presence of lexical context that was demonstrated in the labeling task, (2) N200 amplitude was largest over the frontal region while P300 amplitude was largest over the parietal region, (3) discrimination reaction time was faster in the /pi/ context condition than in the /bi/ context condition, while response accuracy did not differ with context, and (4) there was no correlation between the N200/P300 and the behavioral data. Conclusion: The N200 and P300 do not reflect the lexical effect on pre-lexical processing of categorical speech stimuli. The findings suggest that lexical context does not affect electrophysiological measures of pre-lexical speech processing (e.g., the N200 or P300), supporting the autonomous (bottom-up) model, in which the influence of lexical context on speech perception (as demonstrated behaviorally) occurs at the post-lexical level.
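    The trial structure is straightforward to reproduce. The sketch below generates a 450-trial order with the stated 81%/19% split; the constraint that deviants never occur back-to-back is an assumed convention of oddball designs, not something the abstract specifies:

```python
import random

def oddball_sequence(n_standard=369, n_deviant=81, seed=1):
    """Build a standard/deviant trial order with the paper's 81%/19%
    split. Deviants are placed in distinct gaps between standards, so
    no two deviants are adjacent (an assumed design convention)."""
    rng = random.Random(seed)
    gaps = set(rng.sample(range(n_standard + 1), n_deviant))
    seq = []
    for i in range(n_standard + 1):
        if i in gaps:
            seq.append('deviant')
        if i < n_standard:
            seq.append('standard')
    return seq

seq = oddball_sequence()
print(len(seq), seq.count('deviant'))  # 450 81
# Verify that no two deviants occur back-to-back:
print(all(not (a == b == 'deviant') for a, b in zip(seq, seq[1:])))  # True
```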

    Continuous-speech segmentation at the beginning of language acquisition: Electrophysiological evidence

    Word segmentation, or detecting word boundaries in continuous speech, is not an easy task. Spoken language does not contain silences to indicate word boundaries, and words partly overlap due to coarticulation. Still, adults listening to their native language perceive speech as individual words. They are able to combine different distributional cues in the language, such as the statistical distribution of sounds and metrical cues, with lexical information to efficiently detect word boundaries. Infants in the first year of life do not yet command these cues. However, already between seven and ten months of age, before they know word meanings, infants learn to segment words from speech. This important step in language acquisition is the topic of this dissertation. In chapter 2, the first event-related brain potential (ERP) study on word segmentation in Dutch ten-month-olds is discussed. The results show that ten-month-olds can already segment words with a strong-weak stress pattern from speech, and that they need roughly the first half of a word to do so. Chapter 3 deals with the segmentation of words beginning with a weak syllable, as a considerable number of words in Dutch do not follow the predominant strong-weak stress pattern. The results show that ten-month-olds still largely rely on the strong syllable in the language and do not show an ERP response to the initial weak syllable. In chapter 4, seven-month-old infants' segmentation of strong-weak words was studied. An ERP response was found to strong-weak words presented in sentences. However, a behavioral response was not found in an additional Headturn Preference Procedure study. These results suggest that the ERP response is a precursor to the behavioral response that infants show at a later age.
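    The "statistical distribution of sounds" cue is commonly operationalized as transitional probabilities between syllables, with word boundaries posited at local probability dips. A minimal sketch on a toy syllable stream (the stream and the dip criterion are illustrative assumptions):

```python
from collections import Counter

def transitional_probabilities(syllables):
    """TP(a -> b) = count(a followed by b) / count(a)."""
    unigrams = Counter(syllables[:-1])
    bigrams = Counter(zip(syllables, syllables[1:]))
    return {(a, b): n / unigrams[a] for (a, b), n in bigrams.items()}

def boundaries_at_tp_dips(syllables, tp):
    """Posit a boundary between syllables i and i+1 when the TP there
    is lower than the TPs on both sides (a local dip)."""
    tps = [tp[(a, b)] for a, b in zip(syllables, syllables[1:])]
    return [i + 1 for i in range(1, len(tps) - 1)
            if tps[i] < tps[i - 1] and tps[i] < tps[i + 1]]

# Toy continuous stream built from the "words" gobala, tibudo, pati:
stream = "gobalatibudopatigobalabudoti"
sylls = [stream[i:i + 2] for i in range(0, len(stream), 2)]
tp = transitional_probabilities(sylls)
print(boundaries_at_tp_dips(sylls, tp))  # boundary indices at TP dips
```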

    On The Way To Linguistic Representation: Neuromagnetic Evidence of Early Auditory Abstraction in the Perception of Speech and Pitch

    The goal of this dissertation is to show that even at the earliest (non-invasively) recordable stages of auditory cortical processing, we find evidence that cortex is calculating abstract representations from the acoustic signal. Looking across two distinct domains (inferential pitch perception and vowel normalization), I present evidence demonstrating that the M100, an automatic evoked neuromagnetic component that localizes to primary auditory cortex, is sensitive to abstract computations. The M100 typically responds to physical properties of the stimulus in auditory and speech perception and integrates only over the first 25 to 40 ms of stimulus onset, providing a reliable dependent measure that allows us to tap into early stages of auditory cortical processing. In Chapter 2, I briefly present the episodicist position on speech perception and discuss research indicating that the strongest episodicist position is untenable. I then review findings from the mismatch negativity literature, where proposals have been made that the MMN allows access into linguistic representations supported by auditory cortex. Finally, I conclude the chapter with a discussion of previous findings on the M100/N1. In Chapter 3, I present neuromagnetic data showing that the response properties of the M100 are sensitive to the missing fundamental component, using well-controlled stimuli. These findings suggest that listeners are reconstructing the inferred pitch by 100 ms after stimulus onset. In Chapter 4, I propose a novel formant ratio algorithm in which the third formant (F3) is the normalizing factor. The goal of formant ratio proposals is to provide an explicit algorithm that successfully "eliminates" speaker-dependent acoustic variation in auditory vowel tokens. Results from two MEG experiments suggest that auditory cortex is sensitive to formant ratios and that the perceptual system shows heightened sensitivity to tokens located in more densely populated regions of the vowel space. In Chapter 5, I report MEG results suggesting that early auditory cortical processing is sensitive to violations of a phonological constraint on sound sequencing, which implies that listeners make highly specific, knowledge-based predictions about rather abstract anticipated properties of the upcoming speech signal, and that violations of these predictions are evident in early cortical processing.
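    The formant ratio idea of Chapter 4 can be sketched in a few lines: represent each vowel token by its formant frequencies divided by F3, which removes much of the speaker-dependent scaling between vocal tracts. The exact form of the ratios and the token values below are assumptions for illustration, not the dissertation's algorithm:

```python
def normalize_by_f3(f1, f2, f3):
    """Represent a vowel token by formant ratios with the third formant
    (F3) as the normalizing factor. The specific ratio form here is an
    illustrative assumption about the proposal, not its exact algorithm."""
    return f1 / f3, f2 / f3

# Illustrative /i/-like tokens from a larger and a smaller vocal tract
# (approximate textbook formant values, for demonstration only):
male_i   = normalize_by_f3(270, 2290, 3010)
female_i = normalize_by_f3(310, 2790, 3310)
print(male_i)    # ~(0.09, 0.76)
print(female_i)  # ~(0.09, 0.84) -- raw Hz differences shrink in ratio space
```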

    Speech Communication

    Contains table of contents for Part V, table of contents for Section 1, reports on six research projects, and a list of publications.

    Sponsors: C.J. Lebel Fellowship; Dennis Klatt Memorial Fund; National Institutes of Health Grants R01-DC00075, R01-DC01291, R01-DC01925, R01-DC02125, R01-DC02978, R01-DC03007, R29-DC02525, F32-DC00194, F32-DC00205, T32-DC00038; National Science Foundation Grants IRI 89-05249, IRI 93-14967, INT 94-2114

    Streaming Speech-to-Confusion Network Speech Recognition

    In interactive automatic speech recognition (ASR) systems, low-latency requirements limit the amount of search space that can be explored during decoding, particularly in end-to-end neural ASR. In this paper, we present a novel streaming ASR architecture that outputs a confusion network while maintaining limited latency, as needed for interactive applications. We show that the 1-best results of our model are on par with those of a comparable RNN-T system, while the richer hypothesis set allows second-pass rescoring to achieve 10-20% lower word error rate on the LibriSpeech task. We also show that our model outperforms a strong RNN-T baseline on a far-field voice assistant task.
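    A confusion network can be pictured as a linear sequence of slots, each holding competing word hypotheses with posterior probabilities; the 1-best result is the per-slot argmax, while a second pass can rescore full paths through the slots. A minimal sketch of the data structure (names and probabilities invented, not the paper's implementation):

```python
from dataclasses import dataclass

@dataclass
class Slot:
    # Competing word hypotheses with posterior probabilities for one
    # time region; '<eps>' marks "no word here". Posteriors in a slot
    # are assumed to sum to ~1.
    hyps: dict  # word -> posterior

def one_best(network):
    """Pick the highest-posterior word per slot (the 1-best path)."""
    words = [max(slot.hyps, key=slot.hyps.get) for slot in network]
    return [w for w in words if w != '<eps>']

# Toy network; a second-pass language model could rescore full paths
# through these slots rather than taking the per-slot argmax.
cn = [
    Slot({'i': 0.9, 'a': 0.1}),
    Slot({'scream': 0.4, 'see': 0.45, '<eps>': 0.15}),
    Slot({'cream': 0.6, 'green': 0.4}),
]
print(one_best(cn))  # ['i', 'see', 'cream']
```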

    The Resonant Dynamics of Speech Perception: Interword Integration and Duration-Dependent Backward Effects

    How do listeners integrate temporally distributed phonemic information into coherent representations of syllables and words? During fluent speech perception, variations in the durations of speech sounds and silent pauses can produce different perceived groupings. For example, increasing the silence interval between the words "gray chip" may result in the percept "great chip", whereas increasing the duration of fricative noise in "chip" may alter the percept to "great ship" (Repp et al., 1978). The ARTWORD neural model quantitatively simulates such context-sensitive speech data. In ARTWORD, sequential activation and storage of phonemic items in working memory provides bottom-up input to unitized representations, or list chunks, that group together sequences of items of variable length. The list chunks compete with each other as they dynamically integrate this bottom-up information. The winning groupings feed back to provide top-down support to their phonemic items. Feedback establishes a resonance which temporarily boosts the activation levels of selected items and chunks, thereby creating an emergent conscious percept. Because the resonance evolves more slowly than working memory activation, it can be influenced by information presented after relatively long intervening silence intervals. The same phonemic input can thereby yield different groupings depending on its arrival time. Processes of resonant transfer and competitive teaming help determine which groupings win the competition. Habituating levels of neurotransmitter along the pathways that sustain the resonant feedback lead to a resonant collapse that permits the formation of subsequent resonances.

    Sponsors: Air Force Office of Scientific Research (F49620-92-J-0225); Defense Advanced Research Projects Agency and Office of Naval Research (N00014-95-1-0409); National Science Foundation (IRI-97-20333); Office of Naval Research (N00014-92-J-1309, N00014-95-1-0657)
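    The resonance dynamics described above can be caricatured with a three-variable simulation; the equations and parameters below are invented for illustration and are not ARTWORD's actual model:

```python
# A toy Euler-integrated sketch of the qualitative mechanism described
# above: an item activation x excites a list-chunk activation y, and y
# feeds back to x through a pathway gated by a habituating transmitter z.
# While input is on, feedback boosts x and y (the resonance); as z is
# depleted by use, the feedback weakens. All equations and parameters
# are illustrative assumptions, not ARTWORD's equations.
dt, T = 0.001, 2.0
x, y, z = 0.0, 0.0, 1.0
ys, zs = [], []
for step in range(int(T / dt)):
    inp = 1.0 if step * dt < 0.5 else 0.0   # brief bottom-up input
    dx = -x + inp + 2.0 * y * z             # top-down feedback gated by z
    dy = -0.5 * y + 1.5 * x                 # chunk integrates item activity
    dz = 0.1 * (1.0 - z) - 0.8 * z * y      # transmitter habituates with use
    x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
    ys.append(y)
    zs.append(z)
print(f"peak chunk activity: {max(ys):.2f}; final: y={ys[-1]:.2f}, z={zs[-1]:.2f}")
```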