
    An acoustic analysis of labialization of coronal nasal consonants in American English

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005. Includes bibliographical references (leaves 51-54). A challenge for speech recognition models is to account for the variation between natural connected speech forms and the canonical forms of the lexicon. This study focuses on one particular sound change common in conversational speech, in which word-final coronal nasal consonants undergo place assimilation toward following word-initial labial consonants. Formant frequency measurements were taken from words ending in coronal nasal consonants in potentially assimilating sentence contexts, and from identical words ending in labial nasal consonants, across vowel contexts. The frequency of the second formant at vowel offset and during nasal closure was found to be sufficient to discriminate between underlying forms. There was evidence that even strongly assimilated coronal segments differ on the basis of these cues from their pure labial counterparts. It is hypothesized that listeners can use these acoustic cues to uncover the intended place of articulation of assimilated segments, without recourse to phonological inference or sentence context. By Elisabeth A. Hon.
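
    The key measurement above, second-formant (F2) frequency at vowel offset and during nasal closure, can be approximated with standard formant-tracking tools. Below is a minimal sketch using the parselmouth interface to Praat; the file name and the vowel-offset time are hypothetical placeholders, since the thesis's own segmentation procedure is not described in the abstract.

```python
# Sketch: estimate F2 at a given time point (e.g., vowel offset) with Praat via parselmouth.
# Requires `pip install praat-parselmouth`; "token.wav" and the offset time are hypothetical.
import parselmouth

snd = parselmouth.Sound("token.wav")
formants = snd.to_formant_burg()       # Burg-method formant tracking, Praat defaults
vowel_offset_s = 0.42                  # placeholder: would come from manual segmentation

f2_offset = formants.get_value_at_time(2, vowel_offset_s)  # formant number 2, in Hz
print(f"F2 at vowel offset: {f2_offset:.0f} Hz")
```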

    Does training with amplitude modulated tones affect tone-vocoded speech perception?

    Temporal-envelope cues are essential for successful speech perception. We asked here whether training on stimuli containing temporal-envelope cues but no speech content can improve the perception of spectrally degraded (vocoded) speech, in which the temporal envelope (but not the temporal fine structure) is mainly preserved. Two groups of listeners were trained on different amplitude-modulation (AM) based tasks, either AM detection or AM-rate discrimination (21 blocks of 60 trials over two days, 1260 trials; AM frequencies: 4 Hz, 8 Hz, and 16 Hz), while an additional control group did not undertake any training. Consonant identification in vocoded vowel-consonant-vowel stimuli was tested before and after training on the AM tasks (or at an equivalent time interval for the control group). Following training, only the trained groups showed a significant improvement in the perception of vocoded speech, but the improvement did not significantly differ from that observed for controls. Thus, we do not find convincing evidence that this amount of training with temporal-envelope cues without speech content provides a significant benefit for vocoded speech intelligibility. Alternative training regimens using vocoded speech along the linguistic hierarchy should be explored.
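
    As a rough illustration of tone vocoding, the sketch below splits the signal into frequency bands, extracts each band's temporal envelope, and uses it to modulate a sine carrier at the band centre, preserving the envelope while discarding temporal fine structure. The band layout, envelope cutoff, and file name are illustrative assumptions, not the study's actual parameters.

```python
# Sketch of a basic tone vocoder: keep band envelopes, discard temporal fine structure.
# Band edges and the 50 Hz envelope cutoff are illustrative, not the study's settings.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, hilbert, sosfilt

fs, x = wavfile.read("speech.wav")         # hypothetical input file
x = x.astype(np.float64)
if x.ndim > 1:                             # mix down to mono if needed
    x = x.mean(axis=1)

edges = [100, 300, 700, 1500, 3000, 6000]  # assumed analysis band edges (Hz)
env_sos = butter(4, 50, btype="low", fs=fs, output="sos")

t = np.arange(len(x)) / fs
out = np.zeros_like(x)
for lo, hi in zip(edges[:-1], edges[1:]):
    band_sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
    band = sosfilt(band_sos, x)
    env = np.clip(sosfilt(env_sos, np.abs(hilbert(band))), 0, None)
    carrier = np.sin(2 * np.pi * np.sqrt(lo * hi) * t)  # tone at geometric band centre
    out += env * carrier

out /= np.max(np.abs(out)) + 1e-12
wavfile.write("speech_tone_vocoded.wav", fs, (out * 32767).astype(np.int16))
```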

    Verification of feature regions for stops and fricatives in natural speech

    The presence of acoustic cues and their importance in speech perception have long remained debatable topics. In spite of several studies that exist in this field, very little is known about what exactly humans perceive in speech. This research takes a novel approach towards understanding speech perception. A new method, named three-dimensional deep search (3DDS), was developed to explore the perceptual cues of 16 consonant-vowel (CV) syllables, namely /pa/, /ta/, /ka/, /ba/, /da/, /ga/, /fa/, /Ta/, /sa/, /Sa/, /va/, /Da/, /za/, /Za/, from naturally produced speech. A verification experiment was then conducted to further verify the findings of the 3DDS method. For this purpose, the time-frequency coordinate that defines each CV was filtered out using the short-time Fourier transform (STFT), and perceptual tests were then conducted. A comparison between unmodified speech sounds and those without the acoustic cues was made. In most of the cases, the scores dropped from 100% to chance levels even at 12 dB SNR. This clearly emphasizes the importance of features in identifying each CV. The results confirm earlier findings that stops are characterized by a short-duration burst preceding the vowel by 10 cs in the unvoiced case, and appearing almost coincident with the vowel in the voiced case. As has been previously hypothesized, we confirmed that the F2 transition plays no significant role in consonant identification. 3DDS analysis labels the /sa/ and /za/ perceptual features as an intense frication noise around 4 kHz, preceding the vowel by 15-20 cs, with the /za/ feature being around 5 cs shorter in duration than that of /sa/; the /Sa/ and /Za/ events are found to be frication energy near 2 kHz, preceding the vowel by 17-20 cs. /fa/ has a relatively weak burst and frication energy over a wide band including 2-6 kHz, while /va/ has a cue in the 1.5 kHz mid-frequency region preceding the vowel by 7-10 cs. New information is established regarding /Da/ and /Ta/, especially with regard to the nature of their significant confusions.
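
    The verification step described above, removing a localized time-frequency region and resynthesizing the speech, can be sketched with an STFT analysis-modification-synthesis loop. The region boundaries and file name below are hypothetical placeholders; in the study, the cue coordinates come from the 3DDS analysis.

```python
# Sketch: zero out a time-frequency region via STFT analysis/resynthesis.
# The cue region (3.5-4.5 kHz, 100-150 ms) is a hypothetical placeholder,
# standing in for a 3DDS-derived coordinate such as a /sa/ frication band.
import numpy as np
from scipy.io import wavfile
from scipy.signal import istft, stft

fs, x = wavfile.read("cv_syllable.wav")  # hypothetical CV token
x = x.astype(np.float64)
if x.ndim > 1:
    x = x.mean(axis=1)

f, t, Z = stft(x, fs=fs, nperseg=512)

fmask = (f >= 3500) & (f <= 4500)        # frequency extent of the removed cue
tmask = (t >= 0.10) & (t <= 0.15)        # temporal extent of the removed cue
Z[np.ix_(fmask, tmask)] = 0              # delete the region before resynthesis

_, y = istft(Z, fs=fs, nperseg=512)
y /= np.max(np.abs(y)) + 1e-12
wavfile.write("cv_cue_removed.wav", fs, (y * 32767).astype(np.int16))
```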

    Mandarin speech perception in combined electric and acoustic stimulation.

    For deaf individuals with residual low-frequency acoustic hearing, combined use of a cochlear implant (CI) and hearing aid (HA) typically provides better speech understanding than either device alone. Because of coarse spectral resolution, CIs do not provide the fundamental frequency (F0) information that contributes to understanding of tonal languages such as Mandarin Chinese. The HA can provide a good representation of F0 and, depending on the range of aided acoustic hearing, first and second formant (F1 and F2) information. In this study, Mandarin tone, vowel, and consonant recognition in quiet and in noise was measured in 12 adult Mandarin-speaking bimodal listeners with the CI-only and with the CI+HA. Tone recognition was significantly better with the CI+HA in noise, but not in quiet. Vowel recognition was significantly better with the CI+HA in quiet, but not in noise. There was no significant difference in consonant recognition between the CI-only and the CI+HA in quiet or in noise. There was a wide range in bimodal benefit, with improvements greater than 20 percentage points in some tests and conditions. The bimodal benefit was compared with CI subjects' HA-aided pure-tone average (PTA) thresholds between 250 and 2000 Hz; subjects were divided into two groups: "better" PTA (<50 dB HL) or "poorer" PTA (>50 dB HL). The bimodal benefit differed significantly between groups only for consonant recognition. The bimodal benefit for tone recognition in quiet was significantly correlated with CI experience, suggesting that bimodal CI users learn to better combine low-frequency spectro-temporal information from acoustic hearing with temporal-envelope information from electric hearing. Given the small number of subjects in this study (n = 12), further research with Chinese bimodal listeners may provide more information regarding the contributions of acoustic and electric hearing to tonal-language perception.
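
    The grouping criterion described above, an HA-aided pure-tone average dichotomized at 50 dB HL, is simple to compute. The sketch below assumes octave audiometric frequencies from 250 to 2000 Hz and invented thresholds; the study's exact frequency set is not listed in the abstract.

```python
# Sketch: classify bimodal listeners by HA-aided pure-tone average (PTA).
# The 250/500/1000/2000 Hz frequency set is an assumption; thresholds are invented.
import numpy as np

aided_thresholds_db_hl = {            # hypothetical aided thresholds per subject,
    "S01": [30, 35, 45, 55],          # at 250, 500, 1000, and 2000 Hz
    "S02": [50, 55, 65, 70],
}

for subject, thresholds in aided_thresholds_db_hl.items():
    pta = np.mean(thresholds)
    group = "better (<50 dB HL)" if pta < 50 else "poorer (>=50 dB HL)"
    print(f"{subject}: PTA = {pta:.1f} dB HL -> {group}")
```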

    Language identification with suprasegmental cues: A study based on speech resynthesis

    This paper proposes a new experimental paradigm to explore the discriminability of languages, a question that is crucial for a child born into a bilingual environment. The paradigm employs the speech resynthesis technique, enabling the experimenter to preserve or degrade acoustic cues such as phonotactics, syllabic rhythm, or intonation from natural utterances. English and Japanese sentences were resynthesized, preserving broad phonotactics, rhythm, and intonation (Condition 1); rhythm and intonation (Condition 2); intonation only (Condition 3); or rhythm only (Condition 4). The findings support the notion that syllabic rhythm is a necessary and sufficient cue for French adult subjects to discriminate English from Japanese sentences. The results are consistent with previous research using low-pass filtered speech, as well as with phonological theories predicting rhythmic differences between languages. The new methodology thus appears well suited to the study of language discrimination. Applications to other domains of psycholinguistic research and to automatic language identification are considered.
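
    The abstract notes agreement with earlier studies using low-pass filtered speech, a manipulation that strips segmental and phonotactic detail while leaving rhythm and intonation largely intact. A minimal sketch of that classic manipulation follows; the 400 Hz cutoff and file name are conventional assumptions, and the study itself used resynthesis rather than filtering.

```python
# Sketch: low-pass filter speech to remove segmental cues while keeping prosody,
# as in the filtered-speech discrimination studies the abstract cites.
# The 400 Hz cutoff is a conventional choice, not a parameter of this paper.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt

fs, x = wavfile.read("sentence.wav")   # hypothetical input
x = x.astype(np.float64)
if x.ndim > 1:
    x = x.mean(axis=1)

sos = butter(8, 400, btype="low", fs=fs, output="sos")
y = sosfiltfilt(sos, x)                # zero-phase low-pass at 400 Hz

y /= np.max(np.abs(y)) + 1e-12
wavfile.write("sentence_lowpass.wav", fs, (y * 32767).astype(np.int16))
```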

    Contributions of temporal encodings of voicing, voicelessness, fundamental frequency, and amplitude variation to audiovisual and auditory speech perception

    Auditory and audio-visual speech perception was investigated using auditory signals of invariant spectral envelope that temporally encoded the presence of voiced and voiceless excitation, variations in amplitude envelope, and F0. In experiment 1, the contribution of the timing of voicing to consonant identification was compared with the additional effects of variations in F0 and in the amplitude of voiced speech. In audio-visual conditions only, amplitude variation slightly increased accuracy overall and for manner features. F0 variation slightly increased overall accuracy and manner perception in both auditory and audio-visual conditions. Experiment 2 examined consonant information derived from the presence and amplitude variation of voiceless speech in addition to that from voicing, F0, and voiced speech amplitude. Binary indication of voiceless excitation improved accuracy overall and for voicing and manner. The amplitude variation of voiceless speech produced only a small increment in place-of-articulation scores. A final experiment examined audio-visual sentence perception using encodings of voiceless excitation and amplitude variation added to a signal representing voicing and F0. There was a contribution of amplitude variation to sentence perception, but not of voiceless excitation. The timing of voiced and voiceless excitation thus appears to be the major temporal cue to consonant identity. (C) 1999 Acoustical Society of America. [S0001-4966(99)01410-1]
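
    A rough sketch of this kind of stimulus, a spectrally invariant carrier whose amplitude follows the speech envelope and whose frequency follows F0, with unvoiced frames silenced, is given below. The F0 track comes from Praat via parselmouth, and the envelope cutoff and file names are illustrative assumptions rather than the study's parameters.

```python
# Sketch: fixed-spectrum tone whose amplitude tracks the speech envelope and whose
# frequency tracks F0; unvoiced frames are silenced. All parameters are illustrative.
import numpy as np
import parselmouth
from scipy.io import wavfile
from scipy.signal import butter, hilbert, sosfilt

fs, x = wavfile.read("utterance.wav")  # hypothetical input
x = x.astype(np.float64)
if x.ndim > 1:
    x = x.mean(axis=1)

# Temporal envelope: magnitude of the analytic signal, smoothed at 30 Hz (assumed).
env_sos = butter(4, 30, btype="low", fs=fs, output="sos")
env = np.clip(sosfilt(env_sos, np.abs(hilbert(x))), 0, None)

# F0 track from Praat via parselmouth; 0 Hz marks unvoiced frames.
pitch = parselmouth.Sound(x, sampling_frequency=fs).to_pitch()
f0 = np.interp(np.arange(len(x)) / fs, pitch.xs(), pitch.selected_array["frequency"])

phase = 2 * np.pi * np.cumsum(f0) / fs          # integrate F0 into instantaneous phase
y = np.where(f0 > 0, np.sin(phase), 0.0) * env  # silence where no F0 was detected

y /= np.max(np.abs(y)) + 1e-12
wavfile.write("temporal_encoding.wav", fs, (y * 32767).astype(np.int16))
```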

    Loanword adaptation as first-language phonological perception

    We show that loanword adaptation can be understood entirely in terms of phonological and phonetic comprehension and production mechanisms in the first language. We provide explicit accounts of several loanword adaptation phenomena (in Korean) in terms of an Optimality-Theoretic grammar model with the same three levels of representation that are needed to describe L1 phonology: the underlying form, the phonological surface form, and the auditory-phonetic form. The model is bidirectional, i.e., the same constraints and rankings are used by the listener and by the speaker, and these constraints and rankings are the same for L1 processing and loanword adaptation.
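
    The evaluation logic of an Optimality-Theoretic grammar, picking the candidate whose violation profile is best under a strict constraint ranking, can be captured in a few lines. The constraints and candidates below are toy stand-ins, not the paper's Korean analysis or its three-level bidirectional model.

```python
# Sketch: candidate evaluation under a strict Optimality-Theoretic constraint ranking.
# Candidates are compared lexicographically on violation counts, ordered by ranking.
# The constraints and candidates are toy stand-ins, not the paper's Korean analysis.
from typing import Callable, List

def no_coda(form: str) -> int:
    """Violations: one per syllable (dot-separated) that ends in a consonant."""
    return sum(1 for syll in form.split(".") if syll[-1] not in "aeiou")

def dep_io(form: str) -> int:
    """Violations: one per epenthetic vowel (marked here with '+' for simplicity)."""
    return form.count("+")

ranking: List[Callable[[str], int]] = [no_coda, dep_io]  # NoCoda outranks DEP-IO

def optimal(candidates: List[str]) -> str:
    """Return the candidate with the lexicographically best violation profile."""
    return min(candidates, key=lambda c: tuple(con(c) for con in ranking))

# Toy tableau: with NoCoda ranked above DEP-IO, the epenthesis candidate wins.
print(optimal(["pam", "pa.m+u"]))  # -> "pa.m+u"
```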

    Norwegian retroflexion: licensing by cue or prosody?

    A common topic in the recent phonology literature is whether phonological processes and segments are licensed by prosodic position or by perceptual cues. The former is the traditional view, represented by e.g. Lombardi (1995) and Beckman (1998), and holds that segments are licensed in specific prosodic positions such as the coda. In a licensing-by-cue approach, represented by Steriade (1995, 1999), segments are instead assumed to occur only in those positions where their perceptual cues are prominent, independent of prosodic position; in positions where the cues are not salient, neutralization occurs.