284 research outputs found

    Use of second formant transition and relative amplitude cues in labeling nasal place of articulation

    Previous studies have shown that manipulating the amplitude of a particular frequency region of the consonantal portion of a syllable, relative to the amplitude of the same frequency region in an adjacent vowel, influences the perception of place of articulation. This manipulation has been called the relative amplitude cue. The earlier studies examined the effect of the relative amplitude manipulation on the labeling of place of articulation for fricatives and stop consonants. The current study examined the influence of this manipulation on the labeling of place of articulation for the /m/-/n/ nasal distinction. Twenty-five listeners with normal hearing labeled nasal place of articulation for synthetic syllables. Results show an influence of both the relative amplitude and formant transition manipulations on labeling behavior. These results add further evidence for the importance of acoustic boundaries in the processing of consonant place of articulation.
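The relative amplitude manipulation described above can be sketched in code. This is a minimal illustration, not the study's stimulus-generation procedure: it scales one frequency band of a stand-in "consonant" segment relative to an adjacent "vowel", and the signals, band edges, and gain values are all invented for the demo.

```python
import numpy as np

def adjust_band_amplitude(segment, sr, lo_hz, hi_hz, gain_db):
    """Scale the lo_hz-hi_hz band of a real signal by gain_db (FFT masking)."""
    spec = np.fft.rfft(segment)
    freqs = np.fft.rfftfreq(len(segment), 1.0 / sr)
    band = (freqs >= lo_hz) & (freqs < hi_hz)
    spec[band] *= 10.0 ** (gain_db / 20.0)
    return np.fft.irfft(spec, n=len(segment))

sr = 16000
rng = np.random.default_rng(0)
# Stand-in syllable: 50 ms of noise as the "consonant", 200 ms tone as the "vowel"
consonant = 0.1 * rng.standard_normal(int(0.05 * sr))
vowel = 0.5 * np.sin(2 * np.pi * 220 * np.arange(int(0.2 * sr)) / sr)

# Raise the 1-3 kHz region of the consonant by 10 dB relative to the vowel
modified = adjust_band_amplitude(consonant, sr, 1000, 3000, 10.0)
syllable = np.concatenate([modified, vowel])
```

Attenuating the same band (a negative gain_db) produces the opposite pole of a relative amplitude continuum.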

    Concatenative speech synthesis: a Framework for Reducing Perceived Distortion when using the TD-PSOLA Algorithm

    This thesis presents the design and evaluation of an approach to concatenative speech synthesis using the Time-Domain Pitch-Synchronous OverLap-Add (TD-PSOLA) signal processing algorithm. Concatenative synthesis systems make use of pre-recorded speech segments stored in a speech corpus. At synthesis time, the 'best' segments available to synthesise the new utterance are chosen from the corpus using a process known as unit selection. During the synthesis process, the pitch and duration of these segments may be modified to generate the desired prosody. The TD-PSOLA algorithm provides an efficient and largely successful means of performing these modifications, although some perceptible distortion, in the form of 'buzzyness', may be introduced into the speech signal. Despite the popularity of the TD-PSOLA algorithm, little formal research has been undertaken to address this recognised problem of distortion. The approach in this thesis was developed to reduce the perceived distortion that is introduced when TD-PSOLA is applied to speech. To investigate the occurrence of this distortion, a psychoacoustic evaluation of the effect of pitch modification using the TD-PSOLA algorithm is presented. Subjective experiments, in the form of a set of listening tests, were undertaken using word-level stimuli that had been manipulated using TD-PSOLA. The data collected from these experiments were analysed for patterns of co-occurrence and correlations to investigate where this distortion may occur. From this, parameters were identified which may have contributed to increased distortion. These parameters concerned the relationship between the spectral content of individual phonemes, the extent of pitch manipulation, and aspects of the original recordings. Based on these results, a framework was designed for use in conjunction with TD-PSOLA to minimise the possible causes of distortion.
The framework consisted of a novel speech corpus design, a signal processing distortion measure, and a selection process for especially problematic phonemes. Rather than being phonetically balanced, the corpus is balanced to the needs of the signal processing algorithm, containing more of the adversely affected phonemes. The aim is to reduce the potential extent of pitch modification of such segments, and hence produce synthetic speech with less perceptible distortion. The signal processing distortion measure was developed to allow the prediction of perceptible distortion in pitch-modified speech. Different weightings were estimated for individual phonemes, trained using the experimental data collected during the listening tests. The potential benefit of such a measure for existing unit selection processes in a corpus-based system using TD-PSOLA is illustrated. Finally, the special-case selection process was developed for highly problematic voiced fricative phonemes to minimise the occurrence of perceived distortion in these segments. The success of the framework, in terms of generating synthetic speech with reduced distortion, was evaluated. A listening test showed that the TD-PSOLA-balanced speech corpus may be capable of generating pitch-modified synthetic sentences with significantly less distortion than those generated using a typical phonetically balanced corpus. The voiced fricative selection process was also shown to produce pitch-modified versions of these phonemes with less perceived distortion than a standard selection process. The listening test then indicated that the signal processing distortion measure was able to predict the resulting amount of distortion at the sentence level after the application of TD-PSOLA, suggesting that it may be beneficial to include such a measure in existing unit selection processes.
The framework was found to be capable of producing speech with reduced perceptible distortion in certain situations, although the effects seen at the sentence level were smaller than those seen in the previous investigative experiments that made use of word-level stimuli. This suggests that the effect of the TD-PSOLA algorithm cannot always be easily anticipated due to the highly dynamic nature of speech, and that the reduction of perceptible distortion in TD-PSOLA-modified speech remains a challenge to the speech community.
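The pitch-modification step at the heart of TD-PSOLA can be sketched as follows. This is a deliberately simplified illustration on a synthetic constant-pitch signal, not the thesis's implementation: real speech first requires a pitch-marking stage, and all parameter values here are invented.

```python
import numpy as np

def td_psola_shift(x, sr, f0, factor):
    """Simplified TD-PSOLA pitch shift for a signal with known, constant f0.

    Pitch-synchronous Hann-windowed frames (two periods long) are extracted
    at the analysis pitch marks and overlap-added at a new mark spacing of
    period / factor, raising (factor > 1) or lowering (factor < 1) the pitch.
    """
    period = int(round(sr / f0))
    marks = np.arange(period, len(x) - period, period)   # idealised pitch marks
    out_spacing = int(round(period / factor))
    win = np.hanning(2 * period + 1)
    y = np.zeros(len(x))
    t = period
    while t < len(x) - period:
        m = marks[np.argmin(np.abs(marks - t))]          # nearest analysis mark
        y[t - period:t + period + 1] += x[m - period:m + period + 1] * win
        t += out_spacing
    return y

sr, f0 = 16000, 100.0
n = np.arange(int(0.5 * sr))
x = np.sin(2 * np.pi * f0 * n / sr) + 0.3 * np.sin(2 * np.pi * 2 * f0 * n / sr)
y = td_psola_shift(x, sr, f0, 1.5)   # raise the pitch by 50%
```

Duration modification works the same way, by repeating or skipping analysis frames; the 'buzzyness' discussed above arises when the overlap-added frames fail to line up cleanly, which this toy periodic signal cannot exhibit.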

    The impact of spectrally asynchronous delay on the intelligibility of conversational speech

    Conversationally spoken speech is replete with rapidly changing and complex acoustic cues that listeners are able to hear, process, and map to meaning. For many hearing-impaired listeners, a hearing aid is necessary to hear these spectral and temporal acoustic cues of speech. For listeners with mild-to-moderate high frequency sensorineural hearing loss, open-fit digital signal processing (DSP) hearing aids are the most common amplification option. Open-fit DSP hearing aids introduce a spectrally asynchronous delay into the acoustic signal: audible low frequency information passes to the eardrum unimpeded, while the aid delivers amplified high frequency sounds whose onset is delayed relative to the natural pathway of sound. These spectrally asynchronous delays may disrupt the natural acoustic pattern of speech. The primary goal of this study was to measure the effect of spectrally asynchronous delay on the intelligibility of conversational speech for normal-hearing and hearing-impaired listeners. A group of normal-hearing listeners (n = 25) and a group of listeners with mild-to-moderate high frequency sensorineural hearing loss (n = 25) participated. The acoustic stimuli were 200 conversationally spoken recordings of the low-predictability sentences from the revised Speech Perception in Noise test (R-SPIN). These 200 sentences were modified to control for audibility for the hearing-impaired group, and the acoustic energy above 2 kHz was delayed by 0 ms (control), 4 ms, 8 ms, or 32 ms relative to the low frequency energy. The data were analyzed to determine the effect of each of the four delay conditions on the intelligibility of the final key word of each sentence. Normal-hearing listeners were minimally affected by the asynchronous delay. However, the hearing-impaired listeners were deleteriously affected by increasing amounts of spectrally asynchronous delay.
Although the hearing-impaired listeners performed well overall in their perception of conversationally spoken speech in quiet, the intelligibility of conversationally spoken sentences decreased significantly when the delay was 4 ms or greater. Hearing aid manufacturers therefore need to restrict the amount of delay introduced by DSP so that it does not distort the acoustic patterns of conversational speech.
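The spectrally asynchronous delay manipulation can be sketched as follows; this is a minimal zero-phase FFT band split, assuming the same 2 kHz cutoff as the study but with an invented two-tone test signal rather than the actual R-SPIN stimulus processing:

```python
import numpy as np

def delay_high_band(x, sr, cutoff_hz, delay_ms):
    """Delay the content at/above cutoff_hz by delay_ms, leaving the
    low band untouched (zero-phase FFT split of a real signal)."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / sr)
    high = spec * (freqs >= cutoff_hz)
    low = spec - high
    lo_sig = np.fft.irfft(low, n=len(x))
    hi_sig = np.fft.irfft(high, n=len(x))
    d = int(round(sr * delay_ms / 1000.0))
    delayed = np.concatenate([np.zeros(d), hi_sig])[:len(x)]
    return lo_sig + delayed

sr = 16000
t = np.arange(int(0.1 * sr)) / sr
x = np.sin(2 * np.pi * 500 * t) + np.sin(2 * np.pi * 3000 * t)  # toy two-tone signal
y = delay_high_band(x, sr, 2000, 4.0)   # 4 ms delay above 2 kHz, as in one condition
```

An open-fit hearing aid produces this effect physically: the vented ear canal passes the low band directly, while the DSP path adds latency to the amplified high band.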

    Interaction between source and filter in vowel identification

    Speech sounds can be modeled as the product of two components: the source and the filter. The effects of filter manipulations on speech perception have been studied extensively, while the effects of source manipulations have been largely overlooked. This study assessed the impact of source manipulations on vowel identification. To this end, two source manipulations were applied prior to filtering. First, several harmonics of the source sawtooth wave that were located near formant peaks were mistuned, either towards or away from the peaks. Mistuning towards formant peaks was expected to facilitate vowel identification by conveying the position of the formant more clearly; mistuning away was expected to hinder performance. Consistent with this hypothesis, a significant effect of mistuning was observed. However, follow-up analyses revealed that the manipulation had an effect only in conditions where harmonics were mistuned away from formant peaks by a large degree (5%). The second manipulation consisted of adding noise to the source signal to “fill in” the acoustic spectrum. Because the noise was added before filtering, the spectral shape of the noise component was identical to that of the harmonic portion of the tone and was expected to help convey formant peaks, especially when they were not well conveyed by harmonic information. The results revealed that the addition of noise had no effect on vowel identification. Possible stimulus-based explanations for the failure to observe some of the hypothesized effects are discussed.
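The source-filter construction and the harmonic-mistuning manipulation can be sketched as follows. The formant frequencies, bandwidth, and mistuning values below are stand-ins, and the Gaussian-peak filter is a crude substitute for a real vocal-tract (e.g. Klatt-style) resonance filter:

```python
import numpy as np

def harmonic_source(f0, n_harm, dur, sr, mistune=None):
    """Sum-of-harmonics stand-in for a sawtooth source (1/k amplitude rolloff).
    `mistune` maps harmonic number -> fractional frequency shift (0.05 = +5%)."""
    mistune = mistune or {}
    t = np.arange(int(dur * sr)) / sr
    x = np.zeros_like(t)
    for k in range(1, n_harm + 1):
        f = k * f0 * (1.0 + mistune.get(k, 0.0))
        x += np.sin(2 * np.pi * f * t) / k
    return x

def formant_filter(x, sr, formant_hz, bw=80.0):
    """Crude zero-phase filter: Gaussian gain peaks at the formant frequencies."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / sr)
    gain = sum(np.exp(-0.5 * ((freqs - f) / bw) ** 2) for f in formant_hz)
    return np.fft.irfft(spec * gain, n=len(x))

sr, f0 = 16000, 100.0
# /a/-like formants; mistune harmonic 7 (700 Hz) upward by 5%, toward F1 = 730 Hz
vowel = formant_filter(harmonic_source(f0, 40, 0.3, sr, {7: 0.05}),
                       sr, [730, 1090, 2440])
```

Mistuning away from the peak would use a negative shift instead; the study's noise manipulation would add noise to the source before this same filtering step, giving the noise and harmonic components identical spectral envelopes.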

    Pitch continuity and speech source attribution.


    How Linguistic Chickens Help Spot Spoken-Eggs: Phonological Constraints on Speech Identification

    It has long been known that the identification of aural stimuli as speech is context-dependent (Remez et al., 1981). Here, we demonstrate that the discrimination of speech stimuli from their non-speech transforms is further modulated by their linguistic structure. We gauge the effect of phonological structure on discrimination across different manifestations of well-formedness in two distinct languages. One case examines the restrictions on English syllables (e.g., the well-formed melif vs. ill-formed mlif); another investigates the constraints on Hebrew stems by comparing ill-formed AAB stems (e.g., TiTuG) with well-formed ABB and ABC controls (e.g., GiTuT, MiGuS). In both cases, non-speech stimuli that conform to well-formed structures are harder to discriminate from speech than stimuli that conform to ill-formed structures. Auxiliary experiments rule out alternative acoustic explanations for this phenomenon. In English, we show that acoustic manipulations that mimic the mlif–melif contrast do not impair the classification of non-speech stimuli whose structure is well-formed (i.e., disyllables with phonetically short vs. long tonic vowels). Similarly, non-speech stimuli that are ill-formed in Hebrew present no difficulties to English speakers. Thus, non-speech stimuli are harder to classify only when they are well-formed in the participants’ native language. We conclude that the classification of non-speech stimuli is modulated by their linguistic structure: inputs that support well-formed outputs are more readily classified as speech.

    Speech Communication

    Contains table of contents for Part V, table of contents for Section 1, reports on six research projects, and a list of publications.
    C.J. Lebel Fellowship
    Dennis Klatt Memorial Fund
    National Institutes of Health Grant R01-DC00075
    National Institutes of Health Grant R01-DC01291
    National Institutes of Health Grant R01-DC01925
    National Institutes of Health Grant R01-DC02125
    National Institutes of Health Grant R01-DC02978
    National Institutes of Health Grant R01-DC03007
    National Institutes of Health Grant R29-DC02525
    National Institutes of Health Grant F32-DC00194
    National Institutes of Health Grant F32-DC00205
    National Institutes of Health Grant T32-DC00038
    National Science Foundation Grant IRI 89-05249
    National Science Foundation Grant IRI 93-14967
    National Science Foundation Grant INT 94-2114

    Acoustic Cue Weighting in Children Wearing Cochlear Implants

    The purpose of this study was to determine how normal-hearing adults (NHA), normal-hearing children (NHC), and children wearing cochlear implants (CI) differ in the perceptual weight given to cues for fricative consonant and voiceless stop consonant continua. Ten normal-hearing adults, eleven 5-8-year-old normal-hearing children, and eight 5-8-year-old children wearing cochlear implants participated. For fricative consonant perception, /su/-/∫u/ continua were constructed by varying a fricative spectrum cue in three steps and an F2 onset transition cue in three steps. For voiceless stop consonant perception, /pu/-/tu/ continua were constructed by varying a burst cue in three steps and an F2 onset transition cue in three steps. A quantitative method of analysis (an ANOVA model) was used to determine cue weighting and to measure cue interaction. For the fricative consonant, both NHC and NHA gave more perceptual weight to the fricative spectrum cue than to the formant transition. NHC gave significantly less weight to the fricative spectrum cue than NHA. The weight given to the transition cue was similar for NHC and NHA, and the degree of cue interaction was similar between the two groups. The CI group gave more perceptual weight to the fricative spectrum cue than to the transition; however, the degree of cue interaction was not significant for CI. For the voiceless stop consonant, both NHC and NHA gave more perceptual weight to the transition cue than to the burst cue. NHC gave proportionately less weight to the transition cue than NHA. The weight given to the burst cue and the degree of cue interaction were similar between NHC and NHA. The CI group gave more perceptual weight to the transition cue than to the burst cue, with no significant difference from the NHC group; however, the degree of cue interaction was not significant for CI.
These results indicated that all groups favored the longer-duration cue in making phonemic judgments, and developmental patterns were evident. The CI group had cue-weighting strategies similar to those of age-matched NHC, but cue integration was not significant for either fricative or voiceless stop consonant perception.
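An ANOVA-style cue-weighting analysis of this kind can be illustrated on toy data. The response proportions below are invented, and the marginal-means decomposition shown is one common way to express relative cue weight and interaction, not necessarily the exact model used in the study:

```python
import numpy as np

# Hypothetical 3x3 continuum: rows = fricative-spectrum step, cols = F2-transition step.
# Entries are proportions of "sh" responses pooled over a listener group (invented).
p_sh = np.array([[0.10, 0.15, 0.20],
                 [0.30, 0.50, 0.70],
                 [0.80, 0.85, 0.95]])

grand = p_sh.mean()
spec_main = p_sh.mean(axis=1) - grand    # spectral-cue main effect (rows)
trans_main = p_sh.mean(axis=0) - grand   # transition-cue main effect (cols)
# The interaction is whatever the additive (main-effects) model leaves over
interaction = p_sh - grand - spec_main[:, None] - trans_main[None, :]

# One simple weight measure: the range of each cue's marginal means
spec_weight = spec_main.max() - spec_main.min()
trans_weight = trans_main.max() - trans_main.min()
```

For these toy data the spectral cue carries more weight than the transition cue, mirroring the fricative-continuum result reported above for the normal-hearing groups.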