65 research outputs found

    The perception and production of stress and intonation by children with cochlear implants

    Users of current cochlear implants have limited access to pitch information and hence to intonation in speech, which is likely to have an important impact on prosodic perception. This thesis examines the perception and production of the prosody of stress in children with cochlear implants. The interdependence of perceptual cues to stress (pitch, timing and loudness) in English is well documented, and each of these is considered in analyses of both perception and production. The subject group comprised 17 implanted (CI) children aged 5;7 to 16;11 and using ACE or SPEAK processing strategies. The aims are to establish (i) the extent to which stress and intonation are conveyed to CI children in synthesised bisyllables (BAba vs. baBA) involving controlled changes in F0, duration and amplitude (Experiment I), and in natural speech involving compound vs. phrase stress and focus (Experiment II); (ii) whether, when pitch cues are missing or inaudible to the listeners, other cues such as loudness or timing contribute to the perception of stress and intonation; and (iii) whether CI subjects make appropriate use of F0, duration and amplitude to convey linguistic focus in speech production (Experiment III). Results of Experiment I showed that seven of the subjects were unable to reliably hear pitch differences of 0.84 octaves. Most of the remaining subjects required a large (approximately 0.5 octave) difference to reliably hear a pitch change. Performance of the CI children was poorer than that of a group of normally hearing children presented with an acoustic cochlear implant simulation. Some of the CI children who could not discriminate F0 differences in Experiment I nevertheless scored above chance in tests involving focus in natural speech in Experiment II. Similarly, some CI subjects who scored above chance in the production of appropriate F0 contours in Experiment III could not hear F0 differences of 0.84 octaves. These results suggest that CI children do not necessarily rely on F0 cues to stress and that, in the absence of F0 or amplitude cues, duration may provide an alternative cue.
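    As a quick illustration of the pitch intervals involved, an interval expressed in octaves corresponds to a frequency ratio of 2 raised to that number of octaves. The sketch below converts the 0.84- and 0.5-octave thresholds into Hz above an assumed 150 Hz base F0 (the base value is illustrative, not taken from the study).

```python
def octaves_to_ratio(octaves):
    """Frequency ratio corresponding to a pitch interval in octaves."""
    return 2.0 ** octaves

# Illustrative base F0 of 150 Hz (an assumed value, not from the study).
base_f0 = 150.0
for interval in (0.84, 0.5):
    upper = base_f0 * octaves_to_ratio(interval)
    print(f"{interval} octaves above {base_f0} Hz = {upper:.1f} Hz "
          f"(ratio {octaves_to_ratio(interval):.2f})")
```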

    Sequential grouping constraints on across-channel auditory processing


    Mechanism of disyllabic tonal reduction in Taiwan Mandarin

    This study was designed to test the hypothesis that time pressure is a direct cause of tonal reduction in Taiwan Mandarin. Tonal reduction refers to the phenomenon of the tones of a disyllabic unit being contracted into a monosyllabic unit. An experiment was carried out in which six male native speakers of Taiwan Mandarin produced sentences containing disyllabic compound words /ma/+/ma/ with varying tonal combinations at different speech rates. Analyses indicated that increasing time pressure led to severe tonal reductions. Articulatory effort, measured by the slope of F0 peak velocity of unidirectional movement over F0 movement amplitude, is insufficient to compensate for duration-dependent undershoot, in particular when time pressure exceeds certain thresholds. Mechanisms of tonal reduction were further examined by comparing F0 velocity profiles against the Edge-in model, a rule-based phonological model. Results showed that the residual tonal variants in contracted syllables are gradient rather than categorical: as duration is shortened, the movement towards the desired targets is gradually curtailed.
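    The effort measure described above, the slope of F0 peak velocity regressed on F0 movement amplitude, can be sketched as follows. The F0 movements here are synthetic linear ramps assumed purely for illustration; real movements would come from measured F0 tracks.

```python
import numpy as np

def movement_stats(f0, fs):
    """Peak velocity and amplitude of one unidirectional F0 movement.

    f0 : array of F0 samples (Hz) covering a single rise or fall
    fs : sampling rate of the F0 track (Hz)
    """
    velocity = np.gradient(f0) * fs          # Hz per second
    peak_velocity = np.max(np.abs(velocity))
    amplitude = np.abs(f0[-1] - f0[0])       # extent of the movement
    return peak_velocity, amplitude

# Hypothetical movements (not data from the study): linear rises of
# differing extents, each 11 samples at a 100 Hz F0 frame rate.
fs = 100.0
movements = [np.linspace(200.0, 200.0 + amp, 11) for amp in (20, 40, 60)]
stats = np.array([movement_stats(m, fs) for m in movements])

# Effort index: slope of peak velocity regressed on movement amplitude.
slope = np.polyfit(stats[:, 1], stats[:, 0], 1)[0]
```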

    Effects of errorless learning on the acquisition of velopharyngeal movement control

    Session 1pSC - Speech Communication: Cross-Linguistic Studies of Speech Sound Learning of the Languages of Hong Kong (Poster Session). The implicit motor learning literature suggests a benefit for learning if errors are minimized during practice. This study investigated whether the same principle holds for learning velopharyngeal movement control. Normal speaking participants learned to produce hypernasal speech in either an errorless learning condition (in which the possibility for errors was limited) or an errorful learning condition (in which the possibility for errors was not limited). The nasality level of the participants’ speech was measured by nasometer and reflected in nasalance scores (in %). Errorless learners practiced producing hypernasal speech with a threshold nasalance score of 10% at the beginning, which gradually increased to a threshold of 50% at the end. The same set of threshold targets was presented to errorful learners, but in reversed order. Errors were defined as the proportion of speech with a nasalance score below the threshold. The results showed that, relative to errorful learners, errorless learners displayed fewer errors (17.7% vs. 50.7%) and a higher mean nasalance score (46.7% vs. 31.3%) during the acquisition phase. Furthermore, errorless learners outperformed errorful learners in both retention and novel transfer tests. Acknowledgment: Supported by The University of Hong Kong Strategic Research Theme for Sciences of Learning. © 2012 Acoustical Society of America.
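    The threshold manipulation can be sketched as follows. The five-step schedule and the learner's nasalance scores are assumptions for illustration only, since the abstract gives just the 10% and 50% endpoints and the error definition.

```python
import numpy as np

# Threshold schedule: the abstract gives only the endpoints (10% rising
# to 50%); five equal steps here are an assumption for illustration.
errorless_thresholds = np.linspace(10, 50, 5)   # 10, 20, 30, 40, 50 (%)
errorful_thresholds = errorless_thresholds[::-1]  # reversed order

def error_rate(nasalance_scores, thresholds):
    """Proportion of trials whose nasalance falls below that trial's threshold."""
    scores = np.asarray(nasalance_scores, dtype=float)
    return float(np.mean(scores < thresholds))

# Hypothetical learner whose nasalance climbs gradually across trials.
scores = [12, 22, 28, 45, 52]
print(error_rate(scores, errorless_thresholds))
print(error_rate(scores, errorful_thresholds))
```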

    Fundamental frequency modelling: an articulatory perspective with target approximation and deep learning

    Current statistical parametric speech synthesis (SPSS) approaches typically aim at state/frame-level acoustic modelling, which leads to a problem of frame-by-frame independence. Moreover, whichever learning technique is used, hidden Markov model (HMM), deep neural network (DNN) or recurrent neural network (RNN), the fundamental idea is to set up a direct mapping from linguistic to acoustic features. Although progress is frequently reported, this idea is questionable in terms of biological plausibility. This thesis aims to address these issues by integrating dynamic mechanisms of human speech production as a core component of F0 generation, thus developing a more human-like F0 modelling paradigm. By introducing an articulatory F0 generation model, target approximation (TA), between text and speech that controls syllable-synchronised F0 generation, contextual F0 variations are processed in two separate yet integrated stages: linguistic to motor, and motor to acoustic. With the goal of demonstrating that human speech movement can be considered a dynamic process of target approximation and that the TA model is a valid F0 generation model for the motor-to-acoustic stage, a TA-based pitch control experiment is conducted first to simulate the subtle human behaviour of online compensation for pitch-shifted auditory feedback. Then, the TA parameters are collectively controlled by linguistic features via a deep or recurrent neural network (DNN/RNN) at the linguistic-to-motor stage. The systems were trained on a Mandarin Chinese dataset consisting of both statements and questions. The TA-based systems generally outperformed the baseline systems in both objective and subjective evaluations. Furthermore, the number of required linguistic features was reduced, first to syllable level only (with DNN) and then with all positional information removed (with RNN). Fewer linguistic features as input, with a limited number of TA parameters as output, meant less training data and lower model complexity, which in turn led to more efficient training and faster synthesis.
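    The TA model referred to above has a published quantitative form (the qTA model of Prom-on, Xu and Thipakorn, 2009), in which syllable-internal F0 is a third-order critically damped approach to a linear pitch target, with the F0 value, velocity and acceleration transferred across syllable boundaries. A minimal sketch under that formulation, with parameter values chosen purely for illustration:

```python
import numpy as np

def qta_f0(duration, m, b, lam, x0, v0=0.0, a0=0.0, fs=200):
    """F0 trajectory of one syllable under the qTA formulation.

    The pitch target is the line T(t) = m*t + b; lam is the rate of target
    approximation; x0, v0, a0 are the F0 value, velocity and acceleration
    inherited from the previous syllable's offset (zero at utterance onset).
    """
    t = np.arange(0.0, duration, 1.0 / fs)
    # Coefficients of the transient term, from the initial conditions.
    c1 = x0 - b
    c2 = v0 - m + lam * c1
    c3 = (a0 + 2 * lam * c2 - lam ** 2 * c1) / 2
    return t, (m * t + b) + (c1 + c2 * t + c3 * t ** 2) * np.exp(-lam * t)

# Illustrative static target at 90 semitones approached from an onset of 96
# (all parameter values assumed, not taken from the thesis).
t, f0 = qta_f0(duration=0.25, m=0.0, b=90.0, lam=40.0, x0=96.0)
```

    With a sufficiently large rate parameter, the trajectory starts at the transferred onset value and converges onto the target line well before the syllable ends, which is the behaviour the thesis exploits for syllable-synchronised F0 generation.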

    Søren Buus. Thirty years of psychoacoustic inspiration


    Mechanism of extreme phonetic reduction: evidence from Taiwan Mandarin

    Extreme reduction refers to the phenomenon where intervocalic consonants are so severely reduced that two or more adjacent syllables appear to be merged into one. Such severe reduction is often considered a characteristic of natural speech and to be closely related to factors including lexical frequency, information load, social context and speaking style. This thesis takes a novel approach to investigating this phenomenon by testing the time pressure account of phonetic reduction, according to which time pressure is the direct cause of extreme reduction. The investigation was done with data from Taiwan Mandarin, a language where extreme reduction (referred to as contraction) has been reported to occur frequently. Three studies were conducted to test the main hypothesis. In Study 1, native Taiwan Mandarin speakers produced sentences containing nonsense disyllabic words with varying phonetic structures at differing speech rates. Spectral analysis showed that extreme reduction occurred frequently in nonsense words produced under high time pressure. In Study 2a, further examination of formant peak velocity as a function of formant movement amplitude in the experimental data suggested that articulatory effort was not decreased during reduction, but in fact was likely increased. Study 2b examined high-frequency words from three spontaneous speech corpora for reduction variations. Results demonstrated that patterns of reduction in high-frequency words in spontaneous speech (Study 2b) were similar to those in nonsense words spoken under experimental conditions (Study 2a). Study 3 investigated tonal reduction with varying tonal contexts and found that tonal reduction can also be explained in terms of time pressure. Analysis of F0 trajectories demonstrated that speakers attempt to reach the original underlying tonal targets even in the case of extreme reduction and that there was no weakening of articulatory effort despite the severe reduction. To further test the main hypothesis, two computational modelling experiments were conducted. The first applied the quantitative Target Approximation model (qTA) for tone and intonation and the second applied the Functional Linear Model (FLM). Results showed that severely reduced F0 trajectories in tone dyads can be regenerated with high accuracy by qTA using generalized canonical tonal targets with only the syllable duration modified. Additionally, it was shown that using FLM and adjusting duration alone can give a fairly good representation of contracted F0 trajectory shapes. In summary, the results suggest that target undershoot under time pressure is likely to be the direct mechanism of extreme reduction, and that factors commonly associated with reduction in previous research very likely have an impact on duration, which in turn determines the degree of target attainment through the time pressure mechanism.

    On the mechanism of response latencies in auditory nerve fibers

    Despite structural differences in the middle and inner ears, the latency pattern of auditory nerve fibers in response to an identical sound has been found to be similar across numerous species. Studies have shown this similarity even in species with distinct cochleae, or without a basilar membrane at all. This stimulus-, neuron- and species-independent similarity of latency cannot simply be explained by the concept of cochlear traveling waves, which is generally accepted as the main cause of the neural latency pattern. An original concept, the Fourier pattern, is defined to characterize a feature of temporal processing, specifically phase encoding, that is not readily apparent in more conventional analyses. The pattern is created by marking the first amplitude maximum of each sinusoid component of the stimulus, so as to encode phase information. The hypothesis is that the hearing organ serves as a running analyzer whose output reflects synchronization of auditory neural activity consistent with the Fourier pattern. A combination of experimental, correlational and meta-analytic approaches is used to test the hypothesis. Phase encoding and stimulus type were manipulated to test their effects on the predicted latency pattern. Animal studies in the literature using the same stimuli were then compared to determine the degree of relationship. The results show that each marking accounts for a large percentage of a corresponding peak latency in the peristimulus-time histogram. For each of the stimuli considered, the latency predicted by the Fourier pattern is highly correlated with the observed latency in the auditory nerve fibers of representative species. The results suggest that the hearing organ analyzes not only the amplitude spectrum but also phase information in Fourier analysis, distributing the specific spikes among auditory nerve fibers and within a single unit. This phase-encoding mechanism in Fourier analysis is proposed as the common mechanism that, despite species differences in peripheral auditory hardware, accounts for the considerable similarities across species in their latency-by-frequency functions, in turn assuring optimal phase encoding across species. The mechanism also has the potential to improve phase encoding in cochlear implants.
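    On one reading of the marking procedure described above, the first amplitude maximum of a component A·sin(2πft + φ) occurs at the first non-negative time t satisfying 2πft + φ = π/2, shifted forward by whole periods if needed. A minimal sketch of that calculation, with illustrative frequencies and phases only:

```python
import math

def first_maximum_time(freq_hz, phase_rad):
    """Time of the first amplitude maximum of sin(2*pi*f*t + phi) for t >= 0."""
    omega = 2 * math.pi * freq_hz
    t = (math.pi / 2 - phase_rad) / omega
    period = 1.0 / freq_hz
    while t < 0:            # shift forward a full cycle if the maximum
        t += period         # would otherwise fall before stimulus onset
    return t

# A sine-phase component peaks a quarter period after onset; a cosine-phase
# component (phi = pi/2) peaks at t = 0.  Frequencies are illustrative only.
for f, phi in [(1000.0, 0.0), (1000.0, math.pi / 2)]:
    t_max = first_maximum_time(f, phi)
    print(f"{f:.0f} Hz, phase {phi:.2f} rad -> first max at {t_max * 1000:.3f} ms")
```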