The role of time in phonetic spaces: Temporal resolution in Cantonese tone perception
The role of temporal resolution in speech perception (e.g. whether tones are parameterized with fundamental frequency sampled every 10 ms, or just twice in the syllable) is sometimes overlooked, and the temporal resolution relevant for tonal perception is still an open question. The choice of temporal resolution matters because how we understand the recognition, dispersion, and learning of phonetic categories is entirely predicated on what parameters we use to define the phonetic space that they lie in. Here, we present a tonal perception experiment in Cantonese where we used interrupted speech in trisyllabic stimuli to study the effect of temporal resolution on human tonal identification. We also performed acoustic classification of the stimuli with support vector machines. Our results show that just a few samples per syllable are enough for humans and machines to classify Cantonese tones with reasonable accuracy, without much difference in performance from having the full speech signal available. The confusion patterns and machine classification results suggest that loss of detailed information about the temporal alignment and shape of fundamental frequency contours was a major cause of decreasing accuracy as resolution decreased. Moreover, machine classification experiments show that for accurate identification of rising tones in Cantonese, it is crucial to extend the temporal window for sampling to the following syllable, due to peak delay.
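As a rough sketch of the machine-classification setup, the example below trains a support vector machine on contours parameterized by only three F0 samples per syllable. The tone classes, F0 values, noise level and the first-sample normalization are synthetic stand-ins for illustration, not the study's data or features.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for tone contours: each syllable is parameterized
# by just three F0 samples (coarse temporal resolution). The three
# classes are schematic level, rising and falling tones; values and
# noise levels are invented for illustration only.
SHAPES = {0: [0.0, 0.0, 0.0],      # level
          1: [0.0, 15.0, 30.0],    # rising
          2: [0.0, -15.0, -30.0]}  # falling

def make_contour(tone):
    base = rng.uniform(180.0, 220.0)  # speaker/base pitch variation
    return base + np.array(SHAPES[tone]) + rng.normal(0.0, 3.0, 3)

y = np.array([i % 3 for i in range(300)])
X = np.array([make_contour(tone) for tone in y])
X = X - X[:, :1]  # crude speaker normalization: deltas from first sample

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0, stratify=y)
clf = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"accuracy with 3 F0 samples per syllable: {acc:.2f}")
```

Subtracting the first sample is a crude speaker normalization standing in for whatever normalization the study actually used; the point of the sketch is only that very few samples per syllable can already separate schematic tone shapes.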
The perception and production of stress and intonation by children with cochlear implants
Users of current cochlear implants have limited access to pitch information and hence to intonation in speech. This seems likely to have an important impact on prosodic
perception. This thesis examines the perception and production of the prosody of stress in children with cochlear implants. The interdependence of perceptual cues to
stress (pitch, timing and loudness) in English is well documented and each of these is considered in analyses of both perception and production. The subject group
comprised 17 implanted (CI) children aged 5;7 to 16;11 using the ACE or SPEAK processing strategies. The aims are to establish (i) the extent to which stress and intonation are conveyed to CI children in synthesised bisyllables (BAba vs. baBA) involving controlled changes in F0, duration and amplitude (Experiment I), and in natural speech involving
compound vs. phrase stress and focus (Experiment II);
(ii) whether, when pitch cues are missing or inaudible to the listeners, other cues such as loudness or timing contribute to the perception of stress and intonation; and
(iii) whether CI subjects make appropriate use of F0, duration and amplitude to convey linguistic focus in speech production (Experiment III).
Results of Experiment I showed that seven of the subjects were unable to reliably hear pitch differences of 0.84 octaves. Most of the remaining subjects required a large
(approximately 0.5 octave) difference to hear a pitch change reliably. Performance of the CI children was poorer than that of a group of children with normal hearing presented with an
acoustic cochlear implant simulation. Some of the CI children who could not discriminate F0 differences in Experiment I nevertheless scored above chance in tests
involving focus in natural speech in Experiment II. Similarly, some CI subjects who were above chance in the production of appropriate F0 contours in Experiment III
could not hear F0 differences of 0.84 octaves. These results suggest that CI children may not necessarily rely on F0 cues to stress, and in the absence of F0 or amplitude
cues, duration may provide an alternative cue.
Mechanism of disyllabic tonal reduction in Taiwan Mandarin
This study was designed to test the hypothesis that time pressure is a direct cause of tonal reduction in Taiwan Mandarin. Tonal reduction refers to the phenomenon of the tones of a disyllabic unit being contracted into a monosyllabic unit. An experiment was carried out in which six male native speakers of Taiwan Mandarin produced sentences containing disyllabic compound words /ma/+/ma/ with varying tonal combinations at different speech rates. Analyses indicated that increasing time pressure led to severe tonal reductions. Articulatory effort, measured by the slope of F0 peak velocity of a unidirectional movement over F0 movement amplitude, is insufficient to compensate for duration-dependent undershoot (in particular, when time pressure exceeds certain thresholds). Mechanisms of tonal reduction were further examined by comparing F0 velocity profiles against the Edge-in model, a rule-based phonological model. Results showed that the residual tonal variants in contracted syllables are gradient rather than categorical: as duration is shortened, the movement towards the desired targets is gradually curtailed.
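The effort measure mentioned above, the slope relating F0 peak velocity to F0 movement amplitude, can be computed as in this sketch; the measurement values are hypothetical.

```python
import numpy as np

# Illustrative sketch (not the authors' code): articulatory effort is
# indexed by the slope of the linear relation between the peak velocity
# of a unidirectional F0 movement and its amplitude.
def effort_slope(amplitudes_st, peak_velocities_st_per_s):
    """Least-squares slope through the origin: v_peak ~ k * amplitude."""
    a = np.asarray(amplitudes_st, dtype=float)
    v = np.asarray(peak_velocities_st_per_s, dtype=float)
    return float(np.sum(a * v) / np.sum(a * a))

# Hypothetical measurements: F0 movement amplitudes (semitones) and
# their peak velocities (semitones/second).
amps = [2.0, 4.0, 6.0, 8.0]
vels = [20.0, 41.0, 59.0, 82.0]
k = effort_slope(amps, vels)
print(f"effort slope k = {k:.1f} /s")
```

A roughly constant slope across speech rates would indicate that effort is maintained; under the time-pressure account, maintained effort still cannot prevent undershoot once durations become too short.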
Effects of errorless learning on the acquisition of velopharyngeal movement control
Session 1pSC - Speech Communication: Cross-Linguistic Studies of Speech Sound Learning of the Languages of Hong Kong (Poster Session)
The implicit motor learning literature suggests a benefit for learning if errors are minimized during practice. This study investigated whether the same principle holds for learning velopharyngeal movement control. Normal-speaking participants learned to produce hypernasal speech in either an errorless learning condition (in which the possibility for errors was limited) or an errorful learning condition (in which the possibility for errors was not limited). The nasality level of the participants' speech was measured by nasometer and reflected in nasalance scores (in %). Errorless learners practiced producing hypernasal speech with a threshold nasalance score of 10% at the beginning, which gradually increased to a threshold of 50% at the end. The same set of threshold targets was presented to errorful learners, but in reversed order. Errors were defined as the proportion of speech with a nasalance score below the threshold. The results showed that, relative to errorful learners, errorless learners displayed fewer errors (17.7% vs. 50.7%) and a higher mean nasalance score (46.7% vs. 31.3%) during the acquisition phase. Furthermore, errorless learners outperformed errorful learners in both retention and novel transfer tests. Acknowledgment: Supported by The University of Hong Kong Strategic Research Theme for Sciences of Learning. © 2012 Acoustical Society of America
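The two practice schedules can be sketched as follows. The abstract specifies only the 10% starting and 50% ending thresholds for the errorless condition, so the number and spacing of intermediate steps, and the simulated nasalance scores, are assumptions for illustration.

```python
import numpy as np

# Assumed linear threshold steps between the stated 10% and 50% endpoints.
errorless = np.linspace(10, 50, 5)   # threshold rises: easy -> hard
errorful = errorless[::-1]           # same targets, reversed order

rng = np.random.default_rng(1)

def error_rate(thresholds, mean_nasalance=35.0, sd=10.0, trials_per_block=20):
    """Proportion of practice trials whose nasalance falls below threshold
    (the paper's definition of an error), on simulated scores."""
    errors = 0
    total = 0
    for th in thresholds:
        scores = rng.normal(mean_nasalance, sd, trials_per_block)
        errors += int(np.sum(scores < th))
        total += trials_per_block
    return errors / total

print("errorless schedule:", errorless.tolist())
print("errorful schedule: ", errorful.tolist())
```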
Fundamental frequency modelling: an articulatory perspective with target approximation and deep learning
Current statistical parametric speech synthesis (SPSS) approaches typically aim at state/frame-level acoustic modelling, which leads to a problem of frame-by-frame independence. Moreover, whichever learning technique is used, hidden Markov model (HMM), deep neural network (DNN) or recurrent neural network (RNN), the fundamental idea is to set up a direct mapping from linguistic to acoustic features. Although progress is frequently reported, this idea is questionable in terms of biological plausibility. This thesis aims to address the above issues by integrating dynamic mechanisms of human speech production as a core component of F0 generation, and thus to develop a more human-like F0 modelling paradigm. By introducing an articulatory F0 generation model, target approximation (TA), between text and speech to control syllable-synchronised F0 generation, contextual F0 variations are processed in two separate yet integrated stages: linguistic to motor, and motor to acoustic. With the goal of demonstrating that human speech movement can be considered a dynamic process of target approximation and that the TA model is a valid F0 generation model for the motor-to-acoustic stage, a TA-based pitch control experiment is first conducted to simulate the subtle human behaviour of online compensation for pitch-shifted auditory feedback. Then, the TA parameters are collectively controlled by linguistic features via a deep or recurrent neural network (DNN/RNN) at the linguistic-to-motor stage. We trained the systems on a Mandarin Chinese dataset consisting of both statements and questions. The TA-based systems generally outperformed the baseline systems in both objective and subjective evaluations. Furthermore, the number of required linguistic features was reduced, first to syllable level only (with DNN) and then with all positional information removed (with RNN).
Fewer linguistic features as input and a limited number of TA parameters as output meant less required training data and lower model complexity, which in turn led to more efficient training and faster synthesis.
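The target approximation mechanism can be illustrated with a minimal numerical model. The form below, in which F0 exponentially approaches a linear pitch target while inheriting the initial F0, velocity and acceleration from the previous syllable, follows the commonly cited qTA formulation; the parameter values and the single static target are invented for illustration.

```python
import numpy as np

# Minimal sketch of quantitative Target Approximation (qTA): within a
# syllable, F0 approaches a linear pitch target y(t) = m*t + b, with a
# transient term whose coefficients are set by the initial F0 (x0),
# velocity (v0) and acceleration (a0) carried over at the boundary.
def qta_f0(t, m, b, lam, x0, v0=0.0, a0=0.0):
    c1 = x0 - b
    c2 = v0 - m + lam * c1
    c3 = (a0 + 2 * lam * c2 - lam**2 * c1) / 2
    return m * t + b + (c1 + c2 * t + c3 * t**2) * np.exp(-lam * t)

t = np.linspace(0.0, 0.2, 100)  # one 200 ms syllable
# Static 120 Hz target approached from an initial F0 of 100 Hz.
f0 = qta_f0(t, m=0.0, b=120.0, lam=40.0, x0=100.0)

print(f"start: {f0[0]:.1f} Hz, end: {f0[-1]:.1f} Hz")
```

The coefficients c1, c2, c3 enforce continuity of F0, velocity and acceleration at the syllable boundary, which is what makes the output syllable-synchronised yet smooth across syllables.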
Mechanism of extreme phonetic reduction: evidence from Taiwan Mandarin
Extreme reduction refers to the phenomenon where intervocalic consonants are so severely reduced that two or more adjacent syllables appear to be merged into one. Such severe reduction is often considered a characteristic of natural speech and to be closely related to factors including lexical frequency, information load, social context and speaking style. This thesis takes a novel approach to investigating this phenomenon by testing the time pressure account of phonetic reduction, according to which time pressure is the direct cause of extreme reduction. The investigation was done with data from Taiwan Mandarin, a language where extreme reduction (referred to as contraction) has been reported to frequently occur.
Three studies were conducted to test the main hypothesis. In Study 1, native Taiwan Mandarin speakers produced sentences containing nonsense disyllabic words with varying phonetic structures at differing speech rates. Spectral analysis showed that extreme reduction occurred frequently in nonsense words produced under high time pressure. In Study 2a, further examination of formant peak velocity as a function of formant movement amplitude in experimental data suggested that articulatory effort was not decreased during reduction, but in fact likely to be increased. Study 2b examined high frequency words from three spontaneous speech corpora for reduction variations. Results demonstrate that patterns of reduction in high frequency words in spontaneous speech (Study 2b) were similar to those in nonsense words spoken under experimental conditions (Study 2a).
Study 3 investigated tonal reduction with varying tonal contexts and found that tonal reduction can also be explained in terms of time pressure. Analysis of F0 trajectories demonstrates that speakers attempt to reach the original underlying tonal targets even in the case of extreme reduction and that there was no weakening of articulatory effort despite the severe reduction. To further test the main hypothesis, two computational modelling experiments were conducted. The first applied the quantitative Target Approximation model (qTA) for tone and intonation and the second applied the Functional Linear Model (FLM). Results showed that severely reduced F0 trajectories in tone dyads can be regenerated to a high accuracy by qTA using generalized canonical tonal targets with only the syllable duration modified. Additionally, it was shown that using FLM and adjusting duration alone can give a fairly good representation of contracted F0 trajectory shapes.
In summary, results suggest that target undershoot under time pressure is likely to be the direct mechanism of extreme reduction, and that factors commonly associated with reduction in previous research very likely affect duration, which in turn determines the degree of target attainment through the time pressure mechanism.
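The duration-only account of reduction can be illustrated with a first-order simplification of target approximation: holding the tonal target fixed and shortening the syllable is enough to produce graded undershoot. All values are illustrative.

```python
import numpy as np

# Sketch of duration-dependent target undershoot: F0 approaches a fixed
# target exponentially; shortening the syllable curtails the movement
# without any change to the target itself.
def final_f0(duration_s, start_hz=100.0, target_hz=140.0, rate=25.0):
    """F0 reached at syllable offset under first-order target approximation."""
    return target_hz + (start_hz - target_hz) * np.exp(-rate * duration_s)

for d in (0.20, 0.10, 0.05):
    print(f"duration {d * 1000:.0f} ms -> final F0 {final_f0(d):.1f} Hz")
```

The shorter the syllable, the further the final F0 falls from the 140 Hz target, with no change in target or effort parameters, mirroring the thesis claim that duration alone can account for the gradient reduction patterns.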
On the mechanism of response latencies in auditory nerve fibers
Despite structural differences in the middle and inner ears, the latency pattern of auditory nerve fibers in response to an identical sound has been found to be similar across numerous species. Studies have shown this similarity even in species with distinctive cochleae or without a basilar membrane. This stimulus-, neuron-, and species-independent similarity of latency cannot be simply explained by the concept of cochlear traveling waves, which is generally accepted as the main cause of the neural latency pattern.
An original concept, the Fourier pattern, is defined to characterize a feature of temporal processing, specifically phase encoding, that is not readily apparent in more conventional analyses. The pattern is created by marking the first amplitude maximum for each sinusoid component of the stimulus, thereby encoding phase information. The hypothesis is that the hearing organ serves as a running analyzer whose output reflects synchronization of auditory neural activity consistent with the Fourier pattern.
A combination of experimental, correlational and meta-analytic approaches is used to test the hypothesis. Phase encoding and stimulus type were manipulated to test their effects on the predicted latency pattern. Animal studies in the literature using the same stimuli were then compared to determine the degree of relationship.
The results show that each marking accounts for a large percentage of a corresponding peak latency in the peristimulus-time histogram. For each of the stimuli considered, the latency predicted by the Fourier pattern is highly correlated with the observed latency in the auditory nerve fiber of representative species.
The results suggest that the hearing organ analyzes not only the amplitude spectrum but also phase information in Fourier analysis, distributing specific spikes among auditory nerve fibers and within a single unit.
This phase-encoding mechanism in Fourier analysis is proposed as the common mechanism that, despite species differences in peripheral auditory hardware, accounts for the considerable cross-species similarities in latency-by-frequency functions, in turn assuring optimal phase encoding. The mechanism also has the potential to improve phase encoding in cochlear implants.
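The Fourier-pattern construction described above, marking the first amplitude maximum of each sinusoidal component, can be sketched as follows; the component frequencies and phases are invented for illustration.

```python
import numpy as np

# Sketch of the "Fourier pattern" idea: for each sinusoidal component of
# a stimulus, mark the time of its first amplitude maximum, yielding a
# latency-by-frequency pattern that encodes phase information.
def first_maximum_time(freq_hz, phase_rad):
    """Time of the first peak of cos(2*pi*f*t + phase) for t >= 0.

    Peaks occur where the argument is a multiple of 2*pi, so the first
    non-negative peak time is (2*pi*k - phase) / (2*pi*f) with the
    smallest integer k making it non-negative.
    """
    k = np.ceil(phase_rad / (2 * np.pi))
    return float((2 * np.pi * k - phase_rad) / (2 * np.pi * freq_hz))

# Hypothetical two-component stimulus with different phases.
components = [(500.0, 0.0),            # 500 Hz, cosine phase
              (1000.0, -np.pi / 2)]    # 1 kHz, sine phase

for f, ph in components:
    print(f"{f:.0f} Hz: first maximum at {first_maximum_time(f, ph) * 1000:.2f} ms")
```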
Deep Learning for Automatic Assessment and Feedback of Spoken English
Growing global demand for learning a second language (L2), particularly English, has led to
considerable interest in automatic spoken language assessment, whether for use in computer-assisted language learning (CALL) tools or for grading candidates for formal qualifications.
This thesis presents research conducted into the automatic assessment of spontaneous non-native English speech, with a view to providing meaningful feedback to learners. One
of the challenges in automatic spoken language assessment is giving candidates feedback on
particular aspects, or views, of their spoken language proficiency, in addition to the overall
holistic score normally provided. Another is detecting pronunciation and other types of errors
at the word or utterance level and feeding them back to the learner in a useful way.
It is usually difficult to obtain accurate training data with separate scores for different
views and, as examiners are often trained to give holistic grades, single-view scores can
suffer issues of consistency. Conversely, holistic scores are available for various standard
assessment tasks such as Linguaskill. An investigation is thus conducted into whether
assessment scores linked to particular views of the speaker's ability can be obtained from
systems trained using only holistic scores.
End-to-end neural systems are designed with structures and forms of input tuned to single
views, specifically each of pronunciation, rhythm, intonation and text. By training each
system on large quantities of candidate data, it should be possible to extract individual-view
information. The relationships between the predictions of each system are evaluated to examine
whether they are, in fact, extracting different information about the speaker. Three methods
of combining the systems to predict holistic score are investigated, namely averaging their
predictions and concatenating and attending over their intermediate representations. The
combined graders are compared to each other and to baseline approaches.
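The three combination strategies can be sketched as follows. The view scores, representation vectors and the random (untrained) linear heads standing in for learned ones are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-view outputs: a scalar holistic-score prediction and
# an intermediate representation vector from each single-view grader.
views = ["pronunciation", "rhythm", "intonation", "text"]
preds = dict(zip(views, [4.2, 3.8, 4.0, 4.4]))
reps = {v: rng.normal(size=8) for v in views}

# 1) Prediction averaging: combine the scalar outputs directly.
avg_score = float(np.mean(list(preds.values())))

# 2) Concatenation: stack representations, then a linear head
#    (random here, standing in for a learned one).
concat = np.concatenate([reps[v] for v in views])
concat_score = float(concat @ rng.normal(size=concat.size))

# 3) Attention: softmax-weighted sum of representations, then a head.
logits = np.array([reps[v].mean() for v in views])
attn = np.exp(logits) / np.exp(logits).sum()
attended = sum(a * reps[v] for a, v in zip(attn, views))
attn_score = float(attended @ rng.normal(size=8))

print(f"averaged prediction: {avg_score:.2f}")
```

Averaging needs no extra training; the concatenation and attention combiners operate on intermediate representations and therefore require a jointly trained head, which is what the thesis compares against baseline graders.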
The tasks of error detection and error tendency diagnosis become particularly challenging
when the speech in question is spontaneous, especially given the inconsistency of human
annotation of pronunciation errors. An approach to these tasks is
presented by distinguishing between lexical errors, wherein the speaker does not know how a
particular word is pronounced, and accent errors, wherein the candidate's speech exhibits
consistent patterns of phone substitution, deletion and insertion. Three annotated corpora
of non-native English speech by speakers of multiple L1s are analysed, the consistency of
human annotation is investigated, and a method is presented for detecting individual accent
and lexical errors and diagnosing accent error tendencies at the speaker level.