591 research outputs found
Verification of feature regions for stops and fricatives in natural speech
The presence of acoustic cues and their importance in speech perception have
long remained debatable topics. In spite of several studies that exist in this
field, very little is known about what exactly humans perceive in speech. This
research takes a novel approach towards understanding speech perception. A
new method, named three-dimensional deep search (3DDS), was developed
to explore the perceptual cues of 16 consonant-vowel (CV) syllables, namely
/pa/, /ta/, /ka/, /ba/, /da/, /ga/, /fa/, /Ta/, /sa/, /Sa/, /va/, /Da/, /za/,
/Za/, from naturally produced speech. A verification experiment was then
conducted to further verify the findings of the 3DDS method. For this
purpose, the time-frequency coordinate that defines each CV was filtered out
using the short-time Fourier transform (STFT), and perceptual tests were
then conducted. A comparison between unmodi ed speech sounds and those
without the acoustic cues was made. In most of the cases, the scores dropped
from 100% to chance levels even at 12 dB SNR. This clearly emphasizes the
importance of features in identifying each CV. The results confirm earlier
findings that stops are characterized by a short-duration burst preceding the
vowel by 10 cs in the unvoiced case, and appearing almost coincident
with the vowel in the voiced case. As has been previously hypothesized,
we confirmed that the F2 transition plays no significant role in consonant
identification. 3DDS analysis labels the /sa/ and /za/ perceptual features
as an intense frication noise around 4 kHz, preceding the vowel by 15–20
cs, with the /za/ feature being around 5 cs shorter in duration than that
of /sa/; the /Sa/ and /Za/ events are found to be frication energy near 2
kHz, preceding the vowel by 17–20 cs. /fa/ has a relatively weak burst and
frication energy over a wide band including 2–6 kHz, while /va/ has a cue
in the 1.5 kHz mid-frequency region preceding the vowel by 7–10 cs. New
information is established regarding /Da/ and /Ta/, especially with regard
to the nature of their significant confusions.
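As a rough sketch of the cue-removal step described above (the file name, frame length, and band/interval values below are placeholder assumptions, not values from the paper), the time-frequency region that defines a CV token can be zeroed in the STFT domain and the waveform resynthesized before perceptual testing:

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import stft, istft

# Hypothetical input token; all region bounds below are placeholders.
fs, x = wavfile.read("sa_token.wav")
x = x.astype(np.float64)

# Complex STFT of the CV token.
f, t, Z = stft(x, fs=fs, nperseg=512)

# Zero out an assumed cue region (e.g. frication energy near 4 kHz shortly
# before the vowel onset) to create the "cue removed" condition.
f_lo, f_hi = 3000.0, 5000.0        # Hz (placeholder band)
t_lo, t_hi = 0.05, 0.20            # s  (placeholder interval)
rows = np.where((f >= f_lo) & (f <= f_hi))[0]
cols = np.where((t >= t_lo) & (t <= t_hi))[0]
Z_mod = Z.copy()
Z_mod[np.ix_(rows, cols)] = 0.0

# Resynthesize the token without the cue and save it for perceptual testing.
_, x_mod = istft(Z_mod, fs=fs, nperseg=512)
wavfile.write("sa_token_nocue.wav", fs,
              np.clip(x_mod, -32768, 32767).astype(np.int16))
```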
Automatic Estimation of Intelligibility Measure for Consonants in Speech
In this article, we provide a model to estimate a real-valued measure of the
intelligibility of individual speech segments. We trained regression models
based on Convolutional Neural Networks (CNN) for stop consonants
\textipa{/p,t,k,b,d,g/} associated with vowel \textipa{/A/}, to estimate the
corresponding Signal to Noise Ratio (SNR) at which the Consonant-Vowel (CV)
sound becomes intelligible for Normal Hearing (NH) ears. The intelligibility
measure for each sound is called SNR$_{90}$, and is defined to be the SNR level
at which human participants are able to recognize the consonant at least 90\%
correctly, on average, as determined in prior experiments with NH subjects.
Performance of the CNN is compared to a baseline prediction based on automatic
speech recognition (ASR), specifically, a constant offset subtracted from the
SNR at which the ASR becomes capable of correctly labeling the consonant.
Compared to baseline, our models were able to accurately estimate the
SNR$_{90}$ intelligibility measure with less than 2 [dB] Mean Squared Error
(MSE) on average, while the baseline ASR-defined measure computes
SNR$_{90}$ with a variance of 5.2 to 26.6 [dB], depending on the consonant.
Comment: 5 pages, 1 figure, 7 tables, submitted to the Interspeech 2020 Conference
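As a rough sketch of the kind of regression model described above (the architecture, layer sizes, and input shapes are illustrative assumptions, not the paper's), a CNN that maps a CV spectrogram to a scalar SNR90 estimate and is trained with a mean-squared-error objective might look like this:

```python
import torch
import torch.nn as nn

class SNR90Regressor(nn.Module):
    """Small CNN mapping a (1, freq, time) spectrogram to a scalar SNR90 (dB)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),   # collapse the time-frequency axes
            nn.Flatten(),
            nn.Linear(32, 1),          # scalar SNR90 estimate in dB
        )

    def forward(self, spec):          # spec: (batch, 1, freq, time)
        return self.head(self.features(spec)).squeeze(-1)

model = SNR90Regressor()
loss_fn = nn.MSELoss()                # regression objective (MSE, as in the abstract)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# One training step on random placeholder data (shapes are illustrative only).
spec = torch.randn(8, 1, 64, 64)      # fake batch of CV spectrograms
target = torch.randn(8)               # fake SNR90 labels in dB
opt.zero_grad()
loss = loss_fn(model(spec), target)
loss.backward()
opt.step()
```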
- …