On Representation of Fundamental Frequency of Speech for Prosody Analysis Using Reliability Function.
This paper presents a method that provides a new
prosodic feature, called the "F0 reliability field", based on a reliability
function of the fundamental frequency (F0). The
proposed method does not employ any correction process
for F0 estimation errors that occur during automatic F0
extraction. By applying this feature as a score function
for prosodic analyses like prosodic structure estimation
or superpositional modeling of prosodic commands, this
prosodic information can be acquired with higher accuracy.
The feature has been applied to an "F0 template matching
method", which detects accent phrase boundaries in
Japanese continuous speech. The experimental results
show that compared to the conventional F0 contour, the
proposed feature overcomes the harmful influence caused
by F0 errors.
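The abstract does not give the reliability function's exact form. A minimal sketch of the general idea, assuming the normalized autocorrelation peak serves as a per-frame reliability score (the function name and search range are hypothetical, not the paper's):

```python
import numpy as np

def f0_with_reliability(frame, sr, fmin=75.0, fmax=400.0):
    """Estimate F0 for one frame and return (f0_hz, reliability),
    where reliability is the normalized autocorrelation peak height."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    if ac[0] <= 0:
        return 0.0, 0.0          # silent frame: no pitch, zero reliability
    ac = ac / ac[0]              # normalize so lag 0 == 1
    lo, hi = int(sr / fmax), int(sr / fmin)   # plausible pitch-period lags
    lag = lo + int(np.argmax(ac[lo:hi]))
    return sr / lag, float(ac[lag])

# A clean 200 Hz sine gives a high reliability score; noisy or
# unvoiced frames would score low instead of needing correction.
sr = 16000
t = np.arange(1024) / sr
f0, rel = f0_with_reliability(np.sin(2 * np.pi * 200 * t), sr)
```

Downstream scoring can then weight each frame's F0 by `rel` rather than trying to repair extraction errors, which matches the paper's stated motivation.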
Fundamental frequency height as a resource for the management of overlap in talk-in-interaction.
Overlapping talk is common in talk-in-interaction. Much of the previous research on this topic agrees that speaker overlaps can be either turn competitive or noncompetitive. An investigation of the differences in prosodic design between these two classes of overlaps can offer insight into how speakers use and orient to prosody as a resource for turn competition.
In this paper, we investigate the role of fundamental frequency (F0) as a resource for turn competition in overlapping speech. Our methodological approach combines detailed conversation analysis of overlap instances with acoustic measurements of F0 in the overlapping sequence and in its local context. The analyses are based on a collection of overlap instances drawn from the ICSI Meeting corpus. We found that overlappers mark an overlapping incoming as competitive by raising F0 above their norm for turn beginnings, and retaining this higher F0 until the point of overlap resolution. Overlappees may respond to these competitive incomings by returning competition, in which case they raise their F0 too. Our results thus provide instrumental support for earlier claims made on impressionistic evidence, namely that participants in talk-in-interaction systematically manipulate F0 height when competing for the turn.
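The F0 raise described above is measured relative to each speaker's own norm. A common way to make such comparisons speaker-independent is to express F0 in semitones relative to a per-speaker reference; a sketch of that conversion (not the authors' exact measurement procedure):

```python
import numpy as np

def semitones_above_norm(f0_hz, speaker_norm_hz):
    """Express F0 relative to a speaker-specific reference in
    semitones, so raises are comparable across speakers with
    different pitch ranges."""
    return 12.0 * np.log2(np.asarray(f0_hz, dtype=float) / speaker_norm_hz)

# Hypothetical speaker with a 120 Hz norm launching a competitive
# incoming at 170 Hz: roughly a six-semitone raise.
delta = float(semitones_above_norm([170.0], 120.0)[0])
```

The logarithmic scale matters here: a 50 Hz raise is a much larger pitch excursion for a low-pitched speaker than for a high-pitched one, and semitones capture that.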
Cue Phrase Classification Using Machine Learning
Cue phrases may be used in a discourse sense to explicitly signal discourse
structure, but also in a sentential sense to convey semantic rather than
structural information. Correctly classifying cue phrases as discourse or
sentential is critical in natural language processing systems that exploit
discourse structure, e.g., for performing tasks such as anaphora resolution and
plan recognition. This paper explores the use of machine learning for
classifying cue phrases as discourse or sentential. Two machine learning
programs (Cgrendel and C4.5) are used to induce classification models from sets
of pre-classified cue phrases and their features in text and speech. Machine
learning is shown to be an effective technique not only for automating the
generation of classification models, but also for improving upon previous
results. When compared to manually derived classification models already in the
literature, the learned models often perform with higher accuracy and contain
new linguistic insights into the data. In addition, the ability to
automatically construct classification models makes it easier to comparatively
analyze the utility of alternative feature representations of the data.
Finally, the ease of retraining makes the learning approach more scalable and
flexible than manual methods. Comment: 42 pages, uses jair.sty, theapa.bst, theapa.st
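C4.5 and similar learners induce decision trees by repeatedly choosing the feature whose split most reduces label entropy (C4.5 itself refines this into a gain ratio). A toy sketch of the plain information-gain criterion on hypothetical cue-phrase features (the feature encoding below is invented for illustration; it is not the paper's feature set):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def info_gain(X, y, f):
    """Information gain of splitting on binary feature f -- the core
    criterion behind C4.5-style tree induction."""
    split = 0.0
    for v in (0, 1):
        subset = [label for row, label in zip(X, y) if row[f] == v]
        if subset:
            split += len(subset) / len(y) * entropy(subset)
    return entropy(y) - split

# Hypothetical encoding per cue-phrase occurrence:
# [preceded_by_pause, phrase_initial, prosodically_prominent]
X = [[1, 1, 1], [1, 1, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [0, 0, 0]]
y = ["discourse", "discourse", "discourse",
     "sentential", "sentential", "sentential"]

best = max(range(3), key=lambda f: info_gain(X, y, f))
```

In this toy data the pause feature separates the classes perfectly, so it wins the first split; real cue-phrase data is noisier, which is where learned models can uncover feature interactions a manual analysis misses.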
Responses to intensity-shifted auditory feedback during running speech
PURPOSE: Responses to intensity perturbation during running speech were measured to understand whether prosodic features are controlled in an independent or integrated manner. METHOD: Nineteen English-speaking healthy adults (age range = 21-41 years) produced 480 sentences in which emphatic stress was placed on either the 1st or 2nd word. One participant group received an upward intensity perturbation during stressed word production, and the other group received a downward intensity perturbation. Compensations for perturbation were evaluated by comparing differences in participants' stressed and unstressed peak fundamental frequency (F0), peak intensity, and word duration during perturbed versus baseline trials. RESULTS: Significant increases in stressed-unstressed peak intensities were observed during the ramp and perturbation phases of the experiment in the downward group only. Compensations for F0 and duration did not reach significance for either group. CONCLUSIONS: Consistent with previous work, speakers appear sensitive to auditory perturbations that affect a desired linguistic goal. In contrast to previous work on F0 perturbation that supported an integrated-channel model of prosodic control, the current work only found evidence for intensity-specific compensation. This discrepancy may suggest different F0 and intensity control mechanisms, threshold-dependent prosodic modulation, or a combined control scheme. R01 DC002852 - NIDCD NIH HHS; R03 DC011159 - NIDCD NIH HH
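The compensation measure described above (the stressed-minus-unstressed peak-intensity contrast, compared between perturbed and baseline trials) can be sketched as follows; the dictionary fields and the example dB values are hypothetical, not the study's data:

```python
def stress_contrast_change(baseline, perturbed):
    """Change in the stressed-minus-unstressed peak-intensity
    contrast (dB) from baseline to perturbation trials; a positive
    value under downward perturbation indicates compensation."""
    base = baseline["stressed_db"] - baseline["unstressed_db"]
    pert = perturbed["stressed_db"] - perturbed["unstressed_db"]
    return pert - base

# Hypothetical trial means: the speaker boosts the stressed word
# when its auditory feedback is shifted downward.
change = stress_contrast_change(
    {"stressed_db": 72.0, "unstressed_db": 65.0},
    {"stressed_db": 75.0, "unstressed_db": 65.5},
)
```

Using the contrast rather than raw intensity isolates the linguistic goal (emphatic stress) from overall loudness changes, which is the point of the study's design.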
Speech-driven Animation with Meaningful Behaviors
Conversational agents (CAs) play an important role in human computer
interaction. Creating believable movements for CAs is challenging, since the
movements have to be meaningful and natural, reflecting the coupling between
gestures and speech. Studies in the past have mainly relied on rule-based or
data-driven approaches. Rule-based methods focus on creating meaningful
behaviors conveying the underlying message, but the gestures cannot be easily
synchronized with speech. Data-driven approaches, especially speech-driven
models, can capture the relationship between speech and gestures. However, they
create behaviors disregarding the meaning of the message. This study proposes
to bridge the gap between these two approaches, overcoming their limitations.
The approach builds a dynamic Bayesian network (DBN), where a discrete variable
is added to condition the behaviors on an underlying constraint. The study
implements and evaluates the approach with two constraints: discourse functions
and prototypical behaviors. By constraining on the discourse functions (e.g.,
questions), the model learns the characteristic behaviors associated with a
given discourse class, learning the rules from the data. By constraining on
prototypical behaviors (e.g., head nods), the approach can be embedded in a
rule-based system as a behavior realizer, creating trajectories that are
synchronized in time with speech. The study proposes a DBN structure and a training
approach that (1) models the cause-effect relationship between the constraint
and the gestures, (2) initializes the state configuration models, increasing the
range of the generated behaviors, and (3) captures the differences in the
behaviors across constraints by enforcing sparse transitions between shared and
exclusive states per constraint. Objective and subjective evaluations
demonstrate the benefits of the proposed approach over an unconstrained model. Comment: 13 pages, 12 figures, 5 tables
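The DBN's internals are not given in the abstract. One rough way to picture "sparse transitions between shared and exclusive states per constraint" is a transition matrix masked to each constraint's allowed states; this is a sketch under that assumption, not the authors' model:

```python
import numpy as np

def constrain_transitions(A, allowed):
    """Zero transitions into states outside `allowed` for one
    constraint class, then renormalize each row so it remains a
    valid probability distribution."""
    mask = np.zeros(A.shape[1])
    mask[allowed] = 1.0
    A = A * mask
    return A / A.sum(axis=1, keepdims=True)

# Four hidden gesture states: 0-1 shared across constraints,
# 2 exclusive to "question", 3 exclusive to "head nod".
A = np.full((4, 4), 0.25)
A_question = constrain_transitions(A, [0, 1, 2])
```

Restricting each constraint to shared plus exclusive states lets the model reuse common movement dynamics while still producing class-specific behavior.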
Automatic Detection and Assessment of Dysarthric Speech Using Prosodic Information
Thesis (Master's) -- Seoul National University Graduate School: College of Humanities, Department of Linguistics, 2020. 8. Minhwa Chung.
Speech impairments are among the earliest cues for neurological or degenerative disorders. Individuals with Parkinson's Disease, Cerebral Palsy, Amyotrophic Lateral Sclerosis, and Multiple Sclerosis, among others, are often diagnosed with dysarthria. Dysarthria is a group of speech disorders mainly affecting the articulatory muscles, which eventually leads to severe misarticulation. However, impairments in the suprasegmental domain are also present, and previous studies have shown that the prosodic patterns of speakers with dysarthria differ from the prosody of healthy speakers. In a clinical setting, a prosody-based analysis of dysarthric speech can be helpful for diagnosing the presence of dysarthria. Therefore, there is a need to determine not only how the prosody of speech is affected by dysarthria, but also which aspects of prosody are more affected and how prosodic impairments change with the severity of dysarthria.
In the current study, several prosodic features related to pitch, voice quality, rhythm, and speech rate are used as features for detecting dysarthria in a given speech signal. A variety of feature selection methods are utilized to determine which set of features is optimal for accurate detection. After selecting an optimal set of prosodic features, we use them as input to machine learning-based classifiers and assess the performance using the evaluation metrics accuracy, precision, recall, and F1-score. Furthermore, we examine the usefulness of prosodic measures for assessing different levels of severity (e.g., mild, moderate, severe). Finally, as collecting impaired speech data can be difficult, we also implement cross-language classifiers where both Korean and English data are used for training but only one language is used for testing. Results suggest that, in comparison to solely using Mel-frequency cepstral coefficients, including prosodic measurements can improve the accuracy of classifiers for both Korean and English datasets. In particular, large improvements were seen when assessing different severity levels. For English, relative accuracy improvements of 1.82% for detection and 20.6% for assessment were seen. The Korean dataset saw no improvement for detection but a relative improvement of 13.6% for assessment. The cross-language experiments showed a relative improvement of up to 4.12% in comparison to using only a single language during training. It was found that certain prosodic impairments, such as pitch and duration, may be language independent. Therefore, when training sets for individual languages are limited, they may be supplemented with data from other languages.
1. Introduction
1.1. Dysarthria
1.2. Impaired Speech Detection
1.3. Research Goals & Outline
2. Background Research
2.1. Prosodic Impairments
2.1.1. English
2.1.2. Korean
2.2. Machine Learning Approaches
3. Database
3.1. English-TORGO
3.2. Korean-QoLT
4. Methods
4.1. Prosodic Features
4.1.1. Pitch
4.1.2. Voice Quality
4.1.3. Speech Rate
4.1.4. Rhythm
4.2. Feature Selection
4.3. Classification Models
4.3.1. Random Forest
4.3.2. Support Vector Machine
4.3.3. Feed-Forward Neural Network
4.4. Mel-Frequency Cepstral Coefficients
5. Experiment
5.1. Model Parameters
5.2. Training Procedure
5.2.1. Dysarthria Detection
5.2.2. Severity Assessment
5.2.3. Cross-Language
6. Results
6.1. TORGO
6.1.1. Dysarthria Detection
6.1.2. Severity Assessment
6.2. QoLT
6.2.1. Dysarthria Detection
6.2.2. Severity Assessment
6.3. Cross-Language
7. Discussion
7.1. Linguistic Implications
7.2. Clinical Applications
8. Conclusion
References
Appendix
Abstract in Korean
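Of the thesis's feature families, the rhythm measures are the least self-explanatory. A sketch of one standard rhythm metric of this kind, the normalized Pairwise Variability Index over successive vocalic interval durations (the duration values below are invented for illustration; the thesis's exact feature list is not given here):

```python
def npvi(durations):
    """Normalized Pairwise Variability Index (nPVI): mean absolute
    difference between successive interval durations, normalized by
    their local mean, scaled by 100."""
    terms = [abs(a - b) / ((a + b) / 2.0)
             for a, b in zip(durations, durations[1:])]
    return 100.0 * sum(terms) / len(terms)

# Hypothetical vocalic-interval durations in seconds: more uniform
# syllable timing yields a lower nPVI than alternating long/short
# intervals, so the metric can separate rhythmic patterns.
varied = npvi([0.08, 0.15, 0.06, 0.12])
uniform = npvi([0.10, 0.11, 0.10, 0.11])
```

A scalar like this is easy to feed into the random-forest, SVM, and neural-network classifiers the outline lists alongside the other prosodic measures.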
Predicting continuous conflict perception with Bayesian Gaussian processes
Conflict is one of the most important phenomena of social life, but it is still largely neglected by the computing community. This work proposes an approach
that detects common conversational social signals (loudness, overlapping speech,
etc.) and predicts the conflict level perceived by human observers in continuous,
non-categorical terms. The proposed regression approach is fully Bayesian and it
adopts Automatic Relevance Determination to identify the social signals that most influence the outcome of the prediction. The experiments are performed over the SSPNet Conflict Corpus, a publicly available collection of 1430 clips extracted from televised political debates (roughly 12 hours of material for 138 subjects in total). The results show that it is possible to achieve a correlation close to 0.8 between actual and predicted conflict perception.
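Automatic Relevance Determination works by giving the covariance kernel one length scale per input dimension and letting inference inflate the scales of uninformative inputs. A minimal numpy sketch of such an ARD RBF kernel (the length-scale values are illustrative, not the paper's fitted model):

```python
import numpy as np

def ard_rbf(X1, X2, lengthscales):
    """RBF kernel with one length scale per input dimension (ARD).
    A very large length scale flattens the kernel along that
    dimension, marking the corresponding signal as irrelevant."""
    d = (X1[:, None, :] - X2[None, :, :]) / lengthscales
    return np.exp(-0.5 * np.sum(d ** 2, axis=-1))

# Two input signals; the second has a huge learned length scale,
# so moving along it barely changes the kernel value, while the
# first still drives the prediction.
ls = np.array([1.0, 1e6])
K = ard_rbf(np.array([[0.0, 0.0]]),
            np.array([[0.0, 5.0], [2.0, 0.0]]), ls)
```

Inspecting the inferred length scales is what lets the approach rank loudness, overlap, and the other social signals by their influence on perceived conflict.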