A computational study of expressive facial dynamics in children with autism
Several studies have established that facial expressions of children with autism are often perceived as atypical, awkward, or less engaging by typical adult observers. Despite this clear deficit in the quality of facial expression production, very little is understood about its underlying mechanisms and characteristics. This paper takes a computational approach to studying details of facial expressions of children with high functioning autism (HFA). The objective is to uncover characteristics of these facial expressions that are distinct from those of typically developing children and that are otherwise difficult to detect by visual inspection. We use motion capture data obtained from subjects with HFA and typically developing subjects while they produced various facial expressions. These data are analyzed to investigate how the overall and local facial dynamics of children with HFA differ from those of their typically developing peers. Our major observations include reduced complexity in the dynamic facial behavior of the HFA group, arising primarily from the eye region.
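As an aside, the abstract does not specify how "complexity" of facial dynamics was measured; the sketch below is only one plausible way to operationalize it, assuming motion-capture trajectories are available as an array of shape (frames, markers, 3). The 95% variance threshold and the PCA-based "effective dimensionality" measure are illustrative assumptions, not the authors' method.

```python
# Illustrative sketch (not the paper's code): quantify the complexity of
# facial dynamics from motion-capture marker trajectories as the number of
# principal components needed to explain 95% of the variance in the
# frame-to-frame velocity field.
import numpy as np

def dynamic_complexity(trajectories: np.ndarray, var_threshold: float = 0.95) -> int:
    """Effective dimensionality of facial motion.

    trajectories: (frames, n_markers, 3) marker positions over time.
    Returns the number of principal components of the centered velocity
    field required to reach `var_threshold` explained variance.
    """
    # Frame-to-frame velocities, flattened to (frames - 1, n_markers * 3)
    velocity = np.diff(trajectories, axis=0).reshape(trajectories.shape[0] - 1, -1)
    velocity -= velocity.mean(axis=0, keepdims=True)

    # Singular values of the centered velocity matrix give the PCA variances
    singular_values = np.linalg.svd(velocity, compute_uv=False)
    explained = np.cumsum(singular_values**2) / np.sum(singular_values**2)
    return int(np.searchsorted(explained, var_threshold) + 1)

# Example on a synthetic recording (a smooth random walk of 10 markers)
rng = np.random.default_rng(0)
markers = np.cumsum(rng.normal(size=(300, 10, 3)) * 0.01, axis=0)
print("effective dimensions:", dynamic_complexity(markers))
```

A lower effective dimensionality for a group would then correspond to the "reduced complexity" the abstract reports, under these assumptions.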
Which phonetic features should pronunciation instruction focus on? An evaluation of the accentedness of segmental/syllable errors in L2 speech
Many English language instructors are reluctant to incorporate pronunciation instruction into their teaching curriculum (Thomson 2014). One reason for this reluctance is that L2 pronunciation errors are numerous, and there is not enough time for teachers to address all of them (Munro and Derwing 2006; Thomson 2014). The current study aims to help language teachers set priorities for their instruction by identifying the segmental and structural aspects of pronunciation that sound most foreign-accented to native speakers of American English. The study employed a perception experiment: 100 speech samples selected from the Speech Accent Archive (Weinberger 2016) were presented to 110 native American English listeners, who rated the foreign accentedness of each sample on a 9-point scale. Twenty of these samples contain no segmental or syllable-structure L2 errors; the other 80 samples each contain a single consonant, vowel, or syllable-structure L2 error. The speakers of these samples came from 52 different native-language backgrounds. Global prosody of each sample was controlled for by comparing its F0 contour and duration to a native English sample using Dynamic Time Warping (Giorgino 2009). The results show that 1) L2 consonant errors are in general judged to be more accented than vowel or syllable-structure errors; 2) phonological environment affects accent perception; 3) occurrences of non-English consonants always lead to higher accentedness ratings; 4) among L2 syllable errors, vowel epenthesis is judged to be as accented as consonant substitutions, while deletion is judged to be less accented or not accented at all. The study therefore recommends that language instructors attend to consonant errors in L2 speech while taking into consideration their respective phonological environments.
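For readers unfamiliar with the prosody control step, the following is a minimal sketch of Dynamic Time Warping between two F0 contours. The original analysis used the R `dtw` package (Giorgino 2009); this is just a plain-numpy re-implementation of the basic alignment cost, and the example pitch values are made up.

```python
# Minimal DTW illustration (not the study's code): accumulated alignment cost
# between two F0 contours, assuming each contour is a 1-D numpy array of pitch
# values (e.g., in Hz) with unvoiced frames already removed.
import numpy as np

def dtw_distance(f0_a: np.ndarray, f0_b: np.ndarray) -> float:
    """Accumulated DTW cost between two pitch contours."""
    n, m = len(f0_a), len(f0_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(f0_a[i - 1] - f0_b[j - 1])           # local distance
            cost[i, j] = d + min(cost[i - 1, j],         # insertion
                                 cost[i, j - 1],         # deletion
                                 cost[i - 1, j - 1])     # match
    return float(cost[n, m])

# Example: compare an L2 contour against a native-English reference contour
l2_contour = np.array([180.0, 185.0, 190.0, 200.0, 195.0, 170.0])
native_contour = np.array([178.0, 188.0, 205.0, 198.0, 172.0])
print("DTW cost:", dtw_distance(l2_contour, native_contour))
```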
Impact of acoustic similarity on efficiency of verbal information transmission via subtle prosodic cues
In this study, we investigate the effect of tiny acoustic differences on the efficiency of prosodic information transmission. Study participants listened to textually ambiguous sentences, which could be understood with prosodic cues such as syllable length and pause length. Sentences were uttered in voices similar to the participant’s own voice and in voices dissimilar to it. The participants then identified which of four pictures the speaker was referring to. Both the eye movements and response times of the participants were recorded. Eye-tracking and response-time results both showed that participants understood the textually ambiguous sentences faster when listening to voices similar to their own. The results also suggest that tiny acoustic features, which do not contain verbal meaning, can influence the processing of verbal information.
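The abstract does not describe how voice similarity was quantified, so the following is purely an illustrative sketch of one simple way to rank candidate stimulus voices against a participant's own recording: mean MFCC vectors compared with cosine similarity. The file names, 16 kHz sampling rate, and 13-coefficient choice are assumptions.

```python
# Hypothetical voice-similarity ranking (not the study's procedure):
# crude "voice fingerprint" = mean MFCC vector; similarity = cosine similarity.
import librosa
import numpy as np

def mean_mfcc(path: str, n_mfcc: int = 13) -> np.ndarray:
    """Average MFCC vector of a recording."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, frames)
    return mfcc.mean(axis=1)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pick the stimulus voice closest to the participant's own voice
participant = mean_mfcc("participant_own_voice.wav")       # placeholder path
candidates = {name: mean_mfcc(name) for name in ["voice_a.wav", "voice_b.wav"]}
best = max(candidates, key=lambda name: cosine_similarity(participant, candidates[name]))
print("most similar stimulus voice:", best)
```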
Fog Computing in Medical Internet-of-Things: Architecture, Implementation, and Applications
In an era when the market segment of the Internet of Things (IoT) tops the chart in various business reports, the field of medicine stands to gain a large benefit from the explosion of wearables and internet-connected sensors that surround us, acquiring and communicating unprecedented data on symptoms, medication, food intake, and daily-life activities impacting one's health and wellness. However, IoT-driven healthcare has to overcome many barriers: 1) there is an increasing demand for data storage on cloud servers, where the analysis of medical big data becomes increasingly complex; 2) the data, when communicated, are vulnerable to security and privacy issues; 3) communicating the continuously collected data is not only costly but also energy hungry; 4) operating and maintaining the sensors directly from the cloud servers are non-trivial tasks. This book chapter defines Fog Computing in the context of medical IoT. Conceptually, Fog Computing is a service-oriented intermediate layer in IoT, providing the interfaces between the sensors and cloud servers to facilitate connectivity, data transfer, and a queryable local database. The centerpiece of Fog Computing is a low-power, intelligent, wireless, embedded computing node that carries out signal conditioning and data analytics on raw data collected from wearables or other medical sensors, and offers efficient means to serve telehealth interventions. We implemented and tested a fog computing system using the Intel Edison and Raspberry Pi that allows acquisition, computing, storage, and communication of various medical data such as pathological speech data of individuals with speech disorders, phonocardiogram (PCG) signals for heart rate estimation, and electrocardiogram (ECG)-based QRS detection.

Comment: 29 pages, 30 figures, 5 tables. Keywords: Big Data, Body Area Network, Body Sensor Network, Edge Computing, Fog Computing, Medical Cyberphysical Systems, Medical Internet-of-Things, Telecare, Tele-treatment, Wearable Devices. Chapter in Handbook of Large-Scale Distributed Computing in Smart Healthcare (2017), Springer.
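To make the edge-analytics idea concrete, here is a minimal sketch (an assumption-laden illustration, not the chapter's implementation) of the kind of processing a fog node might run locally: detect R peaks in a raw ECG segment and forward only the derived heart rate upstream, rather than streaming the full waveform. The sampling rate, thresholds, and file path are placeholders.

```python
# Fog-node style local ECG analytics (illustrative sketch only).
import numpy as np
from scipy.signal import find_peaks

FS = 250  # ECG sampling rate in Hz (assumed)

def heart_rate_from_ecg(ecg: np.ndarray, fs: int = FS) -> float:
    """Estimate heart rate (bpm) from a single-lead ECG segment."""
    # Emphasize the QRS complex: differentiate, square, then smooth
    window = int(0.15 * fs)
    energy = np.convolve(np.diff(ecg) ** 2, np.ones(window) / window, mode="same")
    # R peaks: at least 0.3 s apart and above half the maximum energy
    peaks, _ = find_peaks(energy, distance=int(0.3 * fs), height=0.5 * energy.max())
    if len(peaks) < 2:
        return float("nan")
    rr_intervals = np.diff(peaks) / fs           # seconds between beats
    return 60.0 / rr_intervals.mean()

# On the fog node: compute locally, send only the compact summary to the cloud
ecg_segment = np.loadtxt("ecg_segment.csv")      # hypothetical local buffer
summary = {"device": "raspberrypi-01", "hr_bpm": heart_rate_from_ecg(ecg_segment)}
print(summary)  # in practice this summary, not the raw signal, goes upstream
```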
Automatic Emotion Recognition: Quantifying Dynamics and Structure in Human Behavior.
Emotion is a central part of human interaction, one that has a huge influence on its overall tone and outcome. Today's human-centered interactive technology can greatly benefit from automatic emotion recognition, as the extracted affective information can be used to measure, transmit, and respond to user needs. However, developing such systems is challenging due to the complexity of emotional expressions and their dynamics, in terms of the inherent multimodality between audio and visual expressions as well as the mixed factors of modulation that arise when a person speaks. To overcome these challenges, this thesis presents data-driven approaches that can quantify the underlying dynamics in audio-visual affective behavior. The first set of studies lays the foundation and provides the central motivation of this thesis. We discover that it is crucial to model complex non-linear interactions between audio and visual emotion expressions, and that dynamic emotion patterns can be used in emotion recognition. Next, the understanding of the complex characteristics of emotion from the first set of studies leads us to examine multiple sources of modulation in audio-visual affective behavior. Specifically, we focus on how speech modulates facial displays of emotion. We develop a framework that uses the way speech alters the temporal dynamics of individual facial regions to temporally segment and classify facial displays of emotion. Finally, we present methods to discover regions of emotionally salient events in given audio-visual data. We demonstrate that different modalities, such as the upper face, lower face, and speech, express emotion with different timings and time scales, varying for each emotion type. We further extend this idea into another aspect of human behavior: human action events in videos. We show how transition patterns between events can be used for automatically segmenting and classifying action events. Our experimental results on audio-visual datasets show that the proposed systems not only improve performance, but also provide descriptions of how affective behaviors change over time. We conclude this dissertation with the future directions that will innovate three main research topics: machine adaptation for personalized technology, human-human interaction assistant systems, and human-centered multimedia content analysis.

PhD thesis, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/133459/1/yelinkim_1.pd
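To illustrate the general idea of modeling non-linear audio-visual interactions (not the dissertation's actual models), here is a small fusion classifier sketch: each modality is encoded over time, and a non-linear layer combines the two summaries. The feature dimensions, GRU encoders, and four emotion classes are placeholder assumptions.

```python
# Illustrative audio-visual fusion sketch (assumptions, not the thesis models).
import torch
import torch.nn as nn

class AudioVisualEmotionNet(nn.Module):
    def __init__(self, audio_dim=40, video_dim=68, hidden=64, n_classes=4):
        super().__init__()
        # Separate temporal encoders for each modality
        self.audio_rnn = nn.GRU(audio_dim, hidden, batch_first=True)
        self.video_rnn = nn.GRU(video_dim, hidden, batch_first=True)
        # Non-linear fusion of the two modality summaries
        self.fusion = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, audio_seq, video_seq):
        # audio_seq: (batch, frames, audio_dim); video_seq: (batch, frames, video_dim)
        _, audio_h = self.audio_rnn(audio_seq)
        _, video_h = self.video_rnn(video_seq)
        fused = torch.cat([audio_h[-1], video_h[-1]], dim=-1)
        return self.fusion(fused)  # per-utterance emotion logits

# Example forward pass on random tensors standing in for real features
model = AudioVisualEmotionNet()
logits = model(torch.randn(2, 100, 40), torch.randn(2, 100, 68))
print(logits.shape)  # torch.Size([2, 4])
```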
A systematic investigation of gesture kinematics in evolving manual languages in the lab
Silent gestures consist of complex multi-articulatory movements but are now primarily studied through categorical coding of the referential gesture content. The relation of categorical linguistic content to continuous kinematics is therefore poorly understood. Here, we reanalyzed the video data from a gestural evolution experiment (Motamedi, Schouwstra, Smith, Culbertson, & Kirby, 2019), which showed increases in the systematicity of gesture content over time. We applied computer vision techniques to quantify the kinematics of the original data. Our kinematic analyses demonstrated that gestures become more efficient and less complex in their kinematics over generations of learners. We further detect systematicity of gesture form at the level of gesture kinematic interrelations, which scales directly with the systematicity obtained from semantic coding of the gestures. Thus, from continuous kinematics alone, we can tap into linguistic aspects that were previously only approachable through categorical coding of meaning. Finally, going beyond issues of systematicity, we show how unique gesture kinematic dialects emerged over generations as isolated chains of participants gradually diverged over iterations from other chains. We thereby conclude that gestures can come to embody the linguistic system at the level of interrelationships between communicative tokens, which should calibrate our theories about form and linguistic content.
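The abstract does not detail which kinematic measures were extracted, so the following is only a sketch of the kind of features one can derive from keypoint trajectories produced by a pose tracker. The (frames, 2) trajectory format, 25 fps frame rate, and jerk-based smoothness proxy are assumptions for illustration.

```python
# Illustrative gesture-kinematics sketch (not the study's pipeline): basic
# speed, smoothness, and path-length features from a 2-D hand keypoint track.
import numpy as np

def kinematic_features(keypoints: np.ndarray, fps: float = 25.0) -> dict:
    """Summary kinematics of a single keypoint trajectory (frames x [x, y])."""
    velocity = np.diff(keypoints, axis=0) * fps           # pixels per second
    speed = np.linalg.norm(velocity, axis=1)
    jerk = np.diff(speed, n=2) * fps * fps                # crude smoothness proxy
    return {
        "mean_speed": float(speed.mean()),
        "peak_speed": float(speed.max()),
        "path_length": float(speed.sum() / fps),
        "mean_abs_jerk": float(np.abs(jerk).mean()),
    }

# Example on a synthetic right-wrist trajectory
trajectory = np.cumsum(np.random.default_rng(1).normal(0, 2, size=(120, 2)), axis=0)
print(kinematic_features(trajectory))
```

Comparing such feature vectors across generations of learners is one way kinematic "efficiency" and "complexity" trends like those reported here could be tracked.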
Self-supervised learning for robust voice cloning
Voice cloning is a difficult task which requires robust and informative features, incorporated in a high-quality TTS system, in order to effectively copy an unseen speaker's voice. In our work, we utilize features learned in a self-supervised framework via the Bootstrap Your Own Latent (BYOL) method, which is shown to produce high-quality speech representations when specific audio augmentations are applied to the vanilla algorithm. We further extend the augmentations in the training procedure to help the resulting features capture the speaker identity and to make them robust to noise and acoustic conditions. The learned features are used as pre-trained utterance-level embeddings and as inputs to a Non-Attentive Tacotron based architecture, aiming to achieve multispeaker speech synthesis without utilizing additional speaker features. This method enables us to train our model on an unlabeled multispeaker dataset as well as use unseen speaker embeddings to copy a speaker's voice. Subjective and objective evaluations are used to validate the proposed model, as well as its robustness to the acoustic conditions of the target utterance.

Comment: Accepted to INTERSPEECH 202
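For orientation, here is a hedged sketch of the core BYOL update for audio representation learning (not the paper's implementation): an online network predicts the target network's projection of a differently augmented view of the same utterance, and the target weights are an exponential moving average of the online weights. The tiny MLP "encoder" over fixed-size features and all dimensions are placeholders; a real system would use a proper audio encoder and augmentations.

```python
# Minimal BYOL training step for audio embeddings (illustrative assumptions).
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def mlp(in_dim, out_dim, hidden=256):
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.BatchNorm1d(hidden),
                         nn.ReLU(), nn.Linear(hidden, out_dim))

class BYOL(nn.Module):
    def __init__(self, feat_dim=64, emb_dim=128, tau=0.99):
        super().__init__()
        self.online_encoder = mlp(feat_dim, emb_dim)
        self.online_projector = mlp(emb_dim, emb_dim)
        self.predictor = mlp(emb_dim, emb_dim)
        # Target network: frozen EMA copy of the online network
        self.target_encoder = copy.deepcopy(self.online_encoder)
        self.target_projector = copy.deepcopy(self.online_projector)
        for p in list(self.target_encoder.parameters()) + list(self.target_projector.parameters()):
            p.requires_grad = False
        self.tau = tau

    def loss(self, view_a, view_b):
        # Symmetric BYOL loss over two augmented views of the same utterance
        def one_side(x, y):
            pred = self.predictor(self.online_projector(self.online_encoder(x)))
            with torch.no_grad():
                target = self.target_projector(self.target_encoder(y))
            return 2 - 2 * F.cosine_similarity(pred, target, dim=-1).mean()
        return one_side(view_a, view_b) + one_side(view_b, view_a)

    @torch.no_grad()
    def ema_update(self):
        for online, target in [(self.online_encoder, self.target_encoder),
                               (self.online_projector, self.target_projector)]:
            for po, pt in zip(online.parameters(), target.parameters()):
                pt.mul_(self.tau).add_((1 - self.tau) * po)

# One illustrative training step on random tensors standing in for two
# augmented views of the same batch of utterance-level features
model = BYOL()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss = model.loss(torch.randn(8, 64), torch.randn(8, 64))
opt.zero_grad()
loss.backward()
opt.step()
model.ema_update()
print(float(loss))
```

After training, the frozen online encoder's utterance-level output would play the role of the speaker embedding fed to the synthesis model.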