Employing Emotion Cues to Verify Speakers in Emotional Talking Environments
Usually, people talk neutrally in environments free of abnormal talking conditions such as stress and emotion. Emotional conditions such as happiness, anger, and sadness can affect a person's talking tone, and such emotions are directly affected by the patient's health status. In neutral talking environments, speakers can be verified easily; in emotional talking environments, however, they cannot be verified as easily.
Consequently, speaker verification systems do not perform as well in emotional talking environments as they do in neutral ones. In this work, a two-stage approach is employed and evaluated to improve speaker verification performance in emotional talking environments. The approach employs speaker emotion cues (a text-independent, emotion-dependent speaker verification problem) based on both Hidden Markov Models (HMMs) and Suprasegmental Hidden Markov Models (SPHMMs) as classifiers. It comprises two cascaded stages that integrate an emotion recognizer and a speaker recognizer into a single recognizer. The architecture has
been tested on two different emotional speech databases: our collected database and the Emotional Prosody Speech and Transcripts database. The results of this work show that the proposed approach gives promising results, with a significant improvement over previous studies and over other approaches such as the emotion-independent speaker verification approach and the emotion-dependent speaker verification approach based completely on HMMs.
Comment: Journal of Intelligent Systems, Special Issue on Intelligent Healthcare Systems, De Gruyter, 201
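A minimal sketch of the cascaded two-stage decision logic the abstract describes, assuming already-trained per-emotion HMMs and per-speaker, per-emotion SPHMMs that expose a log-likelihood `score(features)` method; the dictionary layout, the "background" impostor model, and the threshold are illustrative assumptions, not the authors' implementation:

```python
def verify_speaker(features, claimed_id, emotion_hmms, speaker_sphmms, threshold):
    """Two-stage verification: emotion identification, then emotion-dependent scoring."""
    # Stage 1: pick the most likely emotion using per-emotion HMMs.
    emotion = max(emotion_hmms, key=lambda e: emotion_hmms[e].score(features))
    # Stage 2: score the claimed speaker with the SPHMM trained for that
    # emotion; "background" is a hypothetical impostor/background model.
    target = speaker_sphmms[emotion][claimed_id].score(features)
    background = speaker_sphmms[emotion]["background"].score(features)
    # Accept when the log-likelihood ratio clears the decision threshold.
    return target - background > threshold
```

The cascade makes the second stage emotion-dependent: the speaker models consulted in stage 2 are selected by the emotion hypothesis from stage 1, which is what lets the verifier adapt to emotional talking conditions.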
Modeling Individual Cyclic Variation in Human Behavior
Cycles are fundamental to human health and behavior. However, modeling cycles
in time series data is challenging because in most cases the cycles are not
labeled or directly observed and need to be inferred from multidimensional
measurements taken over time. Here, we present CyHMMs, a cyclic hidden Markov
model method for detecting and modeling cycles in a collection of
multidimensional heterogeneous time series data. In contrast to previous cycle
modeling methods, CyHMMs deal with a number of challenges encountered in
modeling real-world cycles: they can model multivariate data with discrete and
continuous dimensions; they explicitly model and are robust to missing data;
and they can share information across individuals to model variation both
within and between individual time series. Experiments on synthetic and
real-world health-tracking data demonstrate that CyHMMs infer cycle lengths
more accurately than existing methods, with 58% lower error on simulated data
and 63% lower error on real-world data compared to the best-performing
baseline. CyHMMs can also perform functions which baselines cannot: they can
model the progression of individual features/symptoms over the course of the
cycle, identify the most variable features, and cluster individual time series
into groups with distinct characteristics. Applying CyHMMs to two real-world
health-tracking datasets -- of menstrual cycle symptoms and physical activity
tracking data -- yields important insights including which symptoms to expect
at each point during the cycle. We also find that people fall into several
groups with distinct cycle patterns, and that these groups differ along
dimensions not provided to the model. For example, by modeling missing data in
the menstrual cycles dataset, we are able to discover a medically relevant
group of birth control users even though information on birth control is not
given to the model.
Comment: Accepted at WWW 201
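To make the cyclic structure concrete, here is a hedged sketch of the core idea rather than the authors' CyHMM code: constrain an HMM's transition matrix to a ring, so hidden states represent successive phases of a cycle. The state count K and stay probability p_stay are illustrative assumptions.

```python
import numpy as np

def cyclic_transition_matrix(K, p_stay=0.7):
    """Ring-structured transitions: state k can only stay or advance to (k+1) mod K."""
    A = np.zeros((K, K))
    for k in range(K):
        A[k, k] = p_stay
        A[k, (k + 1) % K] = 1.0 - p_stay
    return A

def forward_log_likelihood(log_emission, A, pi):
    """Standard HMM forward pass in log space.

    log_emission: (T, K) array of log p(x_t | state k). A time step with
    missing data can use a row of zeros, which marginalizes the
    observation out: one simple way to be robust to missingness.
    """
    logA = np.log(A + 1e-12)
    alpha = np.log(pi + 1e-12) + log_emission[0]
    for t in range(1, log_emission.shape[0]):
        alpha = log_emission[t] + np.logaddexp.reduce(alpha[:, None] + logA, axis=0)
    return np.logaddexp.reduce(alpha)
```

Under this parameterization the expected dwell time in each state is 1/(1 - p_stay), so the implied mean cycle length is K/(1 - p_stay) time steps; fitting the stay probabilities per individual is one way a model family like this can recover individual cycle lengths.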
Fusion for Audio-Visual Laughter Detection
Laughter is a highly variable signal, and can express a spectrum of emotions. This makes the automatic detection of laughter a challenging but interesting task. We perform automatic laughter detection using audio-visual data from the AMI Meeting Corpus.
Audio-visual laughter detection is performed by combining (fusing) the results of separate audio and video classifiers at the decision level. The video classifier uses features based on the principal components of 20 tracked facial points; for audio we use the commonly used PLP and RASTA-PLP features. Our results indicate that RASTA-PLP features outperform PLP features for laughter detection in audio.
We compared classifiers based on hidden Markov models (HMMs), Gaussian mixture models (GMMs), and support vector machines (SVMs), and found that RASTA-PLP features combined with a GMM gave the best performance for the audio modality. The video features classified with an SVM gave the best single-modality performance. Fusion at the decision level resulted in laughter detection with significantly better performance than single-modality classification.
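A minimal sketch of this kind of decision-level fusion, assuming pre-extracted audio features (e.g. RASTA-PLP frames) and a per-segment video feature vector; the classifier choices, score scaling, and fusion weight w are illustrative assumptions rather than the paper's exact setup:

```python
from sklearn.mixture import GaussianMixture

def train_audio_gmms(laugh_frames, other_frames, n_components=8):
    """Fit one GMM to laughter frames and one to non-laughter frames."""
    gmm_laugh = GaussianMixture(n_components=n_components).fit(laugh_frames)
    gmm_other = GaussianMixture(n_components=n_components).fit(other_frames)
    return gmm_laugh, gmm_other

def fused_score(audio_frames, video_vector, gmm_laugh, gmm_other, video_svm, w=0.5):
    """Weighted-sum fusion of per-modality decision scores (> 0 means laughter)."""
    # Audio: mean per-frame log-likelihood ratio over the segment.
    s_audio = gmm_laugh.score(audio_frames) - gmm_other.score(audio_frames)
    # Video: signed distance from the SVM decision boundary; video_svm is
    # assumed to be a trained classifier such as sklearn.svm.SVC.
    s_video = video_svm.decision_function(video_vector.reshape(1, -1))[0]
    return w * s_audio + (1.0 - w) * s_video
```

Fusing at the decision level, as here, keeps the two modalities' classifiers independent, so each can use the model family that suits it (GMM for audio, SVM for video) and only their scores need to be calibrated against each other.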
Conflict and Computation on Wikipedia: a Finite-State Machine Analysis of Editor Interactions
What is the boundary between a vigorous argument and a breakdown of
relations? What drives a group of individuals across it? Taking Wikipedia as a
test case, we use a hidden Markov model to approximate the computational
structure and social grammar of more than a decade of cooperation and conflict
among its editors. Across a wide range of pages, we discover a bursty war/peace
structure where the systems can become trapped, sometimes for months, in a
computational subspace associated with significantly higher levels of
conflict-tracking "revert" actions. Distinct patterns of behavior characterize
the lower-conflict subspace, including tit-for-tat reversion. While a fraction
of the transitions between these subspaces are associated with top-down actions
taken by administrators, the effects are weak. Surprisingly, we find no
statistical signal that transitions are associated with the appearance of
particularly anti-social users, and only weak association with significant news
events outside the system. These findings are consistent with transitions being
driven by decentralized processes with no clear locus of control. Models of
belief revision in the presence of a common resource for information-sharing
predict the existence of two distinct phases: a disordered high-conflict phase,
and a frozen phase with spontaneously-broken symmetry. The bistability we
observe empirically may be a consequence of editor turn-over, which drives the
system to a critical point between them.
Comment: 23 pages, 3 figures. Matches published version. Code for HMM fitting available at http://bit.ly/sfihmm; time series and derived finite state machines at bit.ly/wiki_hm
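For readers who want a feel for this style of analysis, the sketch below fits a two-state Gaussian HMM to a per-page daily revert-count series and labels each day with its inferred subspace. It uses hmmlearn as a stand-in; the authors' own fitting code is at http://bit.ly/sfihmm and may differ (e.g. in state count and emission model), and the input file name is hypothetical.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

# Hypothetical input: one revert count per day for a single page.
reverts = np.loadtxt("reverts_per_day.txt").reshape(-1, 1)

model = GaussianHMM(n_components=2, covariance_type="diag", n_iter=100)
model.fit(reverts)

states = model.predict(reverts)                 # per-day subspace labels
high_conflict = int(np.argmax(model.means_))    # state with the higher revert mean
war_fraction = float(np.mean(states == high_conflict))
print(f"fraction of days in the high-conflict subspace: {war_fraction:.2f}")
```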
Expressive Speech Identifications based on Hidden Markov Model
This paper concerns a sub-area of the larger research field of Affective Computing, focusing on affect-recognition systems that use the speech modality. It is proposed that speech-based affect identification systems could play an important role as next-generation biometric identification systems aimed at determining a person's ‘state of mind’, or psycho-physiological state. The possible areas for the deployment of voice-affect recognition technology are discussed. Additionally, experiments and results for emotion identification in speech based on a Hidden Markov Model (HMM) classifier are presented. The experimental results suggest that certain speech features are better suited to identifying certain emotional states, and that happiness is the most difficult emotion to detect.
Multimodal music information processing and retrieval: survey and future challenges
Towards improving the performance in various music information processing
tasks, recent studies exploit different modalities able to capture diverse
aspects of music. Such modalities include audio recordings, symbolic music
scores, mid-level representations, motion and gestural data, video recordings, editorial or cultural tags, lyrics, and album cover art. This paper critically
reviews the various approaches adopted in Music Information Processing and
Retrieval and highlights how multimodal algorithms can help Music Computing
applications. First, we categorize the related literature by the application addressed. Subsequently, we analyze existing information fusion approaches, and we conclude with the set of challenges on which the Music Information Retrieval and Sound and Music Computing research communities should focus in the coming years.