Towards Indonesian Speech-Emotion Automatic Recognition (I-SpEAR)
Even though speech-emotion recognition (SER) has been receiving much
attention as a research topic, there are still disputes about which vocal
features can identify particular emotions. Emotion expression is also known to
differ according to cultural background, which makes it important to study
SER specific to the culture to which the language belongs. Furthermore, only a
few studies have addressed SER in Indonesian, which is what this study attempts
to explore. In this study, we extract simple features from 3420 voice recordings
gathered from 38 participants. The features are compared by means of a linear
mixed-effects model, which shows that people in emotional and
non-emotional states can be differentiated by their speech duration. Using an SVM
with speech duration as the input feature, we achieve 76.84% average accuracy in
classifying emotional and non-emotional speech.
Comment: 4 pages, 3 tables, published in 4th International Conference on New
Media (Conmedia), 8-10 Nov. 2017 (http://conmedia.umn.ac.id/) [in print as
of Sept. 17, 2017]
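The classification step described above, an SVM fed a single duration feature, can be sketched as follows. This is not the authors' code: the synthetic durations, class means, and scikit-learn `SVC` setup are all illustration-only assumptions standing in for the paper's 3,420 recordings.

```python
# Minimal sketch (not the paper's implementation): classifying emotional
# vs. non-emotional speech from duration alone with a linear SVM.
# The duration distributions below are invented for illustration.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical assumption: emotional and non-emotional speech differ
# in duration (seconds), as the paper's mixed-effects analysis found.
emotional = rng.normal(loc=2.0, scale=0.4, size=200)
neutral = rng.normal(loc=3.5, scale=0.4, size=200)

X = np.concatenate([emotional, neutral]).reshape(-1, 1)  # one feature: duration
y = np.array([1] * 200 + [0] * 200)                      # 1 = emotional

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="linear").fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```

With well-separated synthetic classes the accuracy is of course higher than the 76.84% reported on real speech; the point is only the shape of the pipeline.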
Visual Affect Around the World: A Large-scale Multilingual Visual Sentiment Ontology
Every culture and language is unique. Our work expressly focuses on the
uniqueness of culture and language in relation to human affect, specifically
sentiment and emotion semantics, and how they manifest in social multimedia. We
develop sets of sentiment- and emotion-polarized visual concepts by adapting
semantic structures called adjective-noun pairs, originally introduced by Borth
et al. (2013), but in a multilingual context. We propose a new
language-dependent method for automatic discovery of these adjective-noun
constructs. We show how this pipeline can be applied on a social multimedia
platform for the creation of a large-scale multilingual visual sentiment
concept ontology (MVSO). Unlike the flat structure in Borth et al. (2013), our
unified ontology is organized hierarchically by multilingual clusters of
visually detectable nouns and subclusters of emotionally biased versions of
these nouns. In addition, we present an image-based prediction task to show how
generalizable language-specific models are in a multilingual context. A new,
publicly available dataset of >15.6K sentiment-biased visual concepts across 12
languages with language-specific detector banks, >7.36M images and their
metadata is also released.
Comment: 11 pages, to appear at ACM MM'1
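The adjective-noun pair (ANP) construct at the core of the ontology can be illustrated with a toy discovery pass over image tag lists. The mini sentiment lexicon, noun set, and tag lists below are invented for illustration and are not the MVSO data or the paper's language-dependent method.

```python
# Toy sketch of adjective-noun pair (ANP) discovery from image tags.
# Lexicon, nouns, and tag lists are hypothetical, not MVSO's pipeline.
from collections import Counter

# Tiny stand-in sentiment lexicon: adjective -> polarity
SENTIMENT_ADJECTIVES = {"beautiful": 1, "happy": 1, "dark": -1, "broken": -1}
NOUNS = {"sky", "face", "street", "window"}

tag_lists = [
    ["beautiful", "sky", "travel"],
    ["dark", "street", "night"],
    ["beautiful", "sky", "sunset"],
    ["broken", "window", "urban"],
]

anp_counts = Counter()
for tags in tag_lists:
    # Pair each known sentiment adjective with an adjacent known noun.
    for adj, noun in zip(tags, tags[1:]):
        if adj in SENTIMENT_ADJECTIVES and noun in NOUNS:
            anp_counts[f"{adj} {noun}"] += 1

# Frequent candidate ANPs become sentiment-biased visual concepts.
top_anps = anp_counts.most_common()
```

In the actual ontology these concepts are then clustered hierarchically by noun across languages; this sketch stops at candidate discovery.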
Detecting Low Rapport During Natural Interactions in Small Groups from Non-Verbal Behaviour
Rapport, the close and harmonious relationship in which interaction partners
are "in sync" with each other, was shown to result in smoother social
interactions, improved collaboration, and improved interpersonal outcomes. In
this work, we are the first to investigate automatic prediction of low rapport
during natural interactions within small groups. This task is challenging given
that rapport only manifests in subtle non-verbal signals that are, in addition,
subject to influences of group dynamics as well as inter-personal
idiosyncrasies. We record videos of unscripted discussions of three to four
people using a multi-view camera system and microphones. We analyse a rich set
of non-verbal signals for rapport detection, namely facial expressions, hand
motion, gaze, speaker turns, and speech prosody. Using facial features, we can
detect low rapport with an average precision of 0.7 (chance level at 0.25),
while incorporating prior knowledge of participants' personalities can even
achieve early prediction without a drop in performance. We further provide a
detailed analysis of different feature sets and the amount of information
contained in different temporal segments of the interactions.
Comment: 12 pages, 6 figures
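Average precision, the metric reported above (0.7 against a 0.25 chance level), is the mean of the precision values at each correctly retrieved positive in a ranking of predictions. The scores and labels below are made-up illustration data, not the study's results.

```python
# Average precision (AP) for a low-rapport detector: rank examples by
# predicted score, then average the precision at each true positive.
# Scores and labels are invented illustration values.

def average_precision(scores, labels):
    """labels: 1 = low rapport (positive class), 0 = otherwise."""
    ranked = sorted(zip(scores, labels), key=lambda p: p[0], reverse=True)
    hits, precisions = 0, []
    for rank, (_, label) in enumerate(ranked, start=1):
        if label == 1:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(precisions)

scores = [0.9, 0.8, 0.7, 0.6]   # predicted low-rapport scores
labels = [1, 0, 1, 0]           # ground truth
ap = average_precision(scores, labels)  # (1/1 + 2/3) / 2
```

The 0.25 chance level quoted in the abstract is simply the positive-class prevalence, which is what a random ranking's AP converges to.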
Towards Speech Emotion Recognition "in the wild" using Aggregated Corpora and Deep Multi-Task Learning
One of the challenges in Speech Emotion Recognition (SER) "in the wild" is
the large mismatch between training and test data (e.g. speakers and tasks). In
order to improve the generalisation capabilities of the emotion models, we
propose to use Multi-Task Learning (MTL) and use gender and naturalness as
auxiliary tasks in deep neural networks. This method was evaluated in
within-corpus and various cross-corpus classification experiments that simulate
conditions "in the wild". In comparison to state-of-the-art Single-Task
Learning (STL) methods, we found that our proposed MTL method significantly
improved performance. In particular, models using both gender and naturalness
achieved larger gains than those using either gender or naturalness alone.
This benefit was also observed in the high-level representations of the
feature space obtained from our proposed method, where discriminative
emotional clusters could be seen.
Comment: Published in the proceedings of INTERSPEECH, Stockholm, September,
201
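The multi-task setup described above, a shared representation feeding an emotion head plus gender and naturalness auxiliary heads whose losses are added to the main loss, can be sketched in a few lines of NumPy. The layer sizes, random weights, targets, and loss weights are arbitrary illustration values, not the paper's architecture or hyperparameters.

```python
# Sketch of deep multi-task learning for SER: one shared hidden layer
# feeds three heads (emotion = main task; gender and naturalness =
# auxiliary tasks); training would minimise a weighted sum of losses.
# All sizes and weights here are arbitrary, for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def cross_entropy(logits, target):
    """Softmax cross-entropy for a single example."""
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[target]

x = rng.normal(size=40)               # acoustic feature vector
W_shared = rng.normal(size=(64, 40))  # shared representation layer
h = np.tanh(W_shared @ x)

heads = {
    "emotion":     rng.normal(size=(4, 64)),  # main task: 4 emotion classes
    "gender":      rng.normal(size=(2, 64)),  # auxiliary task
    "naturalness": rng.normal(size=(2, 64)),  # auxiliary: acted vs. natural
}
targets = {"emotion": 1, "gender": 0, "naturalness": 1}
loss_weights = {"emotion": 1.0, "gender": 0.3, "naturalness": 0.3}

losses = {name: cross_entropy(W @ h, targets[name]) for name, W in heads.items()}
total_loss = sum(loss_weights[n] * losses[n] for n in losses)
```

Because the auxiliary gradients flow back through `W_shared`, the shared representation is pushed to encode gender and naturalness alongside emotion, which is the regularising effect the abstract credits for the cross-corpus gains.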
Generative theatre of totality
Generative art can be used to create complex multisensory and multimedia experiences within predetermined aesthetic parameters, characteristic of the performing arts and remarkably well suited to Moholy-Nagy's Theatre of Totality vision. In generative artworks the artist usually takes on the role of an experience-framework designer, and the system evolves freely within that framework and its defined aesthetic boundaries. Most generative art concerns the visual arts, music, and literature; there does not seem to be any relevant work exploring the cross-medium potential, and one could confidently state that most generative art outcomes are abstract and either visual or auditory. The goal of this article is to propose a model for the creation of generative performances within the Theatre of Totality's scope, derived from stochastic Lindenmayer systems, in which mapping techniques are proposed to address the seven variables identified by Moholy-Nagy: light, space, plane, form, motion, sound, and man ("man" is replaced in this article with "human", except where quoting the author), with all their inherent complexities.
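A stochastic Lindenmayer system, the generative core proposed above, rewrites every symbol of a string in parallel using probability-weighted production rules. The alphabet, rules, and the mapping of symbols onto Moholy-Nagy's variables below are invented for illustration and are not the article's model.

```python
# Sketch of a stochastic L-system: each symbol is rewritten by a rule
# drawn at random among weighted alternatives. Alphabet, rules, and the
# variable mapping are hypothetical, not the article's system.
import random

RULES = {
    # symbol -> list of (probability, replacement)
    "A": [(0.6, "AB"), (0.4, "A")],
    "B": [(0.5, "BA"), (0.5, "B")],
}

# Hypothetical mapping from symbols to Theatre of Totality variables.
VARIABLES = {"A": "light", "B": "sound"}

def rewrite(symbol, rng):
    r, acc = rng.random(), 0.0
    for prob, replacement in RULES.get(symbol, [(1.0, symbol)]):
        acc += prob
        if r < acc:
            return replacement
    return symbol

def generate(axiom, steps, seed=0):
    rng = random.Random(seed)
    s = axiom
    for _ in range(steps):
        s = "".join(rewrite(c, rng) for c in s)
    return s

score = generate("A", steps=5)
performance = [VARIABLES[c] for c in score]  # sequence of staged variables
```

A full realisation would map each symbol to parameters across all seven variables rather than one, but the rewrite-then-interpret structure is the same.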
ACCDIST: A Metric for comparing speakers' accents
This paper introduces a new metric for the quantitative assessment of the similarity of speakers' accents. The ACCDIST metric is based on the correlation of inter-segment distance tables across speakers or groups. Basing the metric on segment similarity within a speaker ensures that it is sensitive to the speaker's pronunciation system rather than to his or her voice characteristics. The metric is shown to have an error rate of only 11% on the classification of speakers into 14 English regional accents of the British Isles, half the error rate of a metric based directly on spectral information. The metric may also be useful for cluster analysis of accent groups.
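The core of the metric, correlating inter-segment distance tables across speakers, can be sketched as follows. The one-dimensional "segment features" and the three toy speakers are invented illustration values, not ACCDIST's actual phone segments or features. Note that Euclidean distances are unchanged by a constant offset, so a per-speaker voice shift leaves the table intact, which is exactly what makes the metric sensitive to the pronunciation system rather than the voice.

```python
# Sketch of the ACCDIST idea: build each speaker's table of distances
# between the same labelled segments, then correlate flattened tables
# across speakers. Toy 1-D segment features, not real phone data.
import numpy as np

def distance_table(segments):
    """Upper-triangle vector of pairwise distances between segments."""
    segs = np.asarray(segments, dtype=float)
    i, j = np.triu_indices(len(segs), k=1)
    return np.abs(segs[i] - segs[j])

def accdist_similarity(spk_a, spk_b):
    """Correlation of two speakers' inter-segment distance tables."""
    return np.corrcoef(distance_table(spk_a), distance_table(spk_b))[0, 1]

# Same accent, different voice: B is A shifted by a constant, so the
# distance table (the "pronunciation system") is identical.
speaker_a = [1.0, 2.0, 4.0, 8.0]
speaker_b = [11.0, 12.0, 14.0, 18.0]
# Different accent: the same segment labels realised in another pattern.
speaker_c = [8.0, 1.0, 4.0, 2.0]

same_accent = accdist_similarity(speaker_a, speaker_b)   # ~1.0
diff_accent = accdist_similarity(speaker_a, speaker_c)
```

Classification into the 14 regional accents then amounts to assigning each speaker to the accent group whose reference distance table correlates best with theirs.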