Visual Affect Around the World: A Large-scale Multilingual Visual Sentiment Ontology
Every culture and language is unique. Our work expressly focuses on the
uniqueness of culture and language in relation to human affect, specifically
sentiment and emotion semantics, and how they manifest in social multimedia. We
develop sets of sentiment- and emotion-polarized visual concepts by adapting
semantic structures called adjective-noun pairs, originally introduced by Borth
et al. (2013), but in a multilingual context. We propose a new
language-dependent method for automatic discovery of these adjective-noun
constructs. We show how this pipeline can be applied on a social multimedia
platform for the creation of a large-scale multilingual visual sentiment
concept ontology (MVSO). Unlike the flat structure in Borth et al. (2013), our
unified ontology is organized hierarchically by multilingual clusters of
visually detectable nouns and subclusters of emotionally biased versions of
these nouns. In addition, we present an image-based prediction task to show how
generalizable language-specific models are in a multilingual context. A new,
publicly available dataset of >15.6K sentiment-biased visual concepts across 12
languages with language-specific detector banks, >7.36M images and their
metadata is also released.
Comment: 11 pages, to appear at ACM MM'1
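The adjective-noun pair (ANP) construct the abstract builds on can be illustrated with a minimal sketch: given POS-tagged text and a sentiment lexicon, collect adjacent adjective-noun pairs whose adjective carries sentiment. The lexicon entries, tags, and scores below are invented for illustration; this is not the MVSO discovery pipeline itself.

```python
from collections import Counter

# Toy sentiment lexicon: word -> polarity score (assumed values, illustration only).
SENTIMENT = {"beautiful": 0.8, "sad": -0.7, "happy": 0.9, "broken": -0.6}

def extract_anps(tagged_tokens, lexicon):
    """Collect adjacent adjective-noun pairs whose adjective is sentiment-bearing."""
    pairs = Counter()
    for (w1, t1), (w2, t2) in zip(tagged_tokens, tagged_tokens[1:]):
        if t1 == "ADJ" and t2 == "NOUN" and w1 in lexicon:
            pairs[(w1, w2)] += 1
    return pairs

# Toy POS-tagged stream standing in for tagged social-media text.
tags = [("beautiful", "ADJ"), ("sunset", "NOUN"), ("a", "DET"),
        ("sad", "ADJ"), ("dog", "NOUN"), ("beautiful", "ADJ"), ("sunset", "NOUN")]
anps = extract_anps(tags, SENTIMENT)
```

In a multilingual setting, one such extraction would run per language with a language-specific tagger and lexicon, and the resulting pairs would then be clustered by noun as the abstract describes.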
Cross-Corpus Multilingual Speech Emotion Recognition: Amharic vs. Other Languages
In a conventional Speech emotion recognition (SER) task, a classifier for a
given language is trained on a pre-existing dataset for that same language.
However, where training data for a language does not exist, data from other
languages can be used instead. We experiment with cross-lingual and
multilingual SER, working with Amharic, English, German and Urdu. For Amharic,
we use our own publicly-available Amharic Speech Emotion Dataset (ASED). For
English, German and Urdu we use the existing RAVDESS, EMO-DB and URDU datasets.
We followed previous research in mapping labels for all datasets to just two
classes, positive and negative. Thus we can compare performance on different
languages directly, and combine languages for training and testing. In
Experiment 1, monolingual SER trials were carried out using three classifiers,
AlexNet, VGGE (a proposed variant of VGG), and ResNet50. Results averaged for
the three models were very similar for ASED and RAVDESS, suggesting that
Amharic and English SER are equally difficult. By contrast, German SER is more
difficult and Urdu SER easier. In Experiment 2, we trained on one language
and tested on another, in both directions for each pair: Amharic-German,
Amharic-English, and Amharic-Urdu. Results with Amharic as the target suggested
that using English or German as source will give the best result. In Experiment
3, we trained on several non-Amharic languages and then tested on Amharic. The
best accuracy obtained was several percent greater than the best accuracy in
Experiment 2, suggesting that a better result can be obtained when using two or
three non-Amharic languages for training than when using just one non-Amharic
language. Overall, the results suggest that cross-lingual and multilingual
training can be an effective strategy for training a SER classifier when
resources for a language are scarce.
Comment: 16 pages, 9 tables, 5 figure
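The two-class label mapping and cross-corpus setup described above can be sketched as follows. The emotion label sets and the tiny corpora are assumptions for illustration, not the actual ASED/RAVDESS/EMO-DB/URDU label inventories.

```python
# Hypothetical fine-grained emotion labels collapsed to two classes,
# as the abstract describes (actual dataset labels may differ).
POSITIVE = {"happy", "calm", "neutral"}
NEGATIVE = {"sad", "angry", "fearful"}

def to_binary(label):
    """Collapse a fine-grained emotion label to positive/negative."""
    if label in POSITIVE:
        return "positive"
    if label in NEGATIVE:
        return "negative"
    raise ValueError(f"unmapped label: {label}")

def cross_corpus_split(corpora, source_langs, target_lang):
    """Train on source-language corpora, test on the target language."""
    train = [(x, to_binary(y)) for lang in source_langs for x, y in corpora[lang]]
    test = [(x, to_binary(y)) for x, y in corpora[target_lang]]
    return train, test

# Toy utterance IDs stand in for extracted audio features.
corpora = {
    "english": [("utt_en_1", "happy"), ("utt_en_2", "sad")],
    "german": [("utt_de_1", "angry")],
    "amharic": [("utt_am_1", "calm"), ("utt_am_2", "fearful")],
}
train, test = cross_corpus_split(corpora, ["english", "german"], "amharic")
```

Experiment 3's multi-source setting corresponds to passing several source languages at once, while Experiment 2 corresponds to a single source.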
Unsupervised Adversarial Domain Adaptation for Cross-Lingual Speech Emotion Recognition
Cross-lingual speech emotion recognition (SER) is a crucial task for many
real-world applications. The performance of SER systems is often degraded by
the differences in the distributions of training and test data. These
differences become more apparent when the training and test data belong to
different languages, causing a significant performance gap between
validation and test scores. It is imperative to build more robust models
suited to practical applications of SER systems. Therefore, in this paper, we
propose a Generative Adversarial Network (GAN)-based model for multilingual
SER. Our choice of GANs is motivated by their success in learning
underlying data distributions. The proposed model is designed so that it can
learn language-invariant representations without requiring
target-language data labels. We evaluate the proposed model on emotional
datasets in four different languages, including an Urdu-language dataset, to
incorporate an alternative language for which labelled data is difficult to
find and which has not been studied much by the mainstream community. Our
results show that the proposed model significantly improves baseline
cross-lingual SER performance on all the considered datasets, including the
non-mainstream Urdu data, without requiring any target labels.
Comment: Accepted at Affective Computing & Intelligent Interaction (ACII 2019)
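A common mechanism for learning language-invariant features without target labels is adversarial training through a gradient reversal layer: a language discriminator is trained on the encoder's features, while reversed gradients push the encoder to remove language cues. The sketch below shows only the reversal mechanics and is a generic illustration, not the paper's exact GAN architecture.

```python
import numpy as np

class GradientReversal:
    """Identity in the forward pass; flips (and scales) gradients in backward.
    Placed between an encoder and a language discriminator, it makes the
    encoder ascend the discriminator's loss, encouraging features from which
    the language cannot be predicted."""

    def __init__(self, lam=1.0):
        self.lam = lam  # trade-off weight for the adversarial signal

    def forward(self, features):
        return features  # activations pass through unchanged

    def backward(self, grad_from_discriminator):
        return -self.lam * grad_from_discriminator  # reversed, scaled gradient

grl = GradientReversal(lam=0.5)
feats = np.array([1.0, -2.0, 3.0])
grad = np.array([0.2, 0.4, -0.6])
```

During training, the emotion classifier's gradients flow to the encoder normally, while the discriminator's gradients arrive negated, so the two objectives pull the shared features toward emotion-discriminative but language-invariant representations.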
Transfer Learning for Speech and Language Processing
Transfer learning is a vital technique that generalizes models trained for
one setting or task to other settings or tasks. For example in speech
recognition, an acoustic model trained for one language can be used to
recognize speech in another language, with little or no re-training data.
Transfer learning is closely related to multi-task learning (cross-lingual vs.
multilingual), and has traditionally been studied under the name of 'model
adaptation'. Recent advances in deep learning show that transfer learning
becomes much easier and more effective with high-level abstract features
learned by deep models, and the 'transfer' can be conducted not only between
data distributions
and data types, but also between model structures (e.g., shallow nets and deep
nets) or even model types (e.g., Bayesian models and neural models). This
review paper summarizes some recent prominent research in this direction,
particularly for speech and language processing. We also report some results
from our group and highlight the potential of this very interesting research
field.
Comment: 13 pages, APSIPA 201
Temporal Parameters of Spontaneous Speech in Forensic Speaker Identification in Case of Language Mismatch: Serbian as L1 and English as L2
The purpose of the research is to examine the possibility of forensic speaker identification when the questioned and suspect samples are in different languages, using temporal parameters (articulation rate, speaking rate, degree of hesitancy, percentage of pauses, average pause duration). The corpus includes 10 female native speakers of Serbian who are proficient in English. The parameters are tested using a Bayesian likelihood ratio formula in 40 same-speaker and 360 different-speaker pairs, including estimation of error rates, equal error rates and an Overall Likelihood Ratio. One-way ANOVA is performed to determine whether inter-speaker variability is higher than intra-speaker variability across languages. The most successful discriminant is degree of hesitancy with an ER of 42.5%/28% (EER: 33%), followed by average pause duration with an ER of 35%/45.56% (EER: 40%).
Although the research features a closed-set comparison, which is not very common in forensic reality, the results are still relevant for forensic phoneticians working on criminal cases or as expert witnesses. This study pioneers the forensic comparison of Serbian and English, as well as the forensic testing of temporal parameters on bilingual speakers. Further research should focus on comparing two stress-timed or two syllable-timed languages to test whether they are more comparable in terms of the temporal aspects of speech.
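The Bayesian likelihood-ratio scoring described above can be sketched with a deliberately simplified model: the difference in a temporal parameter between two samples is scored under a narrow "same-speaker" Gaussian and a wide "different-speaker" Gaussian. The zero means and the spreads below are assumptions for illustration; the study's actual formula and fitted distributions may differ.

```python
import math

def gaussian_pdf(x, mean, std):
    """Density of a normal distribution at x."""
    return math.exp(-((x - mean) ** 2) / (2 * std * std)) / (std * math.sqrt(2 * math.pi))

def likelihood_ratio(delta, same_std=0.5, diff_std=2.0):
    """LR = p(delta | same speaker) / p(delta | different speakers).
    delta is the difference in a temporal parameter (e.g., articulation rate)
    between the questioned and suspect samples. Small differences support the
    same-speaker hypothesis (LR > 1); large ones support different speakers."""
    return gaussian_pdf(delta, 0.0, same_std) / gaussian_pdf(delta, 0.0, diff_std)
```

An EER would then be estimated by sweeping a decision threshold over the LR scores of the 40 same-speaker and 360 different-speaker pairs and finding where the two error rates meet.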
Regularizing Contrastive Predictive Coding for Speech Applications
Self-supervised methods such as Contrastive Predictive Coding (CPC) have
greatly improved the quality of unsupervised representations. These
representations significantly reduce the amount of labeled data needed for
downstream tasks such as automatic speech recognition. CPC learns
representations by learning to predict future frames given current frames.
Based on the observation that acoustic information, e.g., phones, changes
more slowly than the feature extraction rate in CPC, we propose two
regularization techniques that impose slowness constraints on the features:
a self-expressing constraint and Left-or-Right regularization. We evaluate
the proposed model on ABX and linear phone
regularization. We evaluate the proposed model on ABX and linear phone
classification tasks, acoustic unit discovery, and automatic speech
recognition. The regularized CPC trained on 100 hours of unlabeled data matches
the performance of the baseline CPC trained on 360 hours of unlabeled data. We
also show that our regularization techniques are complementary to data
augmentation and can further boost the system's performance. In monolingual,
cross-lingual, or multilingual settings, with/without data augmentation,
regardless of the amount of data used for training, our regularized models
outperformed the baseline CPC models on the ABX task.
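A generic slowness constraint of the kind the abstract motivates can be written as a penalty on frame-to-frame feature change. This is a simplified stand-in for illustration, not the paper's specific self-expressing or Left-or-Right regularizers.

```python
import numpy as np

def slowness_penalty(features):
    """Mean squared difference between consecutive frames.
    features: (T, D) array of T frame-level representations.
    Added (weighted) to the CPC loss, this discourages features from changing
    faster than the underlying acoustic units (e.g., phones) do."""
    diffs = features[1:] - features[:-1]
    return float(np.mean(diffs ** 2))

steady = np.ones((10, 4))        # constant features: zero penalty
jumpy = np.random.randn(10, 4)   # rapidly changing features: penalized
```

In training, the total objective would be the CPC contrastive loss plus a small multiple of this penalty, with the weight tuned so the constraint smooths the features without collapsing them.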