A Cognitive Science Reasoning in Recognition of Emotions in Audio-Visual Speech
In this report we summarize the state of the art of speech emotion recognition from the signal-processing point of view. On the basis of multi-corpus experiments with machine-learning classifiers, we observe that existing supervised machine-learning approaches lead to database-dependent classifiers that cannot be applied to multi-language speech emotion recognition without additional training, because they discriminate the emotion classes according to the training language used. Since experimental results show that humans can perform language-independent categorisation, we draw a parallel between machine recognition and the cognitive process and try to identify the sources of these divergent results. The analysis suggests that the main difference is that speech perception allows extraction of language-independent features, even though language-dependent features are present at all levels of the speech signal and serve a strong discriminative function in human perception. Based on several results in related domains, we further suggest that the cognitive process of emotion recognition is based on categorisation, assisted by a hierarchical structure of the emotional categories that exists in the cognitive space of all humans. We propose a strategy for developing language-independent machine emotion recognition, based on the identification of language-independent speech features and the use of additional information from visual (expression) features.
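The proposed strategy, combining language-independent speech features with visual expression features, can be sketched as a late-fusion classifier. All feature scores, emotion labels, and weights below are hypothetical illustrations, not the authors' implementation:

```python
# Minimal late-fusion sketch: combine per-emotion scores from an acoustic
# classifier (language-independent prosodic features) with scores from a
# visual (facial-expression) classifier. Weights and scores are made up.

def fuse_scores(audio_scores, visual_scores, audio_weight=0.6):
    """Late fusion: weighted average of per-emotion scores from each modality."""
    visual_weight = 1.0 - audio_weight
    return {
        emotion: audio_weight * audio_scores[emotion]
                 + visual_weight * visual_scores[emotion]
        for emotion in audio_scores
    }

def classify(fused_scores):
    """Pick the emotion category with the highest fused score."""
    return max(fused_scores, key=fused_scores.get)

# Hypothetical per-emotion scores from two modality-specific classifiers.
audio  = {"anger": 0.2, "joy": 0.5, "sadness": 0.3}
visual = {"anger": 0.1, "joy": 0.7, "sadness": 0.2}

print(classify(fuse_scores(audio, visual)))  # joy
```

In this sketch the visual modality disambiguates a case where the acoustic scores alone are less decisive, which is the role the abstract assigns to expression features.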
Code-Switching without Switching: Language Agnostic End-to-End Speech Translation
We propose a) a Language Agnostic end-to-end Speech Translation model (LAST),
and b) a data augmentation strategy to increase code-switching (CS)
performance. With increasing globalization, multiple languages are increasingly
used interchangeably during fluent speech. Such CS complicates traditional
speech recognition and translation, as we must recognize which language was
spoken first and then apply a language-dependent recognizer and subsequent
translation component to generate the desired target language output. Such a
pipeline introduces latency and errors. In this paper, we eliminate the need
for it by treating speech recognition and translation as one unified
end-to-end speech translation problem. By training LAST with both input
languages, we decode speech into one target language, regardless of the input
language. LAST delivers comparable recognition and speech translation accuracy
in monolingual usage, while reducing latency and error rate considerably when
CS is observed.
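The contrast the abstract draws, a cascaded pipeline versus a single end-to-end model, can be illustrated with a toy sketch. Every function here is a hypothetical stand-in, not the LAST architecture:

```python
# Toy stand-ins for the stages of a traditional cascade. A real system would
# use neural models; these just make the control flow concrete.

def detect_language(utterance):
    """Hypothetical language-ID stage (here: a trivial keyword check)."""
    return "de" if "ich" in utterance else "en"

def recognize(utterance, lang):
    """Hypothetical language-dependent ASR stage."""
    return f"[{lang}-transcript] {utterance}"

def translate(text, src, tgt="en"):
    """Hypothetical MT stage into the target language."""
    return text if src == tgt else f"[{src}->{tgt}] {text}"

def cascaded_pipeline(utterance):
    """The cascade the abstract criticizes: language ID, then a
    language-dependent recognizer, then translation. Latency and errors
    accumulate at each stage, and code-switching within one utterance
    breaks the single-language assumption of the recognizer."""
    lang = detect_language(utterance)
    transcript = recognize(utterance, lang)
    return translate(transcript, src=lang)

def unified_model(utterance):
    """A LAST-style alternative: one model trained on both input languages
    decodes directly into the target language, so no language identification
    or separate translation stage is needed (stand-in implementation)."""
    return f"[en-output] {utterance}"
```

The point of the sketch is structural: the cascade makes a hard language decision up front and propagates any mistake downstream, while the unified model never needs to decide which language was spoken.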
A Multilingual Perspective on Reading—Investigating Strategies of Irish Students Learning French
Our aim here is to investigate reading in a foreign language from a multilingual perspective. Much research has focused on first- and second-language reading, especially the important role played by strategy deployment in helping readers to make meaning from texts in different languages. Less emphasis has been placed, however, on how bilinguals approach reading in a new language and how they harness their bilingual experience when reading in this new language. We thus investigate strategy deployment by pupils from English- and Irish-medium schools in Ireland who are learning French. We compare patterns of strategy deployment in reading in Irish and French and present examples where experience with reading in Irish potentially benefits foreign-language reading. Findings point towards the need to foster the use of previous language experience through strategy instruction, as part of a move towards greater recognition of the role of multilingual language experience at different levels of education.