Affective Medicine: a review of Affective Computing efforts in Medical Informatics
Background: Affective computing (AC) is concerned with emotional interactions performed with and through computers. It is defined as "computing that relates to, arises from, or deliberately influences emotions". AC enables investigation and understanding of the relation between human emotions and health, as well as the application of assistive and useful technologies in the medical domain. Objectives: 1) To review the general state of the art in AC and its applications in medicine, and 2) to establish synergies between the research communities of AC and medical informatics. Methods: Aspects related to the human affective state as a determinant of human health are discussed, coupled with an illustration of significant AC research and related literature output. Moreover, affective communication channels are described and their range of application fields is explored through illustrative examples. Results: The presented conferences, European research projects and research publications illustrate the recent increase of interest in the AC area within the medical community. Tele-home healthcare, ambient intelligence (AmI), ubiquitous monitoring, e-learning and virtual communities with emotionally expressive characters for elderly or impaired people are a few areas where the potential of AC has been realized and applications have emerged. Conclusions: A number of gaps can potentially be overcome through the synergy of AC and medical informatics. The application of AC technologies parallels the advancement of the existing state of the art and the introduction of new methods. The body of work and projects reviewed in this paper attests to an ambitious and optimistic synergetic future for the field of affective medicine.
Crowdsourcing a Word-Emotion Association Lexicon
Even though considerable attention has been given to the polarity of words
(positive and negative) and the creation of large polarity lexicons, research
in emotion analysis has had to rely on limited and small emotion lexicons. In
this paper we show how the combined strength and wisdom of the crowds can be
used to generate a large, high-quality, word-emotion and word-polarity
association lexicon quickly and inexpensively. We enumerate the challenges in
emotion annotation in a crowdsourcing scenario and propose solutions to address
them. Most notably, in addition to questions about emotions associated with
terms, we show how the inclusion of a word choice question can discourage
malicious data entry, help identify instances where the annotator may not be
familiar with the target term (allowing us to reject such annotations), and
help obtain annotations at sense level (rather than at word level). We
conducted experiments on how to formulate the emotion-annotation questions, and
show that asking if a term is associated with an emotion leads to markedly
higher inter-annotator agreement than that obtained by asking if a term evokes
an emotion.
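The word-choice quality gate described above can be sketched as a simple filter over collected annotations. The record fields and the rule that a failed synonym check discards the whole annotation are illustrative assumptions, not the authors' exact pipeline:

```python
# Sketch of the word-choice quality check from the abstract: each
# annotation carries the target term, the annotator's answer to a
# multiple-choice "closest in meaning" question, and the known correct
# option. Annotations failing the check are rejected, since the
# annotator is likely unfamiliar with the term (or answering at
# random). All field names here are hypothetical.

def filter_annotations(annotations):
    """Keep only annotations whose word-choice answer is correct."""
    kept, rejected = [], []
    for ann in annotations:
        if ann["word_choice_answer"] == ann["word_choice_correct"]:
            kept.append(ann)
        else:
            rejected.append(ann)
    return kept, rejected

# Tiny demo with made-up annotations for the term "banish".
demo = [
    {"term": "banish", "emotion": "anger",
     "word_choice_answer": "expel", "word_choice_correct": "expel"},
    {"term": "banish", "emotion": "joy",
     "word_choice_answer": "greet", "word_choice_correct": "expel"},
]
kept, rejected = filter_annotations(demo)
```

A side effect noted in the abstract: because the distractor options target a particular sense of the term, passing the check also ties the surviving annotations to that sense rather than to the word form alone.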
Agents for educational games and simulations
This book consists mainly of revised papers that were presented at the Agents for Educational Games and Simulation (AEGS) workshop held on May 2, 2011, as part of the Autonomous Agents and MultiAgent Systems (AAMAS) conference in Taipei, Taiwan. The 12 full papers presented were carefully reviewed and selected from various submissions. The papers are organized in topical sections on middleware applications, dialogues and learning, adaption and convergence, and agent applications.
Emotion Recognition using Fuzzy K-Means from Oriya Speech
Communication is intelligible when the conveyed message is interpreted correctly. Unfortunately, while such correct interpretation comes naturally in human-human communication, it remains laborious for human-machine communication. This is due to the non-verbal content, such as emotion, inherently blended into vocal communication, which makes human-machine interaction difficult. In this research paper we perform experiments to recognize the emotions anger, sadness, astonishment, fear, happiness and neutral using the fuzzy K-means algorithm on elicited Oriya speech collected from 35 Oriya-speaking people aged between 22 and 58 years, belonging to different provinces of Orissa. We achieved an accuracy of 65.16% in recognizing the six emotions mentioned above by incorporating mean pitch, the first two formants, jitter, shimmer and energy as feature vectors. Emotion recognition has many vivid applications in domains such as call centers, spoken tutoring systems, spoken dialogue research, and human-robot interfaces.
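As a rough illustration of the clustering step, here is a minimal fuzzy K-means (fuzzy c-means) implementation over a feature matrix. The acoustic features named in the abstract (mean pitch, formants, jitter, shimmer, energy) would be extracted beforehand; the synthetic data and all parameter choices below are a generic sketch, not the authors' pipeline:

```python
import numpy as np

def fuzzy_k_means(X, k, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Minimal fuzzy K-means: returns cluster centers and the
    membership matrix U (rows = samples, columns = clusters)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), k))
    U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1
    for _ in range(max_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Distance from every sample to every center.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)                  # avoid division by zero
        # Standard update: u_ij = 1 / sum_c (d_ij / d_ic)^(2/(m-1))
        U_new = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1))).sum(axis=2)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U

# Demo on synthetic two-cluster "feature vectors" standing in for
# pitch/energy-style features (hypothetical data).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
centers, U = fuzzy_k_means(X, k=2)
labels = U.argmax(axis=1)                      # hard assignment per sample
```

Unlike plain K-means, each sample retains a graded membership in every cluster, which suits emotion data where utterances rarely express a single emotion cleanly.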
Cognitive processes, emotions and intelligent interfaces
The study presents research from several scientific disciplines, including artificial intelligence, neuroscience, psychology, linguistics and philosophy, which have the potential to create intelligent anthropomorphic agents and interactive technologies. It examines systems from symbolic and connectionist artificial intelligence for modelling human cognitive processes: thinking, decision making, memory and learning. It analyses models in artificial intelligence and robotics that use emotions as a mechanism for controlling the achievement of a robot's goals, as a reaction to particular situations, for sustaining the process of social interaction, and for creating believable anthropomorphic agents.
The presented interdisciplinary methodologies and concepts motivate the creation of animated agents that use speech, gestures, intonation and other non-verbal modalities when conversing with users in intelligent interfaces.
VICA, a visual counseling agent for emotional distress
We present VICA, a Visual Counseling Agent designed to create an engaging multimedia face-to-face interaction. VICA is a human-friendly agent equipped with high-performance voice conversation, designed to help psychologically stressed users offload their emotional burden. Such users specifically include non-computer-savvy elderly persons or clients. Our agent builds replies by exploiting the interlocutor's utterances expressing wishes, obstacles, emotions, etc. Statements asking for confirmation, details, an emotional summary, or relations among such expressions are added to the utterances. We claim that VICA is suitable for positive counseling scenarios where multimedia, specifically high-performance voice communication, is instrumental for even elderly or digitally divided users to continue dialogue towards their self-awareness. To prove this claim, VICA's effect is evaluated against a previous text-based counseling agent, CRECA, and against ELIZA, including its successors. An experiment involving 14 subjects shows the following effects of VICA: (i) dialogue continuation (CPS: conversation-turns per session) of VICA for the older half (age > 40) improved substantially: by 53% relative to CRECA and 71% relative to ELIZA; (ii) VICA's capability to foster peace of mind and other positive feelings was assessed, again by the older subjects, with a very high score, mostly 5 or 6 on a 7-point Likert scale. Compared on average, this capability of VICA for the older subjects is 5.14, while CRECA (all subjects young students, age < 25) is 4.50, ELIZA is 3.50, and the best of ELIZA's successors for the older (> 25) is 4.41.
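The "improved 53%" figures above are relative gains in conversation-turns per session (CPS). The arithmetic can be spelled out as follows; the CPS values used are hypothetical, since the abstract reports only the percentages:

```python
def relative_improvement(new, baseline):
    """Percentage gain of `new` over `baseline` (the '+53%' style figure)."""
    return 100.0 * (new - baseline) / baseline

# Hypothetical CPS values, chosen only to illustrate the arithmetic;
# the abstract reports the percentages, not the raw turn counts.
vica_cps, creca_cps = 15.3, 10.0
gain = relative_improvement(vica_cps, creca_cps)   # about a 53% gain
```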