Multimodal multimodel emotion analysis as linked data
The lack of a standard emotion representation model hinders emotion analysis due to the incompatibility of annotation formats and models from different sources, tools and annotation services. This is also a limiting factor for multimodal analysis, since recognition services from different modalities (audio, video, text) tend to have different representation models (e.g., continuous vs. discrete emotions).
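To make the incompatibility concrete, the sketch below contrasts a discrete emotion label with a continuous valence-arousal-dominance (VAD) annotation and maps the VAD point onto the nearest discrete category. It is a minimal illustration under stated assumptions: the centroid values and the centroid-based mapping are examples chosen here, not the conversion defined in the paper.

```python
# Illustrative sketch only: a discrete label vs. a continuous VAD annotation,
# with a simple nearest-centroid mapping between the two models.
import math

discrete_annotation = {"emotion": "joy"}                   # e.g., from a text service
continuous_annotation = {"valence": 0.8, "arousal": 0.55,  # e.g., from a video service
                         "dominance": 0.6}

# Hypothetical centroids of discrete categories in VAD space ([0, 1] scale).
CENTROIDS = {
    "joy":     (0.85, 0.60, 0.60),
    "sadness": (0.20, 0.30, 0.25),
    "anger":   (0.15, 0.80, 0.65),
    "fear":    (0.15, 0.75, 0.25),
}

def vad_to_category(valence, arousal, dominance):
    """Return the discrete category whose centroid is closest (Euclidean)."""
    point = (valence, arousal, dominance)
    return min(CENTROIDS, key=lambda c: math.dist(point, CENTROIDS[c]))

# With a conversion step, the two services' outputs become comparable:
print(vad_to_category(**continuous_annotation))  # -> joy
```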
This work presents a multi-disciplinary effort to alleviate this problem by formalizing conversion between emotion models. The specific contributions are: i) a semantic representation of emotion conversion; ii) an API proposal for services that perform automatic conversion; iii) a reference implementation of such a service; and iv) validation of the proposal through use cases that integrate different emotion models and service providers.

The research leading to these results has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 644632 (MixedEmotions).

Non-peer-reviewed
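Contribution i) concerns representing emotions, and their conversion, as linked data. The sketch below shows what such an annotation graph might look like using the rdflib library; the emo: namespace and its property names are hypothetical placeholders for this example, not the vocabulary the paper actually adopts.

```python
# Minimal sketch of an emotion annotation and its converted form expressed
# as linked data with rdflib. The emo: vocabulary is a hypothetical
# placeholder, not the ontology the paper builds on.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

EMO = Namespace("http://example.org/emotion#")  # hypothetical namespace
g = Graph()
g.bind("emo", EMO)

ann = EMO["annotation1"]
g.add((ann, RDF.type, EMO.EmotionAnnotation))
# Original continuous (VAD) values from the recognition service.
g.add((ann, EMO.hasValence, Literal(0.8, datatype=XSD.float)))
g.add((ann, EMO.hasArousal, Literal(0.55, datatype=XSD.float)))
g.add((ann, EMO.hasDominance, Literal(0.6, datatype=XSD.float)))
# A conversion service can attach the derived discrete label alongside the
# original values, so both models coexist in the same graph.
g.add((ann, EMO.hasCategory, EMO.joy))

print(g.serialize(format="turtle"))
```

Keeping both the source values and the derived label in one graph means downstream consumers can pick whichever model they support without losing the original annotation.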