Multimodal Content Analysis for Effective Advertisements on YouTube
The rapid advances in e-commerce and Web 2.0 technologies have greatly
increased the impact of commercial advertisements on the general public. As a
key enabling technology, a multitude of recommender systems exist that
analyze user features and browsing patterns to recommend appealing
advertisements to users. In this work, we study the attributes that
characterize an effective advertisement and recommend a useful set of
features to aid the design and production of commercial advertisements.
We analyze the temporal patterns in the multimedia content of advertisement
videos, including their auditory, visual, and textual components, and study
their individual roles and synergies in the success of an advertisement.
The objective of this work is thus to measure the effectiveness of an
advertisement and to recommend a useful set of features that help
advertisement designers make it more successful and appealing to users. Our
proposed framework employs the signal processing technique of cross-modality
feature learning, in which data streams from the different components are
used to train separate neural network models that are then fused to learn a
shared representation. Subsequently, a neural network model trained on this
joint feature embedding is used as a classifier to predict advertisement
effectiveness. We validate our approach using subjective ratings from a
dedicated user study, the sentiment strength of online viewer comments, and
a viewer opinion metric, the ratio of Likes to Views received by each
advertisement on an online platform.

Comment: 11 pages, 5 figures, ICDM 201
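The fusion scheme the abstract describes can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation: the feature dimensions, encoder weights, and input vectors are all made-up stand-ins, and each "trained neural network" is reduced to a single randomly initialized projection layer. The point is only the data flow: per-modality encoders produce embeddings, which are concatenated into a shared joint representation that feeds a classifier head.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, w):
    """One-layer modality encoder sketch: linear projection + ReLU."""
    return np.maximum(x @ w, 0.0)

# Hypothetical feature dimensions for the three modalities (assumed values).
d_audio, d_visual, d_text, d_emb = 20, 30, 10, 8

# Randomly initialized encoder weights, standing in for trained networks.
w_a = rng.normal(size=(d_audio, d_emb))
w_v = rng.normal(size=(d_visual, d_emb))
w_t = rng.normal(size=(d_text, d_emb))

# One advertisement's per-modality feature vectors (dummy data).
x_a = rng.normal(size=(1, d_audio))
x_v = rng.normal(size=(1, d_visual))
x_t = rng.normal(size=(1, d_text))

# Fuse the per-modality embeddings into a shared joint representation.
joint = np.concatenate(
    [encode(x_a, w_a), encode(x_v, w_v), encode(x_t, w_t)], axis=1
)

# Linear classifier head on the joint embedding: sigmoid score for
# "advertisement is effective" (weights again random placeholders).
w_clf = rng.normal(size=(3 * d_emb, 1))
score = (joint @ w_clf).item()
p_effective = 1.0 / (1.0 + np.exp(-score))
print(joint.shape, p_effective)
```

In the paper's framework the encoders and the classifier on the joint embedding would each be trained neural networks; here they are frozen random layers so the shapes and the fusion step can be seen end to end.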
IEST: WASSA-2018 Implicit Emotions Shared Task
Past shared tasks on emotions use data with both overt expressions of
emotion ("I am so happy to see you!") and subtle expressions where the
emotion has to be inferred, for instance from event descriptions. Further,
most datasets do not focus on the cause or stimulus of the emotion. Here,
for the first time, we propose a shared task in which systems have to predict
the emotion in a large, automatically labeled dataset of tweets without
access to the words denoting that emotion. Based on this intention, we call
it the Implicit Emotion Shared Task (IEST), because the systems have to infer
the emotion mostly from context. Every tweet contains an occurrence of an
explicit emotion word that is masked. The tweets are collected in a manner
such that they are likely to include a description of the cause of the
emotion, i.e., the stimulus.
Altogether, 30 teams submitted results, with macro F1 scores ranging from
21% to 71%. The baseline (a MaxEnt classifier over bag-of-words and bigram
features), which was available to the participants during the development
phase, obtains an F1 score of 60%. A study with human annotators suggests
that automatic methods outperform human predictions, possibly by homing in
on subtle textual clues not used by humans. Corpora, resources, and results
are available at the shared task website:
http://implicitemotions.wassa2018.com.

Comment: Accepted at Proceedings of the 9th Workshop on Computational
Approaches to Subjectivity, Sentiment and Social Media Analysis
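The baseline setup can be sketched as follows. This is an illustrative sketch, not the organizers' code: MaxEnt is implemented here as multinomial logistic regression, the four toy tweets and their labels are invented, and the `[#TRIGGERWORD#]` placeholder standing in for the masked emotion word is an assumed convention for this example.

```python
# Sketch of the IEST baseline idea: MaxEnt (logistic regression) over
# word unigram and bigram counts, predicting the emotion whose explicit
# mention has been masked out of each tweet.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy tweets with the explicit emotion word replaced by a placeholder
# (invented data; the real dataset is large and automatically labeled).
tweets = [
    "I felt [#TRIGGERWORD#] when my team finally won the cup",
    "I felt [#TRIGGERWORD#] when my flight got cancelled again",
    "so [#TRIGGERWORD#] that my best friend moved far away",
    "so [#TRIGGERWORD#] to see everyone at the reunion",
]
labels = ["joy", "anger", "sadness", "joy"]

model = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),  # unigram + bigram features
    LogisticRegression(max_iter=1000),    # the MaxEnt classifier
)
model.fit(tweets, labels)

# The classifier must infer the emotion purely from the surrounding context.
pred = model.predict(["so [#TRIGGERWORD#] when my team won"])
print(pred[0])
```

Because the emotion word itself is masked, all of the signal comes from contextual n-grams such as "my team" or "moved far away", which is exactly what makes the task "implicit".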