Look, Read and Feel: Benchmarking Ads Understanding with Multimodal Multitask Learning
Given the massive advertising market and the rapid growth of online
multimedia content (such as videos), it has become common to promote
advertisements (ads) alongside such content. Manually finding relevant ads to
match given content is laborious, so automatic advertising techniques have
been developed. Since ads are usually hard to understand from their visual
appearance alone, owing to the visual metaphors they contain, other
modalities, such as embedded text, should be exploited for understanding. To
further improve user experience, it is necessary to understand both the topic
and the sentiment of an ad. This motivates us to develop a novel deep
multimodal multitask framework that integrates multiple modalities to achieve
effective topic and sentiment prediction simultaneously for ads
understanding. In particular, our model first extracts multimodal information
from ads and learns high-level, comparable representations. The visual
metaphor of the ad is decoded in an unsupervised manner. The obtained
representations are then fed into the proposed hierarchical multimodal
attention modules to learn task-specific representations for the final
predictions. A multitask loss function is also designed to train the topic
and sentiment prediction models jointly in an end-to-end manner. We conduct
extensive experiments on the latest large-scale advertisement dataset and
achieve state-of-the-art performance on both prediction tasks. The obtained
results can serve as a benchmark for ads understanding.

Comment: 8 pages, 5 figures
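To make the multitask setup concrete, the following is a minimal sketch (not the authors' released code) of task-specific attention pooling over per-modality features combined with a joint topic/sentiment loss. All names, dimensions, the single-query attention, the choice of cross-entropy for topics versus multi-label BCE for sentiments, and the loss weighting `alpha` are illustrative assumptions, not details confirmed by the abstract.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultimodalMultitaskHead(nn.Module):
    """Hypothetical head: attention-pools modality features per task,
    then predicts topic and sentiment from the pooled vectors."""

    def __init__(self, dim=512, num_topics=38, num_sentiments=30):
        super().__init__()
        # One learned attention query per task, pooling over modality
        # features (a simplified stand-in for the paper's hierarchical
        # multimodal attention modules).
        self.topic_query = nn.Parameter(torch.randn(dim))
        self.sent_query = nn.Parameter(torch.randn(dim))
        self.topic_head = nn.Linear(dim, num_topics)
        self.sent_head = nn.Linear(dim, num_sentiments)

    def attend(self, query, feats):
        # feats: (batch, num_modalities, dim) -- comparable representations
        # from, e.g., visual, text, and decoded-metaphor encoders.
        scores = feats @ query                    # (batch, num_modalities)
        weights = F.softmax(scores, dim=-1)
        return (weights.unsqueeze(-1) * feats).sum(dim=1)  # (batch, dim)

    def forward(self, feats):
        topic_logits = self.topic_head(self.attend(self.topic_query, feats))
        sent_logits = self.sent_head(self.attend(self.sent_query, feats))
        return topic_logits, sent_logits

def multitask_loss(topic_logits, sent_logits, topic_y, sent_y, alpha=0.5):
    # Assumed formulation: single-label topic classification plus
    # multi-label sentiment prediction, combined by a fixed weight.
    l_topic = F.cross_entropy(topic_logits, topic_y)
    l_sent = F.binary_cross_entropy_with_logits(sent_logits, sent_y)
    return alpha * l_topic + (1 - alpha) * l_sent
```

Training end-to-end then amounts to backpropagating `multitask_loss` through both heads and the shared encoders at once, so gradients from the topic and sentiment tasks jointly shape the shared multimodal representations.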