Adversarial Training in Affective Computing and Sentiment Analysis: Recent Advances and Perspectives
Over the past few years, adversarial training has become an extremely active
research topic and has been successfully applied to various Artificial
Intelligence (AI) domains. Because adversarial training is a potentially
crucial technique for the development of the next generation of emotional AI
systems, we herein provide a comprehensive overview of its application to
affective
computing and sentiment analysis. Various representative adversarial training
algorithms are explained and discussed accordingly, aimed at tackling diverse
challenges associated with emotional AI systems. Further, we highlight a range
of potential future research directions. We expect that this overview will help
facilitate the development of adversarial training for affective computing and
sentiment analysis in both the academic and industrial communities.
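As a toy illustration of the core idea behind adversarial training, the sketch below trains a logistic sentiment classifier on both clean inputs and gradient-sign (FGSM-style) perturbed inputs. The model, data, and hyperparameters are illustrative assumptions, not any specific method from the surveyed literature.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_loss_wrt_x(w, x, y):
    # Gradient of binary cross-entropy w.r.t. the input of a logistic
    # model: (p - y) * w, where p is the predicted probability.
    p = sigmoid(w @ x)
    return (p - y) * w

def fgsm_perturb(w, x, y, eps=0.1):
    # Craft an adversarial example by stepping in the sign of the
    # input-gradient of the loss (fast gradient sign method).
    return x + eps * np.sign(grad_loss_wrt_x(w, x, y))

def train(X, y, eps=0.1, lr=0.5, epochs=200):
    # Adversarial training: update on each clean example and on its
    # adversarially perturbed counterpart.
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            for x_in in (xi, fgsm_perturb(w, xi, yi, eps)):
                p = sigmoid(w @ x_in)
                w -= lr * (p - yi) * x_in
    return w
```

Training on perturbed copies encourages a larger decision margin, which is the robustness property adversarial training is typically used for.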
Unsupervised Adaptation for Synthetic-to-Real Handwritten Word Recognition
Handwritten Text Recognition (HTR) is still a challenging problem because it
must deal with two important difficulties: the variability among writing
styles, and the scarcity of labelled data. To alleviate such problems,
synthetic data generation and data augmentation are typically used to train HTR
systems. However, training with such data yields encouraging but still
inaccurate transcriptions of real handwritten words. In this paper, we propose an
unsupervised writer adaptation approach that is able to automatically adjust a
generic handwritten word recognizer, fully trained with synthetic fonts,
towards a new incoming writer. We have experimentally validated our proposal
using five different datasets, covering several challenges: (i) the document
source: modern and historic samples, which may involve paper degradation
problems; (ii) different handwriting styles: single and multiple writer
collections; and (iii) language, which involves different character
combinations. Across these challenging collections, we show that our system is
able to maintain its performance, thus, it provides a practical and generic
approach to deal with new document collections without requiring any expensive
and tedious manual annotation step.
Comment: Accepted to WACV 202
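One common family of unsupervised adaptation techniques aligns the feature statistics of the new (unlabelled) writer with those of the synthetic training distribution, so the synthetically trained recognizer can be applied unchanged. The sketch below is a CORAL-style whiten-and-recolor of target features; it is a hedged illustration of this general idea, and all names are assumptions, not the adaptation scheme of the paper.

```python
import numpy as np

def coral_align(target_feats, source_feats, eps=1e-5):
    """Whiten target-writer features and re-color them with the
    source (synthetic-font) statistics. Inputs: (n, d) arrays."""
    def stats(X):
        mu = X.mean(axis=0)
        Xc = X - mu
        # Regularize the covariance so it is safely invertible.
        cov = Xc.T @ Xc / max(len(X) - 1, 1) + eps * np.eye(X.shape[1])
        return mu, cov

    def sqrtm(C):
        # Matrix square root via eigendecomposition (C is SPD).
        vals, vecs = np.linalg.eigh(C)
        return vecs @ np.diag(np.sqrt(np.clip(vals, 0, None))) @ vecs.T

    def inv_sqrtm(C):
        vals, vecs = np.linalg.eigh(C)
        return vecs @ np.diag(1.0 / np.sqrt(np.clip(vals, eps, None))) @ vecs.T

    mu_t, cov_t = stats(target_feats)
    mu_s, cov_s = stats(source_feats)
    # Whiten target features, then re-color with source covariance/mean.
    return (target_feats - mu_t) @ inv_sqrtm(cov_t) @ sqrtm(cov_s) + mu_s
```

After alignment the target features share the source's first- and second-order statistics, which is often enough to close much of a synthetic-to-real gap without any target labels.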
LRMM: Learning to Recommend with Missing Modalities
Multimodal learning has shown promising performance in content-based
recommendation due to the auxiliary user and item information of multiple
modalities such as text and images. However, the problem of incomplete and
missing modality is rarely explored and most existing methods fail in learning
a recommendation model with missing or corrupted modalities. In this paper, we
propose LRMM, a novel framework that mitigates not only the problem of missing
modalities but also more generally the cold-start problem of recommender
systems. We propose modality dropout (m-drop) and a multimodal sequential
autoencoder (m-auto) to learn multimodal representations for complementing and
imputing missing modalities. Extensive experiments on real-world Amazon data
show that LRMM achieves state-of-the-art performance on rating prediction
tasks. More importantly, LRMM is more robust than previous methods in alleviating
data sparsity and the cold-start problem.
Comment: 11 pages, EMNLP 201
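The modality-dropout idea can be sketched as randomly zeroing an entire modality's feature block during training, forcing the downstream model to learn representations that impute what is missing. All names, shapes, and the keep-at-least-one fallback below are illustrative assumptions, not LRMM's actual m-drop implementation.

```python
import numpy as np

def modality_dropout(feats, rng, p_drop=0.3):
    """feats: dict mapping modality name -> 1-D feature vector.
    Each modality is dropped (zeroed) independently with prob p_drop."""
    dropped = {}
    for name, vec in feats.items():
        if rng.random() < p_drop:
            dropped[name] = np.zeros_like(vec)  # simulate a missing modality
        else:
            dropped[name] = vec.copy()
    # Keep at least one modality so the training example stays informative.
    if all((v == 0).all() for v in dropped.values()):
        keep = rng.choice(list(feats))
        dropped[keep] = feats[keep].copy()
    return dropped
```

Applying this during training exposes the model to every missing-modality pattern it may face at test time, which is what makes imputation of a truly absent modality possible later.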