Latent Relational Metric Learning via Memory-based Attention for Collaborative Ranking
This paper proposes a new neural architecture for collaborative ranking with
implicit feedback. Our model, LRML (Latent Relational Metric Learning),
is a novel metric learning approach for recommendation. More specifically,
instead of simple push-pull mechanisms between user and item pairs, we propose
to learn latent relations that describe each user-item interaction. This helps
to alleviate the potential geometric inflexibility of existing metric learning
approaches, enabling not only better performance but also greater modeling
capability, which allows our model to scale to a larger number of
interactions. To do so, we employ an augmented memory module and learn
to attend over these memory blocks to construct latent relations. The
memory-based attention module is controlled by the user-item interaction,
making the learned relation vector specific to each user-item pair. Hence, this
can be interpreted as learning an exclusive and optimal relational translation
for each user-item interaction. The proposed architecture demonstrates the
state-of-the-art performance across multiple recommendation benchmarks. LRML
outperforms other metric learning models in terms of Hits@10 and
nDCG@10 on large datasets such as Netflix and MovieLens20M. Moreover,
qualitative studies also demonstrate evidence that our proposed model is able
to infer and encode explicit sentiment, temporal and attribute information
despite being only trained on implicit feedback. As such, this ascertains the
ability of LRML to uncover hidden relational structure within implicit
datasets.

Comment: WWW 201
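The abstract's core mechanism can be sketched in a few lines: a joint user-item signal attends over shared memory blocks to produce a per-pair relation vector, which then acts as a relational translation in metric space. The sketch below is an illustrative reading of that description, not the paper's implementation; the Hadamard-product query, the slot count, and all dimensions are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 8, 6          # embedding size and number of memory slots (illustrative)

u = rng.normal(size=d)            # user embedding
v = rng.normal(size=d)            # item embedding
keys = rng.normal(size=(m, d))    # attention keys, one per memory slot
memory = rng.normal(size=(m, d))  # memory blocks holding latent relation pieces

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# The joint user-item signal (here an element-wise product, an assumed choice)
# queries the memory via attention; the weights are specific to this pair.
query = u * v
attn = softmax(keys @ query)      # attention weights over the m memory slots
r = attn @ memory                 # latent relation vector for this user-item pair

# Metric-learning score: how well the relational translation u + r lands on v.
score = -np.linalg.norm(u + r - v)
```

Because `attn` is recomputed from each user-item pair, the relation vector `r` differs per interaction, which is the "exclusive relational translation" the abstract describes.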
Knowing What, How and Why: A Near Complete Solution for Aspect-based Sentiment Analysis
Target-based sentiment analysis or aspect-based sentiment analysis (ABSA)
refers to addressing various sentiment analysis tasks at a fine-grained level,
which includes but is not limited to aspect extraction, aspect sentiment
classification, and opinion extraction. There exist many solvers of the above
individual subtasks or a combination of two subtasks, and they can work
together to tell a complete story, i.e. the discussed aspect, the sentiment on
it, and the cause of the sentiment. However, no previous ABSA research has
attempted to provide a complete solution in one shot. In this paper, we introduce a new
subtask under ABSA, named aspect sentiment triplet extraction (ASTE).
Particularly, a solver of this task needs to extract triplets (What, How, Why)
from the inputs, which show WHAT the targeted aspects are, HOW their sentiment
polarities are, and WHY they have such polarities (i.e., opinion reasons). For
instance, one triplet from "Waiters are very friendly and the pasta is simply
average" could be ('Waiters', positive, 'friendly'). We propose a two-stage
framework to address this task. The first stage predicts what, how and why in a
unified model, and then the second stage pairs up the predicted what (how) and
why from the first stage to output triplets. In the experiments, our framework
sets a benchmark performance for this novel triplet extraction task.
Meanwhile, it outperforms a few strong baselines adapted from state-of-the-art
related methods.

Comment: This paper is accepted at AAAI 202
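The two-stage pipeline the abstract outlines can be illustrated on its own example sentence. The sketch below is a toy stand-in, not the paper's taggers: stage-one predictions are hard-coded, the "neutral" polarity for "pasta" and the nearest-opinion pairing heuristic (with a tie-break toward opinions after the aspect) are assumptions made for the example.

```python
sentence = "Waiters are very friendly and the pasta is simply average"
tokens = sentence.split()

# Stage 1 (stand-in outputs): aspect terms with polarity, and opinion terms,
# each tagged with its token index. A real system would predict these jointly.
aspects = [(0, "Waiters", "positive"), (6, "pasta", "neutral")]
opinions = [(3, "friendly"), (9, "average")]

def pair_triplets(aspects, opinions):
    """Stage 2: pair each (WHAT, HOW) with the nearest WHY by token distance.

    Ties are broken in favor of an opinion that follows the aspect -- an
    illustrative heuristic, not the paper's pairing model.
    """
    triplets = []
    for a_idx, what, how in aspects:
        why = min(opinions,
                  key=lambda o: (abs(o[0] - a_idx), 0 if o[0] > a_idx else 1))[1]
        triplets.append((what, how, why))
    return triplets

print(pair_triplets(aspects, opinions))
# → [('Waiters', 'positive', 'friendly'), ('pasta', 'neutral', 'average')]
```

The first triplet matches the abstract's own example, ('Waiters', positive, 'friendly'); the separation into a unified extraction stage and a pairing stage mirrors the framework's structure.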