SemEval-2016 task 5 : aspect based sentiment analysis
This paper describes the SemEval 2016 shared task on Aspect Based Sentiment Analysis (ABSA), a continuation of the respective tasks of 2014 and 2015. In its third year, the task provided 19 training and 20 testing datasets covering 8 languages and 7 domains, together with a common evaluation procedure. Of these datasets, 25 were for sentence-level and 14 for text-level ABSA; the latter was introduced as a SemEval subtask for the first time. The task attracted 245 submissions from 29 teams.
Joint Deep Modeling of Users and Items Using Reviews for Recommendation
A large amount of information exists in reviews written by users. Most
current recommender systems ignore this source of information, even though
it could alleviate the sparsity problem and improve the quality
of recommendations. In this paper, we present a deep model to learn item
properties and user behaviors jointly from review text. The proposed model,
named Deep Cooperative Neural Networks (DeepCoNN), consists of two parallel
neural networks coupled in the last layers. One of the networks focuses on
learning user behaviors by exploiting reviews written by the user, and the other
one learns item properties from the reviews written for the item. A shared
layer is introduced on top to couple the two networks. The
shared layer enables latent factors learned for users and items to interact
with each other in a manner similar to factorization machine techniques.
Experimental results demonstrate that DeepCoNN significantly outperforms all
baseline recommender systems on a variety of datasets. Comment: WSDM 201
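The coupling described in the abstract can be sketched in miniature. Everything below is illustrative and not the paper's actual architecture: a toy word-vector-averaging encoder stands in for each CNN tower, the example embeddings are made up, and the shared layer is reduced to its core factorization-machine-style idea of a pairwise interaction between the user and item latent vectors.

```python
# Hypothetical sketch of the DeepCoNN coupling idea. The two "towers"
# here are toy bag-of-words encoders, not the CNNs used in the paper.

def encode(review_words, embeddings, dim=4):
    """Toy encoder: average the word vectors of a review.

    Stands in for one CNN-plus-pooling tower of the real model.
    """
    vecs = [embeddings.get(w, [0.0] * dim) for w in review_words]
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]

def shared_layer(x_u, y_i, w0=0.0):
    """FM-style interaction: a global bias plus the dot product of the
    user latent vector x_u and the item latent vector y_i."""
    return w0 + sum(a * b for a, b in zip(x_u, y_i))

# Made-up word embeddings for illustration only.
embeddings = {
    "great": [0.9, 0.1, 0.0, 0.2],
    "battery": [0.1, 0.8, 0.3, 0.0],
    "poor": [-0.7, 0.0, 0.1, -0.2],
}

x_u = encode(["great", "battery"], embeddings)  # user tower output
y_i = encode(["great", "poor"], embeddings)     # item tower output
rating = shared_layer(x_u, y_i)                 # toy predicted rating
```

In the actual model both towers and the shared layer are trained jointly by backpropagating a rating-prediction loss; the sketch only shows why the shared layer lets user and item factors interact as in a factorization machine.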
Network Model Selection for Task-Focused Attributed Network Inference
Networks are models of relationships between entities. Sometimes these
relationships are given explicitly; otherwise we must learn a representation
that generalizes and predicts observed behavior in underlying individual data
(e.g., attributes or labels). Whether given or inferred, choosing the best
representation affects subsequent tasks and questions on the network. This work
focuses on model selection for evaluating network representations learned from
data, with an emphasis on fundamental predictive tasks on networks. We present a modular
methodology using general, interpretable network models, task neighborhood
functions found across domains, and several criteria for robust model
selection. We demonstrate our methodology on three online user activity
datasets and show that network model selection for the appropriate network task
vs. an alternate task increases performance by an order of magnitude in our
experiments
How Useful are Reviews for Recommendation? A Critical Review and Potential Improvements
We investigate a growing body of work that seeks to improve recommender
systems through the use of review text. Generally, these papers argue that
since reviews 'explain' users' opinions, they ought to be useful to infer the
underlying dimensions that predict ratings or purchases. Schemes to incorporate
reviews range from simple regularizers to neural network approaches. Our
initial findings reveal several discrepancies in reported results, partly
due, for example, to results being copied across papers despite changes in
experimental settings or data pre-processing. First, we attempt a comprehensive analysis to
resolve these ambiguities. Further investigation calls for discussion on a much
larger problem about the "importance" of user reviews for recommendation.
Through a wide range of experiments, we observe several cases where
state-of-the-art methods fail to outperform existing baselines, especially as
we deviate from a few narrowly-defined settings where reviews are useful. We
conclude by providing hypotheses for our observations, that seek to
characterize under what conditions reviews are likely to be helpful. Through
this work, we aim to evaluate the direction in which the field is progressing
and encourage robust empirical evaluation. Comment: 4 pages, 3 figures. Accepted for publication at SIGIR '2