Network Model Selection for Task-Focused Attributed Network Inference
Networks are models representing relationships between entities. Often these
relationships are explicitly given, or we must learn a representation which
generalizes and predicts observed behavior in underlying individual data (e.g.
attributes or labels). Whether given or inferred, the choice of
representation affects subsequent tasks and questions on the network. This
work addresses model selection for evaluating network representations
learned from data, with a focus on fundamental predictive tasks on networks.
We present a modular
methodology using general, interpretable network models, task neighborhood
functions found across domains, and several criteria for robust model
selection. We demonstrate our methodology on three online user activity
datasets and show that selecting the network model for the appropriate task,
rather than an alternate one, increases performance by an order of magnitude
in our experiments.
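The core idea above, picking the network representation that best serves a downstream task, can be sketched in miniature. This is an illustrative toy, not the paper's methodology: the k-nearest-neighbor construction, the label-homophily "task," and the data are all assumptions made for the example.

```python
# Toy model selection: build candidate networks from node similarities,
# score each on a downstream task, and keep the best-performing one.

def knn_network(sim, k):
    """Directed k-nearest-neighbor network from per-node similarity dicts."""
    edges = set()
    for u, nbrs in sim.items():
        top = sorted(nbrs, key=nbrs.get, reverse=True)[:k]
        edges.update((u, v) for v in top)
    return edges

def task_score(edges, labels):
    """Toy predictive task: fraction of edges joining same-label nodes."""
    return sum(labels[u] == labels[v] for u, v in edges) / max(len(edges), 1)

def select_model(sim, labels, ks):
    """Pick the k whose induced network performs best on the task."""
    return max(ks, key=lambda k: task_score(knn_network(sim, k), labels))

# Four nodes in two label groups; within-group similarity is high.
sim = {"a": {"b": 0.9, "c": 0.2, "d": 0.1},
       "b": {"a": 0.9, "c": 0.3, "d": 0.1},
       "c": {"d": 0.8, "a": 0.2, "b": 0.3},
       "d": {"c": 0.8, "a": 0.1, "b": 0.1}}
labels = {"a": 0, "b": 0, "c": 1, "d": 1}
best_k = select_model(sim, labels, [1, 2])
```

Here the sparser k=1 network wins because the denser one adds cross-label edges that hurt the task score, mirroring the point that the "best" network depends on the task it must support.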
PReFacTO: Preference Relations Based Factor Model with Topic Awareness and Offset
Recommendation systems create personalized lists of items that
might interest the user by analyzing the user’s history of past purchases
and/or consumption. For rating-based systems, most
traditional recommendation methods focus on the absolute ratings
provided by the users to the items. In this paper, we extend the
traditional Matrix Factorization approach for recommendation and
propose pairwise-relation-based factor modeling. When modeling
the items in the system, pairwise preferences allow information
to flow between items through the preference relations
as an additional signal. Item feedback is also available in the
form of reviews, apart from the rating information. The reviews
contain textual information that can help represent
an item's latent feature vector more accurately. We perform topic
modeling of the item reviews and use the topic vectors to guide the
joint factor modeling of the users and items and learn their final
representations. The proposed method shows promising results in
comparison to state-of-the-art methods in our experiments.
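The pairwise-preference idea at the heart of this abstract can be illustrated with a small sketch. This is not the PReFacTO model itself (which additionally guides the item factors with review topic vectors); it is a minimal BPR-style pairwise factor model, with dimensions, learning rate, and toy triples chosen as assumptions for the example.

```python
import numpy as np

# Each observed preference (u, i, j) means user u prefers item i over item j.
# Training nudges the factors so the score U[u]@V[i] exceeds U[u]@V[j],
# using a logistic pairwise loss -log(sigmoid(margin)).

rng = np.random.default_rng(0)
n_users, n_items, dim = 3, 4, 2
U = rng.normal(scale=0.1, size=(n_users, dim))   # user latent factors
V = rng.normal(scale=0.1, size=(n_items, dim))   # item latent factors

prefs = [(0, 1, 2), (1, 0, 3), (2, 2, 1)]        # (user, preferred, less-preferred)
lr, reg = 0.1, 0.01

for _ in range(500):
    for u, i, j in prefs:
        x = U[u] @ (V[i] - V[j])                 # preference margin
        g = 1.0 / (1.0 + np.exp(x))              # -dloss/dmargin for logistic loss
        U[u] += lr * (g * (V[i] - V[j]) - reg * U[u])
        V[i] += lr * (g * U[u] - reg * V[i])
        V[j] += lr * (-g * U[u] - reg * V[j])
```

In the full model, one could imagine an extra regularization term pulling each item factor toward a topic vector derived from its reviews, so that textual information shapes the learned representation; the sketch omits that for brevity.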
Reinforced Path Reasoning for Counterfactual Explainable Recommendation
Counterfactual explanations interpret the recommendation mechanism by
exploring how minimal alterations to items or users affect recommendation
decisions. Existing counterfactual explainable approaches face a huge search
space, and their explanations are either action-based (e.g., user clicks) or
aspect-based (i.e., item descriptions). We believe item attribute-based
explanations are more intuitive and persuasive for users, since they explain
via fine-grained item features (e.g., brand). Moreover, counterfactual
explanation can enhance recommendation by filtering out negative items.
In this work, we propose a novel Counterfactual Explainable Recommendation
(CERec) framework that generates item attribute-based counterfactual
explanations while boosting recommendation performance. Our CERec optimizes an explanation policy
upon uniformly searching candidate counterfactuals within a reinforcement
learning environment. We reduce the huge search space with an adaptive path
sampler by using rich context information of a given knowledge graph. We also
deploy the explanation policy to a recommendation model to enhance the
recommendation. Extensive explainability and recommendation evaluations
demonstrate CERec's ability to provide explanations consistent with user
preferences and to deliver improved recommendations. We release our code at
https://github.com/Chrystalii/CERec
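The notion of an attribute-based counterfactual can be made concrete with a small sketch. This is not CERec, which learns a reinforcement-learning policy with a knowledge-graph path sampler to avoid exhaustive search; here a brute-force search over a toy linear scorer shows what a minimal counterfactual looks like. The attribute names, weights, and threshold are assumptions for the example.

```python
from itertools import combinations

def score(user_weights, item_attrs):
    """Toy linear scorer: a recommendation score from attribute weights."""
    return sum(user_weights.get(a, 0.0) for a in item_attrs)

def counterfactual(user_weights, item_attrs, threshold):
    """Smallest attribute subset whose removal drops the score below threshold.

    Searching subsets by increasing size guarantees minimality, but costs
    O(2^n) in the worst case -- the huge search space a learned policy avoids.
    """
    attrs = sorted(item_attrs)
    for size in range(1, len(attrs) + 1):
        for removed in combinations(attrs, size):
            if score(user_weights, set(attrs) - set(removed)) < threshold:
                return removed
    return None

weights = {"brand:acme": 0.9, "color:red": 0.2, "size:m": 0.1}
item = {"brand:acme", "color:red", "size:m"}
explanation = counterfactual(weights, item, threshold=0.5)
```

The returned set ("remove the brand and the item would no longer be recommended") is exactly the fine-grained, attribute-level explanation the abstract argues is most intuitive for users.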