Semantic interpretability of latent factors for recommendation

Abstract

Model-based approaches to recommendation have proven to be very accurate. Unfortunately, because they exploit a latent space, they lose any reference to the actual semantics of the recommended items. In this extended abstract, we show how to initialize the latent factors of Factorization Machines with semantic features coming from a knowledge graph, in order to train an interpretable model. Finally, we introduce and evaluate semantic accuracy and robustness as measures of the knowledge-aware interpretability of the model.
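The core idea described above can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration (not the authors' exact scheme): each latent dimension is tied to one semantic feature from a knowledge graph, and an item's latent vector is initialized from the features the graph attaches to it, so the trained dimensions retain a human-readable meaning. The item/feature names and the `init_latent_factors` helper are invented for this example.

```python
import numpy as np

def init_latent_factors(item_features, k):
    """Initialize FM item latent vectors from knowledge-graph features.

    item_features: dict mapping item id -> set of semantic feature names
                   (e.g. properties extracted from a knowledge graph).
    k: number of latent factors; each factor is tied to one semantic
       feature, so at most k features from the vocabulary are used.
    Returns (vocab, V): vocab[j] is the meaning of latent dimension j,
    and V maps each item id to its initial k-dimensional factor vector.
    """
    vocab = sorted({f for feats in item_features.values() for f in feats})[:k]
    index = {f: j for j, f in enumerate(vocab)}
    V = {}
    for item, feats in item_features.items():
        v = np.zeros(k)
        for f in feats:
            if f in index:
                v[index[f]] = 1.0  # dimension j is "on" iff the item has feature vocab[j]
        V[item] = v
    return vocab, V

# Hypothetical knowledge-graph features for two movies.
items = {
    "m1": {"genre:Drama", "director:Nolan"},
    "m2": {"genre:Drama"},
}
vocab, V = init_latent_factors(items, k=2)
```

After training, inspecting the weight of dimension `j` tells us how much the semantic feature `vocab[j]` contributed to a recommendation, which is what makes the model interpretable.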