41 research outputs found
Time-Sensitive Collaborative Filtering Algorithm with Feature Stability
Collaborative filtering is widely used in recommender systems. However, several problems in the recommendation field still need to be solved, such as low precision and the long tail of items. In this paper, we design an algorithm called FSTS to address low precision and the long tail. We adopt stability variables and time-sensitive factors to handle user interest drift and to improve prediction accuracy. Experiments show that, compared with Item-CF, FSTS significantly improves precision, recall, coverage, and popularity. At the same time, it can mine long-tail items and alleviate the long-tail phenomenon.
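The abstract does not spell out the FSTS formulas, but the core idea of combining item-based CF with a time-sensitive factor can be sketched as follows. This is a minimal illustration, not the paper's method; the exponential decay, the half-life parameter, and all function names are our assumptions.

```python
import math

def time_weight(t_rating, t_now, half_life=30.0):
    """Exponential time-decay factor: recent ratings count more.
    half_life is in the same unit as the timestamps (e.g. days).
    This decay form is an illustrative assumption, not FSTS itself."""
    return math.exp(-math.log(2) * (t_now - t_rating) / half_life)

def predict(user_ratings, item_sim, target_item, t_now):
    """Time-weighted item-based CF prediction.
    user_ratings: {item: (rating, timestamp)}
    item_sim: {(i, j): similarity between items i and j}"""
    num = den = 0.0
    for item, (rating, ts) in user_ratings.items():
        sim = item_sim.get((target_item, item), 0.0)
        w = sim * time_weight(ts, t_now)
        num += w * rating
        den += abs(w)
    return num / den if den else 0.0

# An old 5-star rating vs a recent 1-star rating, equally similar items:
ratings = {"a": (5.0, 0.0), "b": (1.0, 90.0)}
sims = {("c", "a"): 0.8, ("c", "b"): 0.8}
print(round(predict(ratings, sims, "c", 100.0), 3))
```

With equal similarities, the prediction is pulled toward the recent rating, which is how a time-sensitive factor counteracts interest drift.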
Performance of Hyperbolic Geometry Models on Top-N Recommendation Tasks
We introduce a simple autoencoder based on hyperbolic geometry for solving
standard collaborative filtering problem. In contrast to many modern deep
learning techniques, we build our solution using only a single hidden layer.
Remarkably, even with such a minimalistic approach, we not only outperform the
Euclidean counterpart but also achieve a competitive performance with respect
to the current state-of-the-art. We additionally explore the effects of space
curvature on the quality of hyperbolic models and propose an efficient
data-driven method for estimating its optimal value.
Comment: Accepted at ACM RecSys 2020; 7 pages
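The abstract does not include the model itself, but the central quantity in hyperbolic recommenders is the geodesic distance on the Poincaré ball, whose curvature magnitude is the value the paper proposes to estimate. A minimal sketch of that distance (function and variable names are ours):

```python
import numpy as np

def poincare_dist(u, v, c=1.0):
    """Geodesic distance on the Poincare ball of curvature -c.
    Points must lie inside the ball, i.e. c * ||x||^2 < 1."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    sq = np.sum((u - v) ** 2)
    arg = 1.0 + 2.0 * c * sq / ((1.0 - c * (u @ u)) * (1.0 - c * (v @ v)))
    return np.arccosh(arg) / np.sqrt(c)

origin = np.zeros(2)
near_rim = np.array([0.95, 0.0])
# Distances blow up near the boundary, giving hyperbolic space its
# tree-like capacity; the curvature c rescales this effect.
print(poincare_dist(origin, near_rim, c=1.0))
print(poincare_dist(origin, near_rim, c=0.5))
```

Varying `c` changes how strongly distances stretch toward the boundary, which is why tuning the curvature to the data can matter for recommendation quality.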
SCIENTIFIC ARTICLES RECOMMENDATION SYSTEM BASED ON USER'S RELATEDNESS USING ITEM-BASED COLLABORATIVE FILTERING METHOD
Scientific article recommendation remains one of the challenging issues in education, including the learning process. College students experience difficulty finding articles related to their research history and research interests, which affects the duration of study and research time. This paper proposes a new solution: a search engine that collects and recommends articles related to student research topics. The system combines web scraping as an article data retrieval technique on Google Scholar with item-based collaborative filtering to recommend articles. Parameters are produced from items in a user's history, including items searched, clicked, and downloaded. The system was built as a web-based scientific article recommendation system using the Python programming language. It recommends articles based on the preferences of the user and of other users who are affiliated and interested in the same items. Validation showed that the system obtained a recommendation accuracy value of 0.516801. The RMSE error of the recommendation system is 8.62%; in other words, the accuracy of the recommendation system is 91.38%.
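The two building blocks this abstract names, item-based collaborative filtering and RMSE evaluation, can be sketched compactly. This is a generic illustration on a toy matrix, not the paper's system; cosine similarity and all names are our assumptions.

```python
import numpy as np

# Toy user-item matrix (rows: users, cols: articles; 0 = no interaction).
R = np.array([[5, 3, 0],
              [4, 0, 4],
              [1, 1, 5]], dtype=float)

def item_cosine_sim(R):
    """Cosine similarity between item columns."""
    norms = np.linalg.norm(R, axis=0)
    norms[norms == 0] = 1.0
    return (R.T @ R) / np.outer(norms, norms)

def predict(R, sim, user, item):
    """Weighted average of the user's ratings on items similar to `item`."""
    rated = np.nonzero(R[user])[0]
    w = sim[item, rated]
    total = np.abs(w).sum()
    return float(w @ R[user, rated] / total) if total else 0.0

def rmse(pairs):
    """Root-mean-square error over (predicted, actual) pairs."""
    diffs = [(p - a) ** 2 for p, a in pairs]
    return (sum(diffs) / len(diffs)) ** 0.5

sim = item_cosine_sim(R)
print(predict(R, sim, 1, 1))  # fill user 1's missing rating for article 1
```

Note that a percentage-style "RMSE error" as reported in the abstract implies a normalization by the rating scale, which the sketch leaves out.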
Enhancing explainability and scrutability of recommender systems
Our increasing reliance on complex algorithms for recommendations calls for models and methods for explainable, scrutable, and trustworthy AI. While explainability is required for understanding the relationships between model inputs and outputs, a scrutable system allows us to modify its behavior as desired. These properties help bridge the gap between our expectations and the algorithm's behavior and accordingly boost our trust in AI. Aiming to cope with information overload, recommender systems play a crucial role in filtering content (such as products, news, songs, and movies) and shaping a personalized experience for their users. Consequently, there has been a growing demand from information consumers to receive proper explanations for their personalized recommendations. These explanations aim at helping users understand why certain items are recommended to them and how their previous inputs to the system relate to the generation of such recommendations. Besides, in the event of receiving undesirable content, explanations could contain valuable information as to how the system's behavior can be modified accordingly. In this thesis, we present our contributions towards explainability and scrutability of recommender systems:
• We introduce a user-centric framework, FAIRY, for discovering and ranking post-hoc explanations for the social feeds generated by black-box platforms. These explanations reveal relationships between users' profiles and their feed items and are extracted from the local interaction graphs of users. FAIRY employs a learning-to-rank (LTR) method to score candidate explanations based on their relevance and surprisal.
• We propose a method, PRINCE, to facilitate provider-side explainability in graph-based recommender systems that use personalized PageRank at their core. PRINCE explanations are comprehensible for users, because they present subsets of the user's prior actions responsible for the received recommendations.
PRINCE operates in a counterfactual setup and builds on a polynomial-time algorithm for finding the smallest counterfactual explanations.
• We propose a human-in-the-loop framework, ELIXIR, for enhancing scrutability and subsequently the recommendation models by leveraging user feedback on explanations. ELIXIR enables recommender systems to collect user feedback on pairs of recommendations and explanations. The feedback is incorporated into the model by imposing a soft constraint for learning user-specific item representations.
We evaluate all proposed models and methods with real user studies and demonstrate their benefits at achieving explainability and scrutability in recommender systems.
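The recommender family PRINCE targets scores items by personalized PageRank, where a random walk over the interaction graph teleports back to the user node. A minimal power-iteration sketch of that score (the graph, names, and parameters are illustrative, not from the thesis):

```python
import numpy as np

def personalized_pagerank(A, source, alpha=0.85, iters=100):
    """Power iteration for personalized PageRank.
    A: row-stochastic transition matrix (n x n).
    source: index of the user node; with probability 1 - alpha
    the walk teleports back to this node."""
    n = A.shape[0]
    e = np.zeros(n)
    e[source] = 1.0
    p = e.copy()
    for _ in range(iters):
        p = alpha * (A.T @ p) + (1 - alpha) * e
    return p

# Tiny interaction graph: node 0 = user, nodes 1-2 = items.
A = np.array([[0.0, 0.5, 0.5],
              [1.0, 0.0, 0.0],
              [1.0, 0.0, 0.0]])
scores = personalized_pagerank(A, source=0)
print(scores)  # items 1 and 2 tie by symmetry
```

A counterfactual explanation in this setting asks which of the user's edges, if removed, would change the top-ranked item, which is the question PRINCE answers in polynomial time.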
Exploiting relational tag expansion for dynamic user profile in a tag-aware ranking recommender system
A tag-aware recommender system (TRS) presents the challenge of tag sparsity in a user profile. Previous work focuses on expanding similar tags and does not link the tags with corresponding resources, therefore leading to a static user profile in the recommendation. In this article, we propose a new social tag expansion model (STEM) to generate a dynamic user profile that improves recommendation performance. Instead of simply including the most relevant tags, the new model focuses on the completeness of a user profile: it expands tags by exploiting their relations and includes a sufficient set of tags to alleviate the tag sparsity problem. The novel STEM-based TRS comprises three operations: (1) tag cloud generation discovers potentially relevant tags in an application domain; (2) tag expansion finds a sufficient set of tags based on the original tags; and (3) user profile refactoring builds a dynamic user profile and determines the weights of the expanded tags in the profile. We analysed the STEM property in terms of recommendation accuracy and demonstrated its performance through extensive experiments over multiple datasets. The analysis and experimental results showed that the new STEM technique was able to correctly find a sufficient set of tags and to improve recommendation accuracy by solving the tag sparsity problem. Moreover, the technique consistently outperformed state-of-the-art tag-aware recommendation methods in these extensive experiments.
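The three STEM operations are not specified in the abstract, but the general pattern of expanding a sparse tag profile through tag relations can be sketched with a simple co-occurrence statistic. This is an illustrative stand-in, not the STEM model; the damping weight and all names are our assumptions.

```python
from collections import Counter
from itertools import combinations

def cooccurrence(resource_tags):
    """Count how often two tags annotate the same resource
    (a crude stand-in for the tag relations STEM exploits)."""
    co = Counter()
    for tags in resource_tags.values():
        for a, b in combinations(sorted(set(tags)), 2):
            co[(a, b)] += 1
            co[(b, a)] += 1
    return co

def expand_tags(user_tags, co, top_k=2, weight=0.5):
    """Return original tags (weight 1.0) plus the top co-occurring
    tags at a damped weight, yielding a denser user profile."""
    profile = {t: 1.0 for t in user_tags}
    candidates = Counter()
    for t in user_tags:
        for (a, b), n in co.items():
            if a == t and b not in profile:
                candidates[b] += n
    for tag, _ in candidates.most_common(top_k):
        profile[tag] = weight
    return profile

docs = {"d1": ["python", "ml", "pandas"],
        "d2": ["python", "ml"],
        "d3": ["python", "web"]}
co = cooccurrence(docs)
print(expand_tags(["python"], co))
```

The expanded tags carry a lower weight than the user's original tags, which mirrors the profile-refactoring step where expanded tags are weighted in the profile.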