Learning Item Trees for Probabilistic Modelling of Implicit Feedback
User preferences for items can be inferred from either explicit feedback,
such as item ratings, or implicit feedback, such as rental histories. Research
in collaborative filtering has concentrated on explicit feedback, resulting in
the development of accurate and scalable models. However, since explicit
feedback is often difficult to collect it is important to develop effective
models that take advantage of the more widely available implicit feedback. We
introduce a probabilistic approach to collaborative filtering with implicit
feedback based on modelling the user's item selection process. In the interests
of scalability, we restrict our attention to tree-structured distributions over
items and develop a principled and efficient algorithm for learning item trees
from data. We also identify a problem with a widely used protocol for
evaluating implicit feedback models and propose a way of addressing it using a
small quantity of explicit feedback data.
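The tree-structured item distribution described above can be made concrete with a small sketch: in a binary item tree, each internal node stores the probability of branching left, each leaf is one item, and an item's selection probability is the product of the branch probabilities along its root-to-leaf path. The `Node` class and path encoding below are illustrative assumptions, not the paper's data structures.

```python
import math

class Node:
    """A binary item-tree node; leaves carry an item, internal nodes a branch probability."""
    def __init__(self, left=None, right=None, p_left=0.5, item=None):
        self.left, self.right, self.p_left, self.item = left, right, p_left, item

def item_log_prob(root, path):
    """log P(item) for a root-to-leaf path given as a string of 'L'/'R' moves."""
    logp, node = 0.0, root
    for move in path:
        if move == 'L':
            logp += math.log(node.p_left)      # probability of branching left
            node = node.left
        else:
            logp += math.log(1.0 - node.p_left)  # probability of branching right
            node = node.right
    return logp

# Two items under a single internal node with p_left = 0.7:
root = Node(left=Node(item="A"), right=Node(item="B"), p_left=0.7)
print(round(math.exp(item_log_prob(root, "L")), 2))  # probability of item A: 0.7
```

Working in log space keeps the path products numerically stable for deep trees, where a product of many small branch probabilities would underflow.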
Exact and efficient top-K inference for multi-target prediction by querying separable linear relational models
Many complex multi-target prediction problems that concern large target
spaces are characterised by a need for efficient prediction strategies that
avoid the computation of predictions for all targets explicitly. Examples of
such problems emerge in several subfields of machine learning, such as
collaborative filtering, multi-label classification, dyadic prediction and
biological network inference. In this article we analyse efficient and exact
algorithms for computing the top-K predictions in the above problem settings,
using a general class of models that we refer to as separable linear relational
models. We show how to use those inference algorithms, which are modifications
of well-known information retrieval methods, in a variety of machine learning
settings. Furthermore, we study the possibility of scoring items incompletely,
while still retaining an exact top-K retrieval. Experimental results in several
application domains reveal that the so-called threshold algorithm is very
scalable, often performing many orders of magnitude more efficiently than the
naive approach.
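The threshold algorithm mentioned above can be sketched for one separable case: scores are a dot product of a non-negative user vector with item factor vectors, and one list per dimension is pre-sorted by factor value. The algorithm alternates sorted and random access and stops as soon as no unseen item can beat the current top-K; the toy data and function names are assumptions for illustration.

```python
import heapq

def top_k_threshold(user, item_factors, k):
    d = len(user)
    # one access list per dimension: (factor value, item id), sorted descending
    lists = [sorted(((f[j], i) for i, f in enumerate(item_factors)),
                    reverse=True) for j in range(d)]
    seen, heap = set(), []  # heap keeps the current top-k (score, item) pairs
    score = lambda i: sum(u * f for u, f in zip(user, item_factors[i]))
    for depth in range(len(item_factors)):
        for j in range(d):  # sorted access, then random access for full scores
            _, i = lists[j][depth]
            if i not in seen:
                seen.add(i)
                s = score(i)
                if len(heap) < k:
                    heapq.heappush(heap, (s, i))
                elif s > heap[0][0]:
                    heapq.heapreplace(heap, (s, i))
        # threshold: best possible aggregate score of any item not yet seen
        threshold = sum(user[j] * lists[j][depth][0] for j in range(d))
        if len(heap) == k and heap[0][0] >= threshold:
            break  # no unseen item can enter the top-k: exact early stop
    return [i for _, i in sorted(heap, reverse=True)]

items = [[0.9, 0.3], [0.1, 0.8], [0.6, 0.5], [0.1, 0.1]]
print(top_k_threshold([1.0, 1.0], items, 2))  # item ids ranked by score: [0, 2]
```

The early stop is what makes the method exact yet cheap: the threshold is a monotone upper bound on every unseen item's score, so stopping once the k-th best seen score reaches it cannot miss a top-K item.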
Machine Learning and Causality for Interpretable and Automated Decision Making
This abstract explores two key areas in decision science: automated and interpretable decision making. In the first part, we address challenges related to sparse user interaction data and high item turnover rates in recommender systems. We introduce a novel algorithm called Multi-View Interactive Collaborative Filtering (MV-ICTR) that integrates user-item ratings and contextual information, improving performance, particularly in cold-start scenarios. In the second part, we focus on Student Prescription Trees (SPTs), interpretable decision trees that use a black-box teacher model to predict counterfactuals based on observed covariates. We experiment with a Bayesian hierarchical binomial regression model as the teacher and employ statistical significance testing to control tree growth, ensuring interpretable decision trees. Overall, our research advances the field of decision science by addressing challenges in automated and interpretable decision making, offering solutions for improved performance and interpretability.
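The significance-tested tree growth described above can be sketched as follows: a black-box teacher scores counterfactual treatments for each covariate vector, and a candidate split is accepted only if the teacher's preferred treatments differ significantly between the two child nodes. The stand-in teacher, the two-proportion z-test, and the 1.96 cutoff are all illustrative assumptions, not the paper's Bayesian teacher or exact test.

```python
import math
import random

random.seed(1)

def teacher(x):
    # stand-in black-box teacher: prefers treatment 1 when the first covariate is high
    return 1 if x[0] > 0.5 else 0

def two_proportion_z(p1, n1, p2, n2):
    """z statistic for the difference of two proportions (pooled variance)."""
    p = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return abs(p1 - p2) / se if se > 0 else 0.0

def significant_split(X, feature, threshold, z_crit=1.96):
    """Accept a split only if the teacher's preferred treatments differ significantly."""
    left = [teacher(x) for x in X if x[feature] <= threshold]
    right = [teacher(x) for x in X if x[feature] > threshold]
    if not left or not right:
        return False
    z = two_proportion_z(sum(left) / len(left), len(left),
                         sum(right) / len(right), len(right))
    return z > z_crit  # grow the tree only where the split is significant

X = [[random.random(), random.random()] for _ in range(200)]
print(significant_split(X, 0, 0.5))  # informative feature: split is accepted
print(significant_split(X, 1, 0.5))  # noise feature: split is usually rejected
```

Gating growth on a significance test is what keeps the student tree small and interpretable: splits that merely chase teacher noise fail the test and are never added.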
DeepCF: A Unified Framework of Representation Learning and Matching Function Learning in Recommender System
In general, recommendation can be viewed as a matching problem, i.e., matching
proper items to proper users. However, due to the huge semantic gap between
users and items, it is almost impossible to directly match users and items in
their initial representation spaces. To solve this problem, many methods have
been studied, which can be generally categorized into two types, i.e.,
representation learning-based CF methods and matching function learning-based
CF methods. Representation learning-based CF methods try to map users and items
into a common representation space. In this case, the higher similarity between
a user and an item in that space implies they match better. Matching function
learning-based CF methods try to directly learn the complex matching function
that maps user-item pairs to matching scores. Although both methods are well
developed, they suffer from two fundamental flaws: the limited expressiveness
of the dot product and the weakness in capturing low-rank relations,
respectively. To this end, we propose a general framework named DeepCF, short
for Deep Collaborative Filtering, to combine the strengths of the two types of
methods and overcome such flaws. Extensive experiments on four publicly
available datasets demonstrate the effectiveness of the proposed DeepCF
framework.
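The two branches contrasted above can be sketched side by side: a representation-learning branch that matches in a common space via an element-wise product of embeddings (a generalised dot product), and a matching-function branch that feeds the concatenated embeddings through an MLP, with the two fused before a sigmoid output. The layer sizes and random weights below are illustrative assumptions, not the DeepCF paper's exact architecture.

```python
import math
import random

random.seed(0)
D = 4  # embedding size (assumption)

def rand_vec(n):
    return [random.uniform(-0.1, 0.1) for _ in range(n)]

def relu(v):
    return [max(0.0, x) for x in v]

def matvec(W, v):
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def score(u_emb, i_emb, W_ml, w_out):
    rep = [a * b for a, b in zip(u_emb, i_emb)]  # representation-learning branch
    ml = relu(matvec(W_ml, u_emb + i_emb))       # matching-function branch (MLP)
    fused = rep + ml                             # concatenate the two branches
    # sigmoid over a linear fusion of both branches gives the matching score
    return 1.0 / (1.0 + math.exp(-sum(w * x for w, x in zip(w_out, fused))))

u, i = rand_vec(D), rand_vec(D)
W_ml = [rand_vec(2 * D) for _ in range(D)]
w_out = rand_vec(2 * D)
print(0.0 < score(u, i, W_ml, w_out) < 1.0)  # sigmoid output lies strictly in (0, 1)
```

Fusing the branches lets the linear output layer weight whichever signal is more reliable per pair, which is how the framework aims to offset each branch's individual flaw.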
A computational model of focused attention meditation and its transfer to a sustained attention task