FATREC Workshop on Responsible Recommendation Proceedings
With this workshop, we sought to foster a discussion of topics that fall under the general umbrella of responsible recommendation: ethical considerations in recommendation, bias and discrimination in recommender systems, transparency and accountability, social impact of recommenders, user privacy, and other related concerns. Our goal was to encourage the community to think about how we build and study recommender systems in a socially responsible manner.
Recommendation systems are increasingly impacting people's decisions in different walks of life, including commerce, employment, dating, health, education, and governance. As the impact and scope of recommendations increase, developing systems that tackle issues of fairness, transparency, and accountability becomes important. This workshop was held in the spirit of FATML (Fairness, Accountability, and Transparency in Machine Learning), DAT (Data and Algorithmic Transparency), and similar workshops in related communities. With Responsible Recommendation, we brought that conversation to RecSys.
Exploring explanations for matrix factorization recommender systems (Position Paper)
In this paper we address the problem of finding explanations for collaborative filtering algorithms that use matrix factorization methods. We look for explanations that increase the transparency of the system. To do so, we propose two measures. First, we show a model that describes the contribution of each previous rating given by a user to the generated recommendation. Second, we measure the influence of changing each previous rating of a user on the outcome of the recommender system. We show that under the assumption that there are many more users in the system than there are items, we can efficiently generate each type of explanation by using linear approximations of the recommender system's behavior for each user, and computing partial derivatives of predicted ratings with respect to each user's provided ratings.
http://scholarworks.boisestate.edu/fatrec/2017/1/7/ (Published version)
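A minimal sketch of the second measure, under the simplifying assumption (not from the paper) that item factors are fixed and the user's factor vector is fit by ridge regression on their observed ratings. The prediction is then exactly linear in those ratings, so its partial derivative with respect to each provided rating can be read off in closed form:

```python
# Illustrative sketch, not the authors' implementation. Assumes a predictor
# r_hat(u, i) = p_u . q_i, where p_u is obtained by ridge regression on the
# user's observed ratings; r_hat is then linear in those ratings.
import numpy as np

rng = np.random.default_rng(0)
n_items, k, lam = 6, 2, 0.1
Q = rng.normal(size=(n_items, k))   # item factors, assumed fixed and known
rated = np.array([0, 1, 2])         # items this user has rated
ratings = np.array([4.0, 3.0, 5.0])

# Closed-form ridge solution: p_u = argmin ||r - Q_rated p||^2 + lam ||p||^2.
A = Q[rated].T @ Q[rated] + lam * np.eye(k)
M = np.linalg.solve(A, Q[rated].T)  # p_u = M @ ratings, linear in the ratings
p_u = M @ ratings

target = 5                          # the item whose predicted rating we explain
pred = Q[target] @ p_u
# Influence of each provided rating = partial derivative of the prediction.
influence = Q[target] @ M           # d pred / d ratings, one entry per rated item
```

Because the map from ratings to prediction is linear here, `influence` agrees exactly with a finite-difference perturbation of any single rating.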
Academic Performance and Behavioral Patterns
Identifying the factors that influence academic performance is an essential part of educational research. Previous studies have documented the importance of personality traits, class attendance, and social network structure. Because most of these analyses were based on a single behavioral aspect and/or small sample sizes, there is currently no quantification of the interplay of these factors. Here, we study the academic performance among a cohort of 538 undergraduate students forming a single, densely connected social network. Our work is based on data collected using smartphones, which the students used as their primary phones for two years. The availability of multi-channel data from a single population allows us to directly compare the explanatory power of individual and social characteristics. We find that the most informative indicators of performance are based on social ties and that network indicators result in better model performance than individual characteristics (including both personality and class attendance). We confirm earlier findings that class attendance is the most important predictor among individual characteristics. Finally, our results suggest the presence of strong homophily and/or peer effects among university students.
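The comparison of explanatory power described above can be illustrated on synthetic data (not the study's dataset). Here a hypothetical outcome depends more strongly on a network feature (mean performance of social ties) than on an individual feature (attendance), and fitting one-variable linear models recovers that ordering via R²:

```python
# Illustrative sketch with synthetic data; feature names and effect sizes
# are assumptions for demonstration, not values from the study.
import numpy as np

rng = np.random.default_rng(42)
n = 538
attendance = rng.uniform(0, 1, n)        # individual feature
peer_mean = rng.normal(0, 1, n)          # stand-in network feature (ties' mean grade)
# Synthetic homophily: grades track peers' grades more than attendance.
grade = 0.3 * attendance + 0.8 * peer_mean + rng.normal(0, 0.3, n)

def r_squared(x, y):
    """R^2 of a one-variable least-squares fit with intercept."""
    X = np.column_stack([np.ones(len(y)), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r_ind = r_squared(attendance, grade)
r_net = r_squared(peer_mean, grade)
print(f"individual R^2 = {r_ind:.3f}, network R^2 = {r_net:.3f}")
```

With these synthetic effect sizes, the network feature explains far more variance, mirroring the paper's qualitative finding.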
Explaining Predictions from Tree-based Boosting Ensembles
Understanding how "black-box" models arrive at their predictions has sparked significant interest from both within and outside the AI community. Our work focuses on doing this by generating local explanations about individual predictions for tree-based ensembles, specifically Gradient Boosting Decision Trees (GBDTs). Given a correctly predicted instance in the training set, we wish to generate a counterfactual explanation for this instance, that is, the minimal perturbation of this instance such that the prediction flips to the opposite class. Most existing methods for counterfactual explanations are (1) model-agnostic, so they do not take into account the structure of the original model, and/or (2) involve building a surrogate model on top of the original model, which is not guaranteed to represent the original model accurately. There exists a method specifically for random forests; we wish to extend this method for GBDTs. This involves accounting for (1) the sequential dependency between trees and (2) training on the negative gradients instead of the original labels.
Comment: SIGIR 2019: FACTS-IR Workshop
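A hedged sketch of the generic counterfactual idea the abstract builds on, not the paper's tree-structure-aware algorithm: take an instance, pick a training point the GBDT assigns to the opposite class, and bisect along the segment between them to find a nearby input where the prediction flips. The dataset and model settings below are placeholders:

```python
# Model-agnostic baseline for illustration only; the paper's method instead
# exploits the GBDT's internal tree structure.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=300, n_features=5, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X, y)
preds = clf.predict(X)

x = X[0]
orig = preds[0]

# Nearest training point that the model predicts as the other class.
other = X[preds != orig]
target = other[np.argmin(np.linalg.norm(other - x, axis=1))]

# Bisect toward the decision boundary: lo keeps the original prediction,
# hi keeps the flipped one, so hi converges to a nearby counterfactual.
lo, hi = x, target
for _ in range(40):
    mid = (lo + hi) / 2
    if clf.predict(mid.reshape(1, -1))[0] == orig:
        lo = mid
    else:
        hi = mid

counterfactual = hi
print("perturbation norm:", np.linalg.norm(counterfactual - x))
```

This baseline finds *a* flip, not the minimal one; making the perturbation minimal (and consistent with the sequentially dependent trees) is exactly the harder problem the paper targets.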