Time-Sensitive Collaborative Filtering Algorithm with Feature Stability
Collaborative filtering is widely used in recommender systems, but the field still faces problems such as low precision and the long tail of items. In this paper, we design an algorithm called FSTS to address low precision and the long tail. We adopt stability variables and time-sensitive factors to handle user interest drift and improve prediction accuracy. Experiments show that, compared with Item-CF, FSTS significantly improves precision, recall, coverage, and popularity. At the same time, it can mine long-tail items and alleviate the long-tail phenomenon.
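The core time-sensitivity idea in the abstract, down-weighting old interactions so that a user's recent interests dominate, can be sketched as follows. The data, the half-life parameter, and the scoring helper are illustrative assumptions, not FSTS itself; the paper's stability variables are not reproduced here.

```python
from collections import defaultdict

# Toy interaction log: (user, item, days_ago). Purely illustrative data.
events = [("u1", "a", 1), ("u1", "b", 30), ("u2", "a", 2), ("u2", "c", 3),
          ("u3", "b", 60), ("u3", "c", 5)]
half_life = 14.0  # days until an interaction's weight halves (assumed value)

def weight(days_ago):
    """Exponential time decay: recent events count more."""
    return 0.5 ** (days_ago / half_life)

# Build time-weighted user profiles: user -> {item: decayed weight}
profiles = defaultdict(dict)
for user, item, age in events:
    profiles[user][item] = profiles[user].get(item, 0.0) + weight(age)

def score(user, item):
    """Score `item` for `user` via time-weighted item co-occurrence."""
    s = 0.0
    for other, w in profiles[user].items():
        if other == item:
            continue
        # co-occurrence strength between `other` and `item` across all users
        co = sum(min(p.get(other, 0.0), p.get(item, 0.0))
                 for p in profiles.values())
        s += w * co
    return s
```

Under this decay, an interaction from a month ago contributes far less than one from yesterday, which is one simple way to model the interest drift the abstract mentions.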
Stability of matrix factorization for collaborative filtering
We study the stability, vis-à-vis adversarial noise, of matrix factorization algorithms for matrix completion. In particular, our results include: (I) we bound the gap, in terms of root mean square error, between the solution matrix of the factorization method and the ground truth; (II) we treat matrix factorization as a subspace-fitting problem and analyze the difference between the solution subspace and the ground truth; (III) we analyze the prediction error of individual users based on subspace stability. We apply these results to the problem of collaborative filtering under manipulator attack, which leads to useful insights and guidelines for collaborative filtering system design.
Comment: ICML201
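For readers unfamiliar with the setup the abstract analyzes, a minimal sketch of matrix factorization for completion follows: gradient descent on the squared error over observed entries only, with L2 regularization. The rating matrix, rank, learning rate, and regularization strength are all assumed toy values, not the paper's setting.

```python
import numpy as np

# Toy rating matrix; 0 marks a missing (unobserved) entry.
R = np.array([[5, 3, 0],
              [4, 0, 1],
              [0, 1, 5]], dtype=float)
mask = R > 0                      # indicator of observed entries
k, lr, reg = 2, 0.01, 0.1         # rank, learning rate, L2 strength (assumed)

rng = np.random.default_rng(0)
U = rng.normal(scale=0.1, size=(R.shape[0], k))   # user factors
V = rng.normal(scale=0.1, size=(R.shape[1], k))   # item factors

for _ in range(2000):
    E = mask * (R - U @ V.T)      # residual on observed entries only
    U += lr * (E @ V - reg * U)   # gradient step on user factors
    V += lr * (E.T @ U - reg * V) # gradient step on item factors

rmse = np.sqrt((E[mask] ** 2).mean())
```

The paper's point is that the solution `U @ V.T` (and the subspace spanned by `U`) can shift under adversarial perturbations of the observed entries; bounding that shift is what results (I)-(III) address.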
PrivateJobMatch: A Privacy-Oriented Deferred Multi-Match Recommender System for Stable Employment
Coordination failure reduces match quality among employers and candidates in the job market, resulting in a large number of unfilled positions and/or unstable, short-term employment. Centralized job search engines provide a platform that directly connects employers with job-seekers. However, they require users to disclose a significant amount of personal data, i.e., build a user profile, in order to provide meaningful recommendations. In this paper, we present PrivateJobMatch, a privacy-oriented deferred multi-match recommender system, which generates stable pairings while requiring users to provide only a partial ranking of their preferences. PrivateJobMatch explores a series of adaptations of the game-theoretic Gale-Shapley deferred-acceptance algorithm that combine the flexibility of decentralized markets with the intelligence of centralized matching. We identify the shortcomings of the original algorithm when applied to a job market and propose novel solutions that rely on machine-learning techniques. Experimental results on real and synthetic data confirm the benefits of the proposed algorithms across several quality measures. Over the past year, we have implemented a PrivateJobMatch prototype and deployed it in an active job-market economy. Using the gathered real-user preference data, we find that the match recommendations are superior to those of a typical decentralized job market, while requiring only a partial ranking of the user preferences.
Comment: 45 pages, 28 figures, RecSys 201
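The classical Gale-Shapley deferred-acceptance procedure that PrivateJobMatch adapts can be sketched as below. Variable names and the toy preference lists are illustrative; the paper's multi-match and privacy adaptations are not shown.

```python
def deferred_acceptance(proposer_prefs, reviewer_prefs):
    """Return a stable matching {proposer: reviewer}.

    proposer_prefs: each proposer's ordered list of reviewers (best first).
    reviewer_prefs: each reviewer's ordered list of proposers (best first).
    """
    # rank[r][p] = position of proposer p in reviewer r's list (lower = better)
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in reviewer_prefs.items()}
    next_choice = {p: 0 for p in proposer_prefs}  # next reviewer index to try
    engaged = {}                                  # reviewer -> tentative proposer
    free = list(proposer_prefs)
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]     # p's best not-yet-tried option
        next_choice[p] += 1
        if r not in engaged:
            engaged[r] = p                        # reviewer tentatively accepts
        elif rank[r][p] < rank[r][engaged[r]]:
            free.append(engaged[r])               # reviewer trades up; old match freed
            engaged[r] = p
        else:
            free.append(p)                        # proposal rejected; p tries again
    return {p: r for r, p in engaged.items()}

# Hypothetical candidates proposing to employers.
candidates = {"ana": ["acme", "byte"], "bo": ["acme", "byte"]}
employers  = {"acme": ["bo", "ana"], "byte": ["ana", "bo"]}
matching = deferred_acceptance(candidates, employers)
```

Acceptances are "deferred" because a reviewer only holds a proposal tentatively and may later trade up, which is what guarantees the final matching is stable (no candidate-employer pair prefers each other over their assigned matches).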
Presumptuous aim attribution, conformity, and the ethics of artificial social cognition
Imagine you are casually browsing an online bookstore, looking for an interesting novel. Suppose the store predicts you will want to buy a particular novel: the one most chosen by people of your same age, gender, location, and occupational status. The store recommends the book, it appeals to you, and so you choose it. Central to this scenario is an automated prediction of what you desire. This article raises moral concerns about such predictions. More generally, this article examines the ethics of artificial social cognition: the ethical dimensions of the attribution of mental states to humans by artificial systems. The focus is on presumptuous aim attributions, which are defined here as aim attributions based crucially on the premise that the person in question will have aims like superficially similar people. Several everyday examples demonstrate that this sort of presumptuousness is already a familiar moral concern. The scope of this moral concern is extended by new technologies. In particular, recommender systems based on collaborative filtering are now commonly used to automatically recommend products and information to humans. Examination of these systems demonstrates that they naturally attribute aims presumptuously. This article presents two reservations about the widespread adoption of such systems. First, the severity of our antecedent moral concern about presumptuousness increases when aim attribution processes are automated and accelerated. Second, a foreseeable consequence of reliance on these systems is an unwarranted inducement of interpersonal conformity.