
    Position Statement | Explanations in Recommender Systems

    Introduction: Automated collaborative filtering (ACF) systems predict a person's affinity for unexperienced items based on the past experiences of that person and the past and current experiences of a community of people. ACF systems have been successful in research, with projects such as GroupLens [7], Ringo [10], and Video Recommender [4] gaining large followings on the Internet. Commercially, some of the highest-profile web sites, such as Amazon.com, CDNow.com, and MovieFinder.com, have made successful use of ACF technology. While automated collaborative filtering systems have proven to be generally accurate, their failure rates still remain unacceptable for certain domains or individuals. While a user may be willing to risk purchasing a music CD based on the recommendation of an ACF system, he will probably not risk choosing a honeymoon vacation spot based on such a recommendation. However, there is no reason why the higher-risk domains should not benefit from ACF technology. There are sev
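The ACF idea described above can be sketched concretely. The following is a minimal illustration of neighborhood-based collaborative filtering in the GroupLens style, not the implementation of any of the cited systems; all user names, items, and ratings are made up for illustration.

```python
# Minimal user-based collaborative filtering sketch: predict a user's
# rating for an unseen item from the ratings of similar users.
from math import sqrt

ratings = {  # user -> {item: rating on a 1-5 scale} (illustrative data)
    "alice": {"m1": 5, "m2": 3, "m3": 4},
    "bob":   {"m1": 4, "m2": 2, "m3": 5, "m4": 4},
    "carol": {"m1": 1, "m2": 5, "m4": 2},
}

def mean(user):
    vals = ratings[user].values()
    return sum(vals) / len(vals)

def pearson(u, v):
    """Pearson correlation over the items both users rated."""
    common = set(ratings[u]) & set(ratings[v])
    if len(common) < 2:
        return 0.0
    mu = sum(ratings[u][i] for i in common) / len(common)
    mv = sum(ratings[v][i] for i in common) / len(common)
    num = sum((ratings[u][i] - mu) * (ratings[v][i] - mv) for i in common)
    den = (sqrt(sum((ratings[u][i] - mu) ** 2 for i in common)) *
           sqrt(sum((ratings[v][i] - mv) ** 2 for i in common)))
    return num / den if den else 0.0

def predict(user, item):
    """Predict as the user's mean rating plus a similarity-weighted
    average of neighbors' deviations from their own means."""
    neighbors = [(pearson(user, v), ratings[v][item] - mean(v))
                 for v in ratings if v != user and item in ratings[v]]
    wsum = sum(abs(w) for w, _ in neighbors)
    if wsum == 0:
        return mean(user)
    return mean(user) + sum(w * d for w, d in neighbors) / wsum
```

For example, `predict("alice", "m4")` combines bob's and carol's ratings of `m4`, weighted by how well their past ratings correlate with alice's.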


    Collaborative Filtering (CF) systems have been researched for over a decade as a tool to deal with information overload. At the heart of these systems are the algorithms which generate the predictions and recommendations. In this article we empirically demonstrate that two of the most acclaimed CF recommendation algorithms have flaws that result in a dramatically unacceptable user experience. In response, we introduce a new Belief Distribution Algorithm that overcomes these flaws and provides substantially richer user modeling. The Belief Distribution Algorithm retains the qualities of nearest-neighbor algorithms which have performed well in the past, yet produces predictions of belief distributions across rating values rather than a point rating value. In addition, we illustrate how the exclusive use of the mean absolute error metric has concealed these flaws for so long, and we propose the use of a modified Precision metric for more accurately evaluating the user experience.
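The output shape the abstract describes, a distribution of belief across rating values rather than a single point rating, can be illustrated with a toy sketch. This is not the article's actual Belief Distribution Algorithm; it merely shows how similarity-weighted neighbor ratings could vote into a distribution over a discrete rating scale, with all weights and ratings invented for the example.

```python
# Toy illustration: neighbors' ratings, weighted by similarity, produce a
# belief distribution over the rating scale instead of one point estimate.
RATING_SCALE = (1, 2, 3, 4, 5)

def belief_distribution(neighbor_votes):
    """neighbor_votes: list of (similarity_weight, rating) pairs.
    Returns {rating: belief}, with beliefs summing to 1."""
    mass = {r: 0.0 for r in RATING_SCALE}
    for weight, rating in neighbor_votes:
        mass[rating] += max(weight, 0.0)  # drop negatively correlated neighbors
    total = sum(mass.values())
    if total == 0:
        # no usable evidence: fall back to a uniform distribution
        return {r: 1.0 / len(RATING_SCALE) for r in RATING_SCALE}
    return {r: m / total for r, m in mass.items()}

dist = belief_distribution([(0.9, 5), (0.6, 4), (0.3, 5), (0.2, 1)])
# A point prediction can still be recovered as the expected rating:
expected = sum(r * p for r, p in dist.items())
```

The distribution exposes disagreement among neighbors (e.g. mass on both 5 and 1) that a single averaged rating would hide, which is the kind of richer user modeling the abstract argues for.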

    Click data as implicit relevance feedback

    Abstract: Search sessions consist of a person presenting a query to a search engine, followed by that person examining the search results, selecting some of those search results for further review, possibly following some series of hyperlinks, and perhaps backtracking to previously viewed pages in the session. The series of pages selected for viewing in a search session, sometimes called the click data, is intuitively a source of relevance feedback information to the search engine. We are interested in how that relevance feedback can be used to improve the search results quality for all users, not just the current user. For example, the search engine could learn which documents are frequently visited when certain search queries are given. In this article, we address three issues related to using click data as implicit relevance feedback: (1) How click data beyond the search results page might be more reliable than just the clicks from the search results page; (2) Whether we can further subselect from this click data to get even more reliable relevance feedback; and (3) How the reliability of click data for relevance feedback changes when the goal becomes finding one document for the user that completely meets their information needs (if possible). We refer to these documents as the ones that are strictly relevant to the query. Our conclusions are based on empirical data from a live website with manual assessment of relevance. We found that considering all of the click data in a search session as relevance feedback has the potential to increase both precision and recall of the feedback data. We further found that, when the goal is identifying strictly relevant documents, it could be useful to focus on last visited documents rather than all documents visited in a search session.
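The two aggregation policies the abstract contrasts, crediting every page visited in a session versus only the last page visited, can be sketched as follows. The session-log format and example data here are assumptions for illustration, not the article's actual dataset.

```python
# Sketch: mining click data as implicit relevance feedback.
# Policy A credits every (query, page) visit in a session; policy B credits
# only the last page visited, a rough proxy for the "strictly relevant"
# document that ended the search.
from collections import Counter

# Each session: (query, [pages visited, in order]) -- illustrative data.
sessions = [
    ("python csv", ["docs/csv", "blog/csv-tips", "docs/csv"]),
    ("python csv", ["forum/q1", "docs/csv"]),
    ("unicode",    ["docs/unicode"]),
]

def feedback_all_visits(sessions):
    """Credit every (query, page) visit in each session."""
    counts = Counter()
    for query, pages in sessions:
        for page in pages:
            counts[(query, page)] += 1
    return counts

def feedback_last_visit(sessions):
    """Credit only the final page of each session."""
    counts = Counter()
    for query, pages in sessions:
        if pages:
            counts[(query, pages[-1])] += 1
    return counts
```

Comparing the two counters shows the trade-off: counting all visits yields more feedback data overall, while last-visit counting concentrates credit on the page where the session ended.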