SERF: integrating human recommendations with search
Today's university library has many digitally accessible resources, both indexes to content and considerable original content. Using off-the-shelf search technology provides a single point of access into library resources, but we have found that such full-text indexing technology is not entirely satisfactory for library searching.
In response to this, we report initial usage results from a prototype of an entirely new type of search engine - The System for Electronic Recommendation Filtering (SERF) - that we have designed and deployed for the Oregon State University (OSU) Libraries. SERF encourages users to enter longer and more informative queries, and collects ratings from users as to whether search results meet their information need or not. These ratings are used to make recommendations to later users with similar needs. Over time, SERF learns from the users what documents are valuable for what information needs.
In this paper, we focus on understanding whether such recommendations can increase other users' search efficiency and effectiveness in library website searching.
Based on examination of three months of usage as an alternative search interface available to all users of the Oregon State University Libraries website (http://osulibrary.oregonstate.edu/), we found strong evidence that recommendations backed by human evaluation can increase both the efficiency and the effectiveness of the library website search process. Users who received recommendations needed to examine fewer results, and recommended documents were rated much higher than documents returned by a traditional search engine.
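The SERF mechanism described above can be sketched as follows. This is a minimal illustration, not the deployed system: the log format, the word-overlap similarity, and the names `jaccard` and `recommend` are all my own assumptions. The idea is simply that documents rated as meeting the need of a sufficiently similar past query are surfaced to the new user.

```python
def jaccard(a, b):
    """Word-overlap similarity between two query strings."""
    a, b = set(a.lower().split()), set(b.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(query, rating_log, threshold=0.5):
    """rating_log: list of (past_query, doc_id, met_need) tuples.

    Counts positive ratings from sufficiently similar past queries
    and returns doc ids ordered by that count.
    """
    scores = {}
    for past_query, doc_id, met_need in rating_log:
        if met_need and jaccard(query, past_query) >= threshold:
            scores[doc_id] = scores.get(doc_id, 0) + 1
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical rating log: two users confirmed doc_ill met their need.
log = [
    ("interlibrary loan policy", "doc_ill", True),
    ("interlibrary loan request form", "doc_ill", True),
    ("campus map", "doc_map", False),
]
print(recommend("interlibrary loan", log))  # ['doc_ill']
```

Longer, more informative queries help precisely because overlap-style similarity between needs becomes more discriminating as queries grow.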
Position Statement | Explanations in Recommender Systems
Automated collaborative filtering (ACF) systems predict a person's affinity for unexperienced items based on the past experiences of that person and the past and current experiences of a community of people. ACF systems have been successful in research, with projects such as GroupLens[7], Ringo[10], and Video Recommender[4] gaining large followings on the Internet. Commercially, some of the highest profile web sites like Amazon.com, CDNow.com, and MovieFinder.com have made successful use of ACF technology. While automated collaborative filtering systems have proven to be generally accurate, their failure rates still remain unacceptable for certain domains or individuals. While a user may be willing to risk purchasing a music CD based on the recommendation of an ACF system, he will probably not risk choosing a honeymoon vacation spot based on such a recommendation. However, there is no reason why the higher-risk domains should not benefit from ACF technology. There are sev
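The ACF prediction step the abstract refers to is conventionally a nearest-neighbor scheme: a similarity-weighted average of community members' ratings. A minimal sketch under that assumption (the ratings, names, and cosine choice are illustrative, not from the paper):

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity over the items both users have rated."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    num = sum(u[i] * v[i] for i in common)
    den = sqrt(sum(u[i] ** 2 for i in common)) * sqrt(sum(v[i] ** 2 for i in common))
    return num / den if den else 0.0

def predict(target, item, community):
    """Predict target's rating of item as a similarity-weighted
    average of community ratings for that item."""
    num = den = 0.0
    for other in community:
        if item in other:
            s = cosine(target, other)
            num += s * other[item]
            den += s
    return num / den if den else None

# Hypothetical toy data: alice resembles the first neighbor far more.
alice = {"film_a": 5, "film_b": 1}
community = [
    {"film_a": 5, "film_b": 1, "film_c": 4},
    {"film_a": 1, "film_b": 5, "film_c": 2},
]
print(predict(alice, "film_c", community))  # closer to 4 than to 2
```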
Collaborative Filtering (CF) systems have been researched for over a decade as a tool to deal with information overload. At the heart of these systems are the algorithms which generate the predictions and recommendations. In this article we empirically demonstrate that two of the most acclaimed CF recommendation algorithms have flaws that result in a dramatically unacceptable user experience. In response, we introduce a new Belief Distribution Algorithm that overcomes these flaws and provides substantially richer user modeling. The Belief Distribution Algorithm retains the qualities of nearest-neighbor algorithms which have performed well in the past, yet produces predictions of belief distributions across rating values rather than a point rating value. In addition, we illustrate how the exclusive use of the mean absolute error metric has concealed these flaws for so long, and we propose the use of a modified Precision metric for more accurately evaluating the user experience.
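The shift from a point rating to a belief distribution can be illustrated with a deliberately simplified sketch (my own construction, not the paper's algorithm): each neighbor's rating of the item casts a similarity-weighted vote for that rating value, and the votes are normalized into a probability distribution over the rating scale.

```python
def belief_distribution(neighbors, scale=(1, 2, 3, 4, 5)):
    """neighbors: list of (similarity, rating) pairs for one item.

    Returns a dict mapping each rating value to the normalized,
    similarity-weighted share of votes it received.
    """
    votes = {r: 0.0 for r in scale}
    for sim, rating in neighbors:
        votes[rating] += max(sim, 0.0)  # ignore negative similarities
    total = sum(votes.values())
    return {r: v / total for r, v in votes.items()} if total else votes

# Hypothetical neighbors: two strongly similar users rated 5, one weakly
# similar user rated 1.
dist = belief_distribution([(0.9, 5), (0.8, 5), (0.3, 1)])
```

The point here is informational: a similarity-weighted mean would report a single number near 4, while the distribution exposes that the community's opinion is actually split between 5 and 1, which is exactly the kind of structure a point prediction conceals.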
Click data as implicit relevance feedback
Abstract Search sessions consist of a person presenting a query to a search engine, followed by that person examining the search results, selecting some of those search results for further review, possibly following some series of hyperlinks, and perhaps backtracking to previously viewed pages in the session. The series of pages selected for viewing in a search session, sometimes called the click data, is intuitively a source of relevance feedback information to the search engine. We are interested in how that relevance feedback can be used to improve the search results quality for all users, not just the current user. For example, the search engine could learn which documents are frequently visited when certain search queries are given. In this article, we address three issues related to using click data as implicit relevance feedback: (1) How click data beyond the search results page might be more reliable than just the clicks from the search results page; (2) Whether we can further subselect from this click data to get even more reliable relevance feedback; and (3) How the reliability of click data for relevance feedback changes when the goal becomes finding one document for the user that completely meets their information needs (if possible). We refer to these documents as the ones that are strictly relevant to the query. Our conclusions are based on empirical data from a live website with manual assessment of relevance. We found that considering all of the click data in a search session as relevance feedback has the potential to increase both precision and recall of the feedback data. We further found that, when the goal is identifying strictly relevant documents, it could be useful to focus on last visited documents rather than all documents visited in a search session.
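The last-visited heuristic in the final finding can be sketched in a few lines. The session log format and names below are hypothetical; the idea is that when hunting for strictly relevant documents, only the final page a user settled on in each session is counted as positive feedback for the query.

```python
def last_visited_feedback(sessions):
    """sessions: list of (query, [visited_doc_ids in order]) pairs.

    Returns {query: {doc_id: count}}, counting only the last document
    visited in each session as implicit positive feedback.
    """
    feedback = {}
    for query, visited in sessions:
        if not visited:
            continue  # session with no clicks yields no feedback
        per_query = feedback.setdefault(query, {})
        last = visited[-1]
        per_query[last] = per_query.get(last, 0) + 1
    return feedback

# Hypothetical session log: both sessions for the same query end on d7,
# even though they wandered through different intermediate pages.
sessions = [
    ("course reserves", ["d1", "d3", "d7"]),
    ("course reserves", ["d2", "d7"]),
    ("hours", []),
]
print(last_visited_feedback(sessions))  # {'course reserves': {'d7': 2}}
```

The rationale mirrors the article's observation: intermediate clicks include abandoned detours, while the page the session ends on is a stronger signal that the need was fully met there.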