10 research outputs found
Recommended from our members
Collaborative filtering for digital libraries
Can collaborative filtering be successfully applied to digital
libraries in a manner that improves the effectiveness of the
library? Collaborative filtering systems remove the limitation of
traditional content-based search interfaces by using individuals to
evaluate and recommend information. We introduce an approach
where a digital library user specifies their need in the form of a
question, and is provided with recommendations of documents
based on ratings by other users with similar questions. Using a
testbed of the Tsunami Digital Library, we found evidence that
suggests that collaborative filtering may decrease the number of
search queries while improving users' overall perception of the
system. We discuss the challenges of designing a collaborative
filtering system for digital libraries and then present our
preliminary experimental results.
Keywords: user studies, digital libraries, tsunamis, natural hazards, collaborative filtering
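The abstract above describes recommending documents based on ratings by users with similar questions, but does not give the algorithm. The following is a minimal illustrative sketch, not the authors' implementation: it matches the new question to past users' questions with bag-of-words cosine similarity, then scores documents by the similarity-weighted ratings of the most similar users. All names (`recommend`, `past_questions`, `ratings`) are hypothetical.

```python
from collections import Counter
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend(question, past_questions, ratings, k=3):
    """Recommend documents rated highly by users who asked similar questions.

    past_questions: {user: question string}
    ratings: {user: {doc: rating}}
    """
    q = Counter(question.lower().split())
    sims = {u: cosine(q, Counter(text.lower().split()))
            for u, text in past_questions.items()}
    # Keep the k users whose questions are most similar to the new one.
    neighbors = sorted(sims, key=sims.get, reverse=True)[:k]
    # Score each document by similarity-weighted ratings from those users.
    scores = Counter()
    for u in neighbors:
        for doc, r in ratings[u].items():
            scores[doc] += sims[u] * r
    return [doc for doc, _ in scores.most_common()]
```

A real system would also need to handle the cold-start case the abstract implies (a question unlike any asked before), which this sketch leaves out.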
Browsing for information on the web and in the file system
Browsing is one of the methods used for finding and refinding information on the web or in the local file system, and there are opportunities to avoid it, particularly when that information is revisited frequently. We present empirical results from a field study contrasting patterns of browsing to local and web information, and we quantify the cost that this navigation method incurs. In addition, we provide an improved method for defining revisit behavior and report on the level of revisits during our study. Our findings have implications for developing solutions that reduce user effort for finding and refinding information.
Keywords: navigation, finding, Information Search and Retrieval, Information Storage and Retrieval, Information interfaces and presentation, refinding, files, web pages, Browsing
Toward harnessing user feedback for machine learning
There has been little research into how end users might be able to communicate advice to machine learning systems. If this resource--the users themselves--could somehow work hand-in-hand with machine learning systems, the accuracy of learning systems could be improved and the users' understanding and trust of the system could improve as well. We conducted a think-aloud study to see how willing users were to provide feedback and to understand what kinds of feedback users could give. Users were shown explanations of machine learning predictions and asked to provide feedback to improve the predictions. We found that users had no difficulty providing generous amounts of feedback. The kinds of feedback ranged from suggestions for reweighting of features to proposals for new features, feature combinations, relational features, and wholesale changes to the learning algorithm. The results show that user feedback has the potential to significantly improve machine learning systems, but that learning algorithms need to be extended in several ways to be able to assimilate this feedback.
Author Keywords: machine learning, explanations, user feedback for learning
Recovery from interruptions: knowledge workers' strategies, failures and envisioned solutions
This paper presents qualitative results from interviews with knowledge workers about their recovery strategies after interruptions. Special focus is given to when these strategies fail due to the nature of the interruption and existing computer support. Potential solutions offered by participants to overcome some of these problems are presented. These findings will benefit researchers and designers in the area of task-centric applications, especially in the area of support for recovery from interruptions.
Keywords: recovery, interruptions, interviews, multi-tasking
Getting to the information you already have
Knowledge workers need to find information, but even when it is stored on their local computer systems, finding it can be costly. Many researchers are working on solutions to reduce these costs, but there has been little research into exactly what these costs are, and what the ties are between these costs and users' choices among ways to access their local information. This paper provides a methodology for investigating such issues, and reports empirical results on ways of accessing local, task-relevant resources (e.g., document files), their associated costs, and users' sensitivities to certain kinds of costs. Our results fill in gaps in what is known about the problem, thereby helping to inform research on solutions to the problem.
Keywords: user costs, accessing resources, Information storage and retrieval, finding information, opening files, Information interfaces and presentation, User interfaces
Interacting meaningfully with machine learning systems: three experiments
Although machine learning is becoming commonly used in today's software, there has been little research into how end users might interact with machine learning systems, beyond communicating simple "right/wrong" judgments. If the users themselves could somehow work hand-in-hand with machine learning systems, the accuracy of learning systems could be improved and the users' understanding and trust of the system could improve as well. We conducted three experiments to begin to understand the potential for rich interactions between users and machine learning systems. The first experiment was a think-aloud study, aiming to see how willing users were to interact with and about machine learning reasoning, and to help us understand what kinds of feedback users might give to machine learning systems. Specifically, users were shown explanations of machine learning predictions and asked to provide feedback to improve the predictions. The results were that users' feedback was rich, complex, and widely varied, ranging from suggestions for reweighting of features to proposals for new features, feature combinations, relational features, and wholesale changes to the learning algorithm. We then investigated the viability of introducing such feedback into machine learning systems: specifically, how to incorporate some of these types of user feedback into machine learning systems, and the impact of doing so on the accuracy of the system. Taken together, the results of our experiments show that supporting rich interactions between users and machine learning systems is feasible for both user and machine. This shows the potential of rich human-computer collaboration via on-the-spot interactions as a promising direction for machine learning systems to work more intelligently, hand-in-hand with the user.
Understanding and Improving Automated . . .
Automated collaborative filtering (ACF) is a recent software technology that provides personalized recommendation and filtering independent of the type of content. In an ACF system, users indicate their preferences by rating their level of interest in items that the system presents. The ACF system uses the ratings information to match together users with similar interests (who are known as neighbors). Finally, the ACF system can predict a user's rating for an unseen item by examining his neighbors' ratings for that item. This dissertation presents a broad set of results with the goal of improving the effectiveness and understanding of ACF systems. The results cover four specific challenges: understanding and standardizing evaluation of ACF systems, improving the accuracy of ACF systems, designing and utilizing effective explanations for ACF predictions, and improving ACF to support focused ephemeral recommendations. To address these challenges, a combination of offline analysis and user testing is used. All of the evaluation metrics that have been proposed for ACF are examined.
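The neighbor-based prediction step this abstract describes (rate items, match users with similar ratings, predict from neighbors' ratings) can be sketched as follows. This is the textbook user-based formulation with Pearson-correlation weights and mean-centered ratings, offered as an illustration rather than the exact variant studied in the dissertation; the names `pearson`, `predict`, and `ratings` are hypothetical.

```python
from math import sqrt

def pearson(ra, rb):
    """Pearson correlation over the items two users have both rated."""
    common = set(ra) & set(rb)
    if len(common) < 2:
        return 0.0
    ma = sum(ra[i] for i in common) / len(common)
    mb = sum(rb[i] for i in common) / len(common)
    num = sum((ra[i] - ma) * (rb[i] - mb) for i in common)
    da = sqrt(sum((ra[i] - ma) ** 2 for i in common))
    db = sqrt(sum((rb[i] - mb) ** 2 for i in common))
    return num / (da * db) if da and db else 0.0

def predict(user, item, ratings):
    """Predict `user`'s rating for an unseen `item` from neighbors' ratings.

    ratings: {user: {item: rating}}
    Each neighbor's rating is mean-centered, weighted by similarity,
    and added back onto the target user's mean rating.
    """
    mean_u = sum(ratings[user].values()) / len(ratings[user])
    num = den = 0.0
    for other, r in ratings.items():
        if other == user or item not in r:
            continue
        w = pearson(ratings[user], r)
        mean_o = sum(r.values()) / len(r)
        num += w * (r[item] - mean_o)
        den += abs(w)
    return mean_u + num / den if den else mean_u
```

Mean-centering matters because different users use the rating scale differently; the prediction expresses a neighbor's rating as a deviation from that neighbor's own average before transferring it.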
Supporting knowledge workers in practice: how do they understand and use work units?
If basic assumptions about how knowledge workers conceptualize and use work units are wrong, then any solutions resting on those assumptions are unlikely to succeed: instead of decreasing costs, they will increase them. This paper reports on how knowledge workers understand, use, and switch between units of work. We furthermore identify where current computing support causes problems and discuss implications for designing intelligent user interfaces.
Keywords: work units, knowledge workers