A library or just another information resource? A case study of users' mental models of traditional and digital libraries
A user's understanding of the libraries they work in, and hence of what they can do in those libraries, is encapsulated in their "mental models" of those libraries. In this article, we present a focused case study of users' mental models of traditional and digital libraries based on observations and interviews with eight participants. It was found that a poor understanding of access restrictions led to risk-averse behavior, whereas a poor understanding of search algorithms and relevance ranking resulted in trial-and-error behavior. This highlights the importance of rich feedback in helping users to construct useful mental models. Although the use of concrete analogies for digital libraries was not widespread, participants used their knowledge of Internet search engines to infer how searching might work in digital libraries. Indeed, most participants did not clearly distinguish between different kinds of digital resource, viewing the electronic library catalogue, abstracting services, digital libraries, and Internet search engines as variants on a theme.
The PRET A Rapporter framework: evaluating digital libraries from the perspective of information work
The strongest tradition of IR systems evaluation has focused on system effectiveness; more recently, there has been a growing interest in evaluation of Interactive IR systems, balancing system and user-oriented evaluation criteria. In this paper we shift the focus to considering how IR systems, and particularly digital libraries, can be evaluated to assess (and improve) their fit with users' broader work activities. Taking this focus, we answer a different set of evaluation questions that reveal more about the design of interfaces, user-system interactions and how systems may be deployed in the information working context. The planning and conduct of such evaluation studies share some features with the established methods for conducting IR evaluation studies, but come with a shift in emphasis; for example, a greater range of ethical considerations may be pertinent. We present the PRET A Rapporter framework for structuring user-centred evaluation studies and illustrate its application to three evaluation studies of digital library systems.
A Snapshot of Reading, Searching, and Browsing Preferences of Tertiary Students
Unanswered questions remain regarding how to design search result pages in library catalogues that offer effective library seeking experiences for users, especially those designed for small screen mobile devices. This paper reports a snapshot interview of the user habits and preferences of tertiary library patrons during book searching and browsing and provides recommendations for library catalogue design and further research.