    A library or just another information resource? A case study of users' mental models of traditional and digital libraries

    A user's understanding of the libraries they work in, and hence of what they can do in those libraries, is encapsulated in their “mental models” of those libraries. In this article, we present a focused case study of users' mental models of traditional and digital libraries based on observations and interviews with eight participants. It was found that a poor understanding of access restrictions led to risk-averse behavior, whereas a poor understanding of search algorithms and relevance ranking resulted in trial-and-error behavior. This highlights the importance of rich feedback in helping users to construct useful mental models. Although the use of concrete analogies for digital libraries was not widespread, participants used their knowledge of Internet search engines to infer how searching might work in digital libraries. Indeed, most participants did not clearly distinguish between different kinds of digital resource, viewing the electronic library catalogue, abstracting services, digital libraries, and Internet search engines as variants on a theme.

    The PRET A Rapporter framework: Evaluating digital libraries from the perspective of information work

    The strongest tradition of IR systems evaluation has focused on system effectiveness; more recently, there has been a growing interest in evaluation of Interactive IR systems, balancing system- and user-oriented evaluation criteria. In this paper we shift the focus to considering how IR systems, and particularly digital libraries, can be evaluated to assess (and improve) their fit with users’ broader work activities. Taking this focus, we answer a different set of evaluation questions that reveal more about the design of interfaces, user–system interactions, and how systems may be deployed in the information working context. The planning and conduct of such evaluation studies share some features with the established methods for conducting IR evaluation studies, but come with a shift in emphasis; for example, a greater range of ethical considerations may be pertinent. We present the PRET A Rapporter framework for structuring user-centred evaluation studies and illustrate its application to three evaluation studies of digital library systems.
