
    Patterns of interactions: user behaviour in response to search results

    This paper presents patterns of user interaction when working with digital libraries. It focuses on the strategies users develop and apply over time to achieve their goals. Results show that users choose different patterns of interaction depending on their evaluation of results, particularly the number of results returned from a search. The study also indicates how the user interface could better support users in developing different search strategies.

    From physical to digital: A case study of computer scientists' behaviour in physical libraries

    There has been substantial research on various aspects of people's usage of physical libraries, but relatively little on their interaction with individual library artefacts, that is, books, journals, and papers. We have studied people's behaviour when working in physical libraries, focusing particularly on how they interact with these artefacts, how they evaluate them, and how they interact with librarians. This study provides a better understanding of how people interact with paper information, from which we can draw implications for the requirements and design of digital libraries, while recognising that the term 'library' is a metaphor when applied to electronic document collections. In particular, improved communication with other library users and with librarians could give more rapid access to relevant information and support services, and structuring information presentation so that users can make rapid assessments of its relevance would improve the efficiency of many information searches. © Springer-Verlag 2004

    Usability evaluation of digital libraries: a tutorial

    This one-day tutorial is an introduction to usability evaluation for digital libraries. In particular, we will introduce claims analysis. This approach focuses on the designers' motivations and reasons for making particular design decisions and examines their effect on the user's interaction with the system. The general approach, as presented by Carroll and Rosson (1992), has been tailored specifically to the design of digital libraries, which are notoriously difficult to design well in terms of their eventual usability. In this tutorial, we will present an overview of usability issues and techniques for digital libraries, together with a more detailed account of claims analysis, including two supporting techniques: simple cognitive analysis based on Norman's 'action cycle', and scenarios and personas. Through a graduated series of worked examples, participants will gain hands-on experience of applying this approach to developing more usable digital libraries. The tutorial assumes no prior knowledge of usability evaluation and is aimed at all those involved in the development and deployment of digital libraries.

    Users' trust in information resources in the Web environment: a status report

    This study has three aims: to provide an overview of the ways in which trust is either assessed or asserted in relation to the use and provision of resources in the Web environment for research and learning; to assess which solutions might be worth further investigation and whether establishing ways to assert trust in academic information resources could assist the development of information literacy; and to help increase understanding of how perceptions of trust influence the behaviour of information users.

    (E-book) Patron Driven Acquisitions (PDA): An Annotated Bibliography

    Patron Driven Acquisitions (PDA), also known as Demand Driven Acquisitions (DDA) and Purchase on Demand (POD), has been used by libraries since the early 1990s. PDA allows libraries to acquire items based on the immediate needs of their patrons, often without library intervention. With the arrival of e-books in the late 1990s, libraries soon began including them in their PDA workflows. PDA is controversial for several reasons, and PDA of e-books adds further issues to the debate. This bibliography covers PDA and the issues academic libraries face when devising a PDA program. The articles outline the benefits and problems of print and e-book PDA and the debate they elicit, and they document libraries' responses to these problems. Only peer-reviewed articles that express current thought on the subject (as of this writing) have been included.

    Human evaluation of Kea, an automatic keyphrasing system.

    This paper describes an evaluation of the Kea automatic keyphrase extraction algorithm. Tools that automatically identify keyphrases are desirable because document keyphrases have numerous applications in digital library systems but are costly and time consuming to assign manually. Keyphrase extraction algorithms are usually evaluated by comparison to author-specified keywords, but this methodology has several well-known shortcomings. The results presented in this paper are based on subjective evaluations of the quality and appropriateness of keyphrases by human assessors, and make a number of contributions. First, they validate previous evaluations of Kea that rely on author keywords. Second, they show that Kea's performance is comparable to that of similar systems that have been evaluated by human assessors. Finally, they justify the use of author keyphrases as a performance metric by showing that authors generally choose good keywords.
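
    The abstract contrasts human judgements with the usual practice of scoring extracted keyphrases against author-assigned keywords. As a rough illustration of that baseline comparison (a minimal sketch, not Kea's own code; the function names and example phrases are hypothetical):

    # Illustrative sketch only, not Kea: score extracted keyphrases against
    # author-assigned keyphrases by exact match after simple normalisation.
    def normalise(phrase: str) -> str:
        """Lowercase and collapse whitespace so near-identical phrases match."""
        return " ".join(phrase.lower().split())

    def precision_recall(extracted: list[str], author: list[str]) -> tuple[float, float]:
        """Exact-match precision and recall of extracted vs. author keyphrases."""
        ext = {normalise(p) for p in extracted}
        gold = {normalise(p) for p in author}
        matches = len(ext & gold)
        return (matches / len(ext) if ext else 0.0,
                matches / len(gold) if gold else 0.0)

    # Hypothetical example values, not data from the paper:
    p, r = precision_recall(
        ["keyphrase extraction", "Digital Libraries", "machine learning"],
        ["digital libraries", "keyphrase extraction", "indexing"],
    )
    print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.67 recall=0.67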