
    BlogForever D3.2: Interoperability Prospects

    This report evaluates the interoperability prospects of the BlogForever platform. To this end, existing interoperability models are reviewed; a Delphi study is conducted to identify crucial aspects of interoperability between web archives and digital libraries; technical interoperability standards and protocols are assessed for their relevance to BlogForever; a simple approach to considering interoperability in specific usage scenarios is proposed; and a tangible approach to developing a succession plan, which would allow a reliable transfer of content from the current digital archive to other digital repositories, is presented.

    A tool for metadata analysis

    We describe a Web-based metadata quality tool that provides statistical descriptions and visualisations of Dublin Core metadata harvested via the OAI protocol. The lightweight nature of its development allows it to be used to gather contextualised requirements, and some initial user feedback is discussed.
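The kind of statistical description such a tool produces can be sketched with a few lines of stdlib Python: count how many records use each Dublin Core element in a harvested response. The sample XML below is invented for illustration and is not from the tool itself.

```python
# Minimal sketch (not the authors' tool) of per-field usage statistics
# over Dublin Core records, as harvested OAI-PMH metadata might contain.
import xml.etree.ElementTree as ET
from collections import Counter

DC = "http://purl.org/dc/elements/1.1/"

# Hypothetical sample of two harvested records.
SAMPLE = """<records>
  <record>
    <dc xmlns:dc="http://purl.org/dc/elements/1.1/">
      <dc:title>First item</dc:title>
      <dc:creator>Doe, J.</dc:creator>
      <dc:date>2005</dc:date>
    </dc>
  </record>
  <record>
    <dc xmlns:dc="http://purl.org/dc/elements/1.1/">
      <dc:title>Second item</dc:title>
      <dc:subject>metadata quality</dc:subject>
    </dc>
  </record>
</records>"""

def field_usage(xml_text):
    """Count how many records use each Dublin Core element at least once."""
    root = ET.fromstring(xml_text)
    usage = Counter()
    for record in root.findall("record"):
        seen = {el.tag.split("}")[1] for el in record.iter()
                if el.tag.startswith("{" + DC + "}")}
        usage.update(seen)
    return usage

stats = field_usage(SAMPLE)
print(stats["title"])    # 2: both records carry a title
print(stats["creator"])  # 1: only the first record names a creator
```

From counts like these, the tool's statistical descriptions (field frequency, completeness) and visualisations follow directly.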

    Dublin Core Metadata Harvested Through OAI-PMH

    The introduction in 2001 of the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) increased interest in and awareness of metadata quality issues relevant to digital library interoperability and the use of harvested metadata to build "union catalogs" of digital information resources. Practitioners have offered wide-ranging advice to metadata authors and have suggested metrics useful for measuring the quality of shareable metadata. Is there evidence of changes in metadata practice in response to such advice and/or as a result of an increased awareness of the importance of metadata interoperability? This paper looks at metadata records created over a six-year period that have been harvested by the University of Illinois at Urbana-Champaign, and reports on quantitative and qualitative analyses of changes observed over time in shareable metadata quality.
    IMLS National Leadership Grant LG-02-02-0281. Published or submitted for publication; peer reviewed.
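One metric practitioners commonly suggest for shareable metadata is completeness: the fraction of a recommended field set a record actually fills. The field set and records below are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch of a completeness metric for shareable Dublin Core
# metadata. The recommended field set here is an assumed example.
RECOMMENDED = {"title", "creator", "date", "identifier",
               "subject", "description"}

def completeness(record_fields):
    """Share of recommended fields present in one record (0.0 to 1.0)."""
    return len(RECOMMENDED & set(record_fields)) / len(RECOMMENDED)

early = {"title", "identifier"}                      # sparse early record
late = {"title", "creator", "date", "identifier",
        "subject", "description"}                    # fuller later record
print(round(completeness(early), 2))  # 0.33
print(round(completeness(late), 2))   # 1.0
```

Tracking such a score across harvest years is one way to quantify whether metadata practice changed over time, as the paper's longitudinal analysis sets out to do.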

    Servicing the federation : the case for metadata harvesting

    The paper presents a comparative analysis of data harvesting and distributed computing as complementary models of service delivery within large-scale federated digital libraries. Informed by requirements of flexibility and scalability of federated services, the analysis focuses on the identification and assessment of model invariants. In particular, it abstracts over application domains, services, and protocol implementations. The analytical evidence produced shows that the harvesting model offers stronger guarantees of satisfying the identified requirements. In addition, it suggests a first characterisation of services based on their suitability to either model and thus indicates how they could be integrated in the context of a single federated digital library

    Assessing Descriptive Substance in Free-Text Collection-Level Metadata

    Collection-level metadata has the potential to provide important information about the features and purpose of individual collections. This paper reports on a content analysis of collection records in an aggregation of cultural heritage collections. The findings show that the free-text Description field often provides more accurate and complete representation of subjects and object types than the specified fields. Properties such as importance, uniqueness, comprehensiveness, provenance, and creator are articulated, as well as other vital contextual information about the intentions of a collector and the value of a collection, as a whole, for scholarly users. The results demonstrate that the semantically rich free-text Description field is essential to understanding the context of collections in large aggregations and can serve as a source of data for enhancing and customizing controlled vocabularies.
    IMLS NLG Research and Demonstration grant LG-06-07-0020-07. Published or submitted for publication; peer reviewed.

    A Generic Alerting Service for Digital Libraries

    Users of modern digital libraries (DLs) can keep themselves up-to-date by searching and browsing their favorite collections, or more conveniently by resorting to an alerting service. The alerting service notifies its clients about new or changed documents. Proprietary and mediating alerting services fail to fluidly integrate information from differing collections. This paper analyses the conceptual requirements of this much-sought-after service for digital libraries. We demonstrate that the differing concepts of digital libraries and their underlying technical designs have extensive influence on (a) the expectations, needs and interests of users regarding an alerting service, and (b) the technical possibilities for implementing the service. Our findings show that the range of issues surrounding alerting services for digital libraries, their design and use, is greater than one may anticipate. We also show that, conversely, the requirements for an alerting service have considerable impact on the concepts of DL design. Our findings should be of interest to librarians as well as system designers. We highlight and discuss the far-reaching implications for the design of, and interaction with, libraries. This paper discusses the lessons learned from building such a distributed alerting service. We present our prototype implementation as a proof-of-concept for an alerting service for open DL software.
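The core notify-on-change idea can be sketched in a few lines. This is an assumed subscribe/publish design for illustration, not the paper's prototype, and the keyword-matching profile model is an invented simplification.

```python
# Minimal sketch (assumed design, not the paper's prototype) of an
# alerting service: clients register interest profiles and are
# notified when a new document's title matches their profile.
class AlertingService:
    def __init__(self):
        self.subscriptions = []  # list of (client, keyword) pairs

    def subscribe(self, client, keyword):
        """Register a client's interest in a keyword."""
        self.subscriptions.append((client, keyword.lower()))

    def publish(self, title):
        """Announce a new document; return the clients to notify."""
        notified = []
        for client, keyword in self.subscriptions:
            if keyword in title.lower():
                notified.append(client)
        return notified

svc = AlertingService()
svc.subscribe("alice", "metadata")
svc.subscribe("bob", "grids")
print(svc.publish("New survey of metadata harvesting"))  # ['alice']
```

A real DL alerting service must additionally cope with heterogeneous collections and distributed delivery, which is where the conceptual issues the paper analyses arise.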

    Discovery layers and discovery services

    Simplifying resource discovery and access in academic libraries : implementing and evaluating Summon at Huddersfield and Northumbria Universities

    Facilitating information discovery and maximising value for money from library materials is a key driver for academic libraries, which spend substantial sums of money on journal, database and book purchasing. Users are confused by the complexity of our collections and the multiple platforms needed to access them, and are reluctant to spend time learning about individual resources and how to use them, comparing this unfavourably to popular and intuitive search engines like Google. As a consequence, the library may be seen as too complicated and time-consuming, and many of our most valuable resources remain undiscovered and underused. Federated search tools were the first commercial products to address this problem. They work by using a single search box to interrogate multiple databases (including library catalogues) and journal platforms. While going some way to address the problem, many users complained that they were still relatively slow, clunky and complicated to use compared to Google or Google Scholar. The emergence of web-scale discovery services in 2009 promised to deal with some of these problems. By harvesting and indexing metadata directly from publishers and local library collections into a single index, they facilitate resource discovery and access to multiple library collections (whether in print or electronic form) via a single search box. Users no longer have to negotiate a number of separate platforms to find different types of information, and because the data is held in a single unified index, searching is fast and easy. In 2009 both Huddersfield and Northumbria Universities purchased Serials Solutions' Summon. This case study report describes the selection, implementation and testing of Summon at both universities, drawing out common themes as well as differences; there are suggestions for those who intend to implement Summon in the future and some suggestions for future development.
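The architectural contrast the abstract draws, many platforms queried separately versus one pre-built unified index, can be illustrated with a toy inverted index. The collection names and records below are invented for the example.

```python
# Illustrative sketch of the web-scale-discovery idea: merge metadata
# from several collections into one inverted index, so a single query
# searches everything at once instead of interrogating each platform.
from collections import defaultdict

# Hypothetical metadata from three separate library collections.
collections = {
    "catalogue":  [("b1", "introduction to metadata")],
    "journals":   [("j1", "metadata quality in digital libraries")],
    "repository": [("r1", "discovery services survey")],
}

# Harvest once, ahead of query time, into a single unified index.
index = defaultdict(set)
for source, records in collections.items():
    for rec_id, text in records:
        for term in text.split():
            index[term].add((source, rec_id))

def search(term):
    """One search box over all collections: a single index lookup."""
    return sorted(index.get(term, set()))

print(search("metadata"))  # hits from both the catalogue and journals
```

Because the work of harvesting happens before any query arrives, the lookup itself is a single fast operation; federated search, by contrast, pays the cost of querying every remote platform at search time.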

    A Taxonomy of Data Grids for Distributed Data Sharing, Management and Processing

    Data Grids have been adopted as the platform for scientific communities that need to share, access, transport, process and manage large data collections distributed worldwide. They combine high-end computing technologies with high-performance networking and wide-area storage management techniques. In this paper, we discuss the key concepts behind Data Grids and compare them with other data sharing and distribution paradigms such as content delivery networks, peer-to-peer networks and distributed databases. We then provide comprehensive taxonomies that cover various aspects of architecture, data transportation, data replication, and resource allocation and scheduling. We map the proposed taxonomy to various Data Grid systems, not only to validate the taxonomy but also to identify areas for future exploration. Through this taxonomy, we aim to categorise existing systems to better understand their goals and their methodology, which would help evaluate their applicability for solving similar problems. The taxonomy also provides a "gap analysis" of this area through which researchers can potentially identify new issues for investigation. Finally, we hope that the proposed taxonomy and mapping provide an easy way for new practitioners to understand this complex area of research.
    Comment: 46 pages, 16 figures, Technical Report

    Search Interoperability, OAI, and Metadata: Handout for METRO Workshop

    Handout for the workshop on the OAI Protocol for Metadata Harvesting given for METRO on December 8, 2006.