20,658 research outputs found

    A Multi-Relational Network to Support the Scholarly Communication Process

    The general purpose of the scholarly communication process is to support the creation and dissemination of ideas within the scientific community. At a finer granularity, there exist multiple stages which, when confronted by a member of the community, have different requirements and therefore different solutions. In order to take a researcher's idea from an initial inspiration to a community resource, the scholarly communication infrastructure may be required to 1) provide a scientist with initial seed ideas; 2) form a team of well-suited collaborators; 3) locate the most appropriate venue to publish the formalized idea; 4) determine the most appropriate peers to review the manuscript; and 5) disseminate the end product to the most interested members of the community. Through the various delineations of this process, the requirements of each stage are tied solely to the multi-functional resources of the community: its researchers, its journals, and its manuscripts. It is within the collection of these resources and their inherent relationships that the solutions to scholarly communication are to be found. This paper describes an associative network composed of multiple scholarly artifacts that can be used as a medium for supporting the scholarly communication process. Comment: keywords: digital libraries and scholarly communication

    Comparison of full-text versus metadata searching in an institutional repository: Case study of the UNT Scholarly Works

    Authors in the library science field disagree about the importance of using costly resources to create local metadata records, particularly for scholarly materials that have full-text search alternatives. At the University of North Texas (UNT) Libraries, we decided to test this concept by answering the question: What percentage of search terms retrieved results based on full-text versus metadata values for items in the UNT Scholarly Works institutional repository? The analysis matched search query logs to indexes of the metadata records and full text of the items in the collection. Results show the distribution of item discoveries that were based on metadata exclusively, on full text exclusively, and on the combination of both. This paper describes in detail the methods and findings of this study.
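    The core of the analysis described above is a three-way classification of each query term: did it match the metadata index, the full-text index, or both? A minimal sketch of that classification, using made-up index terms rather than data from the UNT study:

    ```python
    # Hedged sketch of classifying queries by which index matched them,
    # mirroring the metadata-vs-full-text comparison the abstract describes.
    # The index contents and queries below are illustrative assumptions.

    def classify(query, metadata_index, fulltext_index):
        """Return 'metadata', 'fulltext', 'both', or 'neither' for a query term."""
        in_meta = query in metadata_index
        in_full = query in fulltext_index
        if in_meta and in_full:
            return "both"
        if in_meta:
            return "metadata"
        if in_full:
            return "fulltext"
        return "neither"

    metadata_index = {"repository", "metadata"}       # terms indexed from records
    fulltext_index = {"repository", "ocr", "corpus"}  # terms indexed from full text

    queries = ["repository", "metadata", "ocr", "zebra"]
    distribution = {q: classify(q, metadata_index, fulltext_index) for q in queries}
    ```

    Tallying these labels over the whole query log yields the kind of distribution the study reports.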

    ARK as a Bridge Between Digital Access and Preservation

    Poster presented at the 2014 Utah Library Association (ULA) annual conference held April 30-May 2, 2014 in Sandy, Utah. In line with the Marriott Library's mission to present digital collections online as well as preserve them long term, a multi-dimensional team collaborated to create an information and metadata packaging and submission tool (SIMP) for the purpose of ingesting the Library's digital collections into its new digital preservation system (ExLibris Rosetta), while simultaneously maintaining its longstanding digital asset management system (CONTENTdm). The parallel use of two disparate systems for access and preservation posed a unique challenge to the team: what exactly should be ingested into each system, and how? The full metadata record could not be placed into each, as any alteration to the metadata in one would not sync across systems and would invariably result in multiple outdated versions. In response to this question, the team designed the SIMP tool to assign an Archival Resource Key (ARK) to both the access and preservation copies in order to identify and locate matching content across systems. This poster visually represents the collaborative efforts of various Library divisions in developing an adaptable tool for the packaging of files, creation of metadata, and ingest of data into different information systems, always with the ARK as a bridge between digital access and preservation. J. Willard Marriott Library, University of Utah
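    What makes an ARK usable as a bridge is that the identifier (`ark:/NAAN/name`) can be extracted from any URL that embeds it, so the same name resolves to matching content in either system. A minimal sketch under that assumption (the ARK and hostnames below are invented, not the Library's real identifiers):

    ```python
    # Illustrative sketch: the same ARK parses identically regardless of
    # which system's URL embeds it, letting the two records be matched.
    import re

    ARK_RE = re.compile(r"ark:/(?P<naan>\d{5})/(?P<name>[^/?#]+)")

    def parse_ark(url):
        """Extract (naan, name) from a URL embedding an ARK, or None."""
        m = ARK_RE.search(url)
        return (m.group("naan"), m.group("name")) if m else None

    # Hypothetical access (CONTENTdm) and preservation (Rosetta) URLs:
    access = parse_ark("https://cdm.example.edu/ark:/99999/fk4abc12")
    preservation = parse_ark("https://rosetta.example.edu/ark:/99999/fk4abc12")
    ```

    Because both calls yield the same `(naan, name)` pair, either system can locate its counterpart's copy without sharing a full metadata record.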

    The LIFE Model v1.1

    Extract: This document draws together feedback, discussion and review of the LIFE Model from a number of sources: 1. the LIFE and LIFE2 Project Teams, and the staff of their institutions; 2. feedback from review by an independent economics expert; 3. the LIFE Project Conference; 4. early adopters of the LIFE Model (particularly the Royal Danish Library, State Archives and the State and University Library, Denmark). The result is a revision of the LIFE Model, which was first published in 2006 by the LIFE Project. In line with the objectives of the LIFE2 Project, this revision aims to: 1. fix outstanding anomalies or omissions in the Model; 2. scope and define the Model and its components more precisely; 3. facilitate useful and repeatable mapping and costing of digital lifecycles.

    Optimising metadata to make high-value content more accessible to Google users

    Purpose: This paper shows how information in digital collections that have been catalogued using high-quality metadata can be retrieved more easily by users of search engines such as Google. Methodology/approach: The research and proposals described arose from an investigation into the observed phenomenon that pages from the Glasgow Digital Library (gdl.cdlr.strath.ac.uk) were regularly appearing near the top of Google search results shortly after publication, without any deliberate effort to achieve this. The reasons for this phenomenon are now well understood and are described in the second part of the paper. The first part provides context with a review of the impact of Google and a summary of recent initiatives by commercial publishers to make their content more visible to search engines. Findings/practical implications: The literature research provides firm evidence of a trend amongst publishers to ensure that their online content is indexed by Google, in recognition of its popularity with Internet users. The practical research demonstrates how search engine accessibility can be compatible with use of established collection management principles and high-quality metadata. Originality/value: The concept of "data shoogling" is introduced, involving some simple techniques for metadata optimisation. Details of its practical application are given, to illustrate how those working in academic, cultural and public-sector organisations could make their digital collections more easily accessible via search engines, without compromising any existing standards and practices.

    Experiences in deploying metadata analysis tools for institutional repositories

    Current institutional repository software provides few tools to help metadata librarians understand and analyze their collections. In this article, we compare and contrast metadata analysis tools that were developed simultaneously, but independently, at two New Zealand institutions during a period of national investment in research repositories: the Metadata Analysis Tool (MAT) at The University of Waikato, and the Kiwi Research Information Service (KRIS) at the National Library of New Zealand. The tools have many similarities: they are convenient, online, on-demand services that harvest metadata using OAI-PMH; they were developed in response to feedback from repository administrators; and they both help pinpoint specific metadata errors as well as generate summary statistics. They also have significant differences: one is a dedicated tool whereas the other is part of a wider access tool; one gives a holistic view of the metadata whereas the other looks for specific problems; one seeks patterns in the data values whereas the other checks that those values conform to metadata standards. Both tools work in a complementary manner to existing Web-based administration tools. We have observed that discovery and correction of metadata errors can be quickly achieved by switching Web browser views from the analysis tool to the repository interface, and back. We summarize the findings from both tools' deployment into a checklist of requirements for metadata analysis tools.
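    The "checks that values conform to metadata standards" side of such tools can be sketched as a per-record validator over harvested Dublin Core fields. The field names, required-field list, and date rule below are illustrative assumptions, not MAT or KRIS code:

    ```python
    # Hedged sketch of a standards-conformance check of the kind the
    # abstract describes: flag records with missing or malformed DC values.
    import re

    REQUIRED = ("dc:title", "dc:date")
    DATE_RE = re.compile(r"^\d{4}(-\d{2})?(-\d{2})?$")  # W3CDTF-style dates

    def check_record(record):
        """Return a list of human-readable problems found in one record."""
        problems = []
        for field in REQUIRED:
            if not record.get(field):
                problems.append(f"missing {field}")
        date = record.get("dc:date")
        if date and not DATE_RE.match(date):
            problems.append(f"non-conformant dc:date: {date!r}")
        return problems

    # Two hypothetical harvested records: one clean, one with both problems.
    records = [
        {"dc:title": "A thesis", "dc:date": "2007-05"},
        {"dc:title": "", "dc:date": "May 2007"},
    ]
    report = {i: check_record(r) for i, r in enumerate(records)}
    ```

    A report like this, linked back to each record's edit page, supports exactly the browser-switching correction workflow the article observes.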

    Reflectance Transformation Imaging (RTI) System for Ancient Documentary Artefacts

    This tutorial summarises our uses of reflectance transformation imaging in archaeological contexts. It introduces the UK AHRC-funded project Reflectance Transformation Imaging for Ancient Documentary Artefacts and demonstrates imaging methodologies.

    BioGUID: resolving, discovering, and minting identifiers for biodiversity informatics

    Background: Linking together the data of interest to biodiversity researchers (including specimen records, images, taxonomic names, and DNA sequences) requires services that can mint, resolve, and discover globally unique identifiers (including, but not limited to, DOIs, HTTP URIs, and LSIDs). Results: BioGUID implements a range of services, the core ones being an OpenURL resolver for bibliographic resources and an LSID resolver. The LSID resolver supports Linked Data-friendly resolution using HTTP 303 redirects and content negotiation. Additional services include journal ISSN look-up, author name matching, and a tool to monitor the status of biodiversity data providers. Conclusion: BioGUID is available at http://bioguid.info/. Source code is available from http://code.google.com/p/bioguid/.
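    The Linked Data pattern the abstract names, HTTP 303 redirects plus content negotiation, means the resolver answers a request for an identifier with a "303 See Other" whose target depends on the client's Accept header. A minimal server-side sketch; the URL patterns and media-type rule are illustrative assumptions, not BioGUID's actual routes:

    ```python
    # Hedged sketch of 303-based content negotiation for an LSID resolver:
    # machine clients asking for RDF and browsers asking for HTML are
    # redirected to different representations of the same identifier.

    def resolve(lsid, accept_header):
        """Return (status, location) for a 303 See Other redirect."""
        if "application/rdf+xml" in accept_header:
            # Machine client: redirect to an RDF description of the record.
            return 303, f"http://example.org/rdf/{lsid}"
        # Default: redirect to a human-readable HTML page.
        return 303, f"http://example.org/html/{lsid}"

    status, location = resolve("urn:lsid:example.org:names:1",
                               "application/rdf+xml")
    ```

    The 303 status (rather than a 200 or a 302) signals that the identifier denotes the thing itself, while the redirect target is merely a document about it, which is what makes the resolution Linked Data-friendly.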