
    Facilitating Wiki/Repository Communication with Metadata

    4th International Conference on Open Repositories. This presentation was part of the session: Fedora User Group Presentations. Date: 2009-05-20 01:30 PM – 03:00 PM. The National Science Digital Library (NSDL) Materials Digital Library Pathway (MatDL) has implemented an information infrastructure to disseminate government-funded research results and to provide content as well as services to support the integration of research and education in materials. This paper describes how we are enabling two-way communication between a digital repository and open-source collaborative tools, such as wikis, to support users in materials research and education in the creation and re-use of compelling learning resources. A search results plug-in for MediaWiki has been developed to display relevant search results from the Fedora-based MatDL repository in the Soft Matter Wiki established and developed by MatDL and its partners. Wiki-to-repository information transfer has also been facilitated by mapping the metadata associated with resources originating in the wiki onto Dublin Core (DC) metadata elements and making the metadata and resources available in the repository. The Materials Digital Library Pathway (DUE-0532831) is supported by the National Science Foundation.
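
    The wiki-to-repository transfer described above relies on mapping wiki resource metadata onto Dublin Core elements. The sketch below shows, purely as an illustration, how such a mapping might be expressed; the wiki field names, the WIKI_TO_DC table and the build_dc_record helper are assumptions for this example, not MatDL's actual implementation.

```python
# Hypothetical sketch: map wiki page metadata onto simple Dublin Core
# elements and serialize as oai_dc XML. Field names are illustrative only.
import xml.etree.ElementTree as ET

OAI_DC = "http://www.openarchives.org/OAI/2.0/oai_dc/"
DC = "http://purl.org/dc/elements/1.1/"

# Assumed mapping from wiki-side fields to Dublin Core elements.
WIKI_TO_DC = {
    "page_title": "title",
    "author": "creator",
    "summary": "description",
    "last_modified": "date",
    "page_url": "identifier",
    "topic": "subject",
}

def build_dc_record(wiki_metadata: dict) -> bytes:
    """Return an oai_dc XML record built from a wiki metadata dict."""
    ET.register_namespace("oai_dc", OAI_DC)
    ET.register_namespace("dc", DC)
    root = ET.Element(f"{{{OAI_DC}}}dc")
    for wiki_field, dc_element in WIKI_TO_DC.items():
        value = wiki_metadata.get(wiki_field)
        if value:
            ET.SubElement(root, f"{{{DC}}}{dc_element}").text = str(value)
    return ET.tostring(root, encoding="utf-8")

print(build_dc_record({
    "page_title": "Soft Matter: Colloids",
    "author": "Example Contributor",
    "page_url": "http://example.org/wiki/Colloids",
}).decode())
```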

    Descriptive metadata for the CAVA repository

    This document describes the descriptive metadata schema devised for the CAVA (human Communication: an Audio-Visual Archive) Project.

    Experiences in deploying metadata analysis tools for institutional repositories

    Current institutional repository software provides few tools to help metadata librarians understand and analyze their collections. In this article, we compare and contrast metadata analysis tools that were developed simultaneously, but independently, at two New Zealand institutions during a period of national investment in research repositories: the Metadata Analysis Tool (MAT) at The University of Waikato, and the Kiwi Research Information Service (KRIS) at the National Library of New Zealand. The tools have many similarities: they are convenient, online, on-demand services that harvest metadata using OAI-PMH; they were developed in response to feedback from repository administrators; and they both help pinpoint specific metadata errors as well as generate summary statistics. They also have significant differences: one is a dedicated tool whereas the other is part of a wider access tool; one gives a holistic view of the metadata whereas the other looks for specific problems; one seeks patterns in the data values whereas the other checks that those values conform to metadata standards. Both tools work in a complementary manner to existing Web-based administration tools. We have observed that discovery and correction of metadata errors can be quickly achieved by switching Web browser views from the analysis tool to the repository interface, and back. We summarize the findings from both tools' deployment into a checklist of requirements for metadata analysis tools.
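
    Neither tool's implementation is reproduced in the article, but their shared approach (harvest Dublin Core metadata over OAI-PMH, then derive summary statistics and flag likely errors) can be sketched roughly as follows. The endpoint URL and the completeness checks are assumptions chosen for illustration; MAT and KRIS perform considerably richer analyses.

```python
# Illustrative sketch only: harvest oai_dc records from an OAI-PMH endpoint
# and report how often common Dublin Core fields are missing. The endpoint
# URL and the chosen checks are assumptions, not MAT's or KRIS's behaviour.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET
from collections import Counter

OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"
BASE_URL = "https://repository.example.org/oai"  # hypothetical endpoint

def harvest_records(base_url: str):
    """Yield <record> elements, following OAI-PMH resumption tokens."""
    url = f"{base_url}?verb=ListRecords&metadataPrefix=oai_dc"
    while url:
        with urllib.request.urlopen(url) as response:
            tree = ET.parse(response)
        yield from tree.iter(f"{OAI}record")
        token = tree.find(f".//{OAI}resumptionToken")
        url = (f"{base_url}?verb=ListRecords&resumptionToken="
               f"{urllib.parse.quote(token.text)}"
               if token is not None and token.text else None)

missing = Counter()
total = 0
for record in harvest_records(BASE_URL):
    total += 1
    for field in ("title", "creator", "date", "identifier"):
        if record.find(f".//{DC}{field}") is None:
            missing[field] += 1

print(f"{total} records harvested")
for field, count in missing.most_common():
    print(f"  records missing dc:{field}: {count}")
```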

    Automatic Metadata Generation using Associative Networks

    In spite of its tremendous value, metadata is generally sparse and incomplete, thereby hampering the effectiveness of digital information services. Many of the existing mechanisms for the automated creation of metadata rely primarily on content analysis, which can be costly and inefficient. The automatic metadata generation system proposed in this article leverages resource relationships generated from existing metadata as a medium for propagation from metadata-rich to metadata-poor resources. Because of its independence from content analysis, it can be applied to a wide variety of resource media types and is shown to be computationally inexpensive. The proposed method operates through two distinct phases. Occurrence and co-occurrence algorithms first generate an associative network of repository resources leveraging existing repository metadata. Second, using the associative network as a substrate, metadata associated with metadata-rich resources is propagated to metadata-poor resources by means of a discrete-form spreading activation algorithm. This article discusses the general framework for building associative networks, an algorithm for disseminating metadata through such networks, and the results of an experiment and validation of the proposed method using a standard bibliographic dataset.
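
    The two phases described above can be illustrated with a deliberately small sketch: build an associative network from co-occurring subject terms, then spread metadata from annotated resources to their neighbours. The resources, edge weights, decay factor and threshold below are invented for the example and are not the parameters used in the article.

```python
# Toy sketch of metadata propagation over an associative network, loosely
# following the two phases described above. Weights, decay and threshold
# are arbitrary illustrative choices, not the article's parameters.
from collections import defaultdict
from itertools import combinations

# Phase 1: build an associative network from co-occurrence of subject terms
# in existing (metadata-rich) records.
records = {
    "res1": {"microscopy", "polymers"},
    "res2": {"polymers", "colloids"},
    "res3": set(),          # metadata-poor resource
}
edges = defaultdict(float)
for a, b in combinations(records, 2):
    shared = records[a] & records[b]
    if shared:
        edges[(a, b)] = edges[(b, a)] = len(shared)  # co-occurrence weight

# Assume res3 is known to be associated with res2 (e.g. frequently accessed
# together), so add an explicit association edge.
edges[("res2", "res3")] = edges[("res3", "res2")] = 1.0

# Phase 2: discrete-form spreading activation -- push each term's activation
# from annotated resources to their neighbours, attenuated by a decay factor.
DECAY, THRESHOLD = 0.5, 0.4
activation = defaultdict(float)
for source, terms in records.items():
    for term in terms:
        for (src, dst), weight in edges.items():
            if src == source:
                activation[(dst, term)] += DECAY * weight

# Suggest terms for metadata-poor resources whose activation crosses the threshold.
for (resource, term), score in sorted(activation.items()):
    if not records[resource] and score >= THRESHOLD:
        print(f"suggest '{term}' for {resource} (activation {score:.2f})")
```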

    STARGATE: Static Repository Gateway and Toolkit. Final Project Report

    STARGATE (Static Repository Gateway and Toolkit) was funded by the Joint Information Systems Committee (JISC) and is intended to demonstrate the ease of use of the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) Static Repository technology, and the potential benefits offered to publishers in making their metadata available in this way. This technology offers a simpler method of participating in many information discovery services than creating fully-fledged OAI-compliant repositories. It does this by allowing the infrastructure and technical support required to participate in OAI-based services to be shifted from the data provider (the journal) to a third party, and allows a single third-party gateway provider to provide intermediation for many data providers (journals). Specifically, STARGATE has created a series of Static Repositories of publisher metadata provided by a selection of Library and Information Science journals. It has demonstrated the interoperability of these repositories by exposing their metadata via a Static Repository Gateway for harvesting and cross-searching by external service providers. The project has conducted a critical evaluation of the Static Repository approach in conjunction with the participating publishers and service providers. The technology works. The project has demonstrated that Static Repositories are easy to create and that the differences between fully-fledged and static OAI repositories have no impact on the participation of small journal publishers in OAI-based services. The problems for a service that arise out of the use of Static Repositories are parallel to those created by any other repository dealing with journal articles. Problems arise from the diversity of metadata element sets provided by a given journal and the lack of specific metadata elements for the articles' volume and issue details. Another issue for the use of publishers' metadata arises because the collection policies of some existing services only allow Open Access materials to be included in them. The project recommends that the use of Static Repositories continues to be explored, in particular as a flexible way to expose existing sets of structured information to OAI services and to create the opportunity to enhance the metadata as part of the process. The project further recommends that the publishing community consider the creation or adoption of an application profile for journal articles to support information discovery that can search by volume and issue. Significant further use of the Static Repository technology by small journal publishers will require the future creation and maintenance of a community-specific Static Repository Gateway. Further use will also require advocacy within the publishing community, but might initially be most effectively kick-started through the creation of OAI repositories based on metadata held by the commercial services which publish or mediate access to electronic copies of journals on behalf of small publishers.
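
    For orientation, a Static Repository is essentially a single XML file that a gateway serves on a data provider's behalf. The snippet below writes a heavily simplified skeleton of such a file; the journal details are invented, and the exact required elements and attributes should be taken from the OAI Static Repository specification rather than from this sketch.

```python
# Heavily simplified sketch of the shape of an OAI Static Repository file:
# one XML document holding Identify, ListMetadataFormats and ListRecords
# sections, published as a plain file and mediated by a gateway. Consult the
# OAI Static Repository specification for the exact required elements;
# the journal details below are invented.
STATIC_REPOSITORY_SKELETON = """<?xml version="1.0" encoding="UTF-8"?>
<Repository xmlns="http://www.openarchives.org/OAI/2.0/static-repository">
  <Identify>
    <repositoryName>Example LIS Journal</repositoryName>
    <adminEmail>editor@example.org</adminEmail>
    <earliestDatestamp>2005-01-01</earliestDatestamp>
  </Identify>
  <ListMetadataFormats>
    <metadataFormat>
      <metadataPrefix>oai_dc</metadataPrefix>
      <schema>http://www.openarchives.org/OAI/2.0/oai_dc.xsd</schema>
      <metadataNamespace>http://www.openarchives.org/OAI/2.0/oai_dc/</metadataNamespace>
    </metadataFormat>
  </ListMetadataFormats>
  <ListRecords metadataPrefix="oai_dc">
    <!-- one <record> per article, each with a header and oai_dc metadata -->
  </ListRecords>
</Repository>
"""

# Write the skeleton to disk; a gateway would then be pointed at its URL.
with open("static-repository.xml", "w", encoding="utf-8") as handle:
    handle.write(STATIC_REPOSITORY_SKELETON)
```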

    Workflows, processes and technical solutions for seeding the research data commons

    Queensland University of Technology (QUT) completed an Australian National Data Service (ANDS) funded “Seeding the Commons Project” to contribute metadata to Research Data Australia. The project employed two Research Data Librarians from October 2009 through to July 2010. Technical support for the project was provided by QUT’s High Performance Computing and Research Support Specialists.
    The project identified and described QUT’s category 1 (ARC/NHMRC) research datasets. Metadata for the research datasets was stored in QUT’s Research Data Repository (Architecta Mediaflux). Metadata which was suitable for inclusion in Research Data Australia was made available to the Australian Research Data Commons (ARDC) in RIF-CS format.
    Several workflows and processes were developed during the project. 195 data interviews took place in connection with 424 separate research activities, which resulted in the identification of 492 datasets.
    The project had a high level of technical support from QUT High Performance Computing and Research Support Specialists, who developed the Research Data Librarian interface to the data repository that enabled manual entry of interview data and dataset metadata, and the creation of relationships between repository objects. The Research Data Librarians mapped the QUT metadata repository fields to RIF-CS, and an application was created by the HPC and Research Support Specialists to generate RIF-CS files for harvest by the ARDC.
    This poster will focus on the workflows and processes established for the project, including:
    • Interview processes and instruments
    • Data ingest from existing systems (including mapping to RIF-CS)
    • Data entry and the Data Librarian interface to Mediaflux
    • Verification processes
    • Mapping and creation of RIF-CS for the ARDC
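
    The abstract above mentions mapping repository metadata fields to RIF-CS for harvest by the ARDC. As a rough illustration of what such a mapping step can look like, the sketch below converts an invented dataset description into a RIF-CS-style collection record; the input field names and the originatingSource value are assumptions, and the element layout should be checked against the current RIF-CS schema rather than treated as authoritative.

```python
# Rough sketch of mapping internal dataset metadata to a RIF-CS-style
# collection record for harvest. Element names follow the general shape of
# RIF-CS but should be checked against the current schema; the input field
# names are invented, not QUT's Mediaflux fields.
import xml.etree.ElementTree as ET

RIF = "http://ands.org.au/standards/rif-cs/registryObjects"

def dataset_to_rifcs(dataset: dict) -> bytes:
    """Build a minimal RIF-CS-style registryObject for one dataset."""
    root = ET.Element("registryObjects", xmlns=RIF)
    obj = ET.SubElement(root, "registryObject", group="Example University")
    ET.SubElement(obj, "key").text = dataset["key"]
    ET.SubElement(obj, "originatingSource").text = "https://example.edu/rdr"  # hypothetical
    collection = ET.SubElement(obj, "collection", type="dataset")
    name = ET.SubElement(collection, "name", type="primary")
    ET.SubElement(name, "namePart").text = dataset["title"]
    ET.SubElement(collection, "description", type="full").text = dataset["description"]
    return ET.tostring(root, encoding="utf-8")

print(dataset_to_rifcs({
    "key": "example.edu/dataset/42",
    "title": "Example sensor dataset",
    "description": "Collected as part of a hypothetical project.",
}).decode())
```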

    Fedora Commons 3.0 Versus DSpace 1.5: Selecting an Enterprise-Grade Repository System for FAO of the United Nations

    4th International Conference on Open Repositories. This presentation was part of the session: Conference Posters. An extensive evaluation of the Fedora Commons 3.0 and DSpace 1.5 digital document repository systems has been conducted. The evaluation aimed at selecting an open source software package that best satisfies the FAO Open Archive and FAO organizational requirements and the requirements for the storage, dissemination and preservation of documents and bibliographic metadata. Both repository systems were evaluated against thirty-two criteria chosen from nine core categories of requirements: community, security, functionality, integration, modularity, metadata, statistics and reports, preservation, and outputs. These criteria were selected with the merger of the FAODOC and FAO Corporate Document Repository (CDR) into the FAO Open Archive in mind. Food and Agriculture Organization of the United Nations.

    JISC Final Report: IncReASe (Increasing Repository Content through Automation and Services)

    IncReASe (Increasing Repository Content through Automation and Services) was an eighteen-month project (subsequently extended to twenty months) to enhance White Rose Research Online (WRRO). WRRO is a shared repository of research outputs (primarily publications) from the Universities of Leeds, Sheffield and York; it runs on the EPrints open source repository platform. The repository was created in 2004 and had steady growth but, in common with many other similar repositories, had difficulty in achieving a “critical mass” of content and in becoming truly embedded within researchers’ workflows. The main aim of the IncReASe project was to assess ingestion routes into WRRO with a view to lowering barriers to deposit. We reviewed the feasibility of bulk import of pre-existing metadata and/or full-text research outputs, hoping this activity would have a positive knock-on effect on repository growth and embedding. Prior to the project, we had identified researchers’ reluctance to duplicate effort in metadata creation as a significant barrier to WRRO uptake; we investigated how WRRO might share data with internal and external IT systems. This work included a review of how WRRO, as an institution-based repository, might interact with the subject repository of the Economic and Social Research Council (ESRC). The project addressed four main areas:
    (i) researcher behaviour: we investigated researcher awareness, motivation and workflow through a survey of archiving activity on the university web sites, a questionnaire and discussions with researchers;
    (ii) bulk import: we imported data from local systems, including York’s submission data for the 2008 Research Assessment Exercise (RAE), and developed an import plug-in for use with the arXiv repository;
    (iii) interoperability: we looked at how WRRO might interact with university and departmental publication databases and ESRC’s repository;
    (iv) metadata: we assessed metadata issues raised by importing publication data from a variety of sources.
    A number of outputs from the project have been made available from the IncReASe project web site http://eprints.whiterose.ac.uk/increase/. The project highlighted the low levels of researcher awareness of WRRO, and of broader open access issues, including research funders’ deposit requirements. We designed some new publicity materials to start to address this. Departmental publication databases provided a useful jumping-off point for advocacy and liaison; this activity was helpful in promoting awareness of WRRO. Bulk import proved time-consuming – both in terms of adjusting EPrints plug-ins to incorporate different datasets and in the staff time required to improve publication metadata. A number of deposit scenarios were developed in the context of our work with ESRC; we concentrated on investigating how a local deposit of a research paper and attendant metadata in WRRO might be used to populate ESRC’s repository. This work improved our understanding of researcher workflows and of the SWORD protocol as a potential (if partial) solution to the single deposit, multiple destination model we wish to develop; we think the prospect of institutional repository / ESRC data sharing is now a step closer. IncReASe experienced some staff recruitment difficulties. It was also necessary to adapt the project to the changing IT landscape at the three partner institutions – in particular, the introduction of a centralised publication management system at the University of Leeds. Although these factors had some impact on deliverables, the aims and objectives of the project were largely achieved.
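
    The single deposit, multiple destination model mentioned above builds on the SWORD protocol, in which a deposit is an HTTP POST of an Atom entry (or a package) to a repository's SWORD collection URI. The sketch below shows a minimal metadata-only deposit of that kind; the endpoint URL, credentials and dcterms fields are assumptions for illustration, and a real WRRO-to-ESRC workflow would follow the target repository's own SWORD profile.

```python
# Hedged sketch of a single metadata-only SWORD deposit: POST an Atom entry
# with embedded Dublin Core terms to a repository's SWORD collection URI.
# The endpoint, credentials and dcterms fields are assumptions, not the
# project's actual configuration.
import base64
import urllib.request

SWORD_COLLECTION = "https://repository.example.ac.uk/sword/deposit/papers"  # hypothetical
USERNAME, PASSWORD = "depositor", "secret"  # placeholder credentials

ATOM_ENTRY = """<?xml version="1.0" encoding="utf-8"?>
<entry xmlns="http://www.w3.org/2005/Atom"
       xmlns:dcterms="http://purl.org/dc/terms/">
  <title>Example research paper</title>
  <dcterms:abstract>A short illustrative abstract.</dcterms:abstract>
  <dcterms:creator>Example Author</dcterms:creator>
</entry>
"""

request = urllib.request.Request(
    SWORD_COLLECTION,
    data=ATOM_ENTRY.encode("utf-8"),
    method="POST",
    headers={
        "Content-Type": "application/atom+xml;type=entry",
        "Authorization": "Basic " + base64.b64encode(
            f"{USERNAME}:{PASSWORD}".encode()).decode(),
    },
)
# A successful deposit normally returns a 201 Created response with a
# deposit receipt describing where the item now lives.
with urllib.request.urlopen(request) as response:
    print(response.status, response.read(200))
```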