8 research outputs found

    A Framework for Managing Inter-Site Storage Area Networks using Grid Technologies

    The NASA Goddard Space Flight Center and the University of Maryland Institute for Advanced Computer Studies are studying mechanisms for installing and managing Storage Area Networks (SANs) that span multiple independent collaborating institutions using Storage Area Network Routers (SAN Routers). We present a framework for managing inter-site distributed SANs that uses Grid Technologies to balance the competing needs to control local resources, share information, delegate administrative access, and manage the complex trust relationships between the participating sites.

    Recovery of a Digital Image Collection Through the SDSC/UMD/NARA Prototype Persistent Archive

    The San Diego Supercomputer Center (SDSC), the University of Maryland, and the National Archives and Records Administration (NARA) are collaborating on building a pilot persistent archive using and extending data grid and digital library technologies. The current prototype consists of node servers at SDSC, the University of Maryland, and NARA, connected through the Storage Request Broker (SRB) data grid middleware, and currently holds several terabytes of selected NARA collections. In particular, a historically important image collection that was on the verge of becoming inaccessible was fully restored and ingested into our pilot system. In this report, we describe the methodology behind our approach to fully restore this image collection and the process used to ingest it into the prototype persistent archive. (UMIACS-TR-2003-105)

    Qualitative data sharing and re-use for socio-environmental systems research: A synthesis of opportunities, challenges, resources and approaches

    Researchers in many disciplines, in both the social and natural sciences, have a long history of collecting and analyzing qualitative data to answer questions that have many dimensions, to interpret other research findings, and to characterize processes that are not easily quantified. Qualitative data is increasingly being used in socio-environmental systems research and related interdisciplinary efforts to address complex sustainability challenges. There are many scientific, descriptive, and material benefits to be gained from sharing and re-using qualitative data, some of which reflect the broader push toward open science, and some of which are unique to qualitative research traditions. However, although open data availability is increasingly becoming an expectation in many fields and methodological approaches that address socio-environmental topics, many challenges remain that are associated with the sharing and re-use of qualitative data in particular. This white paper discusses opportunities, challenges, resources, and approaches for qualitative data sharing and re-use in socio-environmental research. The content and findings of the paper are a synthesis and extension of discussions that began during a workshop funded by the National Socio-Environmental Synthesis Center (SESYNC) and held at the Center Feb. 28-March 2, 2017. The structure of the paper reflects the starting point for the workshop, which focused on opportunities, challenges, and resources for qualitative data sharing, and also presents the workshop outputs, which focused on developing a novel approach to qualitative data sharing considerations and creating recommendations for how a variety of actors can further support and facilitate qualitative data sharing and re-use.
The white paper is organized into five sections to address the following objectives: (1) Define qualitative data and discuss the benefits of sharing it along with its role in socio-environmental synthesis; (2) Review the practical, epistemological, and ethical challenges regarding sharing such data; (3) Identify the landscape of resources available for sharing qualitative data, including repositories and communities of practice; (4) Develop a novel framework for identifying levels of processing and access to qualitative data; and (5) Suggest roles and responsibilities for key actors in the research ecosystem that can improve the longevity and use of qualitative data in the future. This work was supported by the National Socio-Environmental Synthesis Center (SESYNC) under funding received from the National Science Foundation (DBI-1052875).

    A Decade of Preservation: Paper - iPRES 2016 - Swiss National Library, Bern

    This paper provides a historical look at the technical migrations of the Chronopolis digital preservation system over the last ten years. During that time span the service has undergone several software system migrations, moving from middleware-based systems to a suite of individual, finely scoped components which employ widely used and standardized technologies. These transitions have enabled the system to become not only less dependent on interpretation by middleware, but also easier to transfer to new storage components. Additionally, the need for specialized software knowledge is alleviated; any Linux systems administrator should be able to install, configure, and run the software services with minimal guidance. The benefits of moving to a microservices approach have been instrumental in ensuring the longevity of the system through staff and organizational changes

    PAWN: Producer – Archive Workflow Network in Support of Digital Preservation

    We describe the design and the implementation of the PAWN (Producer – Archive Workflow Network) environment to enable secure and distributed ingestion of digital objects into a persistent archive. PAWN was developed to capture the core elements required for long-term preservation of digital objects as identified by previous research in the digital library and archiving communities. In fact, PAWN can be viewed as an implementation of the Ingest Process as defined by the Open Archival Information System (OAIS) Reference Model, and it is currently being used to ingest significant collections into a pilot persistent archive developed through a collaboration between the San Diego Supercomputer Center, the University of Maryland, and the National Archives and Records Administration. We make use of METS (Metadata Encoding and Transmission Standard) to encapsulate content, structural, descriptive, and preservation metadata. The basic software components are based on open standards and web technologies, and hence are platform independent.
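    As a rough illustration of the METS encapsulation described above, the sketch below builds a skeletal METS document with the four kinds of metadata sections the abstract names. This is not PAWN's actual packaging code: the element names and namespaces come from the METS schema, but the object ID, file path, and section contents are hypothetical placeholders.

    ```python
    # Minimal METS skeleton: descriptive (dmdSec), administrative/preservation
    # (amdSec), file inventory (fileSec), and structural map (structMap).
    # Identifiers and the file path are hypothetical examples.
    import xml.etree.ElementTree as ET

    METS_NS = "http://www.loc.gov/METS/"
    XLINK_NS = "http://www.w3.org/1999/xlink"
    ET.register_namespace("mets", METS_NS)
    ET.register_namespace("xlink", XLINK_NS)

    def build_mets(obj_id: str, file_href: str) -> str:
        mets = ET.Element(f"{{{METS_NS}}}mets", {"OBJID": obj_id})

        # Descriptive metadata section (e.g. a Dublin Core record would be wrapped here).
        ET.SubElement(mets, f"{{{METS_NS}}}dmdSec", {"ID": "dmd1"})

        # Administrative metadata section, where preservation/provenance metadata lives.
        ET.SubElement(mets, f"{{{METS_NS}}}amdSec", {"ID": "amd1"})

        # File section: inventory of the digital object's content files.
        file_sec = ET.SubElement(mets, f"{{{METS_NS}}}fileSec")
        grp = ET.SubElement(file_sec, f"{{{METS_NS}}}fileGrp")
        f = ET.SubElement(grp, f"{{{METS_NS}}}file", {"ID": "file1"})
        ET.SubElement(f, f"{{{METS_NS}}}FLocat",
                      {"LOCTYPE": "URL", f"{{{XLINK_NS}}}href": file_href})

        # Structural map: required by METS; ties the files into a logical hierarchy.
        smap = ET.SubElement(mets, f"{{{METS_NS}}}structMap")
        div = ET.SubElement(smap, f"{{{METS_NS}}}div", {"TYPE": "item"})
        ET.SubElement(div, f"{{{METS_NS}}}fptr", {"FILEID": "file1"})

        return ET.tostring(mets, encoding="unicode")

    xml_doc = build_mets("collection-item-0001", "images/scan0001.tif")
    ```

    A real submission package would populate the dmdSec and amdSec with wrapped metadata records rather than leaving them empty, but the overall shape of the wrapper is the same.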