
    D3.2 Cost Concept Model and Gateway Specification

    This document introduces a Framework supporting the implementation of a cost concept model against which current and future cost models for curating digital assets can be benchmarked. The value built into this cost concept model leverages the comprehensive engagement by the 4C project with various user communities and builds upon our understanding of the requirements, drivers, obstacles and objectives that various stakeholder groups have relating to digital curation. Ultimately, this concept model should provide a critical input to the development and refinement of cost models, as well as helping to ensure that the curation and preservation solutions and services that will inevitably arise from the commercial sector as 'supply' respond to a much better understood 'demand' for cost-effective and relevant tools. To meet acknowledged gaps in current provision, a nested model of curation which addresses both costs and benefits is provided. The goal of this task was not to create a single, functionally implementable cost modelling application, but rather to design a model based on common concepts and to develop a generic gateway specification that can be used by future model developers, service and solution providers, and by researchers in follow-up research and development projects.

    The Framework includes:

    • A Cost Concept Model, which defines the core concepts that should be included in curation cost models;

    • An Implementation Guide for the cost concept model, which provides guidance and proposes questions that should be considered when developing new cost models and refining existing cost models;

    • A Gateway Specification Template, which provides standard metadata for each of the core cost concepts and is intended for use by future model developers, model users, and service and solution providers to promote interoperability (see the illustrative sketch following this abstract);

    • A Nested Model for Digital Curation, which visualises the core concepts, demonstrates how they interact, and places them into context visually by linking them to A Cost and Benefit Model for Curation.

    This Framework provides guidance for data collection and associated calculations in an operational context, but will also provide a critical foundation for more strategic thinking around curation such as the Economic Sustainability Reference Model (ESRM). Where appropriate, definitions of terms are provided, recommendations are made, and examples from existing models are used to illustrate the principles of the framework.
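    The Gateway Specification Template itself is not reproduced in this abstract, but the idea of publishing standard metadata for each cost concept can be illustrated with a minimal sketch. All field names below are hypothetical, chosen only to show the interoperability idea; they are not taken from the 4C template.

```python
from dataclasses import dataclass

@dataclass
class CostConcept:
    """One record in a hypothetical gateway-style registry of cost concepts.

    Field names are illustrative only; the actual 4C Gateway Specification
    Template defines its own metadata elements.
    """
    name: str              # short label for the concept
    definition: str        # human-readable definition of the concept
    unit: str              # unit of measurement for reported values
    lifecycle_stage: str   # curation activity the cost attaches to
    source_model: str      # existing cost model this concept maps onto

# Two cost models that each describe their concepts in a common shape like
# this can be compared concept-by-concept, which is the interoperability
# goal such a template is meant to serve.
ingest = CostConcept(
    name="ingest labour",
    definition="Staff effort to receive, validate and package deposits",
    unit="person-hours per deposit",
    lifecycle_stage="ingest",
    source_model="an existing cost model (placeholder)",
)
```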

    Digital Preservation Services : State of the Art Analysis

    Research report funded by the DC-NET project (European Commission, FP7); peer-reviewed. An overview of the state of the art in service provision for digital preservation and curation. Its focus is on the areas where gaps need to be bridged between e-Infrastructures and efficient, forward-looking digital preservation services. Based on a desktop study and a rapid analysis of some 190 currently available tools and services for digital preservation, the deliverable provides a high-level view of the range of instruments currently on offer to support various functions within a preservation system.

    Curating E-Mails: A life-cycle approach to the management and preservation of e-mail messages

    E-mail forms the backbone of communications in many modern institutions and organisations and is a valuable type of organisational, cultural, and historical record. Successful management and preservation of valuable e-mail messages and collections is therefore vital if organisational accountability is to be achieved and historical or cultural memory retained for the future. This requires attention by all stakeholders across the entire life-cycle of the e-mail records. This instalment of the Digital Curation Manual reports on the issues involved in managing and curating e-mail messages for both current and future use. Although there is no 'one-size-fits-all' solution, this instalment outlines a generic framework for e-mail curation and preservation, provides a summary of current approaches, and addresses the technical, organisational and cultural challenges to successful e-mail management and longer-term curation.
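    The manual's framework is organisational rather than executable, but the capture step of an e-mail curation workflow can be sketched briefly. The function below is a hypothetical illustration using Python's standard mailbox module, not a tool prescribed by the manual.

```python
import hashlib
import json
import mailbox

def capture_mbox(mbox_path: str, sidecar_path: str) -> None:
    """Record minimal preservation metadata for each message in an mbox file.

    Who/when/what headers plus a fixity checksum keep the messages
    interpretable and verifiable independently of any mail client.
    """
    records = []
    for msg in mailbox.mbox(mbox_path):
        raw = msg.as_bytes()
        records.append({
            "from": msg.get("From"),
            "to": msg.get("To"),
            "date": msg.get("Date"),
            "subject": msg.get("Subject"),
            "sha256": hashlib.sha256(raw).hexdigest(),  # fixity check value
        })
    with open(sidecar_path, "w", encoding="utf-8") as f:
        json.dump(records, f, indent=2)
```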

    Durable Digital Objects Rather Than Digital Preservation

    Long-term digital preservation is not the best available objective. Instead, what information producers and consumers almost surely want is a universe of durable digital objects: documents and programs that will be as accessible and useful a century from now as they are today. Given the will, we could implement and deploy a practical and pleasing durability infrastructure within two years. Tools for daily work can embed packaging for durability without much burdening their users. Moving responsibility for durability from archival employees to information producers would also avoid burdening repositories with keeping up with Internet scale. An engineering prescription is available. Research libraries' and archives' slow advance towards practical preservation of digital content is remarkable to outsiders. Why does their progress seem stalled? Ineffective collaboration across disciplinary boundaries has surely been a major impediment. We speculate about cultural reasons for this situation and warn about possible marginalization of research librarianship as a profession.
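    To make the authors' proposal concrete: the sketch below shows what save-time "packaging for durability" might look like, with the producing tool rather than a later archivist recording what the object is and how to verify it. The manifest fields are hypothetical, not a published standard.

```python
import datetime
import hashlib
import json
import pathlib

def save_durable(path: str, content: bytes, media_type: str, tool: str) -> None:
    """Write a document together with a self-describing manifest.

    A minimal sketch of embedding durability packaging at save time:
    the manifest records interpretation, provenance, and fixity
    information alongside the object itself.
    """
    p = pathlib.Path(path)
    p.write_bytes(content)
    manifest = {
        "file": p.name,
        "media_type": media_type,   # how to interpret the bytes
        "created": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "creating_tool": tool,      # provenance of the object
        "sha256": hashlib.sha256(content).hexdigest(),  # fixity
    }
    p.with_suffix(p.suffix + ".manifest.json").write_text(
        json.dumps(manifest, indent=2), encoding="utf-8"
    )

# Example: a word processor could call this on every save.
save_durable("report.txt", b"Quarterly findings...", "text/plain", "ExampleEditor 1.0")
```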

    Managing Research Data: Gravitational Waves

    The project which led to this report was funded by JISC in 2010–2011 as part of its 'Managing Research Data' programme, to examine the way in which Big Science data is managed, and produce any recommendations which may be appropriate. Big science data is different: it comes in large volumes, and it is shared and exploited in ways which may differ from other disciplines. This project has explored these differences using as a case-study Gravitational Wave data generated by the LSC, and has produced recommendations intended to be useful variously to JISC, the funding council (STFC) and the LSC community. In Sect. 1 we define what we mean by 'big science' and describe the overall data culture there, laying stress on how it necessarily or contingently differs from other disciplines. In Sect. 2 we discuss the benefits of a formal data-preservation strategy, and the cases for open data and for well-preserved data that follow from that. This leads to our recommendations that, in essence, funders should adopt rather light-touch prescriptions regarding data preservation planning: normal data management practice, in the areas under study, corresponds to notably good practice in most other areas, so the only change we suggest is to make this planning more formal, which makes it more easily auditable and more amenable to constructive criticism. In Sect. 3 we briefly discuss the LIGO data management plan, and pull together whatever information is available on the estimation of digital preservation costs. The report is informed, throughout, by the OAIS reference model for an open archive. Some of the report's findings and conclusions were summarised in [1]. See the document history on page 37.

    Planets: Integrated Services for Digital Preservation

    The Planets Project is developing services and technology to address core challenges in digital preservation. This article introduces the motivation for this work, describes the extensible technical architecture and places the Planets approach into the context of the Open Archival Information System (OAIS) Reference Model. It also provides a scenario demonstrating Planets' usefulness in solving real-life digital preservation problems and an overview of the project's progress to date.
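    As context for the OAIS framing: the model's central data structure, the Archival Information Package, pairs content with the information needed to preserve and interpret it. The classes below summarise those standard OAIS concepts as a sketch; they are not Planets' actual data model.

```python
from dataclasses import dataclass

@dataclass
class PreservationDescriptionInformation:
    """Core PDI categories defined by OAIS (ISO 14721)."""
    provenance: str   # history and origin of the content
    context: str      # how the content relates to other information
    reference: str    # persistent identifier(s) for the content
    fixity: str       # e.g. a checksum guarding against undetected change

@dataclass
class ArchivalInformationPackage:
    """An OAIS AIP: content data plus what is needed to keep it usable.

    Illustrative only; Planets defines its own, richer representations.
    """
    content_data: bytes        # the digital object itself
    representation_info: str   # how to interpret the bit sequence
    pdi: PreservationDescriptionInformation
```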

    Managing Research Data in Big Science

    The project which led to this report was funded by JISC in 2010–2011 as part of its 'Managing Research Data' programme, to examine the way in which Big Science data is managed, and produce any recommendations which may be appropriate. Big science data is different: it comes in large volumes, and it is shared and exploited in ways which may differ from other disciplines. This project has explored these differences using as a case-study Gravitational Wave data generated by the LSC, and has produced recommendations intended to be useful variously to JISC, the funding council (STFC) and the LSC community. In Sect. 1 we define what we mean by 'big science' and describe the overall data culture there, laying stress on how it necessarily or contingently differs from other disciplines. In Sect. 2 we discuss the benefits of a formal data-preservation strategy, and the cases for open data and for well-preserved data that follow from that. This leads to our recommendations that, in essence, funders should adopt rather light-touch prescriptions regarding data preservation planning: normal data management practice, in the areas under study, corresponds to notably good practice in most other areas, so the only change we suggest is to make this planning more formal, which makes it more easily auditable and more amenable to constructive criticism. In Sect. 3 we briefly discuss the LIGO data management plan, and pull together whatever information is available on the estimation of digital preservation costs. The report is informed, throughout, by the OAIS reference model for an open archive.

    Audit and Certification of Digital Repositories: Creating a Mandate for the Digital Curation Centre (DCC)

    The article examines the issues surrounding the audit and certification of digital repositories in light of the RLG/NARA Task Force's work to draw up guidelines, and the need for those guidelines to be validated.

    Repository of NSF Funded Publications and Data Sets: "Back of Envelope" 15 year Cost Estimate

    In this back-of-envelope study we calculate the 15-year fixed and variable costs of setting up and running a data repository (or database) to store and serve the publications and datasets derived from research funded by the National Science Foundation (NSF). Costs are computed on a yearly basis using a fixed estimate of the number of papers that are published each year that list NSF as their funding agency. We assume each paper has one dataset and estimate the size of that dataset based on experience. By our estimates, the number of papers generated each year is 64,340. The average dataset size over all seven directorates of NSF is 32 gigabytes (GB). The total amount of data added to the repository is two petabytes (PB) per year, or 30 PB over 15 years. The architecture of the data/paper repository is based on a hierarchical storage model that uses a combination of fast disk for rapid access and tape for high reliability and cost-efficient long-term storage. Data are ingested through workflows that are used in university institutional repositories, which add metadata and ensure data integrity. Average fixed costs are approximately $0.90/GB over the 15-year span. Variable costs are estimated at a sliding scale of $150 to $100 per new dataset for up-front curation, or $4.87 to $3.22 per GB. Variable costs reflect a 3% annual decrease in curation costs, as efficiency and automated metadata and provenance capture are anticipated to help reduce what are now largely manual curation efforts. The total projected cost of the data and paper repository is estimated at $167,000,000 over 15 years of operation, curating close to one million datasets and one million papers. After 15 years and 30 PB of data accumulated and curated, we estimate the cost per gigabyte at $5.56. This $167 million cost is a direct cost in that it does not include federally allowable indirect cost return (ICR). After 15 years, it is reasonable to assume that some datasets will be compressed and rarely accessed. Others may be deemed no longer valuable, e.g., because they are replaced by more accurate results. Therefore, at some point the data growth in the repository will need to be adjusted by use of strategic preservation.
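    The headline figures can be cross-checked with a few lines of arithmetic. The sketch below uses only numbers quoted in this abstract; it is not the study's cost model, whose remaining components the abstract does not enumerate.

```python
# Cross-check of the quoted figures; all inputs come from the abstract above.
PAPERS_PER_YEAR = 64_340   # papers listing NSF funding, per year (quoted)
GB_PER_DATASET = 32        # average dataset size in GB (quoted)
YEARS = 15

gb_per_year = PAPERS_PER_YEAR * GB_PER_DATASET   # ~2.06 million GB ~= 2 PB
total_gb = gb_per_year * YEARS                   # ~30.9 million GB ~= 30 PB

# The per-dataset curation fee slides from $150 toward $100 as a 3% annual
# efficiency gain compounds over the period: 150 * 0.97**14 ~= $98.
final_fee = 150 * (1 - 0.03) ** (YEARS - 1)

# Cost per GB implied by the quoted $167M total over the quoted 30 PB:
cost_per_gb = 167_000_000 / (30 * 10**6)         # ~= $5.57 (quoted: $5.56)

print(f"data per year: {gb_per_year / 1e6:.2f} PB")
print(f"15-year total: {total_gb / 1e6:.1f} PB")
print(f"final per-dataset fee: ${final_fee:.0f}")
print(f"implied cost per GB: ${cost_per_gb:.2f}")
```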