53 research outputs found

    Technology responsiveness for digital preservation: a model

    Digital preservation may be defined as the cumulative actions undertaken by an organisation or individual to ensure that digital content is usable across generations of information technology. As technological change occurs, the digital preservation community must detect relevant technology developments, determine their implications for preserving digital content, and develop timely and appropriate responses to take full advantage of progress and minimize obsolescence. This thesis discusses the results of an investigation of technology responsiveness for digital preservation. The research produced a technology response model that defines the roles, functions, and content component for technology responsiveness. The model built on the results of an exploration of the nature and meaning of technological change and an evaluation of existing technology responses that might be adapted for digital preservation. The development of the model followed the six-step process defined by constructive research methodology, an approach that is most commonly used in information technology research and that is extensible to digital preservation research. This thesis defines the term technology responsiveness as the ability to develop continually effective responses to ongoing technological change through iterative monitoring, assessment, and response using the technology response model for digital preservation.
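    The iterative monitoring, assessment, and response cycle described in this abstract can be pictured as a simple loop. The sketch below is purely illustrative and is not taken from the thesis's technology response model; the class, field, and example names are hypothetical.

```python
# Purely illustrative sketch of an iterative monitor-assess-respond cycle;
# the names below are hypothetical and not drawn from the thesis's model.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class TechnologyWatch:
    detect: Callable[[], List[str]]        # monitoring: spot new technology developments
    assess: Callable[[str], bool]          # assessment: does this affect preserved content?
    respond: Callable[[str], None]         # response: e.g. schedule a format migration
    handled: List[str] = field(default_factory=list)

    def cycle(self) -> None:
        """Run one monitoring-assessment-response iteration."""
        for development in self.detect():
            if self.assess(development):
                self.respond(development)
                self.handled.append(development)

# Hypothetical usage: flag a deprecated format and record the response.
watch = TechnologyWatch(
    detect=lambda: ["format X deprecated", "new storage medium Y released"],
    assess=lambda d: "deprecated" in d,
    respond=lambda d: print(f"plan migration for: {d}"),
)
watch.cycle()            # prints: plan migration for: format X deprecated
```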

    Proceedings of the 12th International Conference on Digital Preservation

    The 12th International Conference on Digital Preservation (iPRES) was held on November 2–6, 2015, in Chapel Hill, North Carolina, USA. There were 327 delegates from 22 countries. The program included 12 long papers, 15 short papers, 33 posters, 3 demos, 6 workshops, 3 tutorials, and 5 panels, as well as several interactive sessions and a Digital Preservation Showcase.

    Utilizing Provenance in Reusable Research Objects

    Science is conducted collaboratively, often requiring the sharing of knowledge about computational experiments. When experiments include only datasets, they can be shared using Uniform Resource Identifiers (URIs) or Digital Object Identifiers (DOIs). An experiment, however, seldom includes only datasets; more often it also includes software, its past executions, provenance, and associated documentation. The Research Object has recently emerged as a comprehensive and systematic method for aggregating and identifying the diverse elements of computational experiments. While necessary, mere aggregation is not sufficient for sharing computational experiments: other users must be able to easily recompute on these shared research objects. Computational provenance is often the key to enabling such reuse. In this paper, we show how reusable research objects can utilize provenance to correctly repeat a previous reference execution, to construct a subset of a research object for partial reuse, and to reuse existing contents of a research object for modified reuse. We describe two methods to summarize provenance that aid in understanding the contents and past executions of a research object. The first method obtains a process view by collapsing low-level system information, and the second obtains a summary graph by grouping related nodes and edges, with the goal of producing a graph view similar to the application workflow. Through detailed experiments, we show the efficacy and efficiency of our algorithms.
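    As a rough illustration of the second summarization method (grouping related provenance nodes and edges into a workflow-like summary graph), the sketch below collapses a toy provenance graph using a user-supplied grouping function. It is not the paper's implementation; the node names and the grouping rule are hypothetical.

```python
# Illustrative only: collapse a fine-grained provenance graph into a coarse
# summary graph by grouping nodes, in the spirit of the abstract's second
# summarization method. Node names and the grouping rule are hypothetical.
import networkx as nx

def summarize(prov: nx.DiGraph, group_of) -> nx.DiGraph:
    """Return a graph with one node per group and edges between distinct groups."""
    summary = nx.DiGraph()
    for node in prov.nodes:
        summary.add_node(group_of(node))
    for u, v in prov.edges:
        gu, gv = group_of(u), group_of(v)
        if gu != gv:                     # drop edges internal to a group
            summary.add_edge(gu, gv)
    return summary

# Toy provenance: low-level file and process events from a reference execution.
prov = nx.DiGraph([
    ("read:input.csv", "proc:clean.py"),
    ("proc:clean.py", "write:tmp.dat"),
    ("write:tmp.dat", "proc:plot.py"),
])
step = lambda n: "plot-step" if "plot" in n else "clean-step"
print(list(summarize(prov, step).edges))   # [('clean-step', 'plot-step')]
```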

    Managing Research Data: Gravitational Waves

    The project which led to this report was funded by JISC in 2010–2011 as part of its ‘Managing Research Data’ programme, to examine the way in which Big Science data is managed, and produce any recommendations which may be appropriate. Big science data is different: it comes in large volumes, and it is shared and exploited in ways which may differ from other disciplines. This project has explored these differences using as a case-study Gravitational Wave data generated by the LSC, and has produced recommendations intended to be useful variously to JISC, the funding council (STFC) and the LSC community. In Sect. 1 we define what we mean by ‘big science’, describe the overall data culture there, laying stress on how it necessarily or contingently differs from other disciplines. In Sect. 2 we discuss the benefits of a formal data-preservation strategy, and the cases for open data and for well-preserved data that follow from that. This leads to our recommendations that, in essence, funders should adopt rather light-touch prescriptions regarding data preservation planning: normal data management practice, in the areas under study, corresponds to notably good practice in most other areas, so that the only change we suggest is to make this planning more formal, which makes it more easily auditable, and more amenable to constructive criticism. In Sect. 3 we briefly discuss the LIGO data management plan, and pull together whatever information is available on the estimation of digital preservation costs. The report is informed, throughout, by the OAIS reference model for an open archive. Some of the report’s findings and conclusions were summarised in [1]. See the document history on page 37.

    Managing Research Data in Big Science

    The project which led to this report was funded by JISC in 2010–2011 as part of its 'Managing Research Data' programme, to examine the way in which Big Science data is managed, and produce any recommendations which may be appropriate. Big science data is different: it comes in large volumes, and it is shared and exploited in ways which may differ from other disciplines. This project has explored these differences using as a case-study Gravitational Wave data generated by the LSC, and has produced recommendations intended to be useful variously to JISC, the funding council (STFC) and the LSC community. In Sect. 1 we define what we mean by 'big science', describe the overall data culture there, laying stress on how it necessarily or contingently differs from other disciplines. In Sect. 2 we discuss the benefits of a formal data-preservation strategy, and the cases for open data and for well-preserved data that follow from that. This leads to our recommendations that, in essence, funders should adopt rather light-touch prescriptions regarding data preservation planning: normal data management practice, in the areas under study, corresponds to notably good practice in most other areas, so that the only change we suggest is to make this planning more formal, which makes it more easily auditable, and more amenable to constructive criticism. In Sect. 3 we briefly discuss the LIGO data management plan, and pull together whatever information is available on the estimation of digital preservation costs. The report is informed, throughout, by the OAIS reference model for an open archive.

    Mapping Scholarly Communication Infrastructure: A Bibliographic Scan of Digital Scholarly Communication Infrastructure

    This bibliographic scan covers a lot of ground. In it, I have attempted to capture relevant recent literature across the whole of the digital scholarly communications infrastructure. I have used that literature to identify significant projects and then document them with descriptions and basic information. Structurally, this review has three parts. In the first, I begin with a diagram showing the way the projects reviewed fit into the research workflow; then I cover a number of topics and functional areas related to digital scholarly communication. I make no attempt to be comprehensive, especially regarding the technical literature; rather, I have tried to identify major articles and reports, particularly those addressing the library community. The second part of this review is a list of projects or programs arranged by broad functional categories. The third part lists individual projects and the organizations—both commercial and nonprofit—that support them. I have identified 206 projects. Of these, 139 are nonprofit and 67 are commercial. There are 17 organizations that support multiple projects, and six of these—Artefactual Systems, Atypon/Wiley, Clarivate Analytics, Digital Science, Elsevier, and MDPI—are commercial. The remaining 11—Center for Open Science, Collaborative Knowledge Foundation (Coko), LYRASIS/DuraSpace, Educopia Institute, Internet Archive, JISC, OCLC, OpenAIRE, Open Access Button, Our Research (formerly Impactstory), and the Public Knowledge Project—are nonprofit. Funded by the Andrew W. Mellon Foundation.

    Education alignment

    This essay reviews recent developments in embedding data management and curation skills into information technology, library and information science, and research-based postgraduate courses in various national contexts. The essay also investigates means of joining up formal education with professional development training opportunities more coherently. The potential for using professional internships as a means of improving communication and understanding between disciplines is also explored. A key aim of this essay is to identify what level of complementarity is needed across various disciplines to most effectively and efficiently support the entire data curation lifecycle.