
    Business Process Risk Management and Simulation Modelling for Digital Audio-Visual Media Preservation.

    Digitised and born-digital Audio-Visual (AV) content presents new challenges for preservation and Quality Assurance (QA) to ensure that cultural heritage is accessible for the long term. Digital archives have developed strategies for avoiding, mitigating and recovering from digital AV loss using IT-based systems, involving QA tools before ingesting files into the archive and utilising file-based replication to repair files that may be damaged while in the archive. However, while existing strategies are effective for addressing issues related to media degradation, issues such as format obsolescence and failures in processes and people pose significant risk to the long-term value of digital AV content. We present a Business Process Risk management framework (BPRisk) designed to support preservation experts in managing risks to long-term digital media preservation. This framework combines workflow and risk specification within a single risk management process designed to support continual improvement of workflows. A semantic model has been developed that allows the framework to incorporate expert knowledge from both preservation and security experts in order to intelligently aid workflow designers in creating and optimising workflows. The framework also provides workflow simulation functionality, allowing users to a) understand the key vulnerabilities in the workflows, b) target investments to address those vulnerabilities, and c) minimise the economic consequences of risks. The application of the BPRisk framework is demonstrated on a use case with the Austrian Broadcasting Corporation (ORF), discussing simulation results and an evaluation against the outcomes of executing the planned workflow.
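    To make the simulation idea concrete, the sketch below runs a Monte Carlo simulation over a toy preservation workflow in which each step has a failure probability and a cost of failure. The step names, probabilities and cost figures are invented for illustration; they are not taken from the BPRisk framework or the ORF case study.

```python
import random

# Illustrative only: step names, failure probabilities and cost impacts are
# assumptions for this sketch, not values from BPRisk or the ORF workflow.
WORKFLOW = [
    {"step": "quality_check",    "p_fail": 0.02, "cost_of_failure": 500.0},
    {"step": "format_migration", "p_fail": 0.05, "cost_of_failure": 2000.0},
    {"step": "ingest",           "p_fail": 0.01, "cost_of_failure": 800.0},
    {"step": "replication",      "p_fail": 0.03, "cost_of_failure": 1500.0},
]

def simulate_run(workflow):
    """Run the workflow once; return (total failure cost, failed step names)."""
    cost, failed = 0.0, []
    for step in workflow:
        if random.random() < step["p_fail"]:
            cost += step["cost_of_failure"]
            failed.append(step["step"])
    return cost, failed

def simulate(workflow, runs=10_000):
    """Monte Carlo estimate of expected failure cost and per-step failure rates."""
    total = 0.0
    failures = {s["step"]: 0 for s in workflow}
    for _ in range(runs):
        cost, failed = simulate_run(workflow)
        total += cost
        for name in failed:
            failures[name] += 1
    return total / runs, {name: count / runs for name, count in failures.items()}

if __name__ == "__main__":
    expected_cost, rates = simulate(WORKFLOW)
    print(f"Expected failure cost per run: {expected_cost:.2f}")
    for step, rate in sorted(rates.items(), key=lambda kv: -kv[1]):
        print(f"  {step}: observed failure rate {rate:.3f}")
```

    Ranking steps by observed failure rate and expected cost is one simple way to decide where a targeted investment (for example, an extra QA check before format migration) would pay off most.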

    Looking Forward to Look Back: Digital Preservation Planning

    Digital information resources are a vitally important and increasingly large component of academic libraries’ collection and preservation responsibilities. This includes content converted to digital form as well as content originating in digital form (born-digital). Preserving digital material, such as social media and websites, is essential for ensuring that future generations know everyone’s story, especially those groups which have been historically underrepresented in official records. This presentation will detail the steps undertaken by a digital preservation task force to first assess the weaknesses in current practice, and then develop a plan to implement a digital preservation policy and workflow. As part of the project, the task force compiled and evaluated digital preservation policies from several academic libraries, created an RFI, and invited vendors to campus. Initiated by the library, digital preservation involves many stakeholders on campus who were included in this process. Even with varying resources and technical expertise, attendees will be empowered to start the process of creating their own digital preservation policy and plan. Addressing digital preservation is daunting, but the first step is to act.

    Demystifying Digital Records Processing... Step by Step, Byte by Byte

    After briefly providing some background information on the acquisition of a digital processing workstation and the development of digital records processing workflows, this hands-on work session will provide an overview of at least one possible workflow for accessioning and processing born-digital archival records. Attendees will walk through the process themselves and learn how to use specific (free) tools to better understand and process the materials at hand. Some possible tasks could include documenting basic file structure for ingesting and processing digital files, generating basic metadata about files (e.g. identifying the number, size, and types/formats of files), identifying duplicate files and/or empty directories, and creating checksums (as a baseline for preservation). Come and take the next step with us.
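    The tasks listed above map naturally onto a small scripted survey of an accession directory. The sketch below is one possible way to do this with the Python standard library, assuming a hypothetical accession folder name; it is not the specific free toolset used in the session.

```python
import hashlib
import os
from collections import defaultdict

def sha256sum(path, chunk_size=1 << 20):
    """Compute a SHA-256 checksum, reading the file in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def survey(root):
    """Walk an accession directory and gather basic descriptive metadata."""
    counts_by_ext = defaultdict(int)   # file counts per extension
    total_bytes = 0
    checksums = {}                     # path -> sha256 (baseline for preservation)
    by_digest = defaultdict(list)      # sha256 -> [paths], used to spot duplicates
    empty_dirs = []

    for dirpath, dirnames, filenames in os.walk(root):
        if not dirnames and not filenames:
            empty_dirs.append(dirpath)
        for name in filenames:
            path = os.path.join(dirpath, name)
            ext = os.path.splitext(name)[1].lower() or "<none>"
            counts_by_ext[ext] += 1
            total_bytes += os.path.getsize(path)
            digest = sha256sum(path)
            checksums[path] = digest
            by_digest[digest].append(path)

    duplicates = {d: paths for d, paths in by_digest.items() if len(paths) > 1}
    return {"counts_by_extension": dict(counts_by_ext),
            "total_bytes": total_bytes,
            "checksums": checksums,
            "duplicates": duplicates,
            "empty_directories": empty_dirs}

if __name__ == "__main__":
    report = survey("accession_2024_001")   # hypothetical accession directory
    print(report["counts_by_extension"], report["total_bytes"])
```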

    Designing an automated prototype tool for preservation quality metadata extraction for ingest into digital repository

    We present a viable framework for the automated extraction of preservation-quality metadata, adjusted to meet the needs of ingest into digital repositories. It has three distinctive features: wide coverage, specialisation and emphasis on quality. Wide coverage is achieved through the use of a distributed system of tool repositories, which helps to implement it over a broad range of document object types. Specialisation is maintained through the selection of the most appropriate metadata extraction tool for each case, based on the identification of the digital object genre. And quality is sustained by introducing control points at selected stages of the workflow of the system. The integration of these three features as components in the ingest of material into digital repositories is a defining step ahead in the current quest for improved management of digital resources.
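    The sketch below illustrates the selection-by-genre idea together with a quality control point after extraction. The genre names, extractor stubs and required fields are assumptions made for illustration; the framework itself selects among external tools drawn from distributed tool repositories.

```python
# Minimal sketch: pick an extractor by digital object genre, then apply a
# quality control point. Genres, extractors and required fields are invented.

REQUIRED_FIELDS = {"title", "creator", "date"}

def extract_article_metadata(path):
    # Placeholder: a real deployment would invoke a specialised tool here.
    return {"title": "Example article", "creator": "Unknown", "date": "2009"}

def extract_dataset_metadata(path):
    return {"title": "Example dataset", "creator": "Unknown"}

EXTRACTORS = {
    "journal-article": extract_article_metadata,
    "dataset": extract_dataset_metadata,
}

def quality_check(record):
    """Control point: flag records missing fields needed for preservation."""
    missing = REQUIRED_FIELDS - record.keys()
    return len(missing) == 0, missing

def extract(path, genre):
    extractor = EXTRACTORS.get(genre)
    if extractor is None:
        raise ValueError(f"No extractor registered for genre {genre!r}")
    record = extractor(path)
    ok, missing = quality_check(record)
    if not ok:
        record["_needs_review"] = sorted(missing)   # route to manual QA
    return record

print(extract("paper.pdf", "journal-article"))
print(extract("survey.csv", "dataset"))
```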

    The LIFE2 final project report

    Executive summary: The first phase of LIFE (Lifecycle Information For E-Literature) made a major contribution to understanding the long-term costs of digital preservation, an essential step in helping institutions plan for the future. The LIFE work models the digital lifecycle and calculates the costs of preserving digital information for future years. Organisations can apply this process in order to understand costs and plan effectively for the preservation of their digital collections. The second phase of the LIFE Project, LIFE2, has refined the LIFE Model, adding three new exemplar Case Studies to further build upon LIFE1. LIFE2 is an 18-month JISC-funded project between UCL (University College London) and The British Library (BL), supported by the LIBER Access and Preservation Divisions. LIFE2 began in March 2007, and completed in August 2008. The LIFE approach has been validated by a full independent economic review and has successfully produced an updated lifecycle costing model (LIFE Model v2) and digital preservation costing model (GPM v1.1). The LIFE Model has been tested with three further Case Studies including institutional repositories (SHERPA-LEAP), digital preservation services (SHERPA DP) and a comparison of analogue and digital collections (British Library Newspapers). These Case Studies were useful for scenario building and have fed back into both the LIFE Model and the LIFE Methodology. The experiences of implementing the Case Studies indicated that enhancements made to the LIFE Methodology, Model and associated tools have simplified the costing process. Mapping a specific lifecycle to the LIFE Model isn’t always a straightforward process. The revised and more detailed Model has reduced ambiguity. The costing templates, which were refined throughout the process of developing the Case Studies, ensure clear articulation of both working and cost figures, and facilitate comparative analysis between different lifecycles. The LIFE work has been successfully disseminated throughout the digital preservation and HE communities. Early adopters of the work include the Royal Danish Library, State Archives and the State and University Library, Denmark, as well as the LIFE2 Project partners. Furthermore, interest in the LIFE work has not been limited to these sectors, with interest in LIFE expressed by local government, records offices, and private industry. LIFE has also provided input into the LC-JISC Blue Ribbon Task Force on the Economic Sustainability of Digital Preservation. Moving forward, our ability to cost the digital preservation lifecycle will require further investment in costing tools and models. Developments in estimative models will be needed to support planning activities, both at a collection management level and at a later preservation planning level once a collection has been acquired. In order to support these developments a greater volume of raw cost data will be required to inform and test new cost models. This volume of data cannot be supported via the Case Study approach, and the LIFE team would suggest that a software tool would provide the volume of costing data necessary to provide a truly accurate predictive model.
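    As a toy illustration of lifecycle costing, the sketch below separates one-off costs from recurring annual costs and sums them over a planning horizon. The stage names only loosely echo the kinds of stages a lifecycle model covers, and every figure is invented; none of the values or formulas come from the LIFE Model or its Case Studies.

```python
# Illustrative lifecycle cost sketch. All stage names and figures are
# hypothetical examples, not values from LIFE2.

ONE_OFF_COSTS = {            # incurred once per collection (GBP, hypothetical)
    "acquisition": 5_000,
    "ingest": 3_000,
    "metadata": 4_000,
}
ANNUAL_COSTS = {             # recurring per year (GBP, hypothetical)
    "storage": 1_200,
    "preservation_planning": 800,
    "access": 600,
}

def lifecycle_cost(years, one_off=ONE_OFF_COSTS, annual=ANNUAL_COSTS):
    """Total cost of keeping a collection for `years` years."""
    return sum(one_off.values()) + years * sum(annual.values())

for horizon in (5, 10, 20):
    print(f"{horizon:>2} years: £{lifecycle_cost(horizon):,}")
```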

    Transfer and Inventory Components of Developing Repository Services

    4th International Conference on Open Repositories. This presentation was part of the session: Conference Presentations. Date: 2009-05-19, 10:00 AM – 11:30 AM. At the Library of Congress, our most basic data management needs are not surprising: How do we know what we have, where it is, and who it belongs to? How do we get files, new and legacy, from where they are to where they need to be? And how do we record and track events in the life cycle of our files? This presentation describes current work at the Library in implementing tools to meet these needs as a set of modular services -- Transfer, Transport, and Inventory -- that will fit into a larger scheme of repository services to be developed. These modular services do not equate to everything needed to call a system a repository. But this is a set of services that covers many aspects of "ingest" and "archiving": the registry of a deposit activity, the controlled transfer and transport of files, and an inventory system that can be used to track files, record events in those files' life cycles, and provide basic file-level discovery and auditing. This is the first stage in the development of a suite of tools to help the Library ensure long-term stewardship of its digital assets.
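    The sketch below shows the general shape of an inventory record of the kind described: a file entry with ownership and fixity information plus an append-only list of life-cycle events. The field names and event types are assumptions for illustration, not the Library of Congress data model.

```python
# Hypothetical inventory record with event tracking; not the LoC schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Event:
    event_type: str            # e.g. "transfer", "fixity-check", "migration"
    outcome: str = "success"   # e.g. "success", "failure"
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

@dataclass
class InventoryItem:
    path: str
    sha256: str
    owner: str                          # who the file belongs to
    events: list = field(default_factory=list)

    def record(self, event_type, outcome="success"):
        """Append a life-cycle event to this item's history."""
        self.events.append(Event(event_type, outcome))

item = InventoryItem(path="master/av/tape_0001.mxf",
                     sha256="0" * 64,   # placeholder digest
                     owner="Recorded Sound Section")
item.record("transfer")
item.record("fixity-check")
print(item)
```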

    Digital Dixie: Processing Born Digital Materials in the Southern Historical Collection

    Archives are increasingly accessioning digital materials as part of collections of personal papers. Proper preservation of these digital items requires archivists to add several new steps to their processing workflow. This paper discusses the steps developed to remove digital files from the media on which they are housed in the Southern Historical Collection of the University of North Carolina at Chapel Hill's Wilson Library. The paper is divided into three major sections. The first section examines literature related to digital archaeology, computer forensics and digital preservation. The second section describes the Southern Historical Collection, its technological environment and the process of developing a workflow. The paper concludes with a discussion of lessons learned from the project, unresolved issues and potential solutions.

    Curation and preservation of complex data: North Carolina Geospatial Data Archiving Project

    The North Carolina Geospatial Data Archiving Project (NCGDAP) is a three-year joint effort of the North Carolina State University Libraries and the North Carolina Center for Geographic Information and Analysis focused on collection and preservation of digital geospatial data resources from state and local government agencies. NCGDAP is being undertaken in partnership with the Library of Congress under the National Digital Information Infrastructure and Preservation Program (NDIIPP). “Digital geospatial data” consists of digital information that identifies the geographic location and characteristics of natural or constructed features and boundaries on the earth. Such data resources include geographic information systems (GIS) data sets, digitized maps, remote sensing data resources such as digital aerial photography, and tabular data that are tied to specific locations. These complex data objects do not suffer well from neglect, and long-term preservation will involve some combination of format migration and retention of critical documentation. While the main focus of NCGDAP is on organizational issues related to the engagement of spatial data infrastructure in the process of data archiving--with the demonstration repository seen more as a catalyst for discussion rather than an end in itself--this paper focuses more narrowly on the technical challenges associated with developing an ingest workflow and archive development process. New preservation challenges associated with emergent content forms are also presented.

    Digital Preservation Services: State of the Art Analysis

    Research report funded by the DC-NET project. An overview of the state of the art in service provision for digital preservation and curation. Its focus is on the areas where gaps need to be bridged between e-Infrastructures and efficient, forward-looking digital preservation services. Based on a desktop study and a rapid analysis of some 190 currently available tools and services for digital preservation, the deliverable provides a high-level view of the range of instruments currently on offer to support various functions within a preservation system. European Commission, FP7. Peer-reviewed.

    Expressing the tacit knowledge of a digital library system as linked data

    Library organizations have enthusiastically undertaken semantic web initiatives, in particular publishing data as linked data. Nevertheless, various surveys report the experimental nature of these initiatives and the difficulty consumers face in re-using the data. These barriers hinder the use of linked datasets as an infrastructure that enhances the library and related information services. This paper presents an approach for encoding, as a Linked Vocabulary, the "tacit" knowledge of the information system that manages the data source. The objective is to improve the interpretation of the meaning of published linked datasets. We analyzed a digital library system, as a case study, for prototyping the "semantic data management" method, in which data and its knowledge are natively managed, taking into account the linked data pillars. The ultimate objective of semantic data management is to curate consumers' correct interpretation of data and to facilitate its proper re-use. The prototype defines the ontological entities representing the knowledge of the digital library system that is stored neither in the data source nor in the existing ontologies related to the system's semantics. We therefore present the local ontology and its matching with existing ontologies, Preservation Metadata Implementation Strategies (PREMIS) and Metadata Objects Description Schema (MODS), and we discuss linked data triples prototyped from the legacy relational database using the local ontology. We show how semantic data management can deal with the inconsistency of system data, and we conclude that a specific change in the system developer's mindset is necessary for extracting and "codifying" the tacit knowledge needed to improve the data interpretation process.
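    A minimal sketch of publishing one row from a legacy relational database as linked data triples with rdflib is shown below. The PREMIS and MODS namespace URIs, the example row and the property choices are assumptions for illustration; a real mapping would follow the paper's local ontology and its alignment with those vocabularies.

```python
# Sketch: turn a legacy database row into RDF triples with rdflib.
# Namespace URIs and property choices below are assumptions, not the
# mapping defined by the paper's local ontology.

from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, DCTERMS

PREMIS = Namespace("http://www.loc.gov/premis/rdf/v3/")   # assumed namespace
MODS   = Namespace("http://www.loc.gov/mods/rdf/v1#")     # assumed namespace
BASE   = Namespace("http://example.org/dl/")               # hypothetical base URI

# Row as it might come out of the legacy relational database (invented).
row = {"id": "obj-42", "title": "Field recording, 1962", "format": "audio/x-wav"}

g = Graph()
g.bind("premis", PREMIS)
g.bind("mods", MODS)

obj = BASE[row["id"]]
g.add((obj, RDF.type, PREMIS.Object))
g.add((obj, DCTERMS.title, Literal(row["title"])))
g.add((obj, DCTERMS["format"], Literal(row["format"])))

print(g.serialize(format="turtle"))
```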