
    Learning from Las Vegas: Adapting Workflows for Managing Born-Digital Design Records

    Architecture collections have been a mainstay for Special Collections and Archives at the University of Nevada, Las Vegas (UNLV SCA) since the late 1970s. Until 2017, most architecture collections in Special Collections and Archives consisted of physical records. In recent years, curators began acquiring architecture collections with significant born-digital content, which presents challenges distinct from those of other types of born-digital materials. This case study discusses how staff adapted existing workflows for born-digital materials to process and describe two collections composed of born-digital architecture and design records. The authors also describe how UNLV SCA provides access to proprietary design files through the creation of access surrogates. Lessons learned from adapting workflows and processing these collections are detailed, as well as future steps for continuing the development of workflows and policies for managing born-digital architecture and design records.

    Processing Internal Hard Drives

    As archives receive born-digital materials more and more frequently, the challenge of dealing with a variety of hardware and formats is becoming omnipresent. This paper outlines a case study that provides a practical, step-by-step guide to archiving files on legacy hard drives dating from the early 1990s to the mid-2000s. The project used a digital forensics approach to provide access to the contents of the hard drives without compromising the integrity of the files. Relying largely on open source software, the project imaged each hard drive in its entirety, then identified folders and individual files of potential high use for upload to the University of Texas Digital Repository. The project also experimented with data visualizations in order to give researchers, who would not have access to the full disk images, a sense of the contents and context of the full drives. The greatest philosophical challenge was answering the question of whether scholars should be able to view deleted materials on the drives that donors may not have realized were accessible.
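
    A minimal sketch of the image-then-verify step described above, assuming a raw source path and output file name for illustration; the project itself relied on dedicated open source forensics tooling rather than this simplified copy loop.

        import hashlib

        CHUNK = 1024 * 1024  # read 1 MiB at a time so large drives don't exhaust memory

        def image_drive(source_path, image_path):
            """Copy a source byte-for-byte and return the SHA-256 of what was read."""
            digest = hashlib.sha256()
            with open(source_path, "rb") as src, open(image_path, "wb") as dst:
                for chunk in iter(lambda: src.read(CHUNK), b""):
                    digest.update(chunk)
                    dst.write(chunk)
            return digest.hexdigest()

        def hash_file(path):
            """Re-hash the finished image to confirm the copy is faithful."""
            digest = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(CHUNK), b""):
                    digest.update(chunk)
            return digest.hexdigest()

        if __name__ == "__main__":
            # "legacy_drive.bin" is a placeholder for a raw device such as /dev/sdb
            expected = image_drive("legacy_drive.bin", "legacy_drive.img")
            assert hash_file("legacy_drive.img") == expected, "fixity mismatch"
            print("image verified:", expected)

    Recording the hash at imaging time, then re-checking it on the stored image, is what lets later access copies be derived without compromising the integrity of the original files.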

    Digital Preservation Services : State of the Art Analysis

    Research report funded by the DC-NET project. An overview of the state of the art in service provision for digital preservation and curation, focusing on the areas where gaps must be bridged between e-Infrastructures and efficient, forward-looking digital preservation services. Based on a desktop study and a rapid analysis of some 190 currently available tools and services for digital preservation, the deliverable provides a high-level view of the range of instruments currently on offer to support various functions within a preservation system. European Commission, FP7. Peer-reviewed.

    Balancing Care and Authenticity in Digital Collections: A Radical Empathy Approach To Working With Disk Images

    Both traditional recordkeeping and radical empathy frameworks ask us to carefully consider: the presence of sensitive information within digital content; those who created, are captured by, and are affected by a record (or the absence of that record); and the consequences of retaining or discarding that information. However, automated digital archiving workflows – in order to handle the scale and volume of digital content – discourage contextual and empathetic decision-making in favour of preselected decisions. This paper explores the implications for labor and privacy of the common practice of “take and keep it all” within the context of radical empathy. Practices that promote retention of complete disk images and encourage the creation of access copies with sensitive data redacted leave privacy vulnerable: the decision to discard must be deliberate and, often, must be enacted manually, outside of the workflow. The motivation for this model is that the researcher, archivist, curator, or librarian can always return to the original disk image to demonstrate authenticity, allow for emulation or access, or generate new access copies. However, this practice poses ethical privacy concerns and does not demonstrate care. We recognize that the resources necessary to review disk images and make contextual decisions that balance both privacy and authenticity are sizable due to the manual nature of this work: this places strain and further labor on staff and practitioners using current digital archival and preservation tools. We proffer that there is a need to develop tools that aid in efficient and explicit redaction while still allowing for needed contextual and empathetic decision-making. Further, we propose that more staff time is required to make these decisions, and if that staff time is not available, then the institution should consider itself incapable of ethically stewarding the content and protecting those affected. Pre-print first published online 01/24/202
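
    As a rough sketch of the kind of tooling the authors call for: surface candidate sensitive spans in a disk image for a human reviewer to judge in context, rather than silently keeping or discarding them. The SSN-style pattern and the file name are illustrative assumptions, not anything taken from the paper.

        import re

        # Example pattern only: US SSN-like strings. Real review would use many
        # patterns (and still rely on a human for contextual judgment).
        SENSITIVE = re.compile(rb"\b\d{3}-\d{2}-\d{4}\b")

        def flag_candidates(image_path, context=32):
            """Yield (offset, surrounding bytes) for each match, for manual review."""
            with open(image_path, "rb") as img:
                data = img.read()  # fine for small test images; stream in practice
            for m in SENSITIVE.finditer(data):
                start = max(0, m.start() - context)
                yield m.start(), data[start:m.end() + context]

        if __name__ == "__main__":
            for offset, snippet in flag_candidates("legacy_drive.img"):
                # A reviewer decides whether each hit gets redacted in the access copy.
                print(f"offset {offset}: {snippet!r}")

    The point of the design is that the tool accelerates discovery of sensitive content but leaves the retention decision, which the paper argues must be deliberate, to a person.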

    EJT editorial standard for the semantic enhancement of specimen data in taxonomy literature

    This paper describes a set of guidelines for the citation of zoological and botanical specimens in the European Journal of Taxonomy. The guidelines stipulate controlled vocabularies and precise formats for presenting the specimens examined in a taxonomic publication, which allow the rich data associated with the primary research material to be harvested, distributed and interlinked online via international biodiversity data aggregators. Herein we explain how the EJT editorial standard was defined and how this initiative fits into the journal's project to semantically enhance its publications using the Plazi TaxPub DTD extension. By establishing a standardised format for the citation of taxonomic specimens, the journal intends to widen the distribution of, and improve accessibility to, the data it publishes. Authors who conform to these guidelines will benefit from higher visibility and new ways of visualising their work. In a wider context, we hope that other taxonomy journals will adopt this approach, adapting their working methods to enable domain-specific text mining to take place. If specimen data can be efficiently cited, harvested and linked to wider resources, we propose that there is also the potential to develop alternative metrics for assessing impact and productivity within the natural sciences.
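
    For illustration only, a sketch of what machine-harvestable specimen data can look like once a citation format is standardised; the field names and values below are invented and do not reproduce the EJT controlled vocabularies.

        from dataclasses import dataclass, asdict
        import json

        @dataclass
        class SpecimenCitation:
            collection_code: str   # e.g. an institutional collection acronym
            catalog_number: str
            country: str
            locality: str
            collector: str
            date: str              # ISO 8601 keeps citations machine-comparable

        # A consistent structure like this is what lets aggregators harvest and
        # interlink "material examined" sections across publications.
        holotype = SpecimenCitation("MNHN", "IU-2013-1234", "France",
                                    "Fontainebleau", "A. Collector", "2012-06-15")
        print(json.dumps(asdict(holotype), indent=2))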

    Audiovisual Metadata Platform Pilot Development (AMPPD), Final Project Report

    This report documents the experience and findings of the Audiovisual Metadata Platform Pilot Development (AMPPD) project, which has worked to enable more efficient generation of metadata to support discovery and use of digitized and born-digital audio and moving image collections. The AMPPD project was carried out by partners Indiana University Libraries, AVP, University of Texas at Austin, and New York Public Library between 2018 and 2021.

    Discovery Tools and Local Metadata Requirements in Academic Libraries

    As the second decade of the twenty-first century commences, academic librarians who work to promote collection access must not only contend with a vast array of content available in a wide range of formats, but must also ensure that new technologies developed to accommodate user search behaviors yield satisfactory outcomes. Next-generation discovery tools are designed to streamline the search process and facilitate better search results by incorporating metadata from proprietary and local collections and providing relevancy-ranked results. This paper investigates the implications of discovery tool use for accessing materials housed in institutional repositories and special collections, in particular how the discovery of these materials depends on local metadata creation practices. It surveys current research on metadata quality issues that may put unique local collections at risk of being overlooked in meta-search relevancy rankings, and considers ways in which academic libraries can address this issue, as well as areas for future research.

    A Taxonomy of Data Grids for Distributed Data Sharing, Management and Processing

    Data Grids have been adopted as the platform for scientific communities that need to share, access, transport, process and manage large data collections distributed worldwide. They combine high-end computing technologies with high-performance networking and wide-area storage management techniques. In this paper, we discuss the key concepts behind Data Grids and compare them with other data sharing and distribution paradigms such as content delivery networks, peer-to-peer networks and distributed databases. We then provide comprehensive taxonomies that cover various aspects of architecture, data transportation, data replication, and resource allocation and scheduling. Finally, we map the proposed taxonomy to various Data Grid systems, not only to validate the taxonomy but also to identify areas for future exploration. Through this taxonomy, we aim to categorise existing systems in order to better understand their goals and methodology, which helps evaluate their applicability to similar problems. The taxonomy also provides a "gap analysis" of the area, through which researchers can identify new issues for investigation. We hope that the proposed taxonomy and mapping also provide an easy way for new practitioners to understand this complex area of research. Comment: 46 pages, 16 figures, Technical Report
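
    A toy sketch of the gap-analysis idea, assuming made-up system names and category labels rather than the paper's actual taxonomy: once surveyed systems are mapped onto taxonomy categories, the uncovered categories, and hence candidate research gaps, fall out directly.

        # Placeholder taxonomy categories (not the paper's real ones).
        TAXONOMY = {"replication: tree", "replication: hybrid",
                    "scheduling: data-aware", "transport: striped"}

        # Placeholder systems mapped onto the categories they occupy.
        SYSTEMS = {
            "GridSystemA": {"replication: tree", "transport: striped"},
            "GridSystemB": {"replication: tree", "scheduling: data-aware"},
        }

        covered = set().union(*SYSTEMS.values())
        print("uncovered categories:", TAXONOMY - covered)  # candidate gaps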

    Report of the Stanford Linked Data Workshop

    The Stanford University Libraries and Academic Information Resources (SULAIR), with the Council on Library and Information Resources (CLIR), conducted a week-long workshop on the prospects for a large-scale, multi-national, multi-institutional prototype of a Linked Data environment for discovery of and navigation among the rapidly, chaotically expanding array of academic information resources. As preparation for the workshop, CLIR sponsored a survey by Jerry Persons, Chief Information Architect emeritus of SULAIR, that was originally published for workshop participants as background and is now publicly available. The original intention of the workshop was to devise a plan for such a prototype. However, such was the diversity of knowledge, experience, and views of the potential of Linked Data approaches that the workshop participants turned to two more fundamental goals: building common understanding and enthusiasm on the one hand, and identifying opportunities and challenges to be confronted in the preparation of the intended prototype and its operation on the other. In pursuit of those objectives, the workshop participants produced:
    1. a value statement addressing the question of why a Linked Data approach is worth prototyping;
    2. a manifesto for Linked Libraries (and Museums and Archives and …);
    3. an outline of the phases in a life cycle of Linked Data approaches;
    4. a prioritized list of known issues in generating, harvesting & using Linked Data;
    5. a workflow with notes for converting library bibliographic records and other academic metadata to URIs (a sketch of this step follows below);
    6. examples of potential “killer apps” using Linked Data; and
    7. a list of next steps and potential projects.
    This report includes a summary of the workshop agenda, a chart showing the use of Linked Data in cultural heritage venues, and short biographies and statements from each of the participants.
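
    A hedged sketch of item 5 above, the records-to-URIs workflow, using the third-party rdflib library (pip install rdflib). The base namespace and the simplified record dict are assumptions for illustration; converting real MARC records involves far more field mapping than shown here.

        from rdflib import Graph, Literal, URIRef
        from rdflib.namespace import DCTERMS, RDF

        BASE = "https://example.org/catalog/"  # assumed institutional namespace

        # A drastically simplified bibliographic record for illustration.
        record = {"id": "bib0001",
                  "title": "Report of the Stanford Linked Data Workshop",
                  "creator": "SULAIR/CLIR"}

        g = Graph()
        subject = URIRef(BASE + record["id"])  # the record's new Linked Data URI
        g.add((subject, RDF.type, DCTERMS.BibliographicResource))
        g.add((subject, DCTERMS.title, Literal(record["title"])))
        g.add((subject, DCTERMS.creator, Literal(record["creator"])))

        # Emit Turtle so the minted URI and its triples can be inspected or loaded.
        print(g.serialize(format="turtle"))

    Minting a stable URI per record, then describing it with shared vocabularies such as Dublin Core terms, is what makes the records navigable and linkable across institutions in the way the workshop envisioned.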