68 research outputs found

    Disaster planning for digital repositories

    Full text link
    This study examines how digital repositories with a preservation mandate are engaging in disaster planning, particularly in relation to their pursuit of trusted digital repository status. For those that are engaging in disaster planning, the study examines the creation of formal disaster response and recovery plans. Findings indicate that the process of going through an audit for certification as a trusted repository provides the incentive needed for the creation of formalized disaster planning documentation, and that repositories struggle with making their documentation available. This study also finds several significant obstacles with regard to the creation of formal disaster planning documentation, including the effort required to get buy-in from different functional areas within the organization, difficulty collaborating with the IT department, and the amount of time required to complete the documentation. Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/106841/1/14505001058_ftp.pd

    Disaster Planning and Trustworthy Digital Repositories

    Get PDF
    Master's Thesis. The goal of this study is to understand whether digital repositories that have a preservation mandate are engaging in disaster planning, particularly in relation to their pursuit of trusted digital repository status. For those that are engaging in disaster planning, the study examines the creation of formal disaster response and recovery plans, finding that in most cases the process of going through an audit for certification as a trusted repository provides the impetus for the creation of formalized disaster planning documentation. This paper also discusses obstacles that repositories encounter and finds that most repositories struggle with making their documentation available. https://deepblue.lib.umich.edu/bitstream/2027.42/137664/1/Frank_MSI_Thesis_DeepBlue.pdf

    AJPM Focus

    Get PDF
    Introduction: There is limited recent information regarding the impact of interpersonal violence on an individual's non-health-related experiences and attainment, including criminal activity, education, employment, family status, housing, income, quality of life, or wealth. This study aimed to identify publicly available representative data sources to measure the socioeconomic impact of experiencing interpersonal violence in the U.S. Methods: In 2022, the authors reviewed data sources indexed in Data.gov, the Inter-university Consortium for Political and Social Research data archive, and the U.S. Census Bureau's Federal Statistical Research Data Center network to identify sources that reported both nonfatal violence exposure and socioeconomic status (or data sources linking opportunities to achieve both measures) over time (i.e., longitudinal/repeated cross-sections) at the individual level. Relevant data sources were characterized in terms of data type (e.g., survey), violence measure type (e.g., intimate partner violence), socioeconomic measure type (e.g., income), data years, and geographic coverage. Results: Sixteen data sources were identified. Adverse childhood experiences, intimate partner violence, and sexual violence were the most common types of violence faced. Income, education, and family status were the most common socioeconomic measures. Linked administrative data offered the broadest and most in-depth analytical opportunities. Conclusions: Currently, linked administrative data appears to offer the most comprehensive opportunities to examine the long-term impact of violence on individuals' livelihoods. This type of data infrastructure may provide cost-effective research opportunities to better understand the elements of the economic burden of violence and improve targeting of prevention strategies. CC999999/ImCDC/Intramural CDC HHS, United States

    How Important Are Data Curation Activities to Researchers? Gaps and Opportunities for Academic Libraries

    Get PDF
    Introduction: Data curation may be an emerging service for academic libraries, but researchers actively “curate” their data in a number of ways—even if terminology may not always align. Building on past user-needs assessments performed via survey and focus groups, the authors sought direct input from researchers on the importance and utilization of specific data curation activities. Methods: Between October 21, 2016, and November 18, 2016, the study team held focus groups with 91 participants at six different academic institutions to determine which data curation activities were most important to researchers, which activities were currently underway for their data, and how satisfied they were with the results. Results: Researchers are actively engaged in a variety of data curation activities, and while they considered most data curation activities to be highly important, a majority of the sample reported dissatisfaction with the current state of data curation at their institution. Discussion: Our findings demonstrate specific gaps and opportunities for academic libraries to focus their data curation services to more effectively meet researcher needs. Conclusion: Research libraries stand to benefit their users by emphasizing, investing in, and/or heavily promoting the highly valued services that may not currently be in use by many researchers.

    Planning for the Lifecycle Management and Long-Term Preservation of Research Data: A Federated Approach

    Get PDF
    Outcomes of the grant are archived here. The “data deluge” is a recent but increasingly well-understood phenomenon of scientific and social inquiry. Large-scale research instruments extend our observational power by many orders of magnitude but at the same time generate massive amounts of data. Researchers work feverishly to document and preserve changing or disappearing habitats, cultures, languages, and artifacts, resulting in volumes of media in various formats. New software tools mine a growing universe of historical and modern texts and connect the dots in our semantic environment. Libraries, archives, and museums undertake digitization programs, creating broad access to unique cultural heritage resources for research. Global-scale research collaborations with hundreds or thousands of participants drive the creation of massive amounts of data, most of which cannot be recreated if lost. The University of Kansas (KU) Libraries, in collaboration with two partners, the Greater Western Library Alliance (GWLA) and the Great Plains Network (GPN), received an IMLS National Leadership Grant designed to leverage collective strengths and create a proposal for a scalable and federated approach to the lifecycle management of research data based on the needs of GPN and GWLA member institutions. Institute of Museum and Library Services LG-51-12-0695-1

    Results of the Fall 2016 Researcher Engagement Sessions

    Get PDF
    “Planning the Data Curation Network” funded 2016-2017 by the Alfred P. Sloan Foundation grant G-2016-704

    Committing to Data Quality Review

    Get PDF
    Amid the pressure and enthusiasm for researchers to share data, a rapidly growing number of tools and services have emerged. What do we know about the quality of these data? Why does quality matter? And who should be responsible for data quality? We believe an essential measure of data quality is the ability to engage in informed reuse, which requires that data are independently understandable. In practice, this means that data must undergo quality review, a process whereby data and associated files are assessed and required actions are taken to ensure files are independently understandable for informed reuse. This paper explains what we mean by data quality review, what measures can be applied to it, and how it is practiced in three domain-specific archives. We explore a selection of other data repositories in the research data ecosystem, as well as the roles of researchers, academic libraries, and scholarly journals in regard to their application of data quality measures in practice. We end with thoughts about the need to commit to data quality and who might be able to take on those tasks.