    Making sense: talking data management with researchers

    Incremental is one of eight projects in the JISC Managing Research Data (MRD) programme funded to identify institutional requirements for digital research data management and to pilot relevant infrastructure. Our findings concur with those of the other MRD projects, as well as with several previous studies. We found that many researchers: (i) organise their data in an ad hoc fashion, posing difficulties with retrieval and re-use; (ii) store their data on all kinds of media without always considering security and back-up; (iii) are positive about data sharing in principle though reluctant in practice; and (iv) believe back-up is equivalent to preservation.

    The key difference between our approach and that of the other MRD projects is the type of infrastructure we are piloting. While the majority of these projects focus on developing technical solutions, we are focusing on the need for ‘soft’ infrastructure, such as one-to-one tailored support, training, and easy-to-find, concise guidance that breaks down some of the barriers information professionals have unintentionally built through their use of specialist terminology.

    We are employing a bottom-up approach because we believe that to support the step-by-step development of sound research data management practices, you must first understand researchers’ needs and perspectives. Over the life of the project, Incremental staff will act as mediators, assisting researchers and local support staff to understand the data management requirements within which they are expected to work, and will determine how these can be addressed within research workflows and the existing technical infrastructure.

    Our primary goal is to build data management capacity within the Universities of Cambridge and Glasgow by raising awareness of basic principles so everyone can manage their data to a certain extent. We are achieving this by:
    - re-positioning existing guidance so researchers can locate the advice they need;
    - connecting researchers with one-to-one advice, support and partnering;
    - offering practical training and a seminar series to address key data management topics.

    We will ensure our lessons can be picked up and used by other institutions. Our affiliation with the Digital Curation Centre and the Digital Preservation Coalition will assist in this, and all outputs will be released under a Creative Commons licence

    Institutional Challenges in the Data Decade

    Throughout the year, the Digital Curation Centre (DCC) stages regional data management roadshows to present best practice and showcase new tools and resources. This article reports on the second roadshow, organised in conjunction with the White Rose University Consortium and held on 1-3 March 2011 at the University of Sheffield. The goal for Day 1 was to describe the emerging trends and challenges associated with research data management and their potential impact on higher education institutions, and to introduce the DCC and its role in supporting research data management. This was achieved through a substantial morning presentation followed by an afternoon of illustrative case studies at both disciplinary and institutional levels, highlighting different models, approaches and working practices. Day 2 was aimed at those in senior management roles and looked at strategic and policy implementation objectives. The Day 3 workshop explored data management requirements from the perspective of the institution and the main UK funding bodies, examined the different roles and responsibilities involved in effective data management, and provided an introduction to data management planning. The portfolio of DCC resources, tools and services was explored in greater detail. The roadshow provided delegates with advice and guidance to support institutional research data management and helped to facilitate regional networking and the exchange of skills and experience

    Education alignment

    This essay reviews recent developments in embedding data management and curation skills into information technology, library and information science, and research-based postgraduate courses in various national contexts. The essay also investigates means of joining up formal education with professional development training opportunities more coherently. The potential for using professional internships as a means of improving communication and understanding between disciplines is also explored. A key aim of this essay is to identify what level of complementarity is needed across various disciplines to most effectively and efficiently support the entire data curation lifecycle

    Keeping Research Data Safe 2: Final Report

    The first Keeping Research Data Safe (KRDS) study, funded by JISC, made a major contribution to the understanding of long-term preservation costs for research data by developing a cost model and identifying cost variables for preserving research data in UK universities (Beagrie et al, 2008). However, it was completed over a very constrained timescale of four months, with little opportunity to follow up other major issues or sources of preservation cost information it identified. It noted that digital preservation costs are notoriously difficult to address, in part because of the absence of good case studies and of longitudinal information on preservation costs or cost variables. In January 2009 JISC issued an ITT for a study on the identification of long-lived digital datasets for the purposes of cost analysis. The aim of this work was to provide a larger body of material and evidence against which existing and future data preservation cost modelling exercises could be tested and validated. The proposal for the KRDS2 study was submitted in response by a consortium consisting of four partners involved in the original Keeping Research Data Safe study (the Universities of Cambridge and Southampton, Charles Beagrie Ltd, and OCLC Research) and four new partners with significant data collections and interests in preservation costs (the Archaeology Data Service, University of London Computer Centre, University of Oxford, and the UK Data Archive). A range of supplementary materials in support of this main report has been made available on the KRDS2 project website at http://www.beagrie.com/jisc.php. That website will be maintained and continuously updated with future work as a resource for KRDS users

    An Exploratory Sequential Mixed Methods Approach to Understanding Researchers’ Data Management Practices at UVM: Integrated Findings to Develop Research Data Services

    This article reports on the integrated findings of an exploratory sequential mixed methods research design aimed at understanding the data management behaviors and challenges of faculty at the University of Vermont (UVM), in order to develop relevant research data services. The exploratory sequential mixed methods design is characterized by an initial qualitative phase of data collection and analysis, followed by a phase of quantitative data collection and analysis, and a final phase in which data from the two separate strands are integrated or linked. A joint display was used to integrate data around the three primary research questions: How do faculty at UVM manage their research data, and in particular how do they share and preserve data in the long term? What challenges or barriers do UVM faculty face in effectively managing their research data? And what institutional data management support or services are UVM faculty interested in? As a result of the analysis, this study suggests four major areas of research data services for UVM to address: infrastructure, metadata, data analysis and statistical support, and informational research data services. The implementation of these potential areas of research data services is underscored by the need for cross-campus collaboration and support
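    To picture the integration step, a joint display is essentially a table that lines each research question up against the qualitative themes and the quantitative results that address it, together with the combined interpretation drawn from both strands. The sketch below illustrates only that structure; the field names and placeholder entries are assumptions made for illustration and are not data or findings from the UVM study.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class JointDisplayRow:
    """One row of a joint display: a research question matched with
    evidence from both strands of a mixed methods study.
    Field names are illustrative, not taken from the UVM article."""
    research_question: str
    qualitative_themes: List[str]    # themes coded from the interview phase
    quantitative_results: List[str]  # summary statistics from the survey phase
    meta_inference: str              # the integrated interpretation of both strands

# Placeholder entries only; the actual findings are reported in the article.
joint_display = [
    JointDisplayRow(
        research_question="How do faculty manage, share and preserve their research data?",
        qualitative_themes=["<theme coded from the interview phase>"],
        quantitative_results=["<descriptive statistic from the survey phase>"],
        meta_inference="<combined interpretation of both strands>",
    ),
]

for row in joint_display:
    print(row.research_question, "->", row.meta_inference)
```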

    D3.2 Cost Concept Model and Gateway Specification

    This document introduces a Framework supporting the implementation of a cost concept model against which current and future cost models for curating digital assets can be benchmarked. The value built into this cost concept model leverages the comprehensive engagement by the 4C project with various user communities and builds upon our understanding of the requirements, drivers, obstacles and objectives that various stakeholder groups have relating to digital curation. Ultimately, this concept model should provide a critical input to the development and refinement of cost models, as well as helping to ensure that the curation and preservation solutions and services that will inevitably arise from the commercial sector as ‘supply’ respond to a much better understood ‘demand’ for cost-effective and relevant tools. To meet acknowledged gaps in current provision, a nested model of curation which addresses both costs and benefits is provided. The goal of this task was not to create a single, functionally implementable cost modelling application, but rather to design a model based on common concepts and to develop a generic gateway specification that can be used by future model developers, service and solution providers, and researchers in follow-up research and development projects.

    The Framework includes:
    • A Cost Concept Model, which defines the core concepts that should be included in curation cost models;
    • An Implementation Guide for the cost concept model, which provides guidance and proposes questions that should be considered when developing new cost models and refining existing cost models;
    • A Gateway Specification Template, which provides standard metadata for each of the core cost concepts and is intended for use by future model developers, model users, and service and solution providers to promote interoperability;
    • A Nested Model for Digital Curation, which visualises the core concepts, demonstrates how they interact and places them into context visually by linking them to A Cost and Benefit Model for Curation.

    This Framework provides guidance for data collection and associated calculations in an operational context, but will also provide a critical foundation for more strategic thinking around curation, such as the Economic Sustainability Reference Model (ESRM). Where appropriate, definitions of terms are provided, recommendations are made, and examples from existing models are used to illustrate the principles of the framework
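    As a rough illustration of how a gateway specification could promote interoperability, the sketch below models the kind of standard metadata that might accompany a single cost concept, so that two cost models exposing records in this shape could be matched concept by concept. The field names and the example record are assumptions made for this sketch, not the 4C template itself.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CostConceptRecord:
    """Hypothetical gateway-style metadata for one cost concept.
    Field names are illustrative, not the 4C D3.2 specification."""
    concept_id: str          # stable identifier used to map concepts across models
    name: str
    definition: str
    unit: str                # e.g. currency per terabyte per year
    lifecycle_stage: str     # e.g. ingest, storage, access
    related_concepts: List[str] = field(default_factory=list)

# An example record; two cost models that both expose records like this
# can be benchmarked against the same concept model by matching concept_id.
storage = CostConceptRecord(
    concept_id="cc-storage",
    name="Storage",
    definition="Recurring cost of keeping curated data on managed storage",
    unit="GBP per terabyte per year",
    lifecycle_stage="storage",
    related_concepts=["cc-ingest", "cc-access"],
)

print(storage.name, "-", storage.unit)
```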

    Requirements for Provenance on the Web

    From where did this tweet originate? Was this quote from the New York Times modified? Daily, we rely on data from the Web, but it is often difficult or impossible to determine where it came from or how it was produced. This lack of provenance is particularly evident when people and systems deal with Web information or with any environment where information comes from sources of varying quality. Provenance is not captured pervasively in information systems, and there are major technical, social, and economic impediments that stand in the way of using provenance effectively. This paper synthesizes requirements for provenance on the Web across a number of dimensions, focusing on three key aspects of provenance: the content of provenance, the management of provenance records, and the uses of provenance information. To illustrate these requirements, we use three synthesized scenarios that encompass provenance problems faced by Web users today
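    The three aspects named above can be pictured with a minimal record structure: the content of provenance (source, agent, derivation), enough management metadata to identify and retrieve the record, and the detail a consumer needs to put it to use, for example to check whether a quote still matches its source. The sketch below is a loose illustration under those assumptions; it is not the paper's model or any standard provenance vocabulary, and all identifiers are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List, Optional

@dataclass
class ProvenanceRecord:
    """Minimal, illustrative provenance for a Web resource.
    Content: source, agent, derivation; management: a record id;
    use: enough detail to check attribution or later modification."""
    record_id: str                     # identifier for managing the record itself
    resource_uri: str                  # the item the provenance describes
    attributed_to: str                 # agent: person, organisation or service
    derived_from: List[str] = field(default_factory=list)  # upstream sources
    generated_at: Optional[datetime] = None
    activity: Optional[str] = None     # e.g. "quoted", "edited", "aggregated"

# Hypothetical example: a blog quote derived from an upstream article.
quote = ProvenanceRecord(
    record_id="prov-0001",
    resource_uri="http://example.org/blog/post#quote",
    attributed_to="http://example.org/people/editor",
    derived_from=["http://example.org/newspaper/original-article"],
    generated_at=datetime(2010, 6, 1, tzinfo=timezone.utc),
    activity="quoted",
)

# A consumer can walk derived_from to decide whether the quote still
# matches its upstream source or was modified along the way.
print(quote.resource_uri, "derived from", quote.derived_from)
```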

    UK innovation support for energy demand reduction
