
    Education alignment

    This essay reviews recent developments in embedding data management and curation skills into information technology, library and information science, and research-based postgraduate courses in various national contexts. The essay also investigates how formal education can be joined up more coherently with professional development training opportunities. The potential for using professional internships as a means of improving communication and understanding between disciplines is also explored. A key aim of this essay is to identify what level of complementarity is needed across the various disciplines to most effectively and efficiently support the entire data curation lifecycle.

    Bringing self assessment home: repository profiling and key lines of enquiry within DRAMBORA

    Digital repositories are a manifestation of complex organizational, financial, legal, technological, procedural, and political interrelationships. Accompanying each of these are innate uncertainties, exacerbated by the relative immaturity of understanding prevalent within the digital preservation domain. Recent efforts have sought to identify core characteristics that must be demonstrable by successful digital repositories, expressed in the form of check-list documents, intended to support the processes of repository accreditation and certification. In isolation though, the available guidelines lack practical applicability; confusion over evidential requirements and difficulties associated with the diversity that exists among repositories (in terms of mandate, available resources, supported content and legal context) are particularly problematic. A gap exists between the available criteria and the ways and extent to which conformity can be demonstrated. The Digital Repository Audit Method Based on Risk Assessment (DRAMBORA) is a methodology for undertaking repository self assessment, developed jointly by the Digital Curation Centre (DCC) and DigitalPreservationEurope (DPE). DRAMBORA requires repositories to expose their organization, policies and infrastructures to rigorous scrutiny through a series of highly structured exercises, enabling them to build a comprehensive registry of their most pertinent risks, arranged into a structure that facilitates effective management. It draws on experiences accumulated throughout 18 evaluative pilot assessments undertaken in an internationally diverse selection of repositories, digital libraries and data centres (including institutions and services such as the UK National Digital Archive of Datasets, the National Archives of Scotland, Gallica at the National Library of France and the CERN Document Server). 
Other organizations, such as the British Library, have been using sections of DRAMBORA within their own risk assessment procedures. Despite the attractive benefits of a bottom-up approach, there are implicit challenges in neglecting a more objective perspective. Following a sustained period of pilot audits undertaken by DPE, DCC and the DELOS Digital Preservation Cluster to evaluate DRAMBORA, it was concluded that, had project members not been present to facilitate each assessment and contribute their objective, external perspectives, the results might have been less useful. Consequently, DRAMBORA has developed in a number of ways: to enable knowledge transfer from the responses of comparable repositories, and to incorporate more opportunities for structured question sets, or key lines of enquiry, that provoke more comprehensive awareness of the applicability of particular threats and opportunities.

    Digital and Media Literacy: A Plan of Action

    Outlines a community education movement to implement Knight's 2009 recommendation to enhance digital and media literacy. Suggests local, regional, state, and national initiatives such as teacher education and parent outreach, and discusses challenges.

    Open Data, Grey Data, and Stewardship: Universities at the Privacy Frontier

    As universities recognize the inherent value in the data they collect and hold, they encounter unforeseen challenges in stewarding those data in ways that balance accountability, transparency, and protection of privacy, academic freedom, and intellectual property. Two parallel developments in academic data collection are converging: (1) open access requirements, whereby researchers must provide access to their data as a condition of obtaining grant funding or publishing results in journals; and (2) the vast accumulation of 'grey data' about individuals in their daily activities of research, teaching, learning, services, and administration. The boundaries between research and grey data are blurring, making it more difficult to assess the risks and responsibilities associated with any data collection. Many sets of data, both research and grey, fall outside privacy regulations such as HIPAA, FERPA, and PII. Universities are exploiting these data for research, learning analytics, faculty evaluation, strategic decisions, and other sensitive matters. Commercial entities are besieging universities with requests for access to data or for partnerships to mine them. The privacy frontier facing research universities spans open access practices, uses and misuses of data, public records requests, cyber risk, and curating data for privacy protection. This paper explores the competing values inherent in data stewardship and makes recommendations for practice, drawing on the pioneering work of the University of California in privacy and information security, data governance, and cyber risk. Comment: Final published version, Sept 30, 201

    LIBER's involvement in supporting digital preservation in member libraries

    Digital curation and preservation represent new challenges for universities. LIBER has invested considerable effort to engage with the new agendas of digital preservation and digital curation. Through two successful phases of the LIFE project, LIBER is breaking new ground in identifying innovative models for costing digital curation and preservation. Through LIFE’s input into the US-UK Blue Ribbon Task Force on Sustainable Digital Preservation and Access, LIBER is aligned with major international work in the economics of digital preservation. In its emerging new strategy and structures, LIBER will continue to make substantial contributions in this area, mindful of the needs of European research libraries.

    Why Modern Open Source Projects Fail

    Open source is experiencing a renaissance period, due to the appearance of modern platforms and workflows for developing and maintaining public code. As a result, developers are creating open source software at speeds never seen before. Consequently, these projects are also facing unprecedented mortality rates. To better understand the reasons for the failure of modern open source projects, this paper describes the results of a survey with the maintainers of 104 popular GitHub systems that have been deprecated. We provide a set of nine reasons for the failure of these open source projects. We also show that some maintenance practices -- specifically the adoption of contributing guidelines and continuous integration -- have an important association with a project's failure or success. Finally, we discuss and reveal the principal strategies developers have tried to overcome the failure of the studied projects. Comment: Paper accepted at 25th International Symposium on the Foundations of Software Engineering (FSE), pages 1-11, 201

    Big data: the potential role of research data management and research data registries

    Universities generate and hold increasingly vast quantities of research data – both in the form of large, well-structured datasets and, more often, in the form of a long tail of small, distributed datasets which collectively amount to ‘Big Data’ and offer significant potential for reuse. However, unlike big data, these collections of small data are often less well curated and are usually very difficult to find, thereby reducing their potential reuse value. The Digital Curation Centre (DCC) works to support UK universities in better managing and exposing their research data so that its full value may be realised. With a focus on tapping into this long tail of small data, this presentation will cover two main DCC services: DMPonline, which helps researchers to identify potentially valuable research data and to plan for its longer-term retention and reuse; and the UK pilot research data registry and discovery service (RDRDS), which will help to ensure that research data produced in UK HEIs can be found, understood, and reused. Initially we will introduce participants to the role of data management planning in opening up dialogue between researchers and library services, to ensure potentially valuable research data are managed appropriately and made available for reuse where feasible. DMPs provide institutions with valuable insights into the scale of their data holdings, highlight any ethical and legal requirements that need to be met, and enable planning for dissemination and reuse. We will also introduce the DCC’s DMPonline, a tool to help researchers write DMPs, which can be customised by institutions and integrated with other systems to simplify and enhance the management and reuse of data. In the second part of the presentation we will focus on making selected research data more visible for reuse and explore the potential value of local and national research data registries. 
In particular we will highlight the Jisc-funded RDRDS pilot to establish a UK national service that aggregates metadata relating to data collections held in research institutions and subject data centres. The session will conclude by exploring some of the opportunities we may collaboratively explore in facilitating the management, aggregation and reuse of research data.

    The Strategy of the Commons: Modelling the Annual Cost of Successful ICT Services for European Research

    The provision of ICT services for research is increasingly using Cloud services to complement the traditional federation of computing centres. Due to the complex funding structure and differences in the basic business model, comparing the cost-effectiveness of these options requires a new approach to cost assessment. This paper presents a cost assessment method addressing the limitations of the standard methods, along with some initial results of the study. This acts as an illustration of the kind of cost assessment issues high-utilisation-rate ICT services should consider when choosing between different infrastructure options. The research is co-funded by the European Commission Seventh Framework Programme through the e-FISCAL project (contract number RI-283449).

    SciTech News Volume 70, No. 4 (2016)

    Columns and Reports
    From the Editor 3
    Division News
    Science-Technology Division 4
    SLA Annual Meeting 2016 Report (S. Kirk Cabeen Travel Stipend Award recipient) 6
    Reflections on SLA Annual Meeting (Diane K. Foster International Student Travel Award recipient) 8
    SLA Annual Meeting Report (Bonnie Hilditch International Librarian Award recipient) 10
    Chemistry Division 12
    Engineering Division 15
    Reflections from the 2016 SLA Conference (SPIE Digital Library Student Travel Stipend recipient) 15
    Fundamentals of Knowledge Management and Knowledge Services (IEEE Continuing Education Stipend recipient) 17
    Makerspaces in Libraries: The Big Table, the Art Studio or Something Else? (by Jeremy Cusker) 19
    Aerospace Section of the Engineering Division 21
    Reviews
    Sci-Tech Book News Reviews 22
    Advertisements
    IEEE 17
    WeBuyBooks.net 2