24 research outputs found

    From Conceptualization to Implementation: FAIR Assessment of Research Data Objects

    Funders and policy makers have strongly recommended the uptake of the FAIR principles in scientific data management. Several initiatives are working on the implementation of the principles and on standardized applications to systematically evaluate data FAIRness. This paper presents practical solutions, namely metrics and tools, developed by the FAIRsFAIR project to pilot the FAIR assessment of research data objects in trustworthy data repositories. The metrics are mainly built on the indicators developed by the RDA FAIR Data Maturity Model Working Group. The tools’ design and evaluation followed an iterative process. We present two applications of the metrics: an awareness-raising self-assessment tool and an automated FAIR data assessment tool. Initial results of testing the tools with researchers and data repositories are discussed, and future improvements are suggested, including the next steps to enable FAIR data assessment in the broader research data ecosystem.
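
    As a rough illustration of what an automated FAIR assessment tool does (this is not the FAIRsFAIR tool itself, nor its actual metrics), the Python sketch below runs two toy checks against a data object: whether its persistent identifier resolves over HTTP and whether its metadata record declares a licence. The metric names, the placeholder DOI and the metadata field are assumptions made for this example.

import json
import urllib.request

def check_identifier_resolves(pid_url, timeout=10):
    """Toy findability/accessibility check: does the persistent identifier resolve over HTTP?"""
    try:
        request = urllib.request.Request(pid_url, method="HEAD")
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return 200 <= response.status < 400
    except Exception:
        return False

def check_licence_declared(metadata):
    """Toy reusability check: does the metadata record declare a licence?"""
    return bool(metadata.get("license"))

def assess(pid_url, metadata):
    """Return a simple pass/fail report for the two toy metrics."""
    return {
        "identifier_resolves": check_identifier_resolves(pid_url),
        "licence_declared": check_licence_declared(metadata),
    }

if __name__ == "__main__":
    # Placeholder identifier and metadata record, for illustration only.
    record = {"license": "https://creativecommons.org/licenses/by/4.0/"}
    print(json.dumps(assess("https://doi.org/10.1234/example", record), indent=2))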

    The TRUST Principles for digital repositories.

    As information and communication technology has become pervasive in our society, we are increasingly dependent on both digital data and the repositories that provide access to and enable the use of such resources. Repositories must earn the trust of the communities they intend to serve and demonstrate that they are reliable and capable of appropriately managing the data they hold. Following a year-long public discussion and building on existing community consensus, several stakeholders, representing various segments of the digital repository community, have collaboratively developed and endorsed a set of guiding principles to demonstrate digital repository trustworthiness. Transparency, Responsibility, User focus, Sustainability and Technology: the TRUST Principles provide a common framework to facilitate discussion and implementation of best practice in digital preservation by all stakeholders.

    CoreTrustSeal v3.0 In a Preservation and Community Context_20220913

    In-Person Panel Recording: https://youtu.be/jmdVAJcCS9

    FAIR-IMPACT Introduction on exposing repository trustworthiness status and FAIR data assessments outcomes [Poster]

    The FAIR-IMPACT project supports the implementation of FAIR-enabling practices, tools and services. Guidelines and a prototype on trustworthiness will showcase that exposing (meta)data and accompanying evidence, and breaking up information silos, adds value to certification, discovery portals and assessment. This poster was presented at the 1st Conference on Research Data Infrastructure (CoRDI 2023), organised by NFDI in Karlsruhe.

    New Perspectives on Economic Modeling for Digital Curation

    Society is increasingly dependent on the availability of digital information assets; however, the resources that are available for managing these assets over time (curating them) are limited. As such, it is increasingly vital that organizations are able to judge the effectiveness of their investments in curation activities. For those responsible for digital curation, it is an ongoing challenge to ensure that the assets remain valuable in a sustainable manner. Digital curation and preservation practices are still evolving, and they are not well aligned across different organizations and different sectors. The lack of clear definitions and standardization makes it difficult to compare the costs and benefits of multiple curation processes, which in turn impedes the identification of good practice. This paper introduces a new perspective on modeling the economics of curation. It describes a framework of interrelated models that represent different aspects of the economic lifecycle based around curation. The framework includes a sustainability model, a cost and benefit model, a business model, and a cost model. The framework provides a common vocabulary and clarifies the roles and responsibilities of managers with a demand for curation of digital assets and of suppliers of curation services and solutions. Further, the framework reflects the context in which managers operate and how this context influences their decision-making. This should enable managers to think through different scenarios around the economics of curation and to analyze the impact of different decisions to support strategic planning. The framework is intended to serve as a basis for developing tools to help managers analyze the costs and benefits associated with curation. The models are being developed and refined as part of the EU project 4C “Collaboration to Clarify the Cost of Curation”, which is bringing together and bridging existing knowledge, models and tools to create a better understanding of the economics of curation.
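
    To make the idea of a cost and benefit model concrete, the Python sketch below compares curation activities by their discounted net benefit over a planning horizon. This is a toy illustration in the spirit of the framework, not a 4C deliverable; the activity names, figures and discount rate are invented for the example.

from dataclasses import dataclass

@dataclass
class CurationActivity:
    name: str
    annual_cost: float      # e.g. storage and staff time per year
    annual_benefit: float   # e.g. value of reuse or risk avoided per year

def net_present_value(activity, years, discount_rate):
    """Discounted net benefit of a curation activity over a planning horizon."""
    return sum(
        (activity.annual_benefit - activity.annual_cost) / (1 + discount_rate) ** year
        for year in range(1, years + 1)
    )

portfolio = [
    CurationActivity("Format migration", annual_cost=12_000, annual_benefit=18_000),
    CurationActivity("Metadata enhancement", annual_cost=8_000, annual_benefit=9_500),
]
for activity in portfolio:
    print(f"{activity.name}: 5-year NPV = {net_present_value(activity, 5, 0.03):,.0f}")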

    Invitation to Join the OAIS Community Platform

    In this poster, we describe an initiative to build a community resource around the OAIS standard.

    Implementation and testing of an Authenticity Protocol on a Specific Domain.

    In the original definition given in CASPAR, Authenticity Protocols (APs) are the procedures to be followed in order to assess the authenticity of a specific type of Digital Resource (DR). The CASPAR definition is quite general and does not refer to a specific authenticity management model. As part of the activities of APARSEN WP24 we have formalized an authenticity management model, based on the principle of performing controls and collecting authenticity evidence in connection with specific events of the DR lifecycle. This makes it possible to trace back all the transformations the DR has undergone since its creation that may have affected its authenticity. The model is complemented by a set of operational guidelines for setting up an Authenticity Management Policy, i.e. for identifying the relevant transformations in the lifecycle and specifying which controls should be performed and which authenticity evidence should be collected in connection with these transformations. To formalize the policy we have resorted to CASPAR's AP definition, which we have adapted and extended to integrate it into our authenticity management model. In our methodology the AP therefore becomes the procedure to be followed in connection with a given lifecycle event to perform the controls and to collect the AER as specified by the authenticity management policy. Accordingly, the original content of this deliverable, which was aimed at "implementing and testing an authenticity protocol on a specific domain", has been adapted and extended to encompass the whole scope of the authenticity evidence management guidelines. The current aim of the deliverable is therefore to test the model and the guidelines at the operational level when dealing with the concrete problem of setting up or improving an LTDP repository in a given environment, in order to arrive at the definition of an adequate authenticity management policy. Moreover, instead of concentrating on a single environment, we have decided to extend the analysis to multiple test environments provided by APARSEN partners. Shifting to a practical footing and facing the actual problems that arise in the management of a repository has been an important step towards closing the gap that still divides the mostly theoretical results of the scientific community from the actual practices carried out in most repositories, and towards reducing the fragmentation among different approaches that prevents interoperability. The case studies have demonstrated the validity of this approach: on the one hand, the guidelines proved to be easily applied and well understood in all the test cases; on the other hand, the simple yet rigorous concepts introduced by the model may provide a common ground for managing authenticity evidence and for exchanging it among different systems. In at least one of the case studies, the guidelines have been applied to their full extent, i.e. from the preliminary analysis, to the identification of the relevant lifecycle events, to the detailed specification of the authenticity evidence to be collected, to the formal definition of the authenticity management policy, that is, to the specification of the AP. In all cases, referring to the guidelines has provided valuable help, both in pointing out weaknesses in the current practices and in providing a reasonable way to fix the problems.
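
    As an illustration of how such a policy could be represented operationally, the Python sketch below binds lifecycle events to the controls to perform and the authenticity evidence to collect. This is a hypothetical encoding, not the APARSEN or CASPAR specification; the event names, controls and evidence fields are assumptions made for this example.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class AuthenticityProtocol:
    lifecycle_event: str             # e.g. "ingest", "format migration"
    controls: List[str]              # checks to perform when the event occurs
    evidence_to_collect: List[str]   # authenticity evidence to capture

@dataclass
class AuthenticityManagementPolicy:
    repository: str
    protocols: List[AuthenticityProtocol] = field(default_factory=list)

    def protocol_for(self, event: str) -> Optional[AuthenticityProtocol]:
        """Look up which controls and evidence apply to a given lifecycle event."""
        return next((p for p in self.protocols if p.lifecycle_event == event), None)

policy = AuthenticityManagementPolicy(
    repository="example-ltdp-repository",
    protocols=[
        AuthenticityProtocol(
            lifecycle_event="format migration",
            controls=["verify fixity before and after", "validate the target format"],
            evidence_to_collect=["checksums", "migration log", "operator identity"],
        )
    ],
)
print(policy.protocol_for("format migration"))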

    Capability Maturity & Community Engagement Design Statement

    This paper proposes a design approach for evaluating capability in particular areas of focus within an entity, and the level of adoption, practice and collaboration displayed in relation to policy and standardisation. Understanding our levels of capability in particular areas, and how these contribute to overall maturity, allows us to understand our current status, to identify and invest in areas where we want to improve, and to monitor our progress. This applies to both self-assessment and external evaluation. The approach can be adjusted to address a range of entities, including organisations, parts of organisations and component services. It should be possible to apply these approaches to repositories and other organisations providing data infrastructure (RDI), and also to organisations providing funding (RFO) or performing research (RPO). In addition to existing capability and maturity models, there are several under development or review within the EOSC and trustworthy digital repository (TDR) space. The model designers need the freedom to develop and evolve independently, but recognize the risk to implementers of having a confusing range of non-interoperable and resource-intensive systems to choose from. This text reflects the evolving design statement being developed within the FAIRsFAIR project. A minimal sketch of how such an assessment could be recorded follows below.
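
    The Python example below scores hypothetical capability areas on a five-level scale and aggregates them into an overall maturity figure. The areas, level labels and unweighted averaging are assumptions for illustration, not the FAIRsFAIR design itself.

from dataclasses import dataclass
from statistics import mean

LEVELS = {0: "None", 1: "Initial", 2: "Managed", 3: "Defined", 4: "Measured"}

@dataclass
class CapabilityScore:
    area: str        # e.g. "Preservation planning", "Community engagement"
    level: int       # index into LEVELS
    evidence: str    # pointer to supporting documentation

def overall_maturity(scores):
    """Aggregate per-area levels into a single, unweighted maturity figure."""
    return mean(score.level for score in scores)

self_assessment = [
    CapabilityScore("Preservation planning", level=3, evidence="policy document v2"),
    CapabilityScore("Community engagement", level=2, evidence="2023 user survey"),
]
print(f"Overall maturity: {overall_maturity(self_assessment):.1f}")
for score in self_assessment:
    print(f"  {score.area}: {LEVELS[score.level]}")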