14 research outputs found

    How to operationalise and evaluate how the 'FAIR' concept is taken into account in data sharing: towards a simplified grid for assessing compliance with the FAIR criteria

    National audience. The poster includes an excerpt of the grid's criteria (criterion, category, question):
    - Indexed identifier? (Identification): Is each data item/dataset identified by an indexed and independent identifier?
    - Persistent metadata/data link? (Metadata traceability): Are the metadata linked to the dataset through a persistent identifier?
    - Metadata and authority linked? (Metadata traceability): Are the metadata of each dataset linked to a unique authority (responsible for the datasets at a given time)?
    - Unique, global, persistent ID? (Identification): Are the data identifiers unique, global and persistent?
    - Datasets linked to authority? (Metadata traceability): Are all datasets linked to an authority (legal entity) through a unique and persistent identifier over time (e.g. an institution, association or established body)?
    - In case of a legal reuse restriction (such as personal data, state and public security, national defence secrets, confidentiality of external relations, information systems security, or industrial and commercial secrets), is the restriction properly justified?
    SHARC (SHAring Reward & Credit) is an interdisciplinary scientific interest group created within RDA (Research Data Alliance) to facilitate the sharing of research data (and resources) by giving recognition to all the activities required for that sharing throughout the data life cycle. Within this framework, a SHARC working subgroup is developing assessment grids for researchers in order to measure how far they take the FAIR principles into account in managing their data. The assessment grid presented in this poster is intended to be completed by any scientist producing and/or using data. It is a summary of a more extensive assessment grid designed for optimal data sharing (not yet implemented by most scientists at this time). The evaluation is based on FAIR compliance criteria. To this end, the grid displays the minimum set of criteria that researchers must apply in order to attest to their FAIR practice. These criteria are organised in five clusters: 'Motivations for sharing', 'Findable', 'Accessible', 'Interoperable' and 'Reusable'. For each criterion, four grades are proposed ('Never / Not assessable', 'If mandatory', 'Sometimes', 'Always'); exactly one grade must be selected per criterion. The evaluation is carried out for each F/A/I/R category, and the final score is the sum of the ticked grades divided by the total number of criteria in each F/A/I/R category. Interpretation rules taking the 'Motivations for sharing' into account are proposed.
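    A minimal sketch of the per-category scoring described above: each criterion receives exactly one of the four grades, and the category score is the sum of the ticked grades divided by the number of criteria in that category. The numeric values assigned to the grades (0 to 3) are an assumption, as the abstract does not specify the mapping, and the ticked answers are invented examples.

```python
# Minimal sketch of the per-category FAIR scoring described in the abstract.
# The grade-to-number mapping (0..3) is an assumption; the abstract does not
# specify it. Criterion names and ticked answers are illustrative only.
from collections import defaultdict

GRADE_VALUES = {"Never / Not assessable": 0, "If mandatory": 1, "Sometimes": 2, "Always": 3}

# (category, criterion) -> ticked grade
answers = {
    ("Findable", "Indexed identifier?"): "Always",
    ("Findable", "Unique, global, persistent ID?"): "Sometimes",
    ("Reusable", "Reuse restriction justified?"): "If mandatory",
}

def category_scores(answers: dict[tuple[str, str], str]) -> dict[str, float]:
    """Sum of ticked grade values per category, divided by its criterion count."""
    totals, counts = defaultdict(int), defaultdict(int)
    for (category, _criterion), grade in answers.items():
        totals[category] += GRADE_VALUES[grade]
        counts[category] += 1
    return {category: totals[category] / counts[category] for category in totals}

print(category_scores(answers))   # e.g. {'Findable': 2.5, 'Reusable': 1.0}
```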

    Impact of introducing a reference guideline in a population of telephone triage physicians (a practice-evaluation study on the management of urinary tract infection at ARMEL in Midi-Pyrénées)

    Objective: to evaluate the diagnostic and therapeutic practices of telephone triage physicians, with the aim of improving them and standardising a high-quality response to requests for care, using the management of urinary tract infection as an example. Method: practice audit of 120 case files before and as many after the introduction of a reference guideline, carried out at ARMEL, an out-of-hours (PDS) regulation centre run by general practitioners for the member private-practice physicians of Midi-Pyrénées. Results: the primary endpoint (antibiotic therapy for acute cystitis) changed in the direction of improved compliance with the recommendations: prescription of a first-line antibiotic increased by +33% in absolute value (VA) and +94% in relative value (VR) (p=0.00009), and inappropriate prescriptions decreased by -17% in VA and -57% in VR (p=0.01). Among the secondary endpoints, the quality of history-taking improved, with more systematic questioning: the symptom of fever was sought in +33% more cases (VR) and lumbar pain in +56% more cases (VR). Impact: an overall improvement and standardisation of practice can be expected from regulation-support sheets drafted by working groups, ideally computerised and integrated into the regulation software.
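    To clarify the VA/VR notation, the sketch below assumes VA and VR stand for absolute and relative variation; the baseline rate is hypothetical, chosen only so that a +33-point absolute change reproduces the reported +94% relative change.

```python
# Minimal sketch relating an absolute change (percentage points) to the
# corresponding relative change. The reading of "VA"/"VR" as absolute/relative
# variation and the baseline value are assumptions made for illustration.

def relative_change(baseline_pct: float, absolute_change_pct: float) -> float:
    """Relative change implied by an absolute change from a given baseline."""
    return absolute_change_pct / baseline_pct

baseline = 35.0        # hypothetical pre-guideline first-line prescription rate (%)
absolute_gain = 33.0   # +33 in absolute value (VA)
print(f"{relative_change(baseline, absolute_gain):+.0%}")  # about +94% (VR)
```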

    From Raw Biodiversity Data to Indicators, Boosting Products Creation, Integration and Dissemination: French BON FAIR initiatives and related informatics solutions

    Most biodiversity research aims at understanding the states and dynamics of biodiversity and ecosystems. To do so, biodiversity research increasingly relies on digital products and services such as raw-data archiving systems (e.g. structured databases or data repositories), ready-to-use datasets (e.g. cleaned and harmonised files with normalised measurements or computed trends) and associated analytical tools (e.g. model scripts on GitHub). Several worldwide initiatives, such as the Global Biodiversity Information Facility (GBIF), GenBank or PREDICTS, facilitate open access to biodiversity data. Although these pave the way towards major advances in biodiversity research, they sometimes deliver data products that are poorly informative because they fail to capture the genuine ecological information they intend to convey. In other words, access to ready-to-use aggregated data products may sacrifice ecological relevance for data harmonisation, resulting in over-simplified, ill-advised standard formats. This is particularly true when the main challenge is to match complementary data (a large diversity of measured variables, integration of different levels of biological organisation, etc.) collected under different requirements and scattered across multiple databases. Improving access to raw data, to meaningful and detailed metadata, and to analytical tools associated with standardised workflows is critical to maintain and maximise the generic relevance of ecological data. Consequently, advancing the design of digital products and services is essential for interoperability, while also enhancing reproducibility and transparency in biodiversity research. To go further, a minimal common framework organising biodiversity observation and data organisation is needed. In this regard, the Essential Biodiversity Variable (EBV) concept may be a powerful way to boost progress toward this goal and to connect research communities worldwide. As a national Biodiversity Observation Network (BON) node, the French BON is currently embodied by a national research e-infrastructure called the "Pôle national de données de biodiversité" (PNDB, formerly ECOSCOPE), aimed at simultaneously strengthening the quality of scientific activities and promoting networking within the scientific community at the national level. Through the PNDB, the French BON is developing biodiversity data workflows oriented toward end services and products, both from and for a research perspective. More precisely, the two pillars of the PNDB are a metadata portal and a workflow-oriented web platform dedicated to accessing biodiversity data and associated analytical tools (Galaxy-E). After four years of experience, we are now going deeper into metadata specification, dataset description and data structuring through the extensive use of the Ecological Metadata Language (EML) as a pivot format. Moreover, we are evaluating the relevance of existing tools such as Metacat/Morpho and DEIMS-SDR (Dynamic Ecological Information Management System - Site and Dataset Registry) in order to ensure a link with other initiatives such as the Environmental Data Initiative, DataONE and observation networks related to Long-Term Ecological Research. Regarding data analysis, an open-source Galaxy-E platform was launched in 2017 as part of a project targeting the design of a citizen-science observation system in France ("65 Millions d'observateurs").
Here, we propose to showcase ongoing French activities addressing global challenges in biodiversity information and knowledge dissemination. We particularly emphasise our focus on embracing the FAIR (findable, accessible, interoperable and reusable) data principles (Wilkinson et al. 2016) throughout the development of the French BON e-infrastructure, and the promising links we anticipate for operationalising EBVs. Using accessible and transparent analytical tools, we present the first online platform allowing advanced yet user-friendly analyses of biodiversity data to be performed in a reproducible and shareable way, using data from various sources such as GBIF, the Atlas of Living Australia (ALA), eBird and iNaturalist, together with environmental data such as climate data.
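    As an illustration of the kind of open data access such a platform builds on (and not code from the paper), the sketch below queries the public GBIF occurrence search API; the species name and the limit are arbitrary example values.

```python
# Minimal sketch: fetching occurrence records from the public GBIF API
# (https://api.gbif.org/v1/occurrence/search). The species name and the
# 'limit' parameter are arbitrary illustrative choices.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

def gbif_occurrences(scientific_name: str, limit: int = 5) -> list[dict]:
    """Return up to `limit` occurrence records for a scientific name."""
    query = urlencode({"scientificName": scientific_name, "limit": limit})
    with urlopen(f"https://api.gbif.org/v1/occurrence/search?{query}") as response:
        payload = json.load(response)
    return payload.get("results", [])

for occurrence in gbif_occurrences("Corallium rubrum"):
    print(occurrence.get("scientificName"), occurrence.get("country"), occurrence.get("year"))
```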

    Analysis on the Graph Techniques for Data-mining and Visualization of Heterogeneous Biodiversity Data Sets

    International audience. Existing biodiversity databases contain an abundance of information. To turn such information into knowledge, several information-model issues must be addressed. Biodiversity data are collected for various scientific objectives, often without clear preliminary objectives; they may follow different taxonomy standards and organisation logics, and be held in multiple file formats using a variety of database technologies. This paper presents a graph catalogue model for the metadata management of biodiversity databases. It explores the possible use of data mining and visualisation to guide the analysis of heterogeneous biodiversity data. In particular, we propose contributions to the problems of (1) the analysis of heterogeneous distributed data found across different databases, (2) the identification of matches and approximations between data sets, and (3) the identification of relationships between various databases. The paper describes a proof of concept of an infrastructure testbed and its basic operations, and presents an evaluation of the resulting system against the ideal expectations of the ecologist.
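    To make the graph catalogue idea concrete, here is a toy sketch (all dataset names and attributes are invented, and it assumes the third-party networkx library) in which dataset nodes are linked to the taxa and variables they reference, so that overlaps between heterogeneous databases show up as shared neighbours in the graph.

```python
# Toy sketch of a graph catalogue: datasets are linked to the taxa and
# variables they contain, so shared nodes reveal candidate matches between
# otherwise heterogeneous databases. All names below are invented examples.
import networkx as nx

G = nx.Graph()
catalogue = {
    "survey_2014": {"taxa": ["Corallium rubrum"], "variables": ["depth", "temperature"]},
    "photo_quadrats": {"taxa": ["Corallium rubrum", "Paramuricea clavata"], "variables": ["cover"]},
    "ctd_casts": {"taxa": [], "variables": ["temperature", "salinity"]},
}
for dataset, meta in catalogue.items():
    G.add_node(dataset, kind="dataset")
    for taxon in meta["taxa"]:
        G.add_edge(dataset, taxon)        # dataset mentions this taxon
    for variable in meta["variables"]:
        G.add_edge(dataset, variable)     # dataset measures this variable

# Datasets sharing at least one taxon or variable with 'survey_2014':
matches = {other for entity in G["survey_2014"] for other in G[entity]
           if other != "survey_2014" and G.nodes[other].get("kind") == "dataset"}
print(matches)   # {'photo_quadrats', 'ctd_casts'}
```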

    IndexMed projects: new graph-based tools using the CIGESMED database on coralligenous habitats for data indexing, visualisation and exploration

    http://www.iemss.org/sites/iemss2016/
    International audience. Data produced by the SeasEra CIGESMED project (Coralligenous based Indicators to evaluate and monitor the "Good Environmental Status" of the MEDiterranean coastal waters) have a high potential for use by the many stakeholders involved in environmental management. A new consortium called IndexMed, whose task is to index Mediterranean biodiversity data, makes it possible to build graphs in order to analyse the CIGESMED data and to develop new methods for mining coralligenous data. This paper presents the prototypes under development, which test the ability of graph databases and related tools to connect biodiversity objects from non-centralised data. The project also explores the ability of two scientific communities to work together. Uses of data from coralligenous habitats demonstrate the prototype's functionality and introduce new perspectives for analysing environmental and societal responses.

    How can FAIRness be assessed to improve data-sharing reward processes? A step through a more exhaustive assessment grid

    The poster has been deposited in Zenodo. International audience. The SHARC (SHAring Reward & Credit) interest group (IG) is an interdisciplinary group set up within RDA (Research Data Alliance) to improve crediting and rewarding mechanisms in the sharing process throughout the data life cycle. Notably, one of its objectives is to promote data sharing activities in research assessment schemes at national and European levels. To this end, the RDA-SHARC IG is developing assessment grids with criteria that establish whether data are compliant with the FAIR principles (findable / accessible / interoperable / reusable). The grid aims to be extensive, generic and trans-disciplinary. It is meant to be used by evaluators to assess the quality of a researcher's or scientist's sharing practice over a given period, taking into account the means and support available over that period. The grid displays a mind-mapped tree-graph structure based on previous work on FAIR data management (Reymonet et al., 2018; Wilkinson et al., 2016; Wilkinson et al., 2018; and the E.U. Guidelines on FAIR Data Management Plans). The criteria used are based on the work of FORCE11* and on the Open Science Career Assessment Matrix designed by the EC Working Group on Rewards under Open Science. The criteria are organised in five clusters: 'Motivations for sharing', 'Findable', 'Accessible', 'Interoperable' and 'Reusable'. For each criterion, four graduations are proposed ('Never / Not assessable', 'If mandatory', 'Sometimes', 'Always'), and exactly one value must be selected per criterion. Evaluation is done by cluster; the final overall assessment is based on the sum of the ticked values divided by the total number of criteria in each cluster, while the 'Motivations for sharing' are appreciated qualitatively in the final interpretation. The final goals are to develop a graduated assessment of researchers' FAIRness literacy and to help identify the needs for FAIRness guidelines that improve the sharing capacity of researchers.

    Operationalizing and evaluating the FAIRness concept for a good quality of data sharing in Research: the RDA-SHARC-IG (SHAring Rewards and Credit Interest Group)

    National audience. The RDA-SHARC (SHAring Reward & Credit) interest group is an interdisciplinary, volunteer, member-based group set up as part of RDA (Research Data Alliance) to unpack and improve crediting and rewarding mechanisms in the sharing process throughout the data life cycle. The background and objectives of this group are reported here. Notably, one of the objectives is to promote the inclusion of data sharing activities in research (and researcher) assessment schemes at national and European levels. To this end, the RDA-SHARC IG is developing two assessment grids with criteria that establish whether data are compliant with the FAIR principles (findable / accessible / interoperable / reusable), based on previous work on FAIR data management (Reymonet et al., 2018; Wilkinson et al., 2018; and the E.U. Guidelines*):
    1/ a self-assessment grid, to be used by scientists as a checklist to identify their own activities and to pinpoint the hurdles that hinder efficient sharing and reuse of their data by all potential users;
    2/ a two-level (quick/extensive) grid, to be used by evaluators to assess the quality of a researcher's or scientist's sharing practice over a given period, taking into account the means and support available over that period.
    Assessment criteria are classified according to their importance with regard to FAIRness (essential / recommended / desirable), while good practices are recommended for critical steps. To implement a genuinely fair assessment of the sharing process, appropriate criteria must be selected so as to design optimal generic assessment grids. This process requires participation, time and input from volunteer scientists (data producers and users) from various fields.
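    The classification described above lends itself to a simple data structure. The sketch below is an illustration only: the clusters and importance levels come from the abstract, but the criterion texts and the filtering helper are invented placeholders, not part of the SHARC grids.

```python
# Sketch of a grid entry carrying the cluster and the importance level
# ("essential" / "recommended" / "desirable") mentioned in the abstract.
# Criterion texts are shortened placeholders, not quotes from the actual grid.
from dataclasses import dataclass

@dataclass(frozen=True)
class Criterion:
    cluster: str      # e.g. 'Findable', 'Accessible', 'Interoperable', 'Reusable'
    importance: str   # 'essential' | 'recommended' | 'desirable'
    question: str

GRID = [
    Criterion("Findable", "essential", "Dataset has a unique, global and persistent identifier"),
    Criterion("Accessible", "essential", "Access conditions and restrictions are documented"),
    Criterion("Interoperable", "recommended", "Community metadata standards are used"),
    Criterion("Reusable", "desirable", "A machine-readable licence is attached"),
]

def by_importance(grid: list[Criterion], level: str) -> list[Criterion]:
    """Select criteria of a given importance level, e.g. for a quick evaluation pass."""
    return [c for c in grid if c.importance == level]

for criterion in by_importance(GRID, "essential"):
    print(f"[{criterion.cluster}] {criterion.question}")
```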