20 research outputs found

    AGINFRA PLUS D7.1 - Food Security User-Driven Requirements & Use Cases

    This document introduces the domain of food security. It focuses on the analysis of the requirements for the AGINFRA PLUS use cases related to High-Throughput Phenotyping and on the definition of the processes to be activated to fulfil these requirements, so that the specific use cases can be implemented in the AGINFRA PLUS project. It defines the community and stakeholders involved and provides a set of typical “personas”. Moreover, different use cases related to the identified personas are described, as well as the specific data, semantics, analytics, and processing involved in their implementation.

    The DOI in the context of UE/IE

    National audience

    AGINFRA PLUS D7.3 - Food Security Community-Centred Assessment Plan

    This Community-Centred Assessment Plan describes in detail the procedures to be carried out for assessing the effectiveness of the AGINFRA PLUS paradigm for research in Food Security communities. It defines the objectives of the pilot trials and their assessment. It also defines the different actions in the piloting scheme, including the organization of restricted or wider demonstrations and hands-on events, the activation of networks, expert sessions, etc. Furthermore, the task will carry out the pilot execution and evaluation activities related to Food Security foreseen in Task 7.1, following the established plan and evaluation methodology.

    The Data Inra portal and its associated services

    National audience

    WheatVIVO: Integrating diverse data sources for an international perspective on wheat funding and research activities

    WheatVIVO is being developed by The Wheat Initiative[1] as a showcase of information about researchers and projects across the global public-private wheat community. WheatVIVO aims to serve the needs of researchers looking to develop collaborations, students and postdocs seeking to identify labs in which they would like to work, and policy makers and funding agencies working to understand better the research priorities in different countries. WheatVIVO harvests linked open data provided by existing VIVO installations as well as various non-RDF sources. While data integration is fully automated, WheatVIVO also makes it possible for non-programmers to configure the retrieval of data, the resolution of common entities and the merging of possibly contradictory or duplicate data, as well as to provide manual corrections.

    The VIVO software is extended not only in the public website but also in a separate application where administrators can view data with their provenance information and set configuration options such as the times and dates at which different data sources should be harvested and the order in which sources should be used when they offer data about the same entity. Through the admin application, Wheat Initiative personnel can add and edit patterns and associated weightings for automatically matching entities across the sources, and iteratively test the resulting merged data in a staging VIVO before scheduling the merge process to run automatically at desired intervals. The WheatVIVO website allows visitors to flag errors discovered in the data and to provide feedback to project staff, who are then prompted either to review the associated matching rules or to forward feedback to the original data providers. Statistics are recorded about how frequently data from different sources are viewed in order to help original providers quantify the benefit of making their data open and available. VIVO’s browsing and visualization capabilities are adapted to highlight the international aspects of coauthorship and project participation.

    Challenges include issues of data normalization and comparison, such as where funding cycles and salary support differ across countries, as well as the integration of open but unstructured data. It is also anticipated that improvements to the data correction and feedback interfaces will be identified after the system’s production launch in late spring 2017, and that future updates will permit the data ingest processes to learn from these corrections to prevent recurrence of errors. The WheatVIVO admin application, portal and core data ingest code are being developed by private contractor Ontocale SRL. The INRA DIST[2] team contributes to the project by developing connectors to download data from data sources. WheatVIVO code is open source and available on GitHub[3]. The INRA DIST project leader oversees the development of the project together with the Wheat Initiative International Scientific Coordinator.

    [1] http://www.wheatinitiative.org
    [2] Institut National de la Recherche Agronomique - Délégation Information Scientifique et Technique
    [3] http://github.com/wheatvivo
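
    The weighted entity-matching step described above can be pictured with a short sketch. The following Python fragment is illustrative only: the field names, weights and merge threshold are assumptions for the example, not the actual WheatVIVO configuration, which is managed through the admin application.

        from dataclasses import dataclass

        def normalize(value: str) -> str:
            """Crude normalisation before comparison: trim and lower-case."""
            return value.strip().lower()

        @dataclass
        class MatchRule:
            """One matching pattern: the field to compare and its weight."""
            field_name: str
            weight: float

        def match_score(record_a: dict, record_b: dict, rules: list[MatchRule]) -> float:
            """Weighted fraction of rules on which two source records agree."""
            total = sum(r.weight for r in rules)
            agreed = sum(
                r.weight
                for r in rules
                if record_a.get(r.field_name) is not None
                and record_b.get(r.field_name) is not None
                and normalize(record_a[r.field_name]) == normalize(record_b[r.field_name])
            )
            return agreed / total if total else 0.0

        # Hypothetical rules: an ORCID match is almost decisive on its own,
        # name and affiliation matches add supporting evidence.
        RULES = [
            MatchRule("orcid", 5.0),
            MatchRule("full_name", 2.0),
            MatchRule("affiliation", 1.0),
        ]
        THRESHOLD = 0.7  # assumed cut-off above which records are merged

        vivo_record = {"full_name": "Jane Doe", "orcid": "0000-0002-1825-0097", "affiliation": "INRA"}
        csv_record = {"full_name": "DOE, Jane", "orcid": "0000-0002-1825-0097", "affiliation": "INRA"}

        score = match_score(vivo_record, csv_record, RULES)
        print(f"score={score:.2f} ->", "merge" if score >= THRESHOLD else "keep separate")

    In this toy run the ORCID and affiliation rules agree while the differently formatted names do not, so the weighted score of 0.75 still clears the assumed 0.7 threshold and the two records would be merged.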

    Principles of data governance for research organizations - INRAE's approach

    International audience
    INRA and IRSTEA, two French research organizations, have joined together to become INRAE, a world-class institute for research on agriculture, food, and the environment, mainly funded by public resources. INRAE has recently set out its principles of "data governance", covering all the processes required to manage and enhance data sharing on the basis of ethical, legal, economic, technical, and scientific policy criteria. Roles and responsibilities of the actors are defined to ensure a smooth and sustainable decision-making process. A "data governance" charter has been written to indicate who decides, which data are concerned, and how they are opened according to good practices such as the FAIR principles. With this document, all scientists, administrative support staff, and data managers involved in the data life cycle have a shared understanding of the rationale guiding the overall data framework.

    We address here the question of what a data governance framework at the institutional level can be to support data management, sharing, and reuse. We have defined four key principles at the foundation of data governance: (1) data must be shared and reused while observing the values of science, (2) data must be managed in order to make them FAIR, (3) data should be "as open as possible, as closed as necessary", and (4) open data contribute to innovation and value creation for society. The key rationale of these four principles is that together they build a consistent "system" for guiding the decision, since it requires a careful evaluation of all of them. These principles were complemented with a decision-making chain involving the main actors. This "data governance schema" has been built through a participative approach: it is the result of several rounds of discussion and consultation with groups of people having different views on data and different levels of responsibility, including interviews with scientific project leaders on a dozen significant case studies. It took one year to establish our first guidelines. The schema is now being implemented, and improvements will certainly come after the first real-life usage. However, we believe that these four principles of governance are generic enough to be applied to many research organizations; only the internal organization or process should differ according to the culture and the regulatory context.
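
    As a rough illustration of how such a charter-driven decision chain might be encoded, the following Python sketch uses entirely hypothetical field names and a toy openness rule; it is not part of the INRAE charter, only a minimal sketch of the kind of record it could govern.

        from dataclasses import dataclass

        # Hypothetical openness levels, echoing "as open as possible, as closed as necessary".
        OPEN, RESTRICTED = "open", "restricted"

        @dataclass
        class DatasetGovernanceRecord:
            """Illustrative governance metadata for one dataset (all field names are assumptions)."""
            title: str
            data_steward: str        # who manages the data day to day
            decision_maker: str      # who decides how the data are opened
            legal_constraints: bool  # e.g. personal data or third-party licences
            openness: str = OPEN

            def decide_openness(self) -> str:
                """Toy rule: restrict only what legal or contractual constraints require."""
                self.openness = RESTRICTED if self.legal_constraints else OPEN
                return self.openness

        record = DatasetGovernanceRecord(
            title="Field trial yields 2019",
            data_steward="research unit data manager",
            decision_maker="scientific project leader",
            legal_constraints=False,
        )
        print(record.decide_openness())  # -> "open"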