
    Open Science principles for accelerating trait-based science across the Tree of Life

    Synthesizing trait observations and knowledge across the Tree of Life remains a grand challenge for biodiversity science. Species traits are widely used in ecological and evolutionary science, and new data and methods have proliferated rapidly. Yet accessing and integrating disparate data sources remains a considerable challenge, slowing progress toward a global synthesis to integrate trait data across organisms. Trait science needs a vision for achieving global integration across all organisms. Here, we outline how the adoption of key Open Science principles (open data, open source and open methods) is transforming trait science, increasing transparency, democratizing access and accelerating global synthesis. To enhance widespread adoption of these principles, we introduce the Open Traits Network (OTN), a global, decentralized community welcoming all researchers and institutions pursuing the collaborative goal of standardizing and integrating trait data across organisms. We demonstrate how adherence to Open Science principles is key to the OTN community and outline five activities that can accelerate the synthesis of trait data across the Tree of Life, thereby facilitating rapid advances to address scientific inquiries and environmental issues. Lessons learned along the path to a global synthesis of trait data will provide a framework for addressing similarly complex data science and informatics challenges.

    Solving multiple-criteria R&D project selection problems with a data-driven evidential reasoning rule

    In this paper, a likelihood-based evidence acquisition approach is proposed to acquire evidence from experts' assessments as recorded in historical datasets. Then a data-driven evidential reasoning rule based model is introduced into the R&D project selection process by combining multiple pieces of evidence with different weights and reliabilities. As a result, the total belief degrees and the overall performance can be generated for ranking and selecting projects. Finally, a case study on R&D project selection for the National Science Foundation of China (NSFC) is conducted to show the effectiveness of the proposed model. The data-driven evidential reasoning rule based model for project evaluation and selection (1) utilizes experimental data to represent experts' assessments by using belief distributions over the set of final funding outcomes, and through these historical statistics it helps experts and applicants to understand the funding probability associated with a given assessment grade, (2) implies the mapping relationships between the evaluation grades and the final funding outcomes by using historical data, and (3) provides a way to make fair decisions by taking experts' reliabilities into account. In the data-driven evidential reasoning rule based model, experts play different roles in accordance with their reliabilities, which are determined by their previous review track records, and the selection process is made interpretable and fairer. The newly proposed model reduces the time-consuming panel review work for both managers and experts, and significantly improves the efficiency and quality of the project selection process. Although the model is demonstrated for project selection in the NSFC, it can be generalized to other funding agencies or industries. Comment: 20 pages, forthcoming in International Journal of Project Management (2019).
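
    To make the idea of combining weighted, reliability-discounted belief distributions more concrete, the sketch below shows a deliberately simplified aggregation in Python. It is not the evidential reasoning rule used in the paper, which combines evidence recursively and handles unassigned belief; the grade labels, weights and reliabilities here are hypothetical.

    # Illustrative sketch only: a simplified, reliability-weighted aggregation of
    # expert belief distributions over funding grades (not the full ER rule).
    GRADES = ["fund", "fund_if_possible", "not_fund"]  # hypothetical grade set

    def combine_beliefs(assessments):
        """Aggregate expert belief distributions into overall belief degrees.

        assessments: list of (beliefs, weight, reliability) tuples, where
        beliefs maps each grade to a belief degree.
        """
        combined = {g: 0.0 for g in GRADES}
        total = 0.0
        for beliefs, weight, reliability in assessments:
            effective = weight * reliability  # discount an expert's weight by reliability
            for grade in GRADES:
                combined[grade] += effective * beliefs.get(grade, 0.0)
            total += effective
        # Normalize so the combined belief degrees sum to one
        return {g: v / total for g, v in combined.items()} if total else combined

    # Example: two experts with equal weight but different reliabilities
    expert_a = ({"fund": 0.7, "fund_if_possible": 0.2, "not_fund": 0.1}, 0.5, 0.9)
    expert_b = ({"fund": 0.4, "fund_if_possible": 0.4, "not_fund": 0.2}, 0.5, 0.6)
    print(combine_beliefs([expert_a, expert_b]))

    The combined belief degrees over the grade set can then be mapped to an expected score for ranking proposals, which is the role the total belief degrees play in the model described above.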

    A survey of task-oriented crowdsourcing

    Since the advent of artificial intelligence, researchers have been trying to create machines that emulate human behaviour. Back in the 1960s, however, Licklider (IRE Trans Hum Factors Electron, 4-11, 1960) believed that machines and computers were just part of a scale with computers on one side and humans on the other (human computation). After almost a decade of active research into human computation and crowdsourcing, this paper presents a survey of crowdsourcing human computation systems, with the focus being on solving micro-tasks and complex tasks. An analysis of the current state of the art is performed from a technical standpoint, which includes a systematized description of the terminologies used by crowdsourcing platforms and the relationships between each term. Furthermore, the similarities between task-oriented crowdsourcing platforms are described and presented in a process diagram according to a proposed classification. Using this analysis as a stepping stone, this paper concludes with a discussion of challenges and possible future research directions. This work is part-funded by the ERDF (European Regional Development Fund) through the COMPETE Programme (Operational Programme for Competitiveness) and by National Funds through the FCT (Fundação para a Ciência e a Tecnologia, the Portuguese Foundation for Science and Technology) within the Ph.D. Grant SFRH/BD/70302/2010 and by the projects AAL4ALL (QREN 11495), World Search (QREN 13852) and FCOMP-01-0124-FEDER-028980 (PTDC/EEI-SII/1386/2012). The authors also thank Jane Boardman for her assistance proofreading the document.

    Searching Data: A Review of Observational Data Retrieval Practices in Selected Disciplines

    A cross-disciplinary examination of the user behaviours involved in seeking and evaluating data is surprisingly absent from the research data discussion. This review explores the data retrieval literature to identify commonalities in how users search for and evaluate observational research data. Two analytical frameworks rooted in information retrieval and science and technology studies are used to identify key similarities in practices as a first step toward developing a model describing data retrieval.

    A Rigorous Uncertainty-Aware Quantification Framework Is Essential for Reproducible and Replicable Machine Learning Workflows

    The ability to replicate predictions made by machine learning (ML) or artificial intelligence (AI) models, and the results of scientific workflows that incorporate such ML/AI predictions, is driven by numerous factors. An uncertainty-aware metric that can quantitatively assess the reproducibility of quantities of interest (QoI) would contribute to the trustworthiness of results obtained from scientific workflows involving ML/AI models. In this article, we discuss how uncertainty quantification (UQ) in a Bayesian paradigm can provide a general and rigorous framework for quantifying reproducibility for complex scientific workflows. Such a framework has the potential to fill a critical gap that currently exists in ML/AI for scientific workflows, as it will enable researchers to determine the impact of ML/AI model prediction variability on the predictive outcomes of ML/AI-powered workflows. We expect that the envisioned framework will contribute to the design of more reproducible and trustworthy workflows for diverse scientific applications, and ultimately, accelerate scientific discoveries.
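
    To illustrate how prediction variability can be turned into an uncertainty-aware reproducibility measure, the sketch below estimates, from repeated runs of a model, the probability that two independent runs of the same workflow agree on a quantity of interest within a chosen tolerance. This is a simplified Monte Carlo illustration of the general idea, not the Bayesian framework proposed in the article; the tolerance and the synthetic accuracy values are hypothetical.

    # Illustrative sketch only: an uncertainty-aware reproducibility score built
    # from repeated evaluations of a quantity of interest (QoI), e.g. accuracies
    # from retraining the same model with different random seeds.
    import numpy as np

    def reproducibility_probability(qoi_samples, tolerance, n_pairs=10_000, seed=0):
        """Estimate P(|QoI_run1 - QoI_run2| <= tolerance) from sampled QoI values."""
        samples = np.asarray(qoi_samples, dtype=float)
        rng = np.random.default_rng(seed)
        run1 = rng.choice(samples, size=n_pairs, replace=True)
        run2 = rng.choice(samples, size=n_pairs, replace=True)
        return float(np.mean(np.abs(run1 - run2) <= tolerance))

    # Example: 50 hypothetical QoI values (test accuracies) from retrainings
    rng = np.random.default_rng(42)
    qoi = rng.normal(loc=0.82, scale=0.03, size=50)
    print(reproducibility_probability(qoi, tolerance=0.05))

    A score close to one indicates that rerunning the workflow rarely changes the QoI by more than the tolerance; in a fully Bayesian treatment the samples would instead come from a posterior predictive distribution.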