9 research outputs found

    Towards FAIR principles for research software

    Get PDF
The FAIR Guiding Principles, published in 2016, aim to improve the findability, accessibility, interoperability and reusability of digital research objects for both humans and machines. Until now, the FAIR principles have mostly been applied to research data. The ideas behind these principles are, however, also directly relevant to research software. Hence there is a distinct need to explore how the FAIR principles can be applied to software. In this work, we aim to summarize the current status of the debate around FAIR and software, as a basis for the development of community-agreed principles for FAIR research software in the future. We discuss what makes software different from data with regard to the application of the FAIR principles, and which desired characteristics of research software go beyond FAIR. Then we present an analysis of where the existing principles can be directly applied to software, where they need to be adapted or reinterpreted, and where the definition of additional principles is required. Here, interoperability has proven to be the most challenging principle, calling for particular attention in future discussions. Finally, we outline next steps on the way towards definitive FAIR principles for research software.
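    Findability for software, for instance, is commonly operationalized through machine-readable metadata. Below is a minimal sketch of such a record in the style of the CodeMeta vocabulary; the project name, DOI, and repository URL are hypothetical placeholders, and this is an illustration rather than anything prescribed by the paper.

```python
# Illustrative, CodeMeta-style metadata record for a piece of research
# software. The field names follow the public CodeMeta vocabulary; all
# values (name, DOI, repository URL) are hypothetical placeholders.
import json

software_metadata = {
    "@context": "https://w3id.org/codemeta/3.0",
    "@type": "SoftwareSourceCode",
    "name": "example-analysis-tool",                          # hypothetical
    "identifier": "https://doi.org/10.5281/zenodo.0000000",   # placeholder DOI
    "codeRepository": "https://example.org/repo",             # placeholder URL
    "license": "https://spdx.org/licenses/MIT",
    "programmingLanguage": "Python",
    "version": "1.0.0",
}

# Serializing as JSON-LD makes the record both human- and machine-readable.
print(json.dumps(software_metadata, indent=2))
```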

    BioSWR – Semantic Web Services Registry for Bioinformatics

    No full text

    Add/Remove SAWSDL reference via SPARQL UPDATE query.

    No full text
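    The caption indicates that semantic (SAWSDL) annotations on registered services are edited through SPARQL UPDATE. A minimal sketch of such an update, sent over the standard SPARQL 1.1 protocol, follows; the endpoint URL, service IRI, and ontology concept are hypothetical placeholders, not BioSWR's actual ones.

```python
# Minimal sketch: add and remove a SAWSDL modelReference annotation via
# SPARQL UPDATE over the standard SPARQL 1.1 protocol. The endpoint and
# all IRIs below are hypothetical placeholders, not BioSWR's actual ones.
import requests

ENDPOINT = "https://example.org/bioswr/sparql"  # hypothetical endpoint

INSERT_QUERY = """
PREFIX sawsdl: <http://www.w3.org/ns/sawsdl#>
INSERT DATA {
  <http://example.org/services/blast#run>
      sawsdl:modelReference <http://edamontology.org/operation_0292> .
}
"""

DELETE_QUERY = """
PREFIX sawsdl: <http://www.w3.org/ns/sawsdl#>
DELETE DATA {
  <http://example.org/services/blast#run>
      sawsdl:modelReference <http://edamontology.org/operation_0292> .
}
"""

# SPARQL 1.1 protocol: updates are POSTed as an 'update' form parameter.
for query in (INSERT_QUERY, DELETE_QUERY):
    resp = requests.post(ENDPOINT, data={"update": query})
    resp.raise_for_status()
```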

    Find all registered Web services via SPARQL DESCRIBE query.

    No full text
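    A DESCRIBE query like the one in the caption can be issued through the same SPARQL protocol. The sketch below assumes services are typed with the W3C WSDL RDF vocabulary; the endpoint URL is again a hypothetical placeholder.

```python
# Minimal sketch: list every registered Web service with a SPARQL
# DESCRIBE query over the SPARQL 1.1 protocol. The endpoint URL is a
# hypothetical placeholder; the wsdl: vocabulary is the W3C WSDL RDF mapping.
import requests

ENDPOINT = "https://example.org/bioswr/sparql"  # hypothetical endpoint

DESCRIBE_QUERY = """
PREFIX wsdl: <http://www.w3.org/ns/wsdl-rdf#>
DESCRIBE ?service
WHERE { ?service a wsdl:Service . }
"""

resp = requests.get(
    ENDPOINT,
    params={"query": DESCRIBE_QUERY},
    headers={"Accept": "text/turtle"},
)
resp.raise_for_status()
print(resp.text)  # RDF description of every registered service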

    BioSWR REST Web services API.

    No full text
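    Consuming a REST registry API generally follows the pattern sketched below; the base URL and resource path are hypothetical placeholders rather than BioSWR's documented endpoints.

```python
# Minimal sketch of consuming a REST registry API: fetch the list of
# registered services as JSON. The base URL and resource path are
# hypothetical placeholders, not BioSWR's documented endpoints.
import requests

BASE_URL = "https://example.org/bioswr/rest"  # hypothetical base URL

resp = requests.get(f"{BASE_URL}/services",
                    headers={"Accept": "application/json"})
resp.raise_for_status()
for service in resp.json():
    print(service)
```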

    Data infrastructures for AI in medical imaging: a report on the experiences of five EU projects

    Get PDF
Abstract Artificial intelligence (AI) is transforming the field of medical imaging and has the potential to bring medicine from the era of 'sick-care' to the era of healthcare and prevention. The development of AI requires access to large, complete, and harmonized real-world datasets, representative of population and disease diversity. However, to date, efforts are fragmented, based on single-institution, size-limited, and annotation-limited datasets. Available public datasets (e.g., The Cancer Imaging Archive, TCIA, USA) are limited in scope, making model generalizability difficult. In this direction, five European Union projects are currently working on the development of big data infrastructures that will enable European, ethically and General Data Protection Regulation-compliant, quality-controlled, cancer-related medical imaging platforms, in which both large-scale data and AI algorithms will coexist. The vision is to create sustainable AI cloud-based platforms for the development, implementation, verification, and validation of trustable, usable, and reliable AI models for addressing specific unmet needs regarding cancer care provision. In this paper, we present an overview of the development efforts, highlighting the challenges and the approaches selected, providing valuable feedback to future attempts in the area.
    Key points
    • Artificial intelligence models for health imaging require access to large amounts of harmonized imaging data and metadata.
    • Main infrastructures adopted either collect centrally anonymized data or enable access to pseudonymized distributed data.
    • Developing a common data model for storing all relevant information is a challenge.
    • Trust of data providers in data sharing initiatives is essential.
    • An online European Union meta-tool repository is a necessity, minimizing effort duplication for the various projects in the area.
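    One of the key points above is the difficulty of agreeing on a common data model. Purely as an illustration of the idea (not any project's actual schema), such a model might pair pseudonymized subject identifiers with harmonized imaging metadata, as in the sketch below; every field name and value is hypothetical.

```python
# Purely illustrative sketch of a harmonized imaging record in a common
# data model; field names and values are hypothetical, not any EU
# project's actual schema.
from dataclasses import dataclass

@dataclass
class ImagingStudyRecord:
    pseudonym: str          # pseudonymized subject ID, never the real identity
    modality: str           # e.g. "MR", "CT", "PET"
    body_site: str          # anatomical region, from a harmonized vocabulary
    acquisition_year: int   # coarsened date to reduce re-identification risk
    cancer_type: str        # harmonized diagnosis label
    annotation_uri: str     # link to segmentation/label data, if any

record = ImagingStudyRecord(
    pseudonym="SUBJ-000123",
    modality="MR",
    body_site="prostate",
    acquisition_year=2021,
    cancer_type="prostate carcinoma",
    annotation_uri="https://example.org/annotations/000123",
)
print(record)
```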

    Butler enables rapid cloud-based analysis of thousands of human genomes (vol 79, pg 134, 2019)

    No full text
An amendment to this paper has been published and can be accessed via a link at the top of the paper.

    Butler enables rapid cloud-based analysis of thousands of human genomes

    No full text
We present Butler, a computational tool that facilitates large-scale genomic analyses on public and academic clouds. Butler includes innovative anomaly detection and self-healing functions that improve the efficiency of data processing and analysis by 43% compared with current approaches. Butler enabled processing of a 725-terabyte cancer genome dataset from the Pan-Cancer Analysis of Whole Genomes (PCAWG) project in a time-efficient and uniform manner.
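    The abstract describes Butler's anomaly detection and self-healing only at a high level. A toy sketch of the general pattern (heartbeat-based health checks that restart stalled workflow tasks) is shown below; all names, checks, and thresholds are hypothetical and do not reflect Butler's actual implementation.

```python
# Toy sketch of the self-healing pattern the abstract describes:
# periodically check task health and restart tasks that look stuck.
# All names, checks, and thresholds are hypothetical, not Butler's code.
import time

MAX_SILENT_SECONDS = 3600  # hypothetical: flag tasks silent for over an hour

def heartbeat_age(task):
    """Seconds since the task last reported progress."""
    return time.time() - task["last_heartbeat"]

def self_heal(tasks, restart):
    """Restart every running task whose heartbeat is anomalously old."""
    for task in tasks:
        if task["state"] == "running" and heartbeat_age(task) > MAX_SILENT_SECONDS:
            restart(task)  # e.g. re-submit the job to the cluster

# Usage with an in-memory task list and a stub restart function:
tasks = [{"id": 1, "state": "running", "last_heartbeat": time.time() - 7200}]
self_heal(tasks, restart=lambda t: print(f"restarting task {t['id']}"))
```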