1,987 research outputs found

    Interoperability and FAIRness through a novel combination of Web technologies

    Data in the life sciences are extremely diverse and are stored in a broad spectrum of repositories, ranging from those designed for particular data types (such as KEGG for pathway data or UniProt for protein data) to those that are general-purpose (such as FigShare, Zenodo, Dataverse or EUDAT). These data have widely different levels of sensitivity and security considerations. For example, clinical observations about genetic mutations in patients are highly sensitive, while observations of species diversity are generally not. The lack of uniformity in data models from one repository to another, and in the richness and availability of metadata descriptions, makes integration and analysis of these data a manual, time-consuming task with no scalability. Here we explore a set of resource-oriented Web design patterns for data discovery, accessibility, transformation, and integration that can be implemented by any general- or special-purpose repository as a means to assist users in finding and reusing their data holdings. We show that by using off-the-shelf technologies, interoperability can be achieved at the level of an individual spreadsheet cell. We note that the behaviours of this architecture compare favourably to the desiderata defined by the FAIR Data Principles, and can therefore represent an exemplar implementation of those principles. The proposed interoperability design patterns may be used to improve discovery and integration of both new and legacy data, maximizing the utility of all scholarly outputs.
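
    As a rough illustration of the resource-oriented pattern this abstract describes, the sketch below retrieves machine-readable (RDF) metadata for a repository record via HTTP content negotiation, so one identifier can serve both humans and software agents. The record URL and media types are placeholders assumed for the example, not endpoints from the paper.

```python
# Minimal sketch of resource-oriented metadata retrieval via HTTP
# content negotiation. The endpoint URL is a hypothetical placeholder.
import requests

RECORD_URL = "https://example.org/repository/record/1234"  # placeholder

def fetch_metadata(url: str, media_type: str = "text/turtle") -> str:
    """Ask the same resource URL for a specific representation.

    A FAIR-style endpoint can serve HTML to browsers and RDF
    (e.g. Turtle or JSON-LD) to software agents from one identifier.
    """
    response = requests.get(url, headers={"Accept": media_type}, timeout=10)
    response.raise_for_status()
    return response.text

if __name__ == "__main__":
    turtle = fetch_metadata(RECORD_URL)             # RDF for machines
    html = fetch_metadata(RECORD_URL, "text/html")  # HTML for humans
    print(turtle[:200])
```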

    Inroads to Predict in Vivo Toxicology—An Introduction to the eTOX Project

    There is a widespread awareness that the wealth of preclinical toxicity data that the pharmaceutical industry has generated in recent decades is not exploited as efficiently as it could be. Enhanced data availability for compound comparison (“read-across”), or for data mining to build predictive tools, should lead to a more efficient drug development process and contribute to the reduction of animal use (3Rs principle). Achieving these goals requires a consortium approach that brings together the relevant partners. The eTOX (“electronic toxicity”) consortium represents such a project and is a public-private partnership within the framework of the European Innovative Medicines Initiative (IMI). The project aims at the development of in silico prediction systems for organ and in vivo toxicity. The backbone of the project will be a database of preclinical toxicity data for drug compounds or candidates, extracted from previously unpublished legacy reports from thirteen European and European-operations-based pharmaceutical companies. The database will be enhanced by the incorporation of publicly available, high-quality toxicology data. Seven academic institutes and five small-to-medium-sized enterprises (SMEs) contribute their expertise in data gathering, database curation, data mining, chemoinformatics and predictive systems development. The outcome of the project will be a predictive system contributing to early potential hazard identification and risk assessment during the drug development process. The concept and strategy of the eTOX project are described here, together with current achievements and future deliverables.
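
    To make the “read-across” idea concrete: one minimal approach is to predict a toxicity label for a query compound from its most chemically similar known neighbours. The sketch below uses standard RDKit Morgan fingerprints and Tanimoto similarity; the compounds and labels are invented for illustration, and this is not the eTOX system's actual method.

```python
# Illustrative read-across: predict a toxicity label from the most
# similar known compounds. The fingerprint and similarity calls are
# standard RDKit; the reference data here are invented for the example.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

# Hypothetical reference set of (SMILES, is_toxic) pairs.
REFERENCE = [
    ("CCO", False),                    # ethanol
    ("c1ccccc1", True),                # benzene
    ("CC(=O)Oc1ccccc1C(=O)O", False),  # aspirin
]

def fingerprint(smiles: str):
    """Morgan (circular) fingerprint, radius 2, 2048 bits."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError(f"Unparseable SMILES: {smiles}")
    return AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)

def read_across(query_smiles: str, k: int = 1) -> bool:
    """Label the query by majority vote of its k most similar neighbours."""
    query_fp = fingerprint(query_smiles)
    ranked = sorted(
        ((DataStructs.TanimotoSimilarity(query_fp, fingerprint(s)), toxic)
         for s, toxic in REFERENCE),
        reverse=True,
    )
    votes = [toxic for _, toxic in ranked[:k]]
    return sum(votes) > len(votes) / 2

if __name__ == "__main__":
    # Toluene's nearest neighbour here is benzene, so it is flagged toxic.
    print(read_across("Cc1ccccc1"))
```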

    BlogForever D5.1: Design and Specification of Case Studies

    This document presents the specification and design of six case studies for testing the BlogForever platform implementation process. The report explains the data collection plan, in which repository users will provide usability feedback through questionnaires, and details the scalability analysis to be performed through dedicated log-file analytics. The case studies will investigate the sustainability of the platform, whether it meets potential users’ needs, and whether it has an important long-term impact.
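
    The log-file analytics mentioned for the scalability analysis could, at their simplest, aggregate request counts per hour from a web-server access log to spot load peaks. The sketch below assumes a standard Apache/Nginx combined log format; the abstract does not specify BlogForever's actual logging format.

```python
# Minimal sketch of scalability analytics over an access log:
# requests per (day, hour) bucket. The combined log format is an
# assumption, not BlogForever's actual logging scheme.
import re
from collections import Counter

# e.g. 127.0.0.1 - - [10/Oct/2023:13:55:36 +0000] "GET /blog HTTP/1.1" 200 512
TIMESTAMP = re.compile(r"\[(\d{2}/\w{3}/\d{4}):(\d{2}):")

def requests_per_hour(log_path: str) -> Counter:
    """Count requests per (day, hour) bucket to reveal load peaks."""
    buckets = Counter()
    with open(log_path) as log:
        for line in log:
            match = TIMESTAMP.search(line)
            if match:
                day, hour = match.groups()
                buckets[f"{day} {hour}:00"] += 1
    return buckets

if __name__ == "__main__":
    for bucket, count in sorted(requests_per_hour("access.log").items()):
        print(bucket, count)
```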

    Metadata Schema for Traditional Knowledge

    Approximately four hundred indigenous communities in Indonesia rely on traditional knowledge to support their daily lives. Because this knowledge offers many benefits, many stakeholders have started to collect and document it digitally. However, these digital records are documented in differing metadata formats, because there is no specific metadata schema for describing digital traditional-knowledge data. Moreover, these differences between metadata schemas complicate the documentation, management and dissemination of this traditional knowledge. To overcome this problem, this work designs a specific metadata schema for the traditional-knowledge domain using established metadata development methods, i.e., domain analysis, derivation analysis, system-centric analysis, user-centric analysis and resource-centric analysis. These methods were selected on the basis of a literature review of research articles on metadata development. As a result, this paper proposes a metadata schema for traditional knowledge consisting of 37 metadata elements categorized into 6 metadata sections, i.e., supporting data, material, supporting tool, success story, knowledge source, and knowledge engineer.
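
    As a sketch of how such a record might be serialized, the example below encodes the six sections named in the abstract as JSON. The field names inside each section are invented for illustration; the abstract lists only the section names, not the paper's 37 individual elements.

```python
# Illustrative traditional-knowledge record using the six sections the
# abstract names. Field names inside each section are invented; the
# schema's actual 37 elements are not given in the abstract.
import json

record = {
    "supporting_data": {"title": "Herbal fever remedy", "language": "id"},
    "material": {"plant": "Sambiloto (Andrographis paniculata)"},
    "supporting_tool": {"tool": "clay pot"},
    "success_story": {"summary": "Used across several villages."},
    "knowledge_source": {"community": "Example community, West Java"},
    "knowledge_engineer": {"name": "A. Researcher", "role": "documenter"},
}

print(json.dumps(record, indent=2, ensure_ascii=False))
```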
    • …