26 research outputs found

    The Virtual Metabolic Human database: integrating human and gut microbiome metabolism with nutrition and disease

    A multitude of factors contribute to complex diseases and can be measured with ‘omics’ methods. Databases facilitate the interpretation of such data in terms of underlying mechanisms. Here, we describe the Virtual Metabolic Human (VMH, www.vmh.life) database, which encapsulates current knowledge of human metabolism within five interlinked resources: ‘Human metabolism’, ‘Gut microbiome’, ‘Disease’, ‘Nutrition’, and ‘ReconMaps’. The VMH captures 5180 unique metabolites, 17 730 unique reactions, 3695 human genes, 255 Mendelian diseases, 818 microbes, 632 685 microbial genes, and 8790 food items. The VMH’s unique features are (i) the hosting of metabolic reconstructions of human and gut microbes amenable to metabolic modeling; (ii) seven human metabolic maps for data visualization; (iii) a nutrition designer; (iv) a user-friendly webpage and application programming interface to access its content; (v) a user feedback option for community engagement; and (vi) the connection of its entities to 57 other web resources. The VMH represents a novel, interdisciplinary database for data interpretation and hypothesis generation for the biomedical community.
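    The application programming interface mentioned in (iv) is accessed over HTTP. A minimal sketch of building such queries is shown below; the REST-style endpoint layout under `https://www.vmh.life/_api/` and the `abbreviation` filter are assumptions here, so the exact paths and parameters should be checked against the VMH documentation:

```python
from urllib.parse import urlencode

# Assumed base path for the VMH REST API (verify against www.vmh.life).
VMH_API = "https://www.vmh.life/_api"


def vmh_query_url(resource: str, **filters: str) -> str:
    """Build a query URL for a VMH resource such as 'metabolites'
    or 'reactions', with optional field filters."""
    url = f"{VMH_API}/{resource}/"
    if filters:
        url += "?" + urlencode(filters)
    return url


# Example: look up a metabolite by its (assumed) abbreviation field.
print(vmh_query_url("metabolites", abbreviation="glc_D"))
```

    The returned URL can then be fetched with any HTTP client to retrieve the corresponding VMH records.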

    Quantifying ChIP-seq data: A spiking method providing an internal reference for sample-to-sample normalization

    Chromatin immunoprecipitation followed by deep sequencing (ChIP-seq) experiments are widely used to determine, within entire genomes, the occupancy sites of any protein of interest, including, for example, transcription factors, RNA polymerases, or histones with or without various modifications. In addition to allowing the determination of occupancy sites within one cell type and under one condition, this method allows, in principle, the establishment and comparison of occupancy maps across cell types, tissues, and conditions. Such comparisons require, however, that samples be normalized. Widely used normalization methods that include a quantile normalization step perform well when factor occupancy varies at a subset of sites, but may miss uniform genome-wide increases or decreases in site occupancy. We describe a spike adjustment procedure (SAP) that, unlike commonly used normalization methods intervening at the analysis stage, entails an experimental step prior to immunoprecipitation. A constant, low amount from a single batch of chromatin of a foreign genome is added to the experimental chromatin. This "spike" chromatin then serves as an internal control to which the experimental signals can be adjusted. We show that the method improves similarity between replicates and reveals biological differences, including global and largely uniform changes.
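    The spike adjustment described above can be sketched numerically: because every sample receives the same amount of spike-in chromatin, differences in spike-in read counts between samples reflect technical variation, and the reciprocal of those counts yields per-sample scale factors. The helper below is illustrative only, assuming raw read counts as input, and is not the authors' implementation:

```python
def spike_scale_factors(spike_counts):
    """Scale factors that equalize spike-in signal across samples.

    spike_counts: reads per sample mapped to the spike-in (foreign) genome.
    Returns one multiplicative factor per sample; applying it to the
    experimental (target-genome) signal adjusts all samples to the
    same internal reference.
    """
    reference = min(spike_counts)  # scale every sample down to the smallest
    return [reference / c for c in spike_counts]


# Hypothetical spike-in read counts for three samples.
spike = [120_000, 240_000, 180_000]
factors = spike_scale_factors(spike)
# Each sample's experimental coverage would be multiplied by its factor,
# so a sample with twice the spike-in reads is scaled down by half.
print(factors)
```

    Because the factors are tied to a constant external reference rather than to the distribution of experimental signals, uniform genome-wide changes in occupancy are preserved instead of being normalized away.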

    Drug-target identification in COVID-19 disease mechanisms using computational systems biology approaches

    Introduction: The COVID-19 Disease Map project is a large-scale community effort uniting 277 scientists from 130 institutions around the globe. We use high-quality, mechanistic content describing SARS-CoV-2–host interactions and develop interoperable bioinformatic pipelines for novel target identification and drug repurposing. Methods: Extensive community work allowed an impressive step forward in building interfaces between systems biology tools and platforms. Our framework can link biomolecules from omics data analysis and computational modelling to dysregulated pathways in a cell-, tissue- or patient-specific manner. Drug repurposing using text mining and AI-assisted analysis identified potential drugs, chemicals, and microRNAs that could target the identified key factors. Results: Results revealed drugs already tested for anti-COVID-19 efficacy, providing a mechanistic context for their mode of action, as well as drugs already in clinical trials for treating other diseases but never tested against COVID-19. Discussion: The key advance is that the proposed framework is versatile and expandable, offering a significant upgrade in the arsenal for studying virus–host interactions and other complex pathologies.

    IMP HTML reports

    No full text
    This file contains all the HTML reports generated by IMP for the analysis of the datasets reported in the article.

    IMP ver. 1.4 docker image

    No full text
    This upload contains the IMP docker image, version 1.4. For more information and documentation, visit the IMP website: http://r3lab.uni.lu/web/imp/

    IMP test data set

    No full text
    This file contains the test data set used within the article: IMP: a reproducible pipeline for reference-independent integrated metagenomic and metatranscriptomic analyses. Shaman Narayanasamy†, Yohan Jarosz†, Emilie E.L. Muller, CĂ©dric C. Laczny, Malte Herold, Anne Kaysen, Anna Heintz-Buschart, NicolĂĄs Pinel, Patrick May, and Paul Wilmes*. Preprint: http://biorxiv.org/content/early/2016/02/10/039263. This test data set was used for benchmarking the run times of IMP. It was derived by selecting the first 5% of reads from a wastewater sludge microbial community dataset (see manuscript). Also included are the respective preprocessed FASTQ files, so that IMP can be tested without running the preprocessing step. A README file inside the folder briefly describes the different FASTQ files it contains.

    IMP small scale test dataset

    No full text
    This file contains the test data set used within the article: IMP: a reproducible pipeline for reference-independent integrated metagenomic and metatranscriptomic analyses. Shaman Narayanasamy†, Yohan Jarosz†, Emilie E.L. Muller, CĂ©dric C. Laczny, Malte Herold, Anne Kaysen, Anna Heintz-Buschart, NicolĂĄs Pinel, Patrick May, and Paul Wilmes*. Preprint: http://biorxiv.org/content/early/2016/02/10/039263. This test data set was used for benchmarking the run times of IMP. It was derived by selecting the first 5% of reads from a wastewater sludge microbial community dataset (see manuscript and the original publication of the data: 10.1038/ncomms6603). Also included are the respective preprocessed FASTQ files, so that IMP can be tested without running the preprocessing step. A README file inside the folder briefly describes the different FASTQ files it contains.

    DAISY: A Data Information System for accountability under the General Data Protection Regulation

    No full text
    The new European legislation on data protection, namely, the General Data Protection Regulation (GDPR), has introduced comprehensive requirements for documenting the processing of personal data, as well as for informing data subjects of its use. The GDPR’s accountability principle requires institutions, projects, and data hubs to document their data processing activities and demonstrate compliance with the GDPR. In response to this requirement, we see the emergence of commercial data-mapping tools, and of institutions creating GDPR data registers with such tools. One shortcoming of this approach is the generic nature of these tools, whose process-based model does not capture the project-based, collaborative nature of data processing in biomedical research. We have developed a software tool that allows research institutions to comply with the GDPR accountability requirement and to map the sometimes very complex data flows in biomedical research. By analysing the transparency and record-keeping obligations of each GDPR principle, we observe that our tool effectively meets the accountability requirement. The GDPR is bringing data protection to center stage in research data management, necessitating dedicated tools, personnel, and processes. Our tool, DAISY, is tailored specifically for biomedical research and can help institutions tackle the documentation challenge brought about by the GDPR. DAISY is made available as a free and open-source tool on GitHub. DAISY is actively being used at the Luxembourg Centre for Systems Biomedicine and the ELIXIR-Luxembourg data hub.