
    BNDB – The Biochemical Network Database

    Background: Technological advances in high-throughput techniques and efficient data acquisition methods have resulted in a massive amount of life science data. The data is stored in numerous databases that have been established over the last decades and are essential resources for scientists today. However, the diversity of the databases and their underlying data models makes it difficult to combine this information for solving complex problems in systems biology. Currently, researchers typically have to browse several, often highly focused, databases to obtain the required information. Hence, there is a pressing need for more efficient systems for integrating, analyzing, and interpreting these data. The standardization and virtual consolidation of these databases is a major challenge whose solution would provide unified access to a variety of data sources. Description: We present the Biochemical Network Database (BNDB), a powerful relational database platform that allows a complete semantic integration of an extensive collection of external databases. BNDB is built upon a comprehensive and extensible object model called BioCore, which is powerful enough to model most known biochemical processes and at the same time easily extensible to accommodate new biological concepts. Besides a web interface for searching and curating the data, a Java-based viewer (BiNA) provides powerful, platform-independent visualization and navigation of the data, using sophisticated graph layout algorithms. Conclusion: BNDB provides simple, unified access to a variety of external data sources. Its tight integration with the biochemical network library BN++ enables import, integration, analysis, and visualization of the data. BNDB is freely accessible at http://www.bndb.org.

    Proteomics to go: Proteomatic enables the user-friendly creation of versatile MS/MS data evaluation workflows

    We present Proteomatic, an operating-system-independent and user-friendly platform that enables the construction and execution of MS/MS data evaluation pipelines using free and commercial software. Required external programs, such as those for peptide identification, are downloaded automatically in the case of free software. Due to a strict separation of functionality and presentation, and support for multiple scripting languages, new processing steps can be added easily.

    Modeling metabolic networks in C. glutamicum: a comparison of rate laws in combination with various parameter optimization strategies

    Background: To understand the dynamic behavior of cellular systems, mathematical modeling is often necessary and comprises three steps: (1) experimental measurement of participating molecules, (2) assignment of rate laws to each reaction, and (3) parameter calibration with respect to the measurements. In each of these steps the modeler is confronted with a plethora of alternative approaches, e.g., the selection of approximate rate laws in step two when the specific equations are unknown, or the choice of an estimation procedure with its specific settings in step three. This overall process, with its numerous choices and the mutual influences between them, makes it hard to single out the best modeling approach for a given problem. Results: We investigate the modeling process using multiple kinetic equations together with various parameter optimization methods for a well-characterized example network, the biosynthesis of valine and leucine in C. glutamicum. For this purpose, we derive seven dynamic models based on generalized mass action, Michaelis-Menten, and convenience kinetics, as well as the stochastic Langevin equation. In addition, we introduce two modeling approaches for feedback inhibition to the mass action kinetics. The parameters of each model are estimated using eight optimization strategies. To determine the most promising modeling approaches together with the best optimization algorithms, we carry out a two-step benchmark: (1) a coarse-grained comparison of the algorithms on all models and (2) fine-grained tuning of the best optimization algorithms and models. To analyze the space of the best parameters found for each model, we apply clustering, variance, and correlation analysis. Conclusion: A mixed model based on the convenience rate law and the Michaelis-Menten equation, in which all reactions are assumed to be reversible, is the most suitable deterministic modeling approach, followed by a reversible generalized mass action kinetics model. A Langevin model is advisable to take stochastic effects into account. To estimate the model parameters, three algorithms are particularly useful: for first attempts, the settings-free Tribes algorithm yields valuable results; particle swarm optimization and differential evolution provide significantly better results with appropriate settings.
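As an illustrative aside, the simplest of the rate laws compared above, the irreversible Michaelis-Menten equation, can be simulated in a few lines. The parameter values and the forward-Euler integrator below are arbitrary choices for demonstration, not the paper's models or calibrated parameters.

```python
# Illustrative sketch: a single irreversible Michaelis-Menten reaction
# S -> P, integrated with a forward-Euler step. Vmax, Km, and the
# initial substrate concentration are made-up demonstration values.

def michaelis_menten_rate(s, vmax, km):
    """Michaelis-Menten rate law: v = Vmax * S / (Km + S)."""
    return vmax * s / (km + s)

def simulate(s0, vmax=1.0, km=0.5, dt=0.01, steps=1000):
    """Integrate dS/dt = -v, dP/dt = +v with forward Euler."""
    s, p = s0, 0.0
    for _ in range(steps):
        v = michaelis_menten_rate(s, vmax, km)
        s -= v * dt
        p += v * dt
    return s, p

s, p = simulate(s0=2.0)
# Mass is conserved up to floating-point error: S + P stays at S0.
print(s, p)
```

Replacing the rate function is all it takes to swap in another kinetic law, which is essentially the comparison the benchmark above performs at much larger scale.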

    HLA Ligand Atlas: a benign reference of HLA-presented peptides to improve T-cell-based cancer immunotherapy

    BACKGROUND The human leucocyte antigen (HLA) complex controls adaptive immunity by presenting defined fractions of the intracellular and extracellular protein content to immune cells. Understanding the benign HLA ligand repertoire is a prerequisite to defining safe T-cell-based immunotherapies against cancer. Due to the poor availability of benign tissues, normal tissue adjacent to the tumor, if available, has been used as a benign surrogate when defining tumor-associated antigens. However, this comparison has proven to be insufficient and has even resulted in lethal outcomes. In order to match the tumor immunopeptidome with an equivalent counterpart, we created the HLA Ligand Atlas, the first extensive collection of paired HLA-I and HLA-II immunopeptidomes from 227 benign human tissue samples. This dataset facilitates a balanced comparison between tumor and benign tissues at the HLA ligand level. METHODS Human tissue samples were obtained from 16 subjects at autopsy; five thymus samples and two ovary samples originated from living donors. HLA ligands were isolated via immunoaffinity purification and analyzed in over 1200 liquid chromatography mass spectrometry runs. Experimentally and computationally reproducible protocols were employed for data acquisition and processing. RESULTS The initial release covers 51 HLA-I and 86 HLA-II allotypes presenting 90,428 HLA-I and 142,625 HLA-II ligands. The HLA allotypes are representative of the world population. We observe that immunopeptidomes differ considerably between tissues and individuals at the source protein and HLA ligand levels. Moreover, we discover 1407 HLA-I ligands from non-canonical genomic regions. Such peptides were previously described in tumors, peripheral blood mononuclear cells (PBMCs), healthy lung tissues, and cell lines. In a case study in glioblastoma, we show that potential on-target off-tumor adverse events in immunotherapy can be avoided by comparing tumor immunopeptidomes to the provided multi-tissue reference. CONCLUSION Given that T-cell-based immunotherapies, such as CAR-T cells, affinity-enhanced T cell transfer, cancer vaccines, and immune checkpoint inhibition, have significant side effects, the HLA Ligand Atlas is a first step toward defining tumor-associated targets with an improved safety profile. The resource provides insights into basic and applied immune-associated questions in the context of cancer immunotherapy, infection, transplantation, allergy, and autoimmunity. It is publicly available and can be browsed in an easy-to-use web interface at https://hla-ligand-atlas.org.
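The core of the comparison described above, filtering tumor-presented peptides against a multi-tissue benign reference, reduces to a set difference. The sketch below illustrates that idea only; all peptide sequences and tissue names are invented examples, not data from the Atlas.

```python
# Illustrative sketch of filtering a tumor immunopeptidome against a
# benign multi-tissue reference: keep only peptides never observed in
# any benign sample. All sequences below are invented examples.

def tumor_exclusive_peptides(tumor, benign_reference):
    """Return tumor peptides absent from every benign tissue sample."""
    benign = set().union(*benign_reference.values()) if benign_reference else set()
    return sorted(set(tumor) - benign)

benign_atlas = {
    "lung":  {"KLDETNNFV", "SLYNTVATL"},
    "liver": {"SLYNTVATL", "GILGFVFTL"},
}
tumor_peptides = {"KLDETNNFV", "AMFWSVPTV", "GILGFVFTL"}

print(tumor_exclusive_peptides(tumor_peptides, benign_atlas))
# -> ['AMFWSVPTV']
```

Peptides shared with any benign tissue are discarded as potential on-target off-tumor liabilities; only the tumor-exclusive candidates remain for further vetting.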

    A novel algorithm for detecting differentially regulated paths based on gene set enrichment analysis

    Motivation: Deregulated signaling cascades are known to play a crucial role in many pathogenic processes, among them tumor initiation and progression. In the recent past, modern experimental techniques that allow for measuring the amount of mRNA transcripts of almost all known human genes in a tissue, or even in a single cell, have opened new avenues for studying the activity of signaling cascades and for understanding the information flow in these networks.
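The gene set enrichment analysis that the path-detection algorithm above builds on scores a gene set against a ranked gene list with a running-sum statistic. The sketch below shows the classic unweighted Kolmogorov-Smirnov form of that statistic, not the paper's path-specific extension; gene names and the ranking are invented.

```python
# Illustrative sketch of the classic GSEA running-sum statistic
# (unweighted Kolmogorov-Smirnov form). Walking down the ranked list,
# the sum steps up on gene-set members and down otherwise; the
# enrichment score is the maximum deviation from zero.

def enrichment_score(ranked_genes, gene_set):
    """Return the running-sum value of largest magnitude."""
    hits = sum(1 for g in ranked_genes if g in gene_set)
    misses = len(ranked_genes) - hits
    up, down = 1.0 / hits, 1.0 / misses  # assumes 0 < hits < len(list)
    running, best = 0.0, 0.0
    for g in ranked_genes:
        running += up if g in gene_set else -down
        if abs(running) > abs(best):
            best = running
    return best

ranking = ["TP53", "EGFR", "MYC", "BRCA1", "ACTB", "GAPDH"]
pathway = {"TP53", "EGFR", "MYC"}
es = enrichment_score(ranking, pathway)
print(round(es, 3))  # -> 1.0 (all members sit at the top of the ranking)
```

Because all three pathway members occupy the top of the ranking, the running sum reaches its maximum possible value before stepping back down, signaling a maximally enriched set.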

    A proteomics sample metadata representation for multiomics integration and big data analysis

    The amount of public proteomics data is rapidly increasing, but there is no standardized format to describe the sample metadata and their relationship with the dataset files in a way that fully supports their understanding or reanalysis. Here we propose to develop the transcriptomics data format MAGE-TAB into a standard representation for proteomics sample metadata. We implement MAGE-TAB-Proteomics in a crowdsourcing project to manually curate over 200 public datasets. We also describe tools and libraries to validate and submit sample metadata-related information to the PRIDE repository. We expect that these developments will improve reproducibility and facilitate the reanalysis and integration of public proteomics datasets.
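MAGE-TAB sample metadata is carried in tab-delimited tables, which standard CSV tooling can read directly. The sketch below parses a minimal SDRF-style table; the two sample rows and the particular column set are invented for illustration, and real files carry many more columns than shown here.

```python
# Minimal sketch of reading an SDRF-style tab-separated sample-metadata
# table of the kind used by MAGE-TAB-Proteomics. The rows and the
# column selection below are invented examples.

import csv
import io

SDRF_EXAMPLE = """\
source name\tcharacteristics[organism]\tcharacteristics[disease]\tcomment[data file]
sample_1\tHomo sapiens\tnormal\trun01.raw
sample_2\tHomo sapiens\tmelanoma\trun02.raw
"""

def read_sdrf(text):
    """Return one dict per sample row, keyed by the header fields."""
    return list(csv.DictReader(io.StringIO(text), delimiter="\t"))

rows = read_sdrf(SDRF_EXAMPLE)
print(rows[1]["characteristics[disease]"])  # -> melanoma
```

Keeping sample annotations in such a plain tab-delimited layout is what makes the crowdsourced curation and automated validation described above practical: each row links one sample to its raw data file and biological attributes.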

    Challenges and practices in promoting (ageing) employees working career in the health care sector – case studies from Germany, Finland and the UK

    Background The health and social care sector (HCS) is currently facing multiple challenges across Europe: against the background of ageing societies, more people are in need of care. Simultaneously, several countries report a lack of skilled personnel. Due to its structural characteristics, including a high share of part-time workers, an ageing workforce, and challenging working conditions, the HCS requires measures and strategies to deal with these challenges. Methods This qualitative study analyses whether and how organisations in three countries (Germany, Finland, and the UK) report similar challenges and how they support longer working careers in the HCS. To this end, we conducted multiple case studies in care organisations. Altogether, 54 semi-structured interviews with employees and representatives of management were carried out and analysed thematically. Results Analysis of the interviews revealed similar challenges across the countries but different strategies in responding to them. Multiple organisational measures and strategies to improve the work ability and working-life participation of (ageing) workers were identified. With respect to the organisational measures, our results showed that the studied organisations did not implement any age-specific management strategies but realised different reactive and proactive human relations measures aimed at maintaining and improving employees' work ability (i.e., health, competence, and motivation) and longer working careers. Conclusions Organisations within the HCS tend to focus on the recruitment of younger workers and/or migrant workers to address the current lack of skilled personnel. The idea of explicitly focusing on ageing workers, and the concept of age management as a possible solution, seems to lack awareness and/or popularity among organisations in the sector. The concept of age management offers a broad range of measures that could be beneficial for both employees and employers/organisations. Employees could benefit from better occupational well-being and more meaningful careers, while employers could benefit from more committed employees with enhanced productivity, work ability, and possibly longer careers.

    qcML: an exchange format for quality control metrics from mass spectrometry experiments.

    Quality control is increasingly recognized as a crucial aspect of mass spectrometry-based proteomics. Several recent papers discuss relevant parameters for quality control and present applications to extract these from the instrumental raw data. What has been missing, however, is a standard data exchange format for reporting these performance metrics. We therefore developed the qcML format, an XML-based standard that follows the design principles of the related mzML, mzIdentML, mzQuantML, and TraML standards from the HUPO-PSI (Proteomics Standards Initiative). In addition to the XML format, we also provide tools for the calculation of a wide range of quality metrics, as well as a database format and interconversion tools, so that existing LIMS can easily add relational storage of the quality control data to their existing schema. We here describe the qcML specification, along with possible use cases and an illustrative example of the subsequent analysis possibilities. All information about qcML is available at http://code.google.com/p/qcml.
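To give a feel for what an XML-based metrics report involves, the sketch below emits a small qcML-flavoured document with Python's standard library. The element names, attribute names, version string, accession strings, and metric values are simplified placeholders, not the actual qcML schema; the specification should be consulted for real documents.

```python
# Sketch of emitting a qcML-style XML quality report with the standard
# library. Element/attribute names, the version string, the accession
# strings, and the values are placeholders, not the real qcML schema.

import xml.etree.ElementTree as ET

def build_report(run_name, metrics):
    """Build an XML tree with one quality parameter per metric."""
    root = ET.Element("qcML", version="0.1")  # version is a placeholder
    run = ET.SubElement(root, "runQuality", ID=run_name)
    for accession, name, value in metrics:
        ET.SubElement(run, "qualityParameter",
                      accession=accession, name=name, value=str(value))
    return ET.tostring(root, encoding="unicode")

xml_text = build_report("run01", [
    ("QC:0000001", "MS1 spectra count", 7234),   # made-up metric/value
    ("QC:0000002", "MS2 spectra count", 51022),  # made-up metric/value
])
print(xml_text)
```

Grouping metrics per run under a single parent element mirrors the run-centric organization of the related mzML-family standards and keeps the report trivial to load back into a relational schema, one row per quality parameter.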