23 research outputs found

    Linking disaster risk reduction, climate change, and the sustainable development goals

    PURPOSE: The purpose of this paper is to better link the parallel processes yielding international agreements on climate change, disaster risk reduction, and sustainable development.
    DESIGN/METHODOLOGY/APPROACH: The paper explores how the Paris Agreement on climate change relates to disaster risk reduction and sustainable development, showing that the three topics are kept too separate. A resolution is proposed by placing climate change within the wider contexts of disaster risk reduction and sustainable development.
    FINDINGS: No scientific reason exists for separating climate change from the wider disaster risk reduction and sustainable development processes.
    RESEARCH LIMITATIONS/IMPLICATIONS: Based on the research, a conceptual approach for policy and practice is provided. Because of entrenched institutional territory, this approach is unlikely to be implemented.
    ORIGINALITY/VALUE: The paper uses a scientific basis to propose ending the silos that separate the international processes for climate change, disaster risk reduction, and sustainable development.

    Application of integrated transcriptomic, proteomic and metabolomic profiling for the delineation of mechanisms of drug induced cell stress

    High-content omics techniques in combination with stable human in vitro cell culture systems have the potential to improve on current pre-clinical safety regimes by providing detailed mechanistic information on altered cellular processes. Here we investigated the added benefit of integrating transcriptomics, proteomics, and metabolomics together with pharmacokinetics for drug testing regimes. Cultured human renal epithelial cells (RPTEC/TERT1) were exposed to the nephrotoxin cyclosporine A (CsA) at therapeutic and supratherapeutic concentrations for 14 days. CsA was quantified in supernatants and cellular lysates by LC-MS/MS for kinetic modeling. There was rapid cellular uptake and accumulation of CsA, with a non-linear relationship between intracellular and applied concentrations. CsA at 15 µM induced mitochondrial disturbances and activation of the Nrf2 oxidative-damage and unfolded-protein-response pathways. All three omic streams provided complementary information, especially pertaining to Nrf2 and ATF4 activation. No stress induction was detected with 5 µM CsA; however, both concentrations resulted in maximal secretion of cyclophilin B. The study demonstrates for the first time that CsA-induced stress is not directly linked to its primary pharmacology. In addition, we demonstrate the power of integrated omics for the elucidation of signaling cascades brought about by compound-induced cell stress.
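
    The reported non-linear relationship between applied and intracellular CsA concentration can be captured by a saturable uptake model. The Python sketch below fits a Michaelis-Menten-style curve to made-up concentration pairs; the data values, the model choice, and all parameters are illustrative assumptions, not the kinetic model or measurements of the study.

```python
# Illustrative sketch: fit a saturable (Michaelis-Menten-style) uptake curve
# to hypothetical applied vs. intracellular CsA concentrations.
# All numbers below are made up; the study's real kinetic data differ.
import numpy as np
from scipy.optimize import curve_fit

def saturable_uptake(applied_uM, vmax, km):
    """Intracellular concentration as a saturable function of the applied dose."""
    return vmax * applied_uM / (km + applied_uM)

applied = np.array([1.0, 2.5, 5.0, 10.0, 15.0, 20.0])               # uM, hypothetical
intracellular = np.array([40.0, 85.0, 140.0, 190.0, 210.0, 225.0])  # uM, hypothetical

(vmax, km), _ = curve_fit(saturable_uptake, applied, intracellular, p0=(250.0, 5.0))
print(f"Vmax ~ {vmax:.0f} uM, Km ~ {km:.1f} uM")
```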

    Integrative functional genomics decodes herpes simplex virus 1

    Funder: Alexander von Humboldt-Stiftung (Alexander von Humboldt Foundation); doi: https://doi.org/10.13039/100005156
    Abstract: The predicted 80 open reading frames (ORFs) of herpes simplex virus 1 (HSV-1) have been intensively studied for decades. Here, we unravel the complete viral transcriptome and translatome during lytic infection with base-pair resolution by computational integration of multi-omics data. We identify a total of 201 transcripts and 284 ORFs, including all known and 46 novel large ORFs. This includes a hitherto unknown ORF in the locus deleted in the FDA-approved oncolytic virus Imlygic. Multiple transcript isoforms expressed from individual gene loci explain the translation of the vast majority of ORFs, as well as N-terminal extensions (NTEs) and truncations. We show that NTEs with non-canonical start codons govern the subcellular protein localization and packaging of key viral regulators and structural proteins. We extend the current nomenclature to include all viral gene products and provide a genome browser that visualizes all the obtained data from whole-genome to single-nucleotide resolution.
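
    The notion of N-terminal extensions starting at non-canonical codons can be illustrated with a toy ORF scan. The Python sketch below enumerates forward-strand ORFs that begin at ATG or at the near-cognate starts CTG/GTG (DNA alphabet); it is a deliberate simplification for illustration, not the multi-omics integration method of the paper.

```python
# Toy forward-strand ORF scan allowing near-cognate start codons
# (CTG/GTG in DNA alphabet); the paper derives ORFs from integrated
# multi-omics data, so this is only a conceptual illustration.
STARTS = {"ATG", "CTG", "GTG"}  # canonical start plus two near-cognate starts
STOPS = {"TAA", "TAG", "TGA"}

def find_orfs(seq, min_codons=10):
    """Yield (start, end, start_codon) for ORFs on the forward strand."""
    seq = seq.upper()
    for frame in range(3):
        start = None
        for i in range(frame, len(seq) - 2, 3):
            codon = seq[i:i + 3]
            if start is None and codon in STARTS:
                start = i                    # earliest (most upstream) start wins
            elif start is not None and codon in STOPS:
                if (i - start) // 3 >= min_codons:
                    yield start, i + 3, seq[start:start + 3]
                start = None

# A CTG start upstream of an in-frame ATG models an N-terminal extension:
for orf in find_orfs("CTGAAAATG" + "GCA" * 12 + "TAA"):
    print(orf)
```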

    Quantification and simulation of liquid chromatography-mass spectrometry data

    Computational mass spectrometry is a fast-evolving field that has attracted increased attention over the last couple of years. The performance of software solutions determines the success of an analysis to a great extent. New algorithms are required to reflect new experimental procedures and to deal with new instrument generations. One essential component of algorithm development is the validation (as well as the comparison) of software on a broad range of data sets. This requires a gold standard (a so-called ground truth), which is usually obtained by manual annotation of a real data set. Comprehensive manually annotated public data sets for mass spectrometry are labor-intensive to produce, and their quality strongly depends on the skill of the human expert. Some parts of the data may even be impossible to annotate due to high levels of noise or other ambiguities. Furthermore, manually annotated data is usually not available for all steps in a typical computational analysis pipeline.

    We thus developed the most comprehensive simulation software to date, which can generate multiple levels of ground truth and offers a plethora of parameters to mimic experimental conditions and instrument settings. The simulator is used to generate several distinct types of data, which are subsequently employed to evaluate existing algorithms. Additionally, we employ simulation to determine the influence of instrument attributes and sample complexity on the ability of algorithms to recover information. The results give valuable hints on how to optimize experimental setups.

    Furthermore, this thesis introduces two quantitative approaches: a decharging algorithm based on integer linear programming, and a new workflow for the identification of differentially expressed proteins in a large in vitro study on toxic compounds. Decharging infers the uncharged mass of a peptide (or protein) by clustering all of its charge variants, which occur frequently under certain experimental conditions. We employ simulation to show that decharging is robust against missing values even for high-complexity data, and that the algorithm outperforms other solutions in terms of mass accuracy and run time on real data.

    The last part of this thesis presents a state-of-the-art workflow for protein quantification based on isobaric tags for relative and absolute quantitation (iTRAQ). We devise a new approach to isotope correction, propose an experimental design, introduce new metrics of iTRAQ data quality, and confirm putative properties of iTRAQ data using a novel approach. All tools developed as part of this thesis are implemented in OpenMS, a C++ library for computational mass spectrometry.
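
    The decharging step, which infers one neutral mass from several charge variants, can be sketched without the integer linear program. The Python snippet below computes candidate neutral masses for each peak over a charge range and reports masses supported by at least two peaks within a tolerance; this greedy grouping is an illustrative simplification, not the ILP-based algorithm developed in the thesis.

```python
# Greedy sketch of decharging: group peaks whose (m/z, charge) hypotheses
# agree on the same neutral mass. The thesis formulates this as an integer
# linear program; the greedy grouping here is a simplification.
PROTON = 1.007276  # proton mass in Da

def neutral_mass(mz, z):
    """Neutral mass implied by an m/z value at an assumed charge z."""
    return z * (mz - PROTON)

def decharge(mzs, max_charge=2, tol_da=0.02):
    """Return neutral masses supported by at least two observed peaks."""
    candidates = sorted((neutral_mass(mz, z), mz)
                        for mz in mzs for z in range(1, max_charge + 1))
    masses, i = [], 0
    while i < len(candidates):
        j = i
        while j + 1 < len(candidates) and candidates[j + 1][0] - candidates[i][0] <= tol_da:
            j += 1
        if len({mz for _, mz in candidates[i:j + 1]}) >= 2:  # two charge variants agree
            masses.append(round(candidates[i][0], 3))
        i = j + 1
    return masses

# A 2000 Da peptide observed as its 1+ and 2+ charge variants:
print(decharge([2001.007276, 1001.007276]))  # -> [2000.0]
```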

    Proteomics Quality Control: Quality Control Software for MaxQuant Results

    Mass spectrometry-based proteomics coupled to liquid chromatography has matured into an automatized, high-throughput technology, producing data on the scale of multiple gigabytes per instrument per day. Consequently, an automated quality control (QC) and quality analysis (QA) capable of detecting measurement bias, verifying consistency, and avoiding propagation of error is paramount for instrument operators and scientists in charge of downstream analysis. We have developed an R-based QC pipeline called Proteomics Quality Control (PTXQC) for bottom-up LC-MS data generated by the MaxQuant software pipeline. PTXQC creates a QC report containing a comprehensive and powerful set of QC metrics, augmented with automated scoring functions. The automated scores are collated to create an overview heatmap at the beginning of the report, giving valuable guidance also to nonspecialists. Our software supports a wide range of experimental designs, including stable isotope labeling by amino acids in cell culture (SILAC), tandem mass tags (TMT), and label-free data. Furthermore, we introduce new metrics to score MaxQuant's Match-between-runs (MBR) functionality, by which peptide identifications can be transferred across Raw files based on accurate retention time and m/z. Last but not least, PTXQC is easy to install and use and represents the first QC software capable of processing MaxQuant result tables. PTXQC is freely available at https://github.com/cbielow/PTXQC.
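
    The collation of per-metric scores into an overview heatmap can be sketched generically. The Python snippet below maps each raw metric linearly onto a [0, 1] score between a worst and a best threshold and stacks the scores into a files-by-metrics matrix; the metric names, thresholds, and values are made-up placeholders, and the code is a conceptual sketch rather than PTXQC's actual R implementation.

```python
# Conceptual sketch of PTXQC-style metric scoring (not the package's real API):
# map each raw metric linearly onto [0, 1] between a worst and a best
# threshold, then stack the scores into a files x metrics overview matrix.
import numpy as np

def score(raw, worst, best):
    """Linear score in [0, 1]; handles both directions via threshold order."""
    return np.clip((raw - worst) / (best - worst), 0.0, 1.0)

# Hypothetical per-Raw-file metrics; names, values, and thresholds are placeholders.
ms2_id_rate = np.array([0.35, 0.22, 0.41])   # fraction of identified MS2 spectra
mass_error_ppm = np.array([1.2, 4.8, 0.9])   # median absolute mass error

heatmap = np.column_stack([
    score(ms2_id_rate, worst=0.0, best=0.5),     # higher is better
    score(mass_error_ppm, worst=5.0, best=0.0),  # lower is better
])
print(heatmap)  # rows: Raw files, columns: metrics, values in [0, 1]
```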

    On Mass Ambiguities in High-Resolution Shotgun Lipidomics

    Mass-spectrometry-based lipidomics aims to identify as many lipid species as possible from complex biological samples. Due to the large combinatorial search space, unambiguous identification of lipid species is far from trivial. Mass ambiguities are common in direct-injection shotgun experiments, where an orthogonal separation (e.g., liquid chromatography) is missing. Using the rich information within available lipid databases, we generated a comprehensive rule set describing mass ambiguities, while taking into consideration the resolving power (and its decay) of different mass analyzers. Importantly, common adduct species and isotopic peaks are accounted for and are shown to play a major role, both for perfect mass overlaps due to identical sum formulas and for resolvable mass overlaps. We identified known and hitherto unknown mass ambiguities in high- and ultrahigh-resolution data, while also ranking lipid classes by their propensity to cause ambiguities. On the basis of this new set of ambiguity rules, we suggest guidelines and recommendations for experimentalists and software developers on what constitutes a solid lipid identification in both MS and MS/MS. For researchers new to the field, our results are a compact source of ambiguities that should be accounted for. These new findings also have implications for the selection of internal standards, peaks used for internal mass calibration, the optimal choice of instrument resolution, and sample preparation, for example, in regard to adduct ion formation.
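
    Whether two species are separable at a given resolving power follows directly from the implied peak width. The Python sketch below flags a pair as unresolved when its mass difference is below the FWHM at the mean mass, modelling resolution decay as R(m) = R0 * sqrt(m0/m) (Orbitrap-like); the reference resolution, the decay model, and the example masses are assumptions for illustration, not the rule set of the paper.

```python
# Sketch: decide whether two species with a small mass difference are
# resolvable, modelling Orbitrap-like resolution decay R(m) = R0 * sqrt(m0/m).
# R0, m0, and the example masses are illustrative assumptions.
import math

def resolution_at(m, r0=140_000, m0=200.0):
    """Resolving power at mass m for an instrument specified as r0 at m0."""
    return r0 * math.sqrt(m0 / m)

def resolvable(m1, m2):
    """True if the mass difference exceeds one FWHM at the mean mass."""
    m = (m1 + m2) / 2.0
    fwhm = m / resolution_at(m)   # definition of resolving power: R = m / dm
    return abs(m1 - m2) > fwhm

# PC 34:1 [M+H]+ vs. a hypothetical isobar 0.009 Da away:
print(resolvable(760.585, 760.594))  # -> False at this resolving power
```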