
    Encoding, Storing and Searching of Analytical Properties and Assigned Metabolite Structures

    Information about metabolites and other small organic molecules is of crucial importance in many fields of the natural sciences. They play a key role in metabolic networks, for example, and knowledge of their properties helps in understanding complex biological processes and complete biological systems. Since data describing these molecules are produced daily in biological and chemical laboratories, a comprehensive and continuously growing body of data exists. Complex software systems and data formats are required to enable scientists to process, exchange, archive, and search this information while preserving its semantic context. The aim of this project was to develop applications and algorithms for the efficient encoding, collection, normalisation, and analysis of molecular data. These are intended to support scientists in structure elucidation, dereplication, the analysis of molecular interactions, and the publication of the knowledge gained in this way. Since directly describing the structure and function of an unknown compound is very difficult and laborious, this is mainly achieved indirectly, by means of descriptive properties, which are then used to predict structural and functional characteristics. In this context, program modules were developed that allow the visualisation of structural and spectroscopic data, the structured display and editing of metadata and properties, and the import and export of various data formats. These were extended with methods that allow the acquired information to be analysed further and structural and spectroscopic data to be assigned to one another.
    In addition, a system was developed for the structured archiving and management of large amounts of molecular data and spectroscopic information, preserving the semantic context, both in the file system and in databases. To guarantee lossless storage, an open and standardised data format was defined (CMLSpect). It extends the existing CML (Chemical Markup Language) vocabulary and thus allows linked structural and spectroscopic data to be handled easily. The applications developed were integrated into the Bioclipse system for bio- and cheminformatics, offering the user a high-quality interface and the developer an easily extensible, modular program architecture.
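As a loose illustration of the kind of linked structure-spectrum record that CMLSpect enables, the sketch below builds a minimal CML-style XML document with Python's standard library. The element and attribute names are approximations chosen for illustration, not the normative CMLSpect schema.

```python
import xml.etree.ElementTree as ET

# Illustrative only: a molecule and a spectrum linked by a reference
# attribute, mimicking the way CMLSpect ties spectroscopic data to a
# structure. Names here are assumptions, not the official vocabulary.
cml = ET.Element("cml")
ET.SubElement(cml, "molecule", id="m1", title="example metabolite")
spec = ET.SubElement(cml, "spectrum", id="s1", moleculeRef="m1", type="NMR")
peaks = ET.SubElement(spec, "peakList")
ET.SubElement(peaks, "peak", xValue="7.26", yValue="1.0")

doc = ET.tostring(cml, encoding="unicode")
```

Keeping the structure and its spectra in one XML tree, linked by explicit references, is what lets such a format preserve semantic context during archiving and exchange.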

    Integration and visualisation of clinical-omics datasets for medical knowledge discovery

    In recent decades, the rise of various omics fields has flooded life sciences with unprecedented amounts of high-throughput data, which have transformed the way biomedical research is conducted. This trend will only intensify in the coming decades, as the cost of data acquisition will continue to decrease. Therefore, there is a pressing need to find novel ways to turn this ocean of raw data into waves of information and finally distil those into drops of translational medical knowledge. This is particularly challenging because of the incredible richness of these datasets, the humbling complexity of biological systems and the growing abundance of clinical metadata, which makes the integration of disparate data sources even more difficult. Data integration has proven to be a promising avenue for knowledge discovery in biomedical research. Multi-omics studies allow us to examine a biological problem through different lenses using more than one analytical platform. These studies not only present tremendous opportunities for the deep and systematic understanding of health and disease, but they also pose new statistical and computational challenges. The work presented in this thesis aims to alleviate this problem with a novel pipeline for omics data integration. Modern omics datasets are extremely feature rich and in multi-omics studies this complexity is compounded by a second or even third dataset. However, many of these features might be completely irrelevant to the studied biological problem or redundant in the context of others. Therefore, in this thesis, clinical metadata driven feature selection is proposed as a viable option for narrowing down the focus of analyses in biomedical research. Our visual cortex has been fine-tuned through millions of years to become an outstanding pattern recognition machine. 
To leverage this incredible resource of the human brain, we need to develop advanced visualisation software that enables researchers to explore these vast biological datasets through illuminating charts and interactivity. Accordingly, a substantial portion of this PhD was dedicated to implementing truly novel visualisation methods for multi-omics studies.
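As a minimal sketch of the clinical-metadata-driven feature selection idea described above, the toy example below ranks omics features by their correlation with a single clinical variable and keeps the top few. The data, sizes, and the plain correlation filter are illustrative assumptions, not the thesis's actual pipeline.

```python
import math
import random

random.seed(42)

# Toy omics matrix: 40 samples x 100 features, plus one clinical outcome.
# Sizes and the planted signal (feature 7) are purely illustrative.
n_samples, n_features = 40, 100
X = [[random.gauss(0, 1) for _ in range(n_features)] for _ in range(n_samples)]
outcome = [row[7] * 2 + random.gauss(0, 0.5) for row in X]

def pearson(x, y):
    """Plain Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = math.sqrt(sum((a - mx) ** 2 for a in x))
    vy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (vx * vy)

def metadata_driven_selection(X, outcome, k=5):
    """Keep the k features most correlated with the clinical variable."""
    scores = [(abs(pearson([row[j] for row in X], outcome)), j)
              for j in range(len(X[0]))]
    return [j for _, j in sorted(scores, reverse=True)[:k]]

selected = metadata_driven_selection(X, outcome)
```

In a real multi-omics study the filter would be applied per platform before integration; the point here is only how clinical metadata can narrow a feature-rich dataset down to a tractable subset.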

    NUC BMAS


    Ancient and historical systems


    Catchment Modelling Tools and Pathways Review


    Pharmacoproteomic characterisation of human colon and rectal cancer

    Most molecular cancer therapies act on protein targets but data on the proteome status of patients and cellular models for proteome-guided pre-clinical drug sensitivity studies are only beginning to emerge. Here, we profiled the proteomes of 65 colorectal cancer (CRC) cell lines to a depth of > 10,000 proteins using mass spectrometry. Integration with proteomes of 90 CRC patients and matched transcriptomics data defined integrated CRC subtypes, highlighting cell lines representative of each tumour subtype. Modelling the responses of 52 CRC cell lines to 577 drugs as a function of proteome profiles enabled predicting drug sensitivity for cell lines and patients. Among many novel associations, MERTK was identified as a predictive marker for resistance towards MEK1/2 inhibitors, and immunohistochemistry of 1,074 CRC tumours confirmed MERTK as a prognostic survival marker. We provide the proteomic and pharmacological data as a resource to the community to, for example, facilitate the design of innovative prospective clinical trials. © 2017 The Authors. Published under the terms of the CC BY 4.0 license.
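The response-modelling step can be caricatured as regressing a drug-sensitivity readout on protein abundances, one model per drug. The sketch below uses plain closed-form ridge regression on synthetic data; the shapes, variable names, and the choice of ridge are illustrative assumptions, not the study's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the real data: rows are cell lines, columns are proteins.
# All sizes and values here are synthetic, not the study's matrices.
n_lines, n_proteins = 52, 200
proteome = rng.normal(size=(n_lines, n_proteins))
true_w = np.zeros(n_proteins)
true_w[:5] = rng.normal(size=5)          # a few proteins drive the response
sensitivity = proteome @ true_w + 0.1 * rng.normal(size=n_lines)

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X'X + lam*I)^(-1) X'y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

w = ridge_fit(proteome, sensitivity)
pred = proteome @ w
# Agreement between fitted and measured sensitivity on the training data.
r = np.corrcoef(pred, sensitivity)[0, 1]
```

In the study itself the models would be richer and judged on held-out data; the point here is only the shape of the problem: far more proteins than cell lines, so some form of regularisation is essential.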

    Optimisation of microfluidic experiments for model calibration of a synthetic promoter in S. cerevisiae

    This thesis explores, implements, and examines methods to improve the efficiency of model calibration experiments for synthetic biological circuits in three respects: experimental technique, optimal experimental design (OED), and automatic experiment abnormality screening (AEAS). To obtain a benchmark that provides clear-cut evidence of their utility, an integrated synthetic orthogonal promoter in yeast (S. cerevisiae) and a corresponding model are selected as the experimental objects. This work first focuses on the “wet-lab” part of the experiment. It verifies the theoretical benefit of adopting microfluidic techniques by carrying out a series of in-vivo experiments on an automatic microfluidic experimental platform developed in this work. Statistical analysis shows that, compared with models calibrated on flow-cytometry data (a representative traditional experimental technique), models based on microfluidic data from the same experiment time give significantly more accurate predictions of the behaviour under never-encountered stimulus patterns. In other words, compared with flow-cytometry experiments, microfluidics can obtain models of the required prediction accuracy within less experiment time. The next aspect is optimising the “dry-lab” part, i.e., the design of experiments and data processing. Previous work has shown that the informativeness of experiments can be improved by optimising the input design (OID). However, the amount of work and the time cost of the current OID approach rise dramatically with large, complex synthetic networks and mathematical models. To address this problem, this thesis introduces parameter clustering analysis and visualisation (PCAV) to speed up OID by narrowing down the parameters of interest. For the first time, this thesis proposes a parameter clustering algorithm based on the Fisher information matrix (FIMPC).
In-silico experiments on the benchmark promoter show that PCAV reduces the complexity of OID and provides a new way to explore the connections between parameters. Moreover, the analysis shows that experiments with FIMPC-based OID lead to significantly more accurate parameter estimates than the current OID approach. Automatic abnormality screening is the third aspect. Currently, invalid microfluidic experiments are identified by experts visually checking the microscope images after the experiments. To improve the automation and robustness of this quality-control process, this work develops an automatic experiment abnormality screening (AEAS) system supported by convolutional neural networks (CNNs). The system learns the features of six abnormal experimental conditions from images taken in actual microfluidic experiments and achieves identification within seconds in application. The training and validation of six representative CNNs of different network depths and design strategies show that some shallow CNNs can already diagnose the abnormal conditions with the desired accuracy. Moreover, to improve the training convergence of deep CNNs on small datasets, this thesis proposes a levelled-training method that improves the chance of convergence from 30% to 90%. With a synthetic promoter model in yeast as the benchmark, this thesis thus optimises model calibration experiments in three respects to achieve a more efficient procedure: experimental technique, OED, and AEAS. In this study, the efficiency of model calibration experiments for the benchmark model is improved by adopting microfluidics technology, applying PCAV parameter analysis and FIMPC-based OID, and setting up a CNN-supported AEAS system.
These contributions have the potential to be exploited for designing more efficient in-vivo experiments for model calibration in similar studies.
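A minimal sketch of the Fisher-information-based parameter clustering idea might look as follows, assuming sensitivities of model outputs with respect to parameters are available. The synthetic sensitivity matrix, the threshold, and the greedy grouping are illustrative stand-ins, not the thesis's FIMPC algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative sensitivity matrix S[i, j] = d(output_i)/d(theta_j); in the
# thesis these would come from the promoter model, here they are synthetic.
n_outputs, n_params = 50, 6
S = rng.normal(size=(n_outputs, n_params))
S[:, 3] = S[:, 0] + 0.01 * rng.normal(size=n_outputs)  # params 0, 3 near-redundant

fim = S.T @ S  # Fisher information matrix, assuming unit measurement noise

# Normalise the FIM to a correlation-like matrix and group parameters whose
# information directions are nearly collinear (i.e., practically indistinguishable).
d = np.sqrt(np.diag(fim))
corr = fim / np.outer(d, d)

def cluster_params(corr, thresh=0.95):
    """Greedy grouping of parameters with |correlation| above thresh."""
    n = corr.shape[0]
    groups, assigned = [], set()
    for i in range(n):
        if i in assigned:
            continue
        group = [i] + [j for j in range(i + 1, n)
                       if j not in assigned and abs(corr[i, j]) > thresh]
        assigned.update(group)
        groups.append(group)
    return groups

groups = cluster_params(corr)
```

Each cluster can then be represented by a single parameter during input design, which is what shrinks the OID search space.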

    Analysis of High-dimensional and Left-censored Data with Applications in Lipidomics and Genomics

    Recently, new kinds of high-throughput measurement techniques have emerged that enable biological research to focus on the fundamental building blocks of living organisms, such as genes, proteins, and lipids. Alongside this new type of data, referred to as omics data, modern data analysis techniques have also emerged. Much of this research focuses on finding biomarkers for detecting abnormalities in a person's health status, as well as on learning unobservable network structures that represent the functional associations of biological regulatory systems. Omics data have certain specific qualities, such as left-censored observations due to the limitations of the measurement instruments, missing data, non-normal observations, and very large dimensionality, and the interest often lies in the connections between the large number of variables. This thesis has two major aims. The first is to provide efficient methodology for dealing with various types of missing or censored omics data that can be used for visualisation and biomarker discovery based on, for example, regularised regression techniques. A maximum-likelihood-based covariance estimation method for data with censored values is developed and the algorithms are described in detail. The second major aim is to develop novel approaches for detecting interactions that display functional associations in large-scale observations. For more complicated data connections, a technique based on partial least squares regression is investigated. The technique is applied to network construction as well as to differential network analyses, both on multiply imputed censored data and on next-generation sequencing count data.
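For the left-censoring problem described above, a standard approach is maximum-likelihood estimation that treats values below the detection limit as censored rather than missing. The univariate EM sketch below is a deliberate simplification of that idea (the thesis develops the multivariate covariance case); the simulated data and detection limit are purely illustrative.

```python
import math
import random

random.seed(0)

def phi(z):   # standard normal density
    return math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)

def Phi(z):   # standard normal cumulative distribution
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Simulate a lipid-like concentration with a detection limit: values below
# `lod` are only known to be censored, a common omics scenario.
mu_true, sd_true, lod = 5.0, 2.0, 4.0
data = [random.gauss(mu_true, sd_true) for _ in range(5000)]
observed = [x for x in data if x >= lod]
n_cens = sum(1 for x in data if x < lod)

def censored_normal_mle(observed, n_cens, lod, iters=200):
    """EM estimates of the mean and sd of a normal sample in which
    n_cens observations are left-censored at detection limit lod."""
    mu = sum(observed) / len(observed)        # naive starting values
    sd = (sum((x - mu) ** 2 for x in observed) / len(observed)) ** 0.5
    n = len(observed) + n_cens
    for _ in range(iters):
        a = (lod - mu) / sd
        lam = phi(a) / max(Phi(a), 1e-12)
        # First and second moments of a normal truncated to (-inf, lod):
        m1 = mu - sd * lam
        m2 = mu ** 2 + sd ** 2 - sd * (lod + mu) * lam
        s1 = sum(observed) + n_cens * m1       # E-step: complete-data sums
        s2 = sum(x * x for x in observed) + n_cens * m2
        mu = s1 / n                            # M-step: Gaussian MLE updates
        sd = max(s2 / n - mu * mu, 1e-12) ** 0.5
    return mu, sd

mu_hat, sd_hat = censored_normal_mle(observed, n_cens, lod)
```

Simply discarding the censored values, or substituting the detection limit for them, biases both moments upward; the EM scheme corrects for this by replacing each censored value with its expected moments under the current parameter estimates.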

    Visualisation of nanoparticle-cell interactions by correlative microscopy
