
    Automated Analysis of Biomedical Data from Low to High Resolution

    Recent developments in experimental techniques and instrumentation allow life scientists to acquire enormous volumes of data at unprecedented resolution. While these new data bring much deeper insight into cellular processes, they render manual analysis infeasible and call for the development of new, automated analysis procedures. This thesis describes how methods of pattern recognition can be used to automate three popular data analysis protocols. Chapter 1 proposes a method to automatically locate bimodal isotope distribution patterns in Hydrogen Deuterium Exchange Mass Spectrometry experiments. The method is based on L1-regularized linear regression and allows for easy quantitative analysis of co-populations with different exchange behavior. The sensitivity of the method is tested on a set of manually identified peptides, while its applicability to exploratory data analysis is validated by targeted follow-up peptide identification. Chapter 2 develops a technique to automate peptide quantification for mass spectrometry experiments based on 16O/18O labeling of peptides. Two different spectrum segmentation algorithms are proposed: one based on image processing and applicable to low resolution data, and one exploiting the sparsity of high resolution data. The quantification accuracy is validated on calibration datasets produced by mixing a set of proteins in pre-defined ratios. Chapter 3 provides a method for automated detection and segmentation of synapses in electron microscopy images of neural tissue. For images acquired by scanning electron microscopy with nearly isotropic resolution, the algorithm is based on geometric features computed in 3D pixel neighborhoods. For transmission electron microscopy images with poor z-resolution, the algorithm uses additional regularization by performing several rounds of pixel classification with features computed on the probability maps of the previous classification round.
The validation is performed by comparing the set of synapses detected by the algorithm against a gold-standard detection by human experts. For data with nearly isotropic resolution, the algorithm's performance is comparable to that of the human experts.
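The L1-regularized regression idea from Chapter 1 can be sketched in a few lines: the observed isotope envelope is modelled as a sparse non-negative combination of candidate envelopes, one per deuterium uptake level, so a bimodal exchange pattern appears as two well-separated active components. The projected-ISTA solver and all parameters below are an illustrative stand-in, not the thesis implementation:

```python
import numpy as np

def l1_deconvolve(spectrum, dictionary, lam=0.05, n_iter=500):
    """Solve min_w 0.5*||spectrum - D w||^2 + lam*||w||_1 s.t. w >= 0
    by projected ISTA (illustrative solver, not the one from the thesis)."""
    D = dictionary
    step = 1.0 / np.linalg.norm(D, 2) ** 2       # 1 / Lipschitz constant of the gradient
    w = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ w - spectrum)
        w = np.maximum(w - step * (grad + lam), 0.0)  # gradient step, shrink, project
    return w

# Toy data: candidate envelopes are Gaussians at 12 hypothetical uptake levels;
# the observed spectrum is a 60/40 mixture of levels 2 and 9 (a bimodal pattern).
m, k = 60, 12
x = np.arange(m)
centers = np.linspace(5, 50, k)
D = np.exp(-0.5 * ((x[:, None] - centers[None, :]) / 2.0) ** 2)
true_w = np.zeros(k)
true_w[2], true_w[9] = 0.6, 0.4
w = l1_deconvolve(D @ true_w, D)   # recovered weights concentrate at levels 2 and 9
```

With the decomposition in hand, testing for bimodality reduces to checking whether the recovered weight vector has two well-separated modes.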

    Review of Peak Detection Algorithms in Liquid-Chromatography-Mass Spectrometry

    In this review, we discuss peak detection in Liquid-Chromatography-Mass Spectrometry (LC/MS) from a signal processing perspective. A brief introduction to LC/MS is followed by a description of the major processing steps in LC/MS. Specifically, the problem of peak detection is formulated, and various peak detection algorithms are described and compared.
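From a signal processing perspective, the simplest detectors in this family combine a denoising filter with a local-maximum search under a prominence threshold. The sketch below (our own function names and default parameters, not taken from the review) applies a Savitzky-Golay filter followed by SciPy's prominence-based peak search to a synthetic chromatogram:

```python
import numpy as np
from scipy.signal import savgol_filter, find_peaks

def detect_peaks(intensity, window=9, polyorder=3, rel_prominence=0.1):
    """Denoise the trace, then keep local maxima whose prominence exceeds
    a fraction of the maximum intensity (illustrative defaults)."""
    smooth = savgol_filter(intensity, window, polyorder)
    peaks, props = find_peaks(smooth, prominence=rel_prominence * smooth.max())
    return peaks, props["prominences"]

# Synthetic chromatogram: two Gaussian elution peaks plus baseline noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 500)
clean = np.exp(-(t - 3.0) ** 2 / 0.05) + 0.6 * np.exp(-(t - 7.0) ** 2 / 0.08)
noisy = clean + 0.02 * rng.standard_normal(t.size)
peaks, prominences = detect_peaks(noisy)
retention_times = t[peaks]          # close to 3.0 and 7.0
```

Real LC/MS pipelines differ mainly in the denoising step (wavelets, matched filters) and in how the detection threshold is estimated from the noise level.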

    Computational methods for metabolomic data analysis of ion mobility spectrometry data: reviewing the state of the art

    Ion mobility spectrometry combined with multi-capillary columns (MCC/IMS) is a well-known technology for detecting volatile organic compounds (VOCs). MCC/IMS can be used to scan human exhaled air, bacterial colonies or cell lines, for example, yielding information about human health status or infection threats. It can also be used to study the metabolic response of living cells to external perturbations. The instrument is comparatively cheap, robust and easy to use in everyday practice. However, the potential of the MCC/IMS methodology depends on the successful application of computational approaches for analyzing the huge amounts of emerging data. Here, we review the state of the art and highlight existing challenges. First, we address methods for raw data handling, data storage and visualization. We then introduce de-noising, peak picking and other pre-processing approaches. We discuss statistical methods for analyzing correlations between peaks and diseases or medical treatment. Finally, we study up-to-date machine learning techniques for identifying robust biomarker molecules that allow classifying patients into healthy and diseased groups. We conclude that MCC/IMS coupled with sophisticated computational methods has the potential to address a broad range of biomedical questions. While most of the data pre-processing steps can now be solved satisfactorily, computational challenges in statistical learning and model validation remain.
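The de-noising and peak-picking steps operate on a two-dimensional MCC/IMS intensity matrix (retention time by drift time). A minimal sketch, assuming a median filter for de-noising and a local-maximum criterion for picking (real MCC/IMS pipelines are considerably more elaborate):

```python
import numpy as np
from scipy.ndimage import median_filter, maximum_filter

def pick_peaks_2d(M, neighborhood=5, rel_threshold=0.2):
    """De-noise the retention x drift matrix with a median filter, then keep
    points that are local maxima within a neighborhood and exceed a fraction
    of the global maximum (all parameters illustrative)."""
    denoised = median_filter(M, size=3)
    is_max = maximum_filter(denoised, size=neighborhood) == denoised
    strong = denoised > rel_threshold * denoised.max()
    return np.argwhere(is_max & strong)

# Toy measurement: uniform background noise plus two analyte peaks.
rng = np.random.default_rng(1)
M = 0.05 * rng.random((80, 80))
rr, dd = np.meshgrid(np.arange(80), np.arange(80), indexing="ij")
for r, d in [(20, 30), (55, 60)]:
    M += np.exp(-((rr - r) ** 2 + (dd - d) ** 2) / 8.0)
peaks = pick_peaks_2d(M)            # two rows, near (20, 30) and (55, 60)
```

Each detected (retention, drift) coordinate pair would then be matched against reference VOC positions in a downstream annotation step.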

    Biomedical Data Analysis with Prior Knowledge: Modeling and Learning

    Modern research in biology and medicine is experiencing a data explosion in quantity and particularly in complexity. Efficient and accurate processing of these datasets demands state-of-the-art computational methods such as probabilistic graphical models, graph-based image analysis and many inference/optimization algorithms. However, the underlying complexity of biomedical experiments rules out direct out-of-the-box applications of these methods and requires novel formulation and enhancement to make them amenable to specific problems. This thesis explores novel approaches for incorporating prior knowledge into the data analysis workflow that lead to quantitative and meaningful interpretation of the datasets and also allow for sufficient user involvement. As discussed in Chapter 1, depending on the complexity of the prior knowledge, these approaches can be categorized as constrained modeling and learning. The first part of the thesis focuses on constrained modeling, where the prior is normally explicitly represented as additional potential terms in the problem formulation. These terms prevent or discourage the downstream optimization of the formulation from yielding solutions that contradict the prior knowledge. In Chapter 2, we present a robust method for estimating and tracking the deuterium incorporation in time-resolved hydrogen exchange (HX) mass spectrometry (MS) experiments with priors such as sparsity and sequential ordering. In Chapter 3, we introduce how to extend a classic Markov random field (MRF) model with a shape prior for cell nucleus segmentation. The second part of the thesis explores learning, which addresses problems where the prior varies between different datasets or is too difficult to express explicitly. In this case, the prior is first abstracted as a parametric model and then its optimum parametrization is estimated from a training set using machine learning techniques.
In Chapter 4, we extend the popular Rand index in a cost-sensitive fashion, so that problem-specific costs can be learned from manual scorings. This set of approaches becomes more interesting when the input/output is structured, such as matrices or graphs. In Chapter 5, we present structured learning for cell tracking, a novel approach that learns optimum parameters automatically from a training set and allows for the use of a richer set of features, which in turn affords improved tracking performance. Finally, conclusions and outlook are provided in Chapter 6.
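The idea behind a cost-sensitive Rand index can be made concrete with a small sketch: segmentation quality is scored over all pixel pairs, with separate penalties for false splits and false merges. Here the learned, problem-specific costs of Chapter 4 are replaced by two user-supplied scalars:

```python
import numpy as np
from itertools import combinations

def cost_sensitive_rand_error(truth, pred, c_split=1.0, c_merge=1.0):
    """Fraction of pixel pairs on which the two segmentations disagree,
    weighting false splits (same truth segment, different predicted segments)
    and false merges (different truth segments, same predicted segment)
    by separate costs. Brute force over all pairs; illustrative only."""
    truth, pred = np.ravel(truth), np.ravel(pred)
    n = truth.size
    splits = merges = 0
    for i, j in combinations(range(n), 2):
        same_t, same_p = truth[i] == truth[j], pred[i] == pred[j]
        if same_t and not same_p:
            splits += 1
        elif same_p and not same_t:
            merges += 1
    return (c_split * splits + c_merge * merges) / (n * (n - 1) // 2)

truth = np.array([0, 0, 0, 1, 1, 1])
pred = np.array([0, 0, 1, 1, 2, 2])       # over-segmented, plus one merge
balanced = cost_sensitive_rand_error(truth, pred)                    # 5/15
merge_averse = cost_sensitive_rand_error(truth, pred, c_merge=5.0)   # 9/15
```

Raising `c_merge` relative to `c_split` makes the score penalize under-segmentation more heavily, which is exactly the kind of trade-off that can be fitted to manual scorings.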

    Non-destructive quantification of tissue scaffolds and augmentation implants using X-ray microtomography

    A three-dimensional (3D), interconnected, porous structure is essential for bone tissue engineering scaffolds and skeletal augmentation implants. Current methods of characterising these structures, however, are limited to average properties such as percentage porosity. More accurate quantitative properties, such as pore and interconnect size distributions, are required. Once measured, these parameters need to be correlated to tissue regeneration and integration criteria, including solute transport, blood vessel regeneration, bone ingrowth, and mechanical properties. Ideally, these techniques would work in vitro and in vivo, and hence allow evaluation of osteoconduction and osseointegration after implantation. This thesis focuses on developing and applying algorithms for use with X-ray microtomography (micro-CT or μCT), which can non-destructively image internal structure at the micron scale. The technique is demonstrated on two separate materials: bioactive glass scaffolds and titanium (Ti) augmentation devices. Using the developed techniques, the structural and compositional evolutions of bioactive glass scaffolds in a simulated body fluid (SBF) flow environment were quantified using micro-CT scans taken at different dissolution stages. Results show that 70S30C bioactive scaffolds retain favourable 3D structures during a 28-day dissolution experiment, with a modal equivalent pore diameter of 682 μm remaining unchanged and a modal equivalent interconnect diameter decreasing from 252 μm to 209 μm. The techniques were then applied to porous Ti augmentation scaffolds. These scaffolds, produced by selective laser melting, have very different pore networks with graded randomness and unit size. They present new challenges when applying the developed micro-CT quantification techniques. Using a further adapted methodology, the interconnecting pore sizes, strut thickness, and surface roughness were measured.
This demonstrated the robustness of the methodologies and their applicability to a range of tissue scaffolds and augmentation devices.
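The equivalent pore diameters reported above reduce to labelling the pore space in the reconstructed volume and converting each pore's voxel count into the diameter of a sphere of equal volume. A minimal sketch with made-up threshold and voxel size (the thesis additionally separates pores from their interconnects, which this omits):

```python
import numpy as np
from scipy import ndimage

def equivalent_pore_diameters(volume, voxel_size_um=10.0, threshold=0.5):
    """Binarise the volume (low attenuation = pore), label connected pores,
    and return each pore's equivalent spherical diameter d = (6V/pi)^(1/3)."""
    pores = volume < threshold
    labels, n_pores = ndimage.label(pores)
    voxel_counts = np.bincount(labels.ravel())[1:]   # skip background label 0
    volumes_um3 = voxel_counts * voxel_size_um ** 3
    return np.cbrt(6.0 * volumes_um3 / np.pi)

# Toy scan: solid block containing two spherical pores (radius 3 and 5 voxels).
vol = np.ones((40, 40, 40))
z, y, x = np.ogrid[:40, :40, :40]
vol[(z - 10) ** 2 + (y - 10) ** 2 + (x - 10) ** 2 <= 9] = 0.0
vol[(z - 28) ** 2 + (y - 28) ** 2 + (x - 28) ** 2 <= 25] = 0.0
diameters = np.sort(equivalent_pore_diameters(vol))   # roughly 60 and 100 um
```

A pore size distribution such as the modal 682 μm value quoted above is then just a histogram of these per-pore diameters.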

    Use of X-ray Computed Microtomography to Measure the Leaching Behaviour of Metal Sulphide Ores

    Heap leaching is an important hydrometallurgical method to extract valuable metals from ores, especially low grade ores. The main disadvantages of heap leaching are the long processing time and low extraction efficiencies. Currently, a major barrier to fully understanding the leaching process is the difficulty of studying mass transport and surface chemistry at the scale of individual ore particles and mineral grains. This thesis describes a combined experimental and modelling approach to visualise, quantify and predict leach behaviour based on X-ray Computed Microtomography (XMT, or micro-CT). An automatic image processing package was developed to process the 3D volume data. Individual ore particles as well as individual mineral grains can be tracked using a centroid tracking algorithm and a novel fast tracking algorithm, respectively. The systematic and random errors and uncertainties in the image volume measurements were quantified. It was found that both the systematic and random errors are a strong function of the grain size relative to the voxel size. The random error can be reduced by combining the results from either multiple scans of the same object or scans of multiple similar objects, while the systematic error can be eliminated by using volume standards. The leach performance for a leaching column was quantified at different scales, and it was found that the leach behaviour and its variability were difficult to quantify at large scales (column and individual ore particle scale), but could be quantified at the mineral grain scale by using a novel statistical analysis method. The tracked grains were divided into different size-distance categories to analyse the average leach performance and the variation for each category. Both grain size and distance dependencies were observed. The size dependency is more dominant at the early stage of leaching, whereas the distance dependency can significantly influence the ultimate recovery.
A method for using the data to estimate the variability in the in-situ surface kinetics was also developed. A model for simulating grain dissolution and the resultant kinetics directly from the XMT 3D volume data is introduced. The simulations were able to accurately predict both the overall leaching trends and the leaching behaviour of mineral grains in classes based on their size and distance to the particle surface.
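The centroid tracking step can be sketched as a nearest-neighbour match between particle centroids in consecutive scans. This toy version (2D, greedy matching, made-up distance threshold) captures only the basic idea behind the tracking algorithms described above:

```python
import numpy as np
from scipy import ndimage

def track_centroids(labels_t0, labels_t1, max_shift=5.0):
    """Match each labelled particle at t0 to the nearest particle centroid
    at t1, provided it moved by at most max_shift pixels."""
    idx0 = list(range(1, int(labels_t0.max()) + 1))
    idx1 = list(range(1, int(labels_t1.max()) + 1))
    c0 = np.array(ndimage.center_of_mass(labels_t0 > 0, labels_t0, idx0))
    c1 = np.array(ndimage.center_of_mass(labels_t1 > 0, labels_t1, idx1))
    matches = {}
    for label, c in zip(idx0, c0):
        dists = np.linalg.norm(c1 - c, axis=1)
        j = int(np.argmin(dists))
        if dists[j] <= max_shift:
            matches[label] = idx1[j]
    return matches

# Two particles, each shifting by roughly one pixel between scans.
a = np.zeros((20, 20), int)
a[2:5, 2:5] = 1
a[10:14, 10:14] = 2
b = np.zeros((20, 20), int)
b[3:6, 2:5] = 1
b[11:15, 11:15] = 2
matches = track_centroids(a, b)     # {1: 1, 2: 2}
```

Tracking grains rather than whole particles needs a faster, more structured matcher (the thesis' novel algorithm), since there are far more grains than particles per scan.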

    Occurrence and fate of micro- and nanoplastic in the terrestrial environment

    The worldwide production of plastic has grown exponentially since the 1950s and revolutionized our daily life. Simultaneously, plastic pollution in the environment has become a global issue, and micro- (MP) and nanoplastics (NP) have now been detected even in the most remote ecosystems. Owing to a lack of analytical methods, there is currently a data gap in the occurrence and characterisation of two highly relevant categories of plastic in the soil environment: tire wear particles (TWP), whose environmental concentrations are expected to be high and which carry toxic additives, and NP, whose toxicity to soil organisms has been demonstrated and which can cross cell membranes. The effects of micro- and nanoplastic (MNP) on their surrounding environment are determined by their size, morphology, surface characteristics and chemical composition, which can be affected by soil residence time. As soil is often considered a sink for MNP, it is crucial to investigate and understand the different weathering factors which might affect MNP properties. To address these knowledge gaps, three main objectives were identified in the scope of this study: develop an extraction and single-particle identification method for the quantification and characterisation of (i) TWP and (ii) NP in soil samples, and (iii) characterise the physico-chemical properties at the surface of plastic debris occurring in the soil environment, as well as assess the effect of soil and UV weathering as single ageing factors. To realise the first objective, a method for the extraction and identification of TWP in soil samples based on their black colour was developed using optical microscopy. Cryo-ground TWP down to a size of 35 μm could be detected at a rate of >85%, but tests conducted with environmental TWP showed that the separation density used in this study was not sufficient to recover the whole range of TWP, which occur at different densities.
Nevertheless, TWP concentrations in highway-adjacent soil samples ranged between 8084 ± 1059 and 2562 ± 1160 TWP kg-1 dry soil and showed trends and orders of magnitude similar to previously reported concentrations. Thus, the developed protocol was deemed sufficiently accurate for TWP monitoring in soil samples. Regarding the second objective, an extraction and identification method for NP in soil samples was developed using X-ray spectro-microscopy (STXM-NEXAFS). The results demonstrated the suitability of the technique for the imaging and chemical characterisation of individual NP with a minimum dimension of ≈100 nm, and its applicability to pure NP as well as NP present in environmental and food matrices. However, it was not possible to obtain quantitative data on the NP present in the samples, as the method was too time-consuming to allow the measurement of a high number of particles. For the last objective, STXM-NEXAFS was applied to the characterisation of surface alterations in natural-soil-weathered, soil-incubated and UV-exposed polymers. A surface alteration to a depth varying between 150 and 1000 nm could be observed, and the analysis of replicate measurements acquired on the same plastic debris highlighted the heterogeneity of the processes affecting polymer surfaces. The comparison of UV-weathered and natural-soil-weathered samples showed that the two treatments led to different surface alterations, and the absence of surface alteration after one-year soil incubation indicated slow ageing of polymers in this medium. Moreover, the very first step of surface fragmentation was observed on a PS fragment, providing insight into the factors and processes leading to the release of MP and NP in soils. Overall, the present research contributed significantly to the development of innovative methods to characterise MNP in the soil environment.
The results obtained helped to provide baseline information on the characteristics of environmental MP and NP, which is of high importance for designing ecotoxicological tests using environmentally relevant materials, as well as for validating predictive models to better understand the potential risks that MP and NP pose to ecosystems.
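The black-colour identification step described for TWP reduces to thresholding near-black pixels in a micrograph, labelling connected particles, and discarding those below the minimum detectable size. A sketch with invented threshold and pixel size, not the calibrated values from the study:

```python
import numpy as np
from scipy import ndimage

def count_twp(gray, pixel_size_um=5.0, dark_threshold=50, min_diameter_um=35.0):
    """Count candidate tire wear particles: near-black connected regions whose
    equivalent circular diameter reaches the minimum detectable size."""
    labels, n = ndimage.label(gray < dark_threshold)
    pixel_counts = np.bincount(labels.ravel())[1:]      # per-particle areas
    diam_um = 2.0 * np.sqrt(pixel_counts / np.pi) * pixel_size_um
    return int(np.sum(diam_um >= min_diameter_um))

img = np.full((100, 100), 200, dtype=np.uint8)   # bright soil background
img[10:20, 10:20] = 10    # ~56 um dark particle: counted
img[50:52, 50:52] = 10    # ~11 um speck: below the 35 um detection limit
n_twp = count_twp(img)    # 1
```

Dividing such counts by the dry soil mass of the processed sample yields concentrations in TWP kg-1, the unit reported above.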