30 research outputs found

    Image Processing and Simulation Toolboxes of Microscopy Images of Bacterial Cells

    Recent advances in microscopy imaging technology have allowed the characterization of the dynamics of cellular processes at the single-cell and single-molecule level. Particularly in bacterial cell studies, using E. coli as a case study, these techniques have been used to detect and track internal cell structures, such as the nucleoid and the cell wall, and fluorescently tagged molecular aggregates, such as FtsZ proteins, Min system proteins, inclusion bodies and the different types of RNA molecules. These studies have been performed using multi-modal, multi-process, time-lapse microscopy, producing both morphological and functional images. To facilitate the finding of relationships between cellular processes, from small-scale, such as gene expression, to large-scale, such as cell division, an image processing toolbox was implemented with several automatic and/or manual features, such as cell segmentation and tracking, intra- and inter-modal image registration, and the detection, counting and characterization of several cellular components. Two segmentation algorithms for cellular components were implemented, the first based on the Gaussian distribution and the second on thresholding and morphological structuring functions. These algorithms were used to segment nucleoids and to identify the different stages of FtsZ ring formation (allied with the use of machine learning algorithms), which made it possible to understand how temperature influences the physical properties of the nucleoid and to correlate those properties with the exclusion of protein aggregates from the center of the cell. Another study used the segmentation algorithms to study how temperature affects the formation of the FtsZ ring. The validation of the developed image processing methods and techniques has been based on benchmark databases manually produced and curated by experts. When dealing with thousands of cells and hundreds of images, these manually generated datasets can become the biggest cost in a research project. To expedite these studies and lower the cost of manual labour, an image simulation toolbox was implemented to generate realistic artificial images. The proposed image simulation toolbox can generate biologically inspired objects that mimic the spatial and temporal organization of bacterial cells and their processes, such as cell growth and division, cell motility, and cell morphology (shape, size and cluster organization). The image simulation toolbox was shown to be useful in the validation of three cell tracking algorithms: simple nearest-neighbour, nearest-neighbour with morphology, and DBSCAN cluster identification. The simple nearest-neighbour algorithm was shown to perform reliably when simulating objects with small velocities, while the other algorithms performed better for higher velocities and when larger clusters were present.
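The sketch below illustrates the general idea of the second segmentation approach mentioned above (thresholding followed by morphological structuring). It is not the toolbox's actual implementation; scikit-image and the specific parameter choices (Otsu threshold, disk radius, minimum area) are assumptions made for illustration.

```python
# Minimal threshold-plus-morphology segmentation sketch (illustrative only).
import numpy as np
from skimage import filters, morphology, measure

def segment_components(image: np.ndarray, min_area: int = 20):
    """Segment bright cellular components from a fluorescence image."""
    # Global threshold (Otsu); the real toolbox may use a different rule.
    mask = image > filters.threshold_otsu(image)
    # Morphological structuring: remove speckle and fill small gaps.
    mask = morphology.binary_opening(mask, morphology.disk(1))
    mask = morphology.remove_small_objects(mask, min_size=min_area)
    # Label connected components and return per-object measurements.
    labels = measure.label(mask)
    return measure.regionprops(labels)
```

The returned region properties (area, centroid, orientation) are the kind of per-component measurements that a nearest-neighbour tracker, with or without morphology, could then link across frames.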

    Image Analysis Algorithms for Single-Cell Study in Systems Biology

    With the continuous shift of biology from a qualitative toward a quantitative field of research, digital microscopy and image-based measurements are drawing increased interest. Several methods have been developed for acquiring images of cells and intracellular organelles. Traditionally, acquired images are analyzed manually through visual inspection. The increasing volume of data is challenging the scope of manual analysis, and there is a need to develop methods for automated analysis. This thesis examines the development and application of computational methods for the acquisition and analysis of images from single-cell assays. The thesis addresses three different aspects. First, a study evaluates several methods for focusing microscopes and proposes a novel strategy to perform focusing in time-lapse imaging. The method relies on the nature of the focus drift and its predictability. The study shows that focus drift is a dynamical system with a small amount of randomness. Therefore, a prediction-based method is employed to track the focus drift over time. A prototype implementation of the proposed method is created by extending the Nikon EZ-C1 Version 3.30 (Tokyo, Japan) imaging platform for acquiring images with a Nikon Eclipse (TE2000-U, Nikon, Japan) microscope. Second, a novel method is formulated to segment individual cells from a dense cluster. The method incorporates multi-resolution analysis with maximum-likelihood estimation (MAMLE) for cell detection. MAMLE performs cell segmentation in two phases. The initial phase relies on a cutting-edge filter, edge detection in multi-resolution with a morphological operator, and threshold decomposition for adaptive thresholding; morphological features are estimated from these initial results. In the next phase, the final segmentation is constructed by boosting the initial results with the estimated parameters. The MAMLE method is evaluated with de novo data sets as well as with benchmark data from public databases. An empirical evaluation of the MAMLE method confirms its accuracy. Third, a comparative study is carried out on the performance of state-of-the-art methods for the detection of subcellular organelles. This study includes eleven segmentation algorithms developed in different fields. The evaluation procedure encompasses a broad set of samples, ranging from benchmark data to synthetic images. The results of this study suggest that no particular method performs better than the others across the test samples. Next, the effect of tetracycline on the transcription dynamics of the tetA promoter in Escherichia coli (E. coli) cells is studied. This study measures RNA expression by tagging the RNA of a target gene with MS2d-GFP. The RNAs are observed as intracellular spots in confocal images. A kernel density estimation (KDE) method for detecting the intracellular spots is employed to quantify individual RNA molecules. The thesis summarizes the results from five publications, most of which are associated with different methods for the acquisition and analysis of microscopy images. Confocal images of E. coli cells are targeted as the primary area of application; however, potential applications beyond the primary target are also made evident. The findings of the research are confirmed empirically.
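The following is a hedged sketch of KDE-style spot detection in the spirit of the RNA quantification step described above: smoothing the fluorescence image with a Gaussian kernel (an intensity-weighted density estimate) and keeping local maxima that stand out from the background. It is not the thesis implementation; the libraries, bandwidth and threshold factor are illustrative assumptions.

```python
# Illustrative KDE-style spot detection (not the thesis's actual method).
import numpy as np
from scipy import ndimage

def detect_spots(image: np.ndarray, bandwidth: float = 2.0, k: float = 3.0):
    """Return (row, col) coordinates of candidate fluorescent spots."""
    # Gaussian smoothing acts as a kernel density estimate over intensity.
    density = ndimage.gaussian_filter(image.astype(float), sigma=bandwidth)
    background = np.median(density)
    noise = np.std(density)
    # A pixel is a candidate if it is a local maximum well above background.
    local_max = ndimage.maximum_filter(density, size=5) == density
    candidates = local_max & (density > background + k * noise)
    return np.argwhere(candidates)
```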

    New Methods to Improve Large-Scale Microscopy Image Analysis with Prior Knowledge and Uncertainty

    Multidimensional imaging techniques provide powerful ways to examine various kinds of scientific questions. The routinely produced data sets in the terabyte range, however, can hardly be analyzed manually and require extensive use of automated image analysis. The present work introduces a new concept for the estimation and propagation of the uncertainty involved in image analysis operators, as well as new segmentation algorithms that are suitable for terabyte-scale analyses of 3D+t microscopy images.
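The abstract does not describe the concrete uncertainty framework, so the following is a purely illustrative sketch of one generic way to carry a per-pixel uncertainty map through an image analysis operator; the sigmoid "soft threshold" and the independence assumption in the combination step are assumptions, not the thesis's formulation.

```python
# Generic illustration of estimating and propagating per-pixel uncertainty.
import numpy as np

def soft_threshold(image: np.ndarray, t: float, softness: float = 5.0):
    """Return a binary mask plus a per-pixel confidence in that decision."""
    mask = image > t
    # Confidence grows the farther a pixel's value lies from the threshold.
    membership = 1.0 / (1.0 + np.exp(-(image - t) / softness))
    confidence = np.where(mask, membership, 1.0 - membership)
    return mask, confidence

def propagate(conf_a: np.ndarray, conf_b: np.ndarray) -> np.ndarray:
    """Combine confidences of two chained operators (independence assumed)."""
    return conf_a * conf_b
```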

    Mapping Trabecular Bone Fabric Tensor by in Vivo Magnetic Resonance Imaging

    The mechanical competence of bone depends upon its quantity, structural arrangement, and chemical composition. Assessment of these factors is important for the evaluation of bone integrity, particularly as the skeleton remodels according to external (e.g. mechanical loading) and internal (e.g. hormonal) stimuli. Micro magnetic resonance imaging (µMRI) has emerged as a non-invasive and non-ionizing method well suited for the repeated measurements necessary for monitoring changes in bone integrity. However, in vivo image-based directional dependence of trabecular bone (TB) has not been linked to mechanical competence or fracture risk, despite the existence of convincing ex vivo evidence. The objective of this dissertation research was to develop a means of capturing the directional dependence of TB by assessing a fabric tensor on the basis of in vivo µMRI. To accomplish this objective, a novel approach for calculating the TB fabric tensor based on the spatial autocorrelation function (ACF) was developed and evaluated in the presence of common limitations of in vivo µMRI. Comparisons were made to the standard technique of mean intercept length (MIL). Relative to MIL, the ACF approach was found to be computationally faster by over an order of magnitude and more robust within the range of resolutions and SNRs achievable in vivo. The potential for improved sensitivity afforded by isotropic resolution was also investigated in an improved µMR imaging protocol at 3T. Measures of reproducibility and reliability indicate the potential of images with isotropic resolution to provide enhanced sensitivity to orientation-dependent measures of TB; however, overall reproducibility suffered from the sacrifice in SNR. Finally, the image-derived TB fabric tensor was validated through its relationship with TB mechanical competence in specimen and in vivo µMR images. The inclusion of trabecular bone fabric measures significantly improved the bone volume fraction-based prediction of elastic constants calculated by micro-finite element analysis. This research established a method for detecting the TB fabric tensor in vivo and identified the directional dependence of TB as an important determinant of TB mechanical competence.
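As a simplified illustration of the ACF idea, the sketch below derives an orientation tensor from the central peak of the spatial autocorrelation of a binary trabecular image, computed via the FFT (Wiener-Khinchin theorem). It is a 2D didactic example, not the dissertation's formulation; the window size and the moment-based tensor fit are assumptions.

```python
# 2D sketch: fabric-like orientation tensor from the ACF of a binary image.
import numpy as np

def acf_fabric_tensor(binary_image: np.ndarray, window: int = 8) -> np.ndarray:
    """Estimate a 2x2 orientation tensor from the central ACF peak."""
    x = binary_image.astype(float) - binary_image.mean()
    # Wiener-Khinchin: the ACF is the inverse FFT of the power spectrum.
    acf = np.fft.ifft2(np.abs(np.fft.fft2(x)) ** 2).real
    acf /= acf[0, 0]                     # normalise so the zero-lag value is 1
    acf = np.fft.fftshift(acf)           # move zero lag to the image centre
    cy, cx = np.array(acf.shape) // 2
    patch = np.clip(acf[cy - window:cy + window + 1,
                        cx - window:cx + window + 1], 0.0, None)
    ys, xs = np.mgrid[-window:window + 1, -window:window + 1]
    w = patch / patch.sum()
    # Second-moment tensor of the ACF peak: eigenvectors give the principal
    # trabecular directions, eigenvalue ratios quantify anisotropy.
    return np.array([[np.sum(w * ys * ys), np.sum(w * ys * xs)],
                     [np.sum(w * xs * ys), np.sum(w * xs * xs)]])
```

The principal directions would then follow from `np.linalg.eigh` on the returned tensor.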

    River flow monitoring: LS-PIV technique, an image-based method to assess discharge

    The measurement of river discharge within a natural or artificial channel is still one of the most challenging tasks for hydrologists and the scientific community. Although discharge is a physical quantity that can theoretically be measured with very high accuracy, since the volume of water flows within a well-defined domain, there are numerous critical issues in obtaining a reliable value. Discharge cannot be measured directly, so its value is obtained by coupling a measurement of a quantity related to the volume of flowing water with the area of a channel cross-section. Direct measurements of current velocity are traditionally made with instruments such as current meters. Although measurements with current meters are sufficiently accurate, and universally recognized standards exist for their application, these instruments are often unusable under specific flow conditions. In flood conditions, for example, personnel would need to enter the watercourse, making it impossible to ensure adequate safety for the operators carrying out the measurements. The critical issues arising from the use of current meters have been partially addressed thanks to technological development and the adoption of acoustic sensors. In particular, with the advent of Acoustic Doppler Current Profilers (ADCPs), flow measurements can take place without personnel having direct contact with the flow, with measurements performed either from a bridge or from the banks. This has made it possible to extend the available range of discharge measurements. However, flood conditions also limit ADCP technology: introducing the instrument into a current with high velocities and turbulence would put it at serious risk, leaving it vulnerable and exposed to damage; in the most critical case, the instrument could be torn away by the turbulent current. On the other hand, for smaller discharges, both current meters and ADCPs are limited because water levels are not adequate for the use of the devices. The difficulty in obtaining information on the lowest and highest values of discharge has important implications for how the relationships linking flows to water levels are defined. The stage-discharge relationship is one of the tools through which it is possible to monitor the flow in a specific section of a watercourse. Through this curve, a discharge value can be obtained from knowledge of the water stage. These curves are site-specific and must be continuously updated to account for changes in geometry that the sections for which they are defined may experience over time. They are determined by making simultaneous discharge and stage measurements. Since instruments such as current meters and ADCPs are traditionally used, stage-discharge curves suffer from the same instrumental limitations; rating curves are therefore usually obtained by interpolating field-measured data and extrapolating them for the highest and lowest discharge values, with a consequent reduction in accuracy. This thesis aims to identify a valid alternative to traditional flow measurements and to show the advantages of using new monitoring methods to support traditional techniques, or to replace them. Optical techniques represent the best solution for overcoming the difficulties arising from the adoption of a traditional approach to flow measurement. Among these, the most widely used techniques are Large-Scale Particle Image Velocimetry (LS-PIV) and Large-Scale Particle Tracking Velocimetry. They estimate surface velocity fields by processing images of a moving tracer, suitably dispersed on the liquid surface. By coupling velocity data obtained from optical techniques with the geometry of a cross-section, a discharge value can easily be calculated. In this thesis, the LS-PIV technique was studied in depth, analysing its performance and the physical and environmental parameters and factors on which the optical results depend. As the LS-PIV technique is relatively new, there are no recognized standards for its proper application. A preliminary numerical analysis was conducted to identify the factors on which the technique significantly depends. The results of these analyses enabled the development of specific guidelines through which the LS-PIV technique could subsequently be applied in the field during flow measurement campaigns in Sicily. In this way it was possible to observe experimentally the critical issues involved in applying the technique to real cases. These measurement campaigns provided the opportunity to carry out analyses on field case studies and to structure an automatic procedure for optimising the LS-PIV technique. In all case studies, turbulence was observed to degrade the output of the LS-PIV technique. A final numerical analysis was therefore performed to understand the influence of turbulence on the performance of the technique. The results obtained represent an important step for the future development of the topic.
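The sketch below shows the basic PIV idea behind LS-PIV for a single interrogation window: estimate the tracer displacement between two consecutive frames by cross-correlation and convert it to a velocity. It is a didactic illustration rather than the thesis workflow; scikit-image's phase correlation is used as a stand-in for the usual window-by-window cross-correlation, and the surface-to-mean velocity coefficient is an assumed value.

```python
# Illustrative single-window LS-PIV velocity estimate (not the thesis pipeline).
import numpy as np
from skimage.registration import phase_cross_correlation

def mean_velocity(frame_a, frame_b, dt, pixel_size_m, alpha=0.85):
    """Estimate the depth-averaged velocity (m/s) from two consecutive frames.

    alpha converts surface velocity to mean velocity over the vertical,
    a coefficient commonly assumed in LS-PIV practice.
    """
    # Displacement (rows, cols) of frame_b relative to frame_a, in pixels.
    shift, _, _ = phase_cross_correlation(frame_a, frame_b)
    displacement_m = np.hypot(*shift) * pixel_size_m
    surface_v = displacement_m / dt
    return alpha * surface_v

# Discharge then follows as Q = mean_velocity * cross_section_area.
```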

    Dataset shift in land-use classification for optical remote sensing

    Multimodal dataset shifts consisting of both concept and covariate shifts are addressed in this study to improve texture-based land-use classification accuracy for optical panchromatic and multispectral remote sensing. Multitemporal and multisensor variances between training and test data are caused by atmospheric, phenological, sensor, illumination and viewing geometry differences, which cause supervised classification inaccuracies. The first dataset shift reduction strategy involves input modification through shadow removal before feature extraction with gray-level co-occurrence matrix and local binary pattern features. Components of a Rayleigh quotient-based manifold alignment framework are investigated to reduce multimodal dataset shift at the input level of the classifier through unsupervised classification, followed by manifold matching to transfer classification labels by finding across-domain cluster correspondences. The ability of weighted hierarchical agglomerative clustering to partition poorly separated feature spaces is explored, and weight-generalized internal validation is used for unsupervised cardinality determination. Manifold matching is performed with the Hungarian algorithm, using a cost matrix of geometric similarity measurements that assume the preservation of intrinsic structure across the dataset shift. Local neighborhood geometric co-occurrence frequency information is recovered, and a novel integration thereof is shown to improve matching accuracy. A final strategy for addressing multimodal dataset shift is multiscale feature learning, which is used within a convolutional neural network to obtain optimal hierarchical feature representations instead of engineered texture features that may be sub-optimal. Feature learning is shown to produce features that are robust against multimodal acquisition differences in a benchmark land-use classification dataset. A novel multiscale input strategy is proposed for an optimized convolutional neural network; it improves classification accuracy to a competitive level on the UC Merced benchmark dataset and outperforms single-scale input methods. All the proposed strategies for addressing multimodal dataset shift in land-use image classification resulted in significant accuracy improvements for various multitemporal and multimodal datasets. Thesis (PhD), University of Pretoria, 2016.
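As a hedged sketch of the cluster-matching step, the snippet below assigns clusters found in the source domain to clusters found in the target domain by minimising a geometric cost with the Hungarian algorithm. The Euclidean centroid distance is a simplified stand-in for the geometric similarity measures described in the abstract.

```python
# Hungarian-algorithm matching of across-domain clusters (simplified cost).
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_clusters(source_centroids: np.ndarray, target_centroids: np.ndarray):
    """Return (source_index, target_index) pairs of matched clusters."""
    # Cost matrix: pairwise Euclidean distances between cluster centroids.
    diff = source_centroids[:, None, :] - target_centroids[None, :, :]
    cost = np.linalg.norm(diff, axis=-1)
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows, cols))
```

Matched pairs can then be used to transfer classification labels from the labelled domain to the unlabelled one.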

    Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries

    This two-volume set, LNCS 12962 and 12963, constitutes the thoroughly refereed proceedings of the 7th International MICCAI Brainlesion Workshop, BrainLes 2021, as well as the RSNA-ASNR-MICCAI Brain Tumor Segmentation (BraTS) Challenge, the Federated Tumor Segmentation (FeTS) Challenge, the Cross-Modality Domain Adaptation (CrossMoDA) Challenge, and the challenge on Quantification of Uncertainties in Biomedical Image Quantification (QUBIQ). These were held jointly at the 24th International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2021, in September 2021. Due to the COVID-19 pandemic, the conference was held virtually. The 91 revised papers presented in these volumes were selected from 151 submissions. This is an open access book.
