
    Cellular Pattern Quantification and Automatic Benchmarking Dataset Generation on confocal microscopy images

    The distribution, directionality and motility of actin fibers control cell shape, affect cell function and differ between cancerous and normal cells. Quantification of actin structural changes is important for further understanding differences between cell types and for elucidating the effects and dynamics of drug interactions. We propose an image analysis framework to quantify F-actin organization patterns in response to different pharmaceutical treatments. The main problems addressed include which features to quantify and what quantification measurements to compute when dealing with unlabeled confocal microscopy images. The resulting numerical features are effective for profiling the functional mechanism and facilitate the comparison of different drugs. The analysis software was originally implemented in Matlab; more recently, the most time-consuming part of the feature extraction stage was implemented on an NVIDIA GPU using CUDA, where we obtain 15- to 20-fold speedups for different image sizes. We also propose a computational framework for generating synthetic images for validation purposes. The feature extraction is validated by visual inspection, and the quantification is validated by comparison with well-known biological facts. Future studies will further validate the algorithms and elucidate the molecular pathways and kinetics underlying the F-actin changes. This is the first study quantifying different structural formations of the same protein in intact cells. Since many anti-cancer drugs target the cytoskeleton, we believe that the quantitative image analysis method reported here will have broad applications to understanding the mechanisms of candidate pharmaceuticals.
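
    Fiber directionality of the kind quantified above can be illustrated with a gradient-orientation histogram. This is a generic Python sketch, not the authors' Matlab/CUDA implementation; the function name, finite-difference gradient and 45-degree binning are illustrative choices.

```python
import math

def orientation_histogram(img, bins=4):
    """Histogram of local orientations in [0, 180) degrees, weighted by
    gradient magnitude -- a simple stand-in for fiber-directionality
    measures. `img` is a list of rows of grayscale values."""
    h, w = len(img), len(img[0])
    hist = [0.0] * bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]   # central differences
            gy = img[y + 1][x] - img[y - 1][x]
            mag = math.hypot(gx, gy)
            if mag == 0:
                continue
            # A fiber runs perpendicular to the intensity gradient and
            # has no preferred sign, so fold the angle into [0, 180).
            ang = (math.degrees(math.atan2(gy, gx)) + 90.0) % 180.0
            hist[min(int(ang / (180.0 / bins)), bins - 1)] += mag
    total = sum(hist) or 1.0
    return [v / total for v in hist]

# An intensity ramp along x: gradients point along x, so the inferred
# fiber orientation is vertical (90 degrees), i.e. all weight in bin 2.
img = [[float(x) for x in range(8)] for _ in range(8)]
print(orientation_histogram(img))  # → [0.0, 0.0, 1.0, 0.0]
```

    Comparing such normalized histograms before and after treatment is one simple way to profile drug-induced changes in fiber organization.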

    Multi-Pass Fast Watershed for Accurate Segmentation of Overlapping Cervical Cells


    A Comprehensive Overview of Computational Nuclei Segmentation Methods in Digital Pathology

    In the cancer diagnosis pipeline, digital pathology plays an instrumental role in the identification, staging, and grading of malignant areas on biopsy tissue specimens. High-resolution histology images are subject to high variance in appearance, sourced either from the acquisition devices or the H&E staining process. Nuclei segmentation is an important task, as it detects nuclei against the background tissue and yields the topology, size, and count of nuclei, which are determinant factors for cancer detection. Yet it is a fairly time-consuming task for pathologists, with reportedly high subjectivity. Computer Aided Diagnosis (CAD) tools empowered by modern Artificial Intelligence (AI) models enable the automation of nuclei segmentation, which can reduce subjectivity in analysis as well as reading time. This paper provides an extensive review, beginning with earlier works that use traditional image processing techniques and reaching up to modern approaches following the Deep Learning (DL) paradigm. Our review also focuses on the weak-supervision aspect of the problem, motivated by the fact that annotated data is scarce. At the end, the advantages of different models and types of supervision are thoroughly discussed. Furthermore, we try to extrapolate and envision how future research lines will potentially evolve, so as to minimize the need for labeled data while maintaining high performance. Future methods should emphasize efficient and explainable models with a transparent underlying process so that physicians can trust their output. Comment: 47 pages, 27 figures, 9 tables.

    Image segmentation and object classification for automatic detection of tuberculosis in sputum smears

    Includes bibliographical references (leaves 95-101). An automated microscope is being developed in the MRC/UCT Medical Imaging Research Unit at the University of Cape Town in an effort to ease the workload of laboratory technicians screening sputum smears for tuberculosis (TB), in order to improve screening in countries with a heavy burden of TB. As a step in the development of such a microscope, the project described here was concerned with the extraction and identification of TB bacilli in digital images of sputum smears obtained with a microscope. The investigations were carried out on Ziehl-Neelsen (ZN) stained sputum smears. Different image segmentation methods were compared, and object classification was implemented using various two-class classifiers, for images obtained using a microscope with 100x objective lens magnification. The bacillus identification route established for the 100x images was then applied to images obtained using a microscope with 20x objective lens magnification. In addition, one-class classification was applied to the 100x images. A combination of pixel classifiers performed best in image segmentation to extract objects of interest. For 100x images, the product of the Bayes', quadratic and logistic linear classifiers correctly classified 89.38% of bacillus pixels; 39.52% of pixels were incorrectly classified. The segmentation method did not miss any bacillus objects whose length lay in the focal plane of an image. The biggest source of error for the segmentation method was staining inconsistencies. The pixel segmentation method performed poorly on images with 20x magnification. Geometric change-invariant features were extracted to describe segmented objects; Fourier coefficients, moment invariant features and colour features were used. All two-class object classifiers had balanced performance for 100x images, with sensitivity and specificity above 95% for the detection of an individual bacillus after Fisher mapping of the feature set. Object classification on images with 20x magnification performed similarly. One-class object classification using the mixture-of-Gaussians classifier, without Fisher mapping of features, produced sensitivity and specificity above 90% when applied to 100x images.
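
    The product-rule combination of per-pixel classifier outputs and the sensitivity/specificity metrics quoted above can be sketched as follows. The posteriors here are assumed toy numbers, and the two small functions are generic illustrations, not the thesis' actual classifiers.

```python
def product_rule(posteriors):
    """Combine per-pixel 'bacillus' posteriors from several classifiers
    with the product rule (in the spirit of multiplying Bayes',
    quadratic and logistic linear outputs); returns hard labels."""
    labels = []
    for probs in zip(*posteriors):
        p = q = 1.0
        for pr in probs:
            p *= pr          # product of P(bacillus | pixel)
            q *= (1.0 - pr)  # product of P(background | pixel)
        labels.append(1 if p > q else 0)
    return labels

def sensitivity_specificity(pred, truth):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP)."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    tn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    return tp / (tp + fn), tn / (tn + fp)

posteriors = [[0.9, 0.2, 0.6],   # classifier 1, three pixels
              [0.8, 0.1, 0.4]]   # classifier 2, same pixels
pred = product_rule(posteriors)
sens, spec = sensitivity_specificity(pred, truth=[1, 0, 1])
print(pred, sens, spec)  # → [1, 0, 0] 0.5 1.0
```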

    Extracting fluorescent reporter time courses of cell lineages from high-throughput microscopy at low temporal resolution

    Live Cell Imaging and High Throughput Screening are rapidly evolving techniques and have found many applications in recent years. Modern microscopy enables the visualisation of internal changes in the cell through the use of fluorescently tagged proteins which can be targeted to specific cellular components. A system is presented here which is designed to track cells at low temporal resolution within large populations, and to extract fluorescence data which allows relative expression rates of tagged proteins to be monitored. Cell detection and tracking are performed as separate steps, and several methods are evaluated for suitability using time-series images of Hoechst-stained C2C12 mouse mesenchymal stem cells. The use of Hoechst staining ensures cell nuclei are visible throughout a time-series. Dynamic features, including a characteristic change in Hoechst fluorescence intensity during chromosome condensation, are used to identify cell divisions and resulting daughter cells. The ability to detect cell division is integrated into the tracking, aiding lineage construction. To establish the efficiency of the method, synthetic cell images have been produced and used to evaluate cell detection accuracy. A validation framework is created which allows the accuracy of the automatic segmentation and tracking systems to be measured and compared against existing state-of-the-art software, such as CellProfiler. Basic tracking methods, including nearest-neighbour and cell-overlap, are provided as a baseline to evaluate the performance of more sophisticated methods. The software is demonstrated on a number of biological systems, starting with a study of different control elements of the Msx1 gene, which regulates differentiation of mesenchymal stem cells. Expression is followed through multiple lineages to identify asymmetric divisions which may be due to cell differentiation. The lineage construction methods are applied to Schizosaccharomyces pombe time-series image data, allowing the extraction of generation lengths for individual cells. Finally, a study is presented which examines correlations between the circadian and cell cycles. This makes use of the recently developed FUCCI cell cycle markers which, when used in conjunction with a circadian indicator such as Rev-erbα-Venus, allow simultaneous measurements of both cycles.
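
    The nearest-neighbour baseline tracker mentioned above can be sketched as a greedy frame-to-frame linker. This is a generic illustration; the gating distance and tie-breaking order are assumptions, not details from the thesis.

```python
import math

def nearest_neighbour_link(prev, curr, max_dist=20.0):
    """Greedy nearest-neighbour frame-to-frame linking. `prev` and
    `curr` are lists of (x, y) centroids from consecutive frames;
    returns a {curr_index: prev_index} assignment. Detections farther
    than `max_dist` from every candidate are left unlinked (treated as
    new cells, e.g. after division)."""
    links, used = {}, set()
    # Consider candidate pairs in order of increasing distance.
    pairs = sorted(
        (math.dist(c, p), i, j)
        for i, c in enumerate(curr)
        for j, p in enumerate(prev)
    )
    for d, i, j in pairs:
        if d > max_dist:
            break
        if i in links or j in used:
            continue  # each detection may be matched at most once
        links[i] = j
        used.add(j)
    return links

prev = [(10, 10), (50, 50)]
curr = [(12, 11), (49, 52), (200, 200)]    # third cell is new/unmatched
print(nearest_neighbour_link(prev, curr))  # → {0: 0, 1: 1}
```

    Chaining these per-frame assignments over a whole time-series, and opening a new track whenever a detection stays unlinked, yields the baseline lineage against which more sophisticated trackers are compared.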

    Image Processing and Simulation Toolboxes of Microscopy Images of Bacterial Cells

    Recent advances in microscopy imaging technology have allowed the characterization of the dynamics of cellular processes at the single-cell and single-molecule level. Particularly in bacterial cell studies, using E. coli as a case study, these techniques have been used to detect and track internal cell structures such as the nucleoid and the cell wall, and fluorescently tagged molecular aggregates such as FtsZ proteins, Min system proteins, inclusion bodies and the different types of RNA molecules. These studies have been performed using multi-modal, multi-process, time-lapse microscopy, producing both morphological and functional images. To facilitate the finding of relationships between cellular processes, from small scale, such as gene expression, to large scale, such as cell division, an image processing toolbox was implemented with several automatic and/or manual features, such as cell segmentation and tracking, intra- and inter-modal image registration, and the detection, counting and characterization of several cellular components. Two segmentation algorithms for cellular components were implemented, the first based on the Gaussian distribution and the second based on thresholding and morphological structuring functions. These algorithms were used to segment nucleoids and to identify the different stages of FtsZ ring formation (allied with the use of machine learning algorithms), which made it possible to understand how temperature influences the physical properties of the nucleoid and to correlate those properties with the exclusion of protein aggregates from the center of the cell. Another study used the segmentation algorithms to examine how temperature affects the formation of the FtsZ ring. The validation of the developed image processing methods and techniques has been based on benchmark databases manually produced and curated by experts. When dealing with thousands of cells and hundreds of images, these manually generated datasets can become the biggest cost in a research project. To expedite these studies and lower the cost of manual labour, an image simulation toolbox was implemented to generate realistic artificial images. The proposed toolbox can generate biologically inspired objects that mimic the spatial and temporal organization of bacterial cells and their processes, such as cell growth and division, cell motility, and cell morphology (shape, size and cluster organization). The image simulation toolbox was shown to be useful in the validation of three cell tracking algorithms: simple nearest-neighbour, nearest-neighbour with morphology, and the DBSCAN cluster identification algorithm. The simple nearest-neighbour tracker still performed with great reliability when simulating objects with small velocities, while the other algorithms performed better at higher velocities and when larger clusters were present.
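
    The thresholding-based style of segmentation described above reduces, in its simplest form, to a global threshold followed by connected-component labelling. Below is a minimal pure-Python sketch under that assumption, not the toolbox's actual code (which also uses morphological structuring functions).

```python
from collections import deque

def threshold_segment(img, thresh):
    """Threshold an intensity image and label 4-connected foreground
    components via breadth-first flood fill. Returns the number of
    components and a label image (0 = background)."""
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 0
    for y in range(h):
        for x in range(w):
            if img[y][x] >= thresh and labels[y][x] == 0:
                next_label += 1
                labels[y][x] = next_label
                q = deque([(y, x)])
                while q:
                    cy, cx = q.popleft()
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and img[ny][nx] >= thresh and labels[ny][nx] == 0):
                            labels[ny][nx] = next_label
                            q.append((ny, nx))
    return next_label, labels

img = [
    [0, 9, 0, 0],
    [0, 9, 0, 8],
    [0, 0, 0, 8],
]
n, lab = threshold_segment(img, 5)
print(n)  # → 2 (two separate bright components)
```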

    Automatically Improving Cell Segmentation in Time-Lapse Microscopy Images Using Temporal Context From Tracking and Lineaging

    Over the past decade biologists and microscopists have produced truly amazing movies, showing in wonderful detail the dynamics of living cells and subcellular structures. Access to this degree of detail in living cells is a key aspect of current biological research. This wealth of data and potential discovery is constrained by a lack of software tools. The standard approach to biological image analysis begins with segmentation to identify individual cells, tracking to maintain cellular identities over time, and lineaging to identify parent-daughter relationships. This thesis presents new algorithms for improving the segmentation, tracking and lineaging of live-cell time-lapse microscopy images. A new "segmentation from lineage" algorithm feeds lineage or other high-level behavioral information back into segmentation algorithms, along with temporal context provided by the multitemporal association tracker, to create a powerful iterative learning algorithm that significantly improves segmentation and tracking results. A tree inference algorithm improves automated lineage generation by integrating known cellular behavior constraints as well as fluorescent signals, if available. The "learn from edits" technique uses tracking information to propagate user corrections and automatically correct further tracking mistakes. Finally, the new pixel replication algorithm accurately partitions touching cells using elliptical shape models. These algorithms are integrated into the LEVER lineage editing and validation software, providing user interfaces for automated segmentation, tracking and lineaging, as well as the ability to easily correct the automated results. These algorithms, integrated into LEVER, have identified key behavioral differences in embryonic and adult neural stem cells. Edit-based and functional validation techniques are used to evaluate and compare the new algorithms with current state-of-the-art segmentation and tracking approaches. All the software, as well as the image data and analysis results, are released under a new open source/open data model built around GitLab and the new CloneView interactive web tool. Ph.D., Electrical Engineering -- Drexel University, 201
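
    The idea of partitioning touching cells with elliptical shape models can be caricatured by assigning each foreground pixel to the ellipse with the smallest normalized elliptical distance. This is a much-simplified, hypothetical sketch; the thesis' pixel replication algorithm is considerably more involved.

```python
def partition_touching(pixels, ellipses):
    """Assign each foreground pixel of a touching-cell clump to the
    closest axis-aligned elliptical model, using the normalized
    distance ((x-cx)/a)**2 + ((y-cy)/b)**2. `ellipses` is a list of
    (cx, cy, a, b) tuples; returns {pixel: ellipse_index}."""
    out = {}
    for (x, y) in pixels:
        out[(x, y)] = min(
            range(len(ellipses)),
            key=lambda i: ((x - ellipses[i][0]) / ellipses[i][2]) ** 2
                        + ((y - ellipses[i][1]) / ellipses[i][3]) ** 2,
        )
    return out

# Two hypothetical touching cells centred at x=0 and x=10, radii (a, b).
ellipses = [(0, 0, 4, 2), (10, 0, 4, 2)]
pixels = [(1, 0), (4, 1), (9, 0), (6, 1)]
print(partition_touching(pixels, ellipses))
```

    Pixels near the midpoint of the clump are split according to the fitted shapes rather than by a fixed straight cut, which is the appeal of shape-model-based partitioning.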

    New Methods to Improve Large-Scale Microscopy Image Analysis with Prior Knowledge and Uncertainty

    Multidimensional imaging techniques provide powerful ways to examine various kinds of scientific questions. The routinely produced datasets in the terabyte range, however, can hardly be analyzed manually and require extensive use of automated image analysis. The present thesis introduces a new concept for the estimation and propagation of uncertainty involved in image analysis operators, and new segmentation algorithms that are suitable for terabyte-scale analyses of 3D+t microscopy images. Comment: 218 pages, 58 figures, PhD thesis, Department of Mechanical Engineering, Karlsruhe Institute of Technology, published online with KITopen (License: CC BY-SA 3.0, http://dx.doi.org/10.5445/IR/1000057821).
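
    One simple way to attach an uncertainty to an image analysis operator and propagate it to a derived quantity is to replace a hard threshold with a probability under a Gaussian noise model, then combine per-pixel probabilities. The sketch below is an illustrative assumption (names, noise model and independence assumption are mine), not the thesis' actual uncertainty framework.

```python
import math

def soft_threshold(intensity, thresh, sigma):
    """Probability that the true intensity exceeds `thresh`, given
    Gaussian measurement noise with standard deviation `sigma` -- a
    thresholding operator that outputs a probability instead of a
    hard 0/1 decision."""
    return 0.5 * (1.0 + math.erf((intensity - thresh) / (sigma * math.sqrt(2))))

def fg_count_with_uncertainty(img, thresh, sigma):
    """Propagate per-pixel uncertainty to the foreground-pixel count:
    expected count = sum of probabilities; variance = sum of p*(1-p)
    (Bernoulli pixels, assumed independent)."""
    probs = [soft_threshold(v, thresh, sigma) for row in img for v in row]
    mean = sum(probs)
    var = sum(p * (1 - p) for p in probs)
    return mean, math.sqrt(var)

img = [[100, 130], [128, 90]]
mean, std = fg_count_with_uncertainty(img, thresh=128, sigma=5)
print(round(mean, 2), round(std, 2))
```

    A pixel exactly at the threshold contributes probability 0.5 and the maximal per-pixel variance, so downstream results carry an honest error bar instead of a silent hard decision.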

    Segmentation and Informatics in Multidimensional Fluorescence Optical Microscopy Images

    Recent advances in the field of optical microscopy have enabled scientists to observe and image complex biological processes across a wide range of spatial and temporal resolutions, resulting in an exponential increase in optical microscopy data. Manual analysis of such large volumes of data is extremely time consuming and often impossible if the changes cannot be detected by the human eye. It is therefore essential to design robust, accurate and high-performance image processing and analysis tools to extract biologically significant results. Furthermore, the presentation of the results to the end user, post analysis, is an equally challenging issue, especially when the data (and/or the hypothesis) involves several spatial/hierarchical scales (e.g., tissues, cells, (sub-)nuclear components). This dissertation concentrates on a subset of such problems: robust edge detection, automatic nuclear segmentation and selection in multi-dimensional tissue images, spatial analysis of gene localization within the cell nucleus, information visualization, and the development of a computational framework for efficient and high-throughput processing of large datasets. Initially, we developed 2D nuclear segmentation and selection algorithms which support an integrated approach for determining the preferential spatial localization of certain genes within cell nuclei, which is emerging as a promising technique for the diagnosis of breast cancer. Quantification requires accurate segmentation of 100 to 200 cell nuclei in each patient tissue sample in order to draw a statistically significant result. Thus, for large-scale analysis involving hundreds of patients, manual processing is too time consuming and subjective. We developed an integrated workflow that selects, following 2D automatic segmentation, a sub-population of accurately delineated nuclei for positioning of fluorescence in situ hybridization labeled genes of interest in tissue samples. Application of the method was demonstrated for discriminating normal and cancerous breast tissue sections based on the differential positioning of the HES5 gene. Automatic results agreed with manual analysis in 11 out of 14 cancers, all 4 normal cases and all 5 non-cancerous breast disease cases, showing the accuracy and robustness of the proposed approach. As a natural progression from the 2D analysis algorithms to 3D, we first developed a robust and accurate probabilistic edge detection method for 3D tissue samples, since several downstream analysis procedures such as segmentation and tracking rely on the performance of edge detection. The method, based on multi-scale and multi-orientation steps, surpasses several other conventional edge detectors in terms of performance. Subsequently, given an appropriate edge measure, we developed an optimal graph-cut-based 3D nuclear segmentation technique for samples where the cell nuclei are volume- or surface-labeled. It poses the problem as one of finding a minimal closure in a directed graph and solves it efficiently using the max-flow min-cut algorithm. Both interactive and automatic versions of the algorithm were developed. In terms of three metrics commonly used to evaluate segmentation algorithms, our algorithm outperforms a recently reported geodesic distance transform-based 3D nuclear segmentation method, which in turn was reported to outperform several other popular tools that segment 3D nuclei in tissue samples. Finally, to apply some of the aforementioned methods to large microscopic datasets, we developed a user-friendly computing environment called MiPipeline which supports high-throughput data analysis, data and process provenance, visual programming, and seamlessly integrated information visualization of hierarchical biological data. The computational part of the environment is based on the LONI Pipeline distributed computing server, and the interactive information visualization makes use of several JavaScript-based libraries to visualize an XML-based backbone file populated with essential metadata and results.
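
    The max-flow min-cut step at the heart of graph-cut segmentation, as described above, can be demonstrated on a toy two-pixel graph with an Edmonds-Karp solver. This is a generic textbook implementation, not the dissertation's optimized minimal-closure solver; node names and capacities are illustrative.

```python
from collections import deque

def max_flow(edges, s, t):
    """Edmonds-Karp max-flow on a directed graph given as
    {(u, v): capacity}. By the max-flow min-cut theorem, the returned
    value equals the minimum s-t cut, which in graph-cut segmentation
    separates object pixels from background."""
    # Residual capacities, including zero-capacity reverse edges.
    res, nbrs = {}, {}
    for (u, v), c in edges.items():
        res[(u, v)] = res.get((u, v), 0) + c
        res.setdefault((v, u), 0)
        nbrs.setdefault(u, set()).add(v)
        nbrs.setdefault(v, set()).add(u)
    total = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent, q = {s: None}, deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in nbrs.get(u, ()):
                if v not in parent and res[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return total  # no augmenting path left: flow is maximal
        # Recover the path, find its bottleneck, and augment.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(res[e] for e in path)
        for (u, v) in path:
            res[(u, v)] -= bottleneck
            res[(v, u)] += bottleneck
        total += bottleneck

# Toy 2-pixel segmentation: source s = object model, sink t = background;
# s/t edges carry data terms, the p1-p2 edges a smoothness term.
edges = {('s', 'p1'): 5, ('s', 'p2'): 1,
         ('p1', 't'): 1, ('p2', 't'): 5,
         ('p1', 'p2'): 2, ('p2', 'p1'): 2}
print(max_flow(edges, 's', 't'))  # → 4
```

    In the resulting minimum cut, p1 stays connected to the source and p2 to the sink, i.e. each pixel is assigned to the terminal whose data term binds it more strongly.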