
    Learning Instance Segmentation from Sparse Supervision

    Instance segmentation is an important task in many domains of automatic image processing, such as self-driving cars, robotics and microscopy data analysis. Recently, deep learning-based algorithms have brought image segmentation close to human performance. However, most existing models rely on dense ground-truth labels for training, which are expensive, time-consuming and often require experienced annotators to perform the labeling. Besides the annotation burden, training complex high-capacity neural networks depends upon non-trivial expertise in the choice and tuning of hyperparameters, making the adoption of these models challenging for researchers in other fields. The aim of this work is twofold. The first is to make deep learning segmentation methods accessible to non-specialists. The second is to address the dense annotation problem by developing instance segmentation methods trainable with limited ground-truth data. In the first part of this thesis, I bring state-of-the-art instance segmentation methods closer to non-experts by developing PlantSeg: a pipeline for volumetric segmentation of light microscopy images of biological tissues into cells. PlantSeg comes with a large repository of pre-trained models and delivers highly accurate results on a variety of samples and image modalities. We exemplify its usefulness in answering biological questions in several collaborative research projects. In the second part, I tackle the dense annotation bottleneck by introducing SPOCO, an instance segmentation method that can be trained from just a few annotated objects. It demonstrates strong segmentation performance on challenging natural and biological benchmark datasets at a greatly reduced manual annotation cost and delivers state-of-the-art results on the CVPPP benchmark.
In summary, my contributions enable training of instance segmentation models with limited amounts of labeled data and make these methods more accessible for non-experts, speeding up the process of quantitative data analysis.
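The idea of training from just a few annotated objects can be illustrated with a pull/push embedding loss: pixels of a labeled object are pulled toward that object's mean embedding, while the means of different labeled objects are pushed apart. Unlabeled pixels (label 0) simply do not contribute, so sparse annotations still yield a training signal. This is a minimal numpy sketch, not SPOCO's actual formulation; the function name, margins, and hinge form are illustrative assumptions.

```python
import numpy as np

def pull_push_loss(embeddings, labels, delta_pull=0.5, delta_push=1.5):
    """Toy discriminative embedding loss for sparse instance supervision.

    embeddings: (N, D) pixel embeddings; labels: (N,) instance ids, 0 = unlabeled.
    Pull term draws labeled pixels toward their instance mean; push term
    separates the instance means. Only labeled pixels contribute.
    """
    ids = [i for i in np.unique(labels) if i != 0]
    means = {i: embeddings[labels == i].mean(axis=0) for i in ids}

    # Pull: hinged distance of each labeled pixel to its own instance mean.
    pull = 0.0
    for i in ids:
        d = np.linalg.norm(embeddings[labels == i] - means[i], axis=1)
        pull += np.mean(np.maximum(d - delta_pull, 0.0) ** 2)
    pull /= len(ids)

    # Push: hinged distance between every pair of instance means.
    push, pairs = 0.0, 0
    for a in range(len(ids)):
        for b in range(a + 1, len(ids)):
            d = np.linalg.norm(means[ids[a]] - means[ids[b]])
            push += np.maximum(delta_push - d, 0.0) ** 2
            pairs += 1
    if pairs:
        push /= pairs
    return pull + push
```

With two compact, well-separated labeled objects the loss is zero; as their embeddings collapse together, the push term penalizes the overlap.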

    Nuclei/Cell Detection in Microscopic Skeletal Muscle Fiber Images and Histopathological Brain Tumor Images Using Sparse Optimizations

    Nuclei/cell detection is usually a prerequisite procedure in many computer-aided biomedical image analysis tasks. In this thesis we propose two automatic nuclei/cell detection frameworks: one for nuclei detection in skeletal muscle fiber images and the other for brain tumor histopathological images. For skeletal muscle fiber images, the major challenges include: i) shape and size variations of the nuclei, ii) overlapping nuclear clumps, and iii) a series of z-stack images with out-of-focus regions. We propose a novel automatic detection algorithm consisting of the following components: 1) The original z-stack images are first converted into one all-in-focus image. 2) A sufficient number of hypothetical ellipses are then generated for each nucleus contour. 3) Next, a set of representative training samples and discriminative features are selected by a two-stage sparse model. 4) A classifier is trained using the refined training data. 5) Final nuclei detection is obtained by mean-shift clustering based on inner distance. The proposed method was tested on a set of images containing over 1500 nuclei, and the results outperform the current state-of-the-art approaches. For brain tumor histopathological images, the major challenges are to handle significant variations in cell appearance and to split touching cells. The proposed automatic cell detection framework consists of: 1) Sparse reconstruction for splitting touching cells. 2) Adaptive dictionary learning for handling cell appearance variations. The proposed method was extensively tested on a data set with over 2000 cells and outperforms other state-of-the-art algorithms with an F1 score of 0.96.
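Step 1 of the muscle-fiber pipeline, fusing a z-stack into a single all-in-focus image, can be sketched with a per-pixel focus measure: for each pixel, pick the slice whose local neighborhood is sharpest and copy its intensity. The thesis does not specify its focus measure, so this sketch assumes local variance; real pipelines typically use more robust sharpness criteria.

```python
import numpy as np

def all_in_focus(z_stack, window=3):
    """Fuse a z-stack (Z, H, W) into one all-in-focus image.

    For each pixel, the slice with the highest local variance (a simple
    sharpness proxy) in a window x window neighborhood wins.
    """
    z, h, w = z_stack.shape
    pad = window // 2
    padded = np.pad(z_stack, ((0, 0), (pad, pad), (pad, pad)), mode="edge")
    sharpness = np.empty((z, h, w), dtype=float)
    for k in range(z):
        for i in range(h):
            for j in range(w):
                patch = padded[k, i:i + window, j:j + window]
                sharpness[k, i, j] = patch.var()
    best = sharpness.argmax(axis=0)                    # (H, W) sharpest slice index
    return np.take_along_axis(z_stack, best[None], axis=0)[0]
```

When one slice carries all the texture and the others are flat, the fused image reproduces the textured slice pixel for pixel.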

    Deep Networks for Image-based Cell Counting

    Cell counting from 2D images and 3D volumes is critical to a wide range of research in biology, medicine, and bioinformatics, among other fields. Current approaches to cell counting have two major limitations: 1) lack of a universal model to handle different cell types, especially in 3D, and 2) reliance on costly labeled data. This dissertation addresses these two issues. First, we present a unified framework for various cell types in both 2D and 3D by leveraging recent advances in deep learning. Specifically, we develop SAU-Net by expanding the segmentation network U-Net with a self-attention module and an extension of Batch Normalization (BN) to simplify the training process for small datasets. In addition, the proposed BN extension is empirically validated on multiple image classification benchmarks, highlighting its versatile nature. SAU-Net uses dot annotations with an inverse distance kernel instead of full (whole-cell) annotations as in conventional methods, dramatically reducing the labeling time while maintaining performance. Second, we take advantage of unlabeled data by self-supervised learning with a novel focal consistency loss, designed for our pixel-wise task. This learning paradigm allows a further significant reduction of reliance on labeled data while achieving state-of-the-art results. These two contributions complement each other. Finally, we introduce a labeling tool for dot annotations to expedite the labeling process and a 3D cell counting benchmark with dot annotations to spur further research in this direction.
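Turning dot annotations into a pixel-wise training target with an inverse distance kernel can be sketched as follows: each annotated cell center contributes a peak that decays with distance from the dot, so a single click per cell produces a dense regression target. The exact kernel used by SAU-Net is not given in the abstract; the kernel form 1 / (alpha * d + 1), the per-pixel maximum for overlaps, and the parameter alpha are illustrative assumptions.

```python
import numpy as np

def inverse_distance_map(shape, dots, alpha=1.0):
    """Build a dense training target from dot annotations.

    shape: (H, W); dots: iterable of (row, col) cell centers. Each dot
    contributes 1 / (alpha * d + 1), where d is the Euclidean distance
    to the dot; overlapping contributions are merged with a per-pixel
    maximum, so every peak has value exactly 1 at its dot.
    """
    rr, cc = np.mgrid[0:shape[0], 0:shape[1]]
    target = np.zeros(shape, dtype=float)
    for r, c in dots:
        d = np.sqrt((rr - r) ** 2 + (cc - c) ** 2)
        target = np.maximum(target, 1.0 / (alpha * d + 1.0))
    return target
```

The peaks sit exactly on the annotated dots, so counting thresholded local maxima of a network trained against this target recovers the cell count.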

    Cell Motility Dynamics: A Novel Segmentation Algorithm to Quantify Multi-Cellular Bright Field Microscopy Images

    Confocal microscopy analysis of fluorescence and morphology is becoming the standard tool in cell biology and molecular imaging. Accurate quantification algorithms are required to enhance the understanding of different biological phenomena. We present a novel approach based on image segmentation of multi-cellular regions in bright field images, demonstrating enhanced quantitative analyses and better understanding of cell motility. We present MultiCellSeg, a segmentation algorithm to separate multi-cellular from background regions in bright field images, based on classification of local patches within an image: a cascade of Support Vector Machines (SVMs) is applied using basic image features. Post-processing includes additional classification and graph-cut segmentation to reclassify erroneous regions and refine the segmentation. This approach leads to a parameter-free and robust algorithm. Comparison to an alternative algorithm on wound healing assay images demonstrates its superiority. The proposed approach was used to evaluate common cell migration models such as the wound healing and scatter assays. It was applied to quantify the acceleration effect of hepatocyte growth factor/scatter factor (HGF/SF) on healing rate in a time-lapse confocal microscopy wound healing assay and demonstrated that the healing rate is linear in both treated and untreated cells, and that HGF/SF accelerates the healing rate by approximately two-fold. A novel fully automated, accurate, zero-parameter method to classify and score scatter-assay images was developed and demonstrated that multi-cellular texture is an excellent descriptor to measure HGF/SF-induced cell scattering. We show that exploitation of textural information from differential interference contrast (DIC) images at the multi-cellular level can prove beneficial for the analyses of wound healing and scatter assays.
The proposed approach is generic and can be used alone or alongside traditional fluorescence single-cell processing to perform objective, accurate quantitative analyses for various biological applications.
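Once the multi-cellular/background segmentation is available, quantifying a wound healing assay reduces to tracking the open-wound area over time and fitting a line, which is also how a claim like "the healing rate is linear" can be checked. This is a plausible downstream sketch, not the thesis's exact measurement procedure; the function name and the least-squares fit are assumptions.

```python
import numpy as np

def healing_rate(wound_masks, dt=1.0):
    """Estimate wound closure rate from a time series of binary wound masks.

    wound_masks: (T, H, W) booleans, True inside the open wound region
    (e.g. the complement of the multi-cellular segmentation). The per-frame
    wound area is fit with a least-squares line over time; the negated
    slope is the healing rate in pixels per time unit. Comparing the rate
    between treated and untreated wells quantifies e.g. an HGF/SF effect.
    """
    areas = wound_masks.reshape(len(wound_masks), -1).sum(axis=1).astype(float)
    t = np.arange(len(areas)) * dt
    slope, intercept = np.polyfit(t, areas, 1)
    return -slope, areas
```

For a linearly closing wound, the fitted slope recovers the closure speed exactly.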

    Deep learning for automatic microscopy image analysis

    Microscopy imaging techniques allow for the creation of detailed images of cells (or nuclei) and have been widely employed for cell studies in biological research and disease diagnosis in clinical practice. Microscopy image analysis (MIA), with tasks such as cell detection, cell classification, and cell counting, can assist with the quantitative analysis of cells and provide useful information for a cellular-level understanding of biological activities and pathology. Manual MIA is tedious, time-consuming, prone to subjective errors, and not feasible for high-throughput cell analysis. Thus, automatic MIA methods can facilitate many kinds of biological studies and clinical tasks. Conventional feature engineering-based methods use handcrafted features to address MIA problems, but their performance is generally limited, since handcrafted features can lack feature diversity as well as relevancy to specific tasks. In recent years, deep learning, especially convolutional neural networks (CNNs), has shown promising performance on MIA tasks, due to its strong ability to automatically learn task-specific features directly from images in an end-to-end manner. However, there still remains a large gap between deep learning algorithms shown to be successful on retrospective datasets and those translated to clinical and biological practice. The major challenges for the application of deep learning to practical MIA problems include: (1) MIA tasks themselves are challenging due to limited image quality, the ambiguous appearance of inter-class nuclei, occluded cells, low cell specificity, and imaging artifacts; (2) training a learning algorithm is very challenging due to the potential gradient vanishing issue and the limited availability of annotated images.
In this thesis, we investigate and propose deep learning methods for three challenging MIA tasks: cell counting, multi-class nuclei segmentation, and 3D phase-to-fluorescent image translation. We demonstrate the effectiveness of the proposed methods by extensively evaluating them on practical MIA problems, where they show superior performance compared to competitive state-of-the-art methods. Experimental results demonstrate that the proposed methods hold great promise for practical biomedical applications.