
    A convolutional autoencoder approach for mining features in cellular electron cryo-tomograms and weakly supervised coarse segmentation

    Cellular electron cryo-tomography enables the 3D visualization of cellular organization in the near-native state and at submolecular resolution. However, the contents of cellular tomograms are often complex, making it difficult to automatically isolate different in situ cellular components. In this paper, we propose a convolutional autoencoder-based unsupervised approach to provide a coarse grouping of 3D small subvolumes extracted from tomograms. We demonstrate that the autoencoder can be used for efficient and coarse characterization of features of macromolecular complexes and surfaces, such as membranes. In addition, the autoencoder can be used to detect non-cellular features related to sample preparation and data collection, such as carbon edges from the grid and tomogram boundaries. The autoencoder is also able to detect patterns that may indicate spatial interactions between cellular components. Furthermore, we demonstrate that our autoencoder can be used for weakly supervised semantic segmentation of cellular components, requiring a very small amount of manual annotation. (Accepted by the Journal of Structural Biology.)
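As an illustrative sketch (not the paper's exact architecture), a minimal 3D convolutional autoencoder of the kind described could look like the following in PyTorch; the latent code `z` is what would subsequently be clustered into coarse feature groups. All layer sizes and channel counts here are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class SubvolumeAutoencoder(nn.Module):
    """Minimal 3D convolutional autoencoder for small tomogram subvolumes."""
    def __init__(self, latent_channels=8):
        super().__init__()
        # Encoder: two strided 3D convolutions halve each spatial dim twice.
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, latent_channels, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder mirrors the encoder with transposed convolutions.
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(latent_channels, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

# Reconstruct a batch of 32^3 subvolumes; the latent code z is what
# would later be grouped (e.g. by k-means) into coarse feature classes.
model = SubvolumeAutoencoder()
x = torch.randn(4, 1, 32, 32, 32)
recon, z = model(x)
print(recon.shape, z.shape)
```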

    Learning Instance Segmentation from Sparse Supervision

    Instance segmentation is an important task in many domains of automatic image processing, such as self-driving cars, robotics and microscopy data analysis. Recently, deep learning-based algorithms have brought image segmentation close to human performance. However, most existing models rely on dense groundtruth labels for training, which are expensive, time-consuming and often require experienced annotators to perform the labeling. Besides the annotation burden, training complex high-capacity neural networks depends upon non-trivial expertise in the choice and tuning of hyperparameters, making the adoption of these models challenging for researchers in other fields. The aim of this work is twofold. The first is to make deep learning segmentation methods accessible to non-specialists. The second is to address the dense annotation problem by developing instance segmentation methods trainable with limited groundtruth data. In the first part of this thesis, I bring state-of-the-art instance segmentation methods closer to non-experts by developing PlantSeg: a pipeline for volumetric segmentation of light microscopy images of biological tissues into cells. PlantSeg comes with a large repository of pre-trained models and delivers highly accurate results on a variety of samples and image modalities. We exemplify its usefulness to answer biological questions in several collaborative research projects. In the second part, I tackle the dense annotation bottleneck by introducing SPOCO, an instance segmentation method which can be trained from just a few annotated objects. It demonstrates strong segmentation performance on challenging natural and biological benchmark datasets at a greatly reduced manual annotation cost and delivers state-of-the-art results on the CVPPP benchmark.
In summary, my contributions enable training of instance segmentation models with limited amounts of labeled data and make these methods more accessible to non-experts, speeding up the process of quantitative data analysis.
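The sparse-supervision idea can be illustrated with a toy sketch: given per-pixel embeddings (in practice produced by a trained network) and the mean embeddings of a handful of annotated objects, each pixel is assigned to the nearest anchor or to background. This is a deliberately simplified stand-in for SPOCO's actual training procedure; the function name and the distance threshold are hypothetical.

```python
import numpy as np

def instances_from_embeddings(embeddings, anchors, delta=0.5):
    """Assign each pixel to the nearest annotated anchor embedding,
    or to background (0) if no anchor lies within distance delta.

    embeddings: (H, W, D) per-pixel embedding map (e.g. a network output)
    anchors:    (K, D) mean embeddings of the few annotated objects
    """
    h, w, d = embeddings.shape
    flat = embeddings.reshape(-1, d)
    # Pairwise distances from every pixel to every anchor: (H*W, K)
    dist = np.linalg.norm(flat[:, None, :] - anchors[None, :, :], axis=-1)
    labels = dist.argmin(axis=1) + 1          # instance ids start at 1
    labels[dist.min(axis=1) > delta] = 0      # distant pixels -> background
    return labels.reshape(h, w)

# Toy example: two well-separated embedding clusters.
rng = np.random.default_rng(0)
emb = rng.normal(0, 0.05, size=(8, 8, 2))
emb[:4] += [1.0, 0.0]   # "object 1" occupies the top half
emb[4:] += [0.0, 1.0]   # "object 2" occupies the bottom half
anchors = np.array([[1.0, 0.0], [0.0, 1.0]])
seg = instances_from_embeddings(emb, anchors)
print(np.unique(seg))
```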

    Deep learning approach to Fourier ptychographic microscopy

    Convolutional neural networks (CNNs) have gained tremendous success in solving complex inverse problems. The aim of this work is to develop a novel CNN framework to reconstruct video sequences of dynamic live cells captured using a computational microscopy technique, Fourier ptychographic microscopy (FPM). The unique feature of FPM is its capability to reconstruct images with both wide field-of-view (FOV) and high resolution, i.e. a large space-bandwidth product (SBP), by taking a series of low-resolution intensity images. For live cell imaging, a single FPM frame contains thousands of cell samples with different morphological features. Our idea is to fully exploit the statistical information provided by these large spatial ensembles so as to make predictions in a sequential measurement, without using any additional temporal dataset. Specifically, we show that it is possible to reconstruct high-SBP dynamic cell videos by a CNN trained only on the first FPM dataset captured at the beginning of a time-series experiment. Our CNN approach reconstructs a 12800×10800 pixel phase image in only ∼25 seconds, a 50× speedup compared to the model-based FPM algorithm. In addition, the CNN further reduces the required number of images in each time frame by ∼6×. Overall, this significantly improves the imaging throughput by reducing both the acquisition and computational times. The proposed CNN is based on the conditional generative adversarial network (cGAN) framework. We further propose a mixed loss function that combines the standard image domain loss and a weighted Fourier domain loss, which leads to improved reconstruction of the high-frequency information. Additionally, we exploit transfer learning so that our pre-trained CNN can be further optimized to image other cell types.
Our technique demonstrates a promising deep learning approach to continuously monitor large live-cell populations over an extended time and gather useful spatial and temporal information with sub-cellular resolution. We would like to thank NVIDIA Corporation for supporting us with the GeForce Titan Xp through the GPU Grant Program. First author draft.
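The mixed loss described above can be sketched as follows; the balancing weight `alpha` and the use of a plain L1 penalty in both domains are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def mixed_loss(pred, target, alpha=0.1):
    """Image-domain L1 loss plus a weighted Fourier-domain L1 loss.

    The Fourier term penalizes discrepancies in the frequency spectrum,
    encouraging recovery of high-frequency detail; alpha balances the
    two terms (the value here is an arbitrary illustration).
    """
    img_loss = np.abs(pred - target).mean()
    f_pred = np.fft.fft2(pred)
    f_target = np.fft.fft2(target)
    fourier_loss = np.abs(f_pred - f_target).mean()
    return img_loss + alpha * fourier_loss

rng = np.random.default_rng(1)
target = rng.random((64, 64))
pred = target + rng.normal(0, 0.01, size=(64, 64))
print(mixed_loss(pred, target))
```

In a cGAN training setup this term would be added to the adversarial objective; here it is shown standalone on NumPy arrays for clarity.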

    Deep Learning-Based Particle Detection and Instance Segmentation for Microscopy Images

    Microscopy imaging techniques allow researchers to gain insight into complex, previously not understood processes. To ease the path to new findings, highly automated, versatile, accurate, user-friendly and reliable methods for particle detection and instance segmentation are required. In particular, these methods should be applicable to different imaging conditions and applications without requiring expert knowledge for adaptation. Therefore, this thesis presents a new deep learning-based method for particle detection and two deep learning-based methods for instance segmentation. The particle detection approach uses a particle-size-dependent upscaling step and a U-Net for the semantic segmentation of particle markers. After validating the upscaling on synthetically generated data, the particle detection software BeadNet is presented. Results on a dataset of fluorescent latex beads show that BeadNet detects particles more accurately than traditional methods. The two new instance segmentation methods use a U-Net with two decoders and are evaluated on four object types and three microscopy imaging techniques, using a single, non-balanced training dataset and a single set of post-processing parameters. The better of the two methods is then further validated in the Cell Tracking Challenge, achieving several top-3 rankings and, for six datasets, performance comparable to a human expert. In addition, the new instance segmentation software microbeSEG is presented. Like BeadNet, microbeSEG uses OMERO for data management and provides functionality for creating training data, training models, evaluating models, and applying them.
    The qualitative applications of BeadNet and microbeSEG show that both tools enable accurate analysis of many different kinds of microscopy image data. Finally, this dissertation gives an outlook on the need for further guidelines for image analysis competitions and method comparisons to guide future method development.
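A toy version of the marker-based post-processing sketched above (upscale, threshold, extract one detection per marker blob) might look like this; nearest-neighbour upscaling via `np.kron` stands in for the particle-size-dependent upscaling step, and in the real pipeline the marker probability map would come from a trained U-Net. The function name and parameter values are illustrative.

```python
import numpy as np
from scipy import ndimage

def detect_particles(marker_prob, upscale=2, threshold=0.5):
    """Toy marker-based particle detection post-processing:
    upscale the predicted marker map, threshold it, and return one
    centroid per connected marker blob, mapped back to original
    pixel coordinates."""
    # Nearest-neighbour upscaling (stand-in for the real upscaling step).
    up = np.kron(marker_prob, np.ones((upscale, upscale)))
    mask = up > threshold
    # One connected component per detected particle marker.
    labeled, n = ndimage.label(mask)
    centroids = ndimage.center_of_mass(mask, labeled, range(1, n + 1))
    return [(y / upscale, x / upscale) for y, x in centroids]

# Two synthetic marker blobs on an 8x8 probability map.
prob = np.zeros((8, 8))
prob[1:3, 1:3] = 0.9
prob[5:7, 5:7] = 0.9
print(detect_particles(prob))
```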