5 research outputs found

    Three Dimensional Fluorescence Microscopy Image Synthesis and Segmentation

    Advances in fluorescence microscopy enable acquisition of 3D image volumes with better image quality and deeper penetration into tissue. Segmentation is a required step to characterize and analyze biological structures in the images, and recent 3D segmentation using deep learning has achieved promising results. One issue is that deep learning techniques require a large set of ground truth data, which is impractical to annotate manually for large 3D microscopy volumes. This paper describes a 3D deep learning nuclei segmentation method that uses synthetic 3D volumes for training. A set of synthetic volumes and the corresponding ground truth are generated using spatially constrained cycle-consistent adversarial networks. Segmentation results demonstrate that our proposed method is capable of segmenting nuclei successfully across various data sets.
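    The central idea, training a 3D segmentation network entirely on synthetic volumes with known ground truth, can be illustrated with a short sketch. The snippet below is an illustrative assumption rather than the paper's implementation: it replaces the spatially constrained CycleGAN generator with a trivial ellipsoid renderer and fits a deliberately tiny 3D encoder-decoder in PyTorch, just to show how synthetic image/label pairs slot into a standard training loop.

```python
# Sketch only: synthetic 3D volumes with known ground truth used to train a
# small segmentation network. The ellipsoid renderer stands in for the paper's
# spatially constrained CycleGAN; all shapes and names are illustrative.
import numpy as np
import torch
import torch.nn as nn

def synthetic_volume(shape=(32, 64, 64), n_nuclei=6, rng=None):
    """Render random ellipsoidal 'nuclei' plus noise, returning (image, mask)."""
    rng = np.random.default_rng() if rng is None else rng
    mask = np.zeros(shape, dtype=np.float32)
    zz, yy, xx = np.mgrid[:shape[0], :shape[1], :shape[2]]
    for _ in range(n_nuclei):
        cz, cy, cx = [rng.integers(4, s - 4) for s in shape]
        rz, ry, rx = rng.uniform(3, 6, size=3)
        inside = ((zz - cz) / rz) ** 2 + ((yy - cy) / ry) ** 2 + ((xx - cx) / rx) ** 2 <= 1
        mask[inside] = 1.0
    image = (mask * 0.8 + rng.normal(0.1, 0.05, shape)).astype(np.float32)
    return image, mask

class Tiny3DUNet(nn.Module):
    """Two-level 3D encoder-decoder; far smaller than any production network."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
                                 nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(nn.ConvTranspose3d(16, 8, 2, stride=2), nn.ReLU(),
                                 nn.Conv3d(8, 1, 3, padding=1))
    def forward(self, x):
        return self.dec(self.enc(x))

net = Tiny3DUNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
for step in range(10):  # a handful of steps just to show the loop structure
    img, gt = synthetic_volume()
    x = torch.from_numpy(img)[None, None]   # (batch, channel, D, H, W)
    y = torch.from_numpy(gt)[None, None]
    opt.zero_grad()
    loss = loss_fn(net(x), y)
    loss.backward()
    opt.step()
```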

    DeepSynth: Three-dimensional nuclear segmentation of biological images using neural networks trained with synthetic data

    The scale of biological microscopy has increased dramatically over the past ten years, with the development of new modalities supporting collection of high-resolution fluorescence image volumes spanning hundreds of microns if not millimeters. The size and complexity of these volumes is such that quantitative analysis requires automated methods of image processing to identify and characterize individual cells. For many workflows, this process starts with segmentation of nuclei, which, due to their ubiquity, ease of labeling, and relatively simple structure, are appealing targets for automated detection of individual cells. However, in the context of large, three-dimensional image volumes, nuclei present many challenges to automated segmentation, such that conventional approaches are seldom effective or robust. Techniques based upon deep learning have shown great promise, but enthusiasm for applying these techniques is tempered by the need to generate training data, an arduous task, particularly in three dimensions. Here we present results of a new technique of nuclear segmentation using neural networks trained on synthetic data. Comparisons with results obtained using commonly used image processing packages demonstrate that DeepSynth provides the superior results associated with deep learning techniques without the need for manual annotation.
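    For context on the conventional approaches the abstract compares against, the sketch below shows a typical classical 3D pipeline: global thresholding followed by a distance-transform watershed, built with SciPy and scikit-image. The parameter values and the random stand-in volume are assumptions for illustration and are not taken from the DeepSynth paper.

```python
# Sketch of a conventional 3D nuclei segmentation baseline:
# smoothing -> Otsu threshold -> distance transform -> marker-based watershed.
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import gaussian, threshold_otsu
from skimage.segmentation import watershed
from skimage.feature import peak_local_max

def classical_nuclei_segmentation(volume):
    """Label nuclei in a 3D grayscale volume with thresholding + watershed."""
    smoothed = gaussian(volume, sigma=1.0)             # suppress noise
    binary = smoothed > threshold_otsu(smoothed)       # global foreground mask
    distance = ndi.distance_transform_edt(binary)      # distance to background
    labeled_fg, _ = ndi.label(binary)                  # connected foreground blobs
    # One marker per local maximum of the distance map (candidate nucleus centre).
    peaks = peak_local_max(distance, min_distance=5, labels=labeled_fg)
    markers = np.zeros(distance.shape, dtype=np.int32)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(-distance, markers, mask=binary)  # split touching objects

# Example on a random stand-in volume (replace with a real image stack).
labels = classical_nuclei_segmentation(np.random.rand(32, 128, 128))
```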

    Developing a User-Friendly and Modular Framework for Deep Learning Methods in 3D Bioimage Segmentation

    The emergence of deep learning has breathed new life into image analysis, especially segmentation, a challenging step required to quantify two-dimensional (2D) and three-dimensional (3D) objects. Despite the promise of deep learning, these methods are only slowly spreading in the biological field. In this PhD project, the 3D nucleus of the cell is used as the object of interest to understand how its shape variations contribute to the organisation of the genetic material. First, a literature survey showed that very few publicly available methods for 3D nucleus segmentation provide the minimum requirements for their reproducibility. These methods were subsequently benchmarked, and only one of them, called nnU-Net, surpassed the best specialized computer vision tool. Based on these observations, a new development philosophy was designed and, from it, Biom3d, a novel deep learning framework, emerged. Biom3d is a user-friendly tool successfully used by biologists involved in 3D nucleus segmentation and provides a new alternative for automatically and accurately computing nuclear shape parameters. Being well optimized, Biom3d also surpasses the performance of cutting-edge methods on a wide variety of biological and medical segmentation problems. Being modular, Biom3d is a sustainable framework compatible with the latest deep learning innovations, such as self-supervised methods. Self-supervision aims to reduce the heavy reliance of deep learning methods on manual annotations by pretraining models on large unannotated datasets to extract information first, before retraining them on annotated datasets. In this work, a self-supervised approach based on pretraining an entire U-Net model with the Triplet and ArcFace losses was developed and demonstrates significant improvements over supervised methods for 3D segmentation. The performance, modularity, and interdisciplinary nature of the tools developed during this project will serve as an innovation platform for a wide panel of users, ranging from biologist users to future deep learning developers.
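    The self-supervised pretraining step mentioned above can be sketched in a few lines. The snippet below is a hedged illustration only: it pretrains a small 3D encoder with PyTorch's TripletMarginLoss on anchor/positive/negative patches, whereas Biom3d pretrains a full U-Net and additionally uses an ArcFace loss; the encoder architecture, patch shapes, and augmentation here are assumptions.

```python
# Sketch only: triplet-loss pretraining on unannotated 3D patches.
# Positive = augmented view of the anchor, negative = an unrelated patch.
import torch
import torch.nn as nn

class PatchEncoder(nn.Module):
    """Small 3D CNN mapping a patch to a unit-norm embedding vector."""
    def __init__(self, dim=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten())
        self.proj = nn.Linear(32, dim)
    def forward(self, x):
        return nn.functional.normalize(self.proj(self.features(x)), dim=1)

def augment(patch):
    """Cheap augmentation: random flip plus Gaussian noise."""
    if torch.rand(1) < 0.5:
        patch = torch.flip(patch, dims=[-1])
    return patch + 0.05 * torch.randn_like(patch)

encoder = PatchEncoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
triplet = nn.TripletMarginLoss(margin=0.2)

for step in range(10):  # stand-in for iterating over an unannotated dataset
    anchor = torch.rand(8, 1, 16, 32, 32)    # batch of unlabeled 3D patches
    positive = augment(anchor)               # another view of the same patches
    negative = torch.rand(8, 1, 16, 32, 32)  # unrelated patches
    opt.zero_grad()
    loss = triplet(encoder(anchor), encoder(positive), encoder(negative))
    loss.backward()
    opt.step()
```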

    Proceedings - 29. Workshop Computational Intelligence, Dortmund, 28. - 29. November 2019

    These proceedings contain the contributions to the 29th Workshop Computational Intelligence. The focus topics are methods, applications, and tools for fuzzy systems, artificial neural networks, evolutionary algorithms, and data mining techniques, as well as method comparisons on industrial and benchmark problems.