7 research outputs found

    DeepBacs for multi-task bacterial image analysis using open-source deep learning approaches

    This work demonstrates and provides guidance on how to use a range of state-of-the-art artificial neural networks to analyse bacterial microscopy images using the recently developed ZeroCostDL4Mic platform. We generated a database of image datasets used to train networks for various image analysis tasks and present strategies for data acquisition and curation, as well as model training. We showcase different deep learning (DL) approaches for segmenting bright field and fluorescence images of different bacterial species, use object detection to classify different growth stages in time-lapse imaging data, and carry out DL-assisted phenotypic profiling of antibiotic-treated cells. To also demonstrate the ability of DL to enhance low-phototoxicity live-cell microscopy, we showcase how image denoising allows researchers to attain high-fidelity data in faster and longer imaging. Finally, artificial labelling of cell membranes and prediction of super-resolution images allow for accurate mapping of cell shape and intracellular targets. Our purposefully built database of training and testing data aids novice users’ training, enabling them to quickly explore how to analyse their data through DL. We hope this lays fertile ground for the efficient application of DL in microbiology and fosters the creation of tools for bacterial cell biology and antibiotic research.

    The Cell Tracking Challenge: 10 years of objective benchmarking

    The Cell Tracking Challenge is an ongoing benchmarking initiative that has become a reference in cell segmentation and tracking algorithm development. Here, we present a significant number of improvements introduced in the challenge since our 2017 report. These include the creation of a new segmentation-only benchmark, the enrichment of the dataset repository with new datasets that increase its diversity and complexity, and the creation of a silver-standard reference corpus based on the most competitive results, which will be of particular interest for data-hungry deep learning-based strategies. Furthermore, we present the up-to-date cell segmentation and tracking leaderboards, an in-depth analysis of the relationship between the performance of the state-of-the-art methods and the properties of the datasets and annotations, and two novel, insightful studies about the generalizability and the reusability of top-performing methods. These studies provide critical practical conclusions for both developers and users of traditional and machine learning-based cell segmentation and tracking algorithms.

    Gut Analysis Toolbox: Training data and 2D models for segmenting enteric neurons, neuronal subtypes and ganglia

    This upload is associated with the software Gut Analysis Toolbox (GAT; https://github.com/pr4deepr/GutAnalysisToolbox). It contains StarDist models for segmenting enteric neurons in 2D, enteric neuronal subtypes in 2D, and a UNet model for enteric ganglia in 2D in gut wholemount tissue. GAT is implemented in Fiji, but the models can be used in any software that supports StarDist and 2D UNet models. The files here also include Python notebooks (Google Colab), training and test data, and reports on model performance.

    The model files are located in the respective folders as zip files. The folders have also been zipped:
    - Neuron (Hu; StarDist model, https://github.com/stardist/stardist): main folder 2D_enteric_neuron_model_QA.zip; model file 2D_enteric_neuron_v4_1.zip
    - Neuronal subtype (StarDist model): main folder 2D_enteric_neuron_subtype_model_QA.zip; model file 2D_enteric_neuron_subtype_v4.zip
    - Enteric ganglia (2D UNet model; use in Fiji with deepImageJ, https://deepimagej.github.io/deepimagej/): main folder 2D_enteric_ganglia_model_QA.zip; model file 2D_Ganglia_RGB_v2.bioimage.io.model.zip (compatible with deepImageJ v3)

    For all models, the files included are:
    1. Model for segmenting cells or ganglia in 2D in Fiji (StarDist or 2D UNet).
    2. Training and test datasets used for training.
    3. Google Colab notebooks used for training and quality assurance (ZeroCostDL4Mic notebooks: https://github.com/HenriquesLab/ZeroCostDL4Mic/wiki).
    4. Quality assurance reports generated from the above notebooks.
    5. StarDist model exported for use in QuPath.

    The model files can be used within StarDist (https://github.com/stardist/stardist). They are intended to be used within Fiji or QuPath, but can be used in any software that supports the implementation of StarDist in 2D.

    Data:
    All images were collected from 4 different research labs and a public database (SPARC database: https://sparc.science/data?type=dataset) to account for variations in image acquisition, sample preparation and immunolabelling.

    For enteric neurons, the pan-neuronal marker Hu was used, with 2D wholemount images from mouse, rat and human tissue.

    For enteric neuronal subtypes, 2D images for nNOS, MOR, DOR, ChAT, Calretinin, Calbindin, Neurofilament, CGRP and SST from mouse tissue were used.

    25 images were used from the following entries in the SPARC database:
    - Howard, M. (2021). 3D imaging of enteric neurons in mouse (Version 1) [Data set]. SPARC Consortium. https://doi.org/10.26275/9FFG-482D
    - Graham, K. D., Huerta-Lopez, S., Sengupta, R., Shenoy, A., Schneider, S., Wright, C. M., Feldman, M., Furth, E., Lemke, A., Wilkins, B. J., Naji, A., Doolin, E., Howard, M., & Heuckeroth, R. (2020). Robust 3-dimensional visualization of human colon enteric nervous system without tissue sectioning (Version 1) [Data set]. SPARC Consortium. https://doi.org/10.26275/PZEK-91WX

    The images were acquired using a combination of different microscopes. Mouse tissue images were acquired using:
    - Leica TCS-SP8 confocal system (20x HC PL APO NA 1.33, 40x HC PL APO NA 1.3)
    - Leica TCS-SP8 lightning confocal system (20x HC PL APO NA 0.88)
    - Zeiss Axio Imager M2 (20x HC PL APO NA 0.3)
    - Zeiss Axio Imager Z1 (10x HC PL APO NA 0.45)

    Human tissue images were acquired using:
    - IX71 Olympus microscope (10x HC PL APO NA 0.3)

    For more information, visit: https://github.com/pr4deepr/GutAnalysisToolbox/wiki

    NOTE: The images for enteric neurons and neuronal subtypes have been rescaled to 0.568 µm/pixel for mouse and rat, and to 0.9 µm/pixel for human neurons. This is to ensure the neuronal cell bodies have similar pixel area across images. The area of cells in pixels can vary with image resolution, magnification of the objective used, animal species (larger animals have larger neurons) and potentially how the tissue is stretched during wholemount preparation.

    Average neuron area for the neuronal model: 701.2 ± 195.9 pixel² (mean ± SD, 6267 cells)
    Average neuron area for the neuronal subtype model: 880.9 ± 316 pixel² (mean ± SD, 924 cells)

    Software references:

    StarDist (https://github.com/stardist/stardist)
    Schmidt, U., Weigert, M., Broaddus, C., & Myers, G. (2018, September). Cell detection with star-convex polygons. In International Conference on Medical Image Computing and Computer-Assisted Intervention (pp. 265-273). Springer, Cham.

    deepImageJ (https://deepimagej.github.io/deepimagej/)
    Gómez-de-Mariscal, E., García-López-de-Haro, C., Ouyang, W., Donati, L., Lundberg, E., Unser, M., Muñoz-Barrutia, A. and Sage, D., 2021. DeepImageJ: A user-friendly environment to run deep learning models in ImageJ. Nature Methods, 18(10), pp.1192-1195.

    ZeroCostDL4Mic (https://github.com/HenriquesLab/ZeroCostDL4Mic/wiki)
    von Chamier, L., Laine, R.F., Jukkala, J., Spahn, C., Krentzel, D., Nehme, E., Lerche, M., Hernández-Pérez, S., Mattila, P.K., Karinou, E. and Holden, S., 2021. Democratising deep learning for microscopy with ZeroCostDL4Mic. Nature Communications, 12(1), pp.1-18.

    QuPath model exported using this library: https://github.com/onnx/tensorflow-onnx
    Instructions: https://github.com/qupath/qupath-extension-stardist#converting-a-tensorflow-model-for-use-with-openc
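
    The pixel-size rescaling described in the note above can be sketched in a few lines. This is an illustrative calculation only; the helper names below are hypothetical and not part of GAT. Given an image's acquired pixel size, it computes the zoom factor and resulting image dimensions needed to match the 0.568 µm/pixel target used for mouse and rat images.

```python
# Hypothetical helper: zoom factor needed to resample an image from its
# acquired pixel size (um/pixel) to a target pixel size, e.g. 0.568 um/pixel.
def rescale_factor(acquired_um_per_px, target_um_per_px):
    return acquired_um_per_px / target_um_per_px

# Hypothetical helper: image dimensions after applying the zoom factor.
def rescaled_shape(height, width, factor):
    return (round(height * factor), round(width * factor))

# Example: a wholemount image acquired at 0.284 um/pixel must be downsampled
# by a factor of 0.5 to reach the 0.568 um/pixel target.
factor = rescale_factor(0.284, 0.568)
print(factor)                              # 0.5
print(rescaled_shape(2048, 2048, factor))  # (1024, 1024)
```

    The actual resampling would then be done with any standard image library (for example scipy.ndimage.zoom with this factor); matching the pixel size, rather than the image size, is what keeps cell areas comparable across datasets.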

    EDAM-bioimaging: the ontology of bioimage informatics operations, topics, data, and formats (update 2020)

    EDAM is a well-established ontology of operations, topics, types of data, and data formats that are used in bioinformatics and its neighbouring fields [1,2]. EDAM-bioimaging is an extension of EDAM dedicated to bioimage analysis, bioimage informatics, and bioimaging. It is being developed in collaboration between the ELIXIR research infrastructure and the NEUBIAS and COMULIS COST Actions, in close contact with the Euro-BioImaging research infrastructure and the Global BioImaging network. EDAM-bioimaging contains an inter-related hierarchy of concepts including bioimage analysis and related operations, bioimaging topics and technologies, and bioimage data and their formats. The modelled concepts enable interoperable descriptions of software, publications, data, workflows, and training materials, fostering open science and reproducible bioimage analysis. New developments in EDAM-bioimaging at the time of publication [3] include, among others:
    - A concise but relatively comprehensive ontology of Machine learning, Artificial intelligence, and Clustering (to the level relevant in particular in bioimaging, biosciences, and scientific data analysis in general)
    - Added and refined topics and synonyms within Sample preparation and Tomography, and finalised coverage of imaging techniques (all of these to the high-level extent that influences choices of downstream analysis, i.e. the scope of EDAM)
    EDAM-bioimaging continues to be under active development, with a growing and diversifying community of contributors. It is used in BIII.eu, the registry of bioimage analysis tools, workflows, and training materials, and is emerging also in descriptions of Debian Med packages available in Debian and Bio-Linux, and of tools in bio.tools. Development of EDAM-bioimaging has been carried out in a successful open community manner, in a fruitful collaboration between numerous bioimaging experts and ontology developers. The last stable release at the time of poster publication is version alpha06 [3], and the live development version can be viewed and commented on in WebProtégé (free registration required). New contributors are warmly welcome!
    [1] Ison, J., Kalaš, M., Jonassen, I., Bolser, D., Uludag, M., McWilliam, H., Malone, J., Lopez, R., Pettifer, S. and Rice, P. (2013). EDAM: an ontology of bioinformatics operations, types of data and identifiers, topics and formats. Bioinformatics, 29(10): 1325-1332. DOI: 10.1093/bioinformatics/btt113. Open Access.
    [2] Kalaš, M., Ménager, H., Schwämmle, V., Ison, J. and EDAM Contributors (2017). EDAM – the ontology of bioinformatics operations, types of data, topics, and data formats (2017 update) [version 1; not peer reviewed]. F1000Research, 6(ISCB Comm J):1181 (Poster). DOI: 10.7490/f1000research.1114459.1. Open Access.
    [3] Matúš Kalaš, Laure Plantard, Martin Jones, Nataša Sladoje, Marie-Charlotte Domart, Matthia Karreman, Arrate Muñoz-Barrutia, Raf Van de Plas, Ivana Vrhovac Madunić, Dean Karaica, Laura Nicolás Sáenz, Estibaliz Gómez-de-Mariscal, Daniel Sage, Robert Haase, Joakim Lindblad, and all contributors to previous versions (2020). edamontology/edam-bioimaging: alpha06 (Version alpha06). Zenodo. DOI: 10.5281/zenodo.3695725. Open Access.