12 research outputs found

    Segmentation of roots in soil with U-Net

    We demonstrate the feasibility of a U-Net-based CNN system for segmenting images of roots in soil and for replacing the manual line-intersect method.
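    The abstract contrasts segmentation with the manual line-intersect method for measuring roots. As a hedged illustration only (not the paper's actual pipeline), the sketch below shows one common way a root-length estimate can be derived from a binary segmentation mask, by skeletonizing the mask and counting centreline pixels; the function name and the use of scikit-image are assumptions for this example.

```python
# Illustrative sketch: estimating root length from a binary segmentation mask
# by reducing the roots to 1-pixel-wide centrelines and counting those pixels.
import numpy as np
from skimage.morphology import skeletonize

def estimate_root_length_px(mask: np.ndarray) -> int:
    """Return a simple root-length estimate (in pixels) from a binary root mask."""
    skeleton = skeletonize(mask.astype(bool))  # 1-px-wide centrelines of the roots
    return int(skeleton.sum())                 # total centreline pixels ~ root length

# Toy example: a synthetic mask with one 100-pixel-long horizontal "root".
mask = np.zeros((64, 128), dtype=np.uint8)
mask[32, 10:110] = 1
print(estimate_root_length_px(mask))  # roughly 100
```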

    RootPainter: Deep Learning Segmentation of Biological Images with Corrective Annotation

    We present RootPainter, a GUI-based software tool for the rapid training of deep neural networks for use in biological image analysis. RootPainter facilitates both fully automatic and semi-automatic image segmentation. We investigate the effectiveness of RootPainter using three plant image datasets, evaluating its potential for root length extraction from chicory roots in soil, biopore counting, and root nodule counting from scanned roots. We also use RootPainter to compare dense annotations to corrective ones, which are added during training based on the weaknesses of the current model.
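    The abstract's key idea is corrective annotation: the annotator marks only the places where the current model is wrong. A minimal sketch of how such sparse corrections could drive training is given below, assuming a PyTorch-style setup in which only annotated pixels contribute to the loss; this is a plausible illustration of the general idea, not RootPainter's exact implementation, and all names are hypothetical.

```python
# Hedged sketch: a loss for sparse corrective annotations, where only pixels the
# annotator explicitly marked (foreground or background corrections) are scored
# and all undefined pixels are ignored.
import torch
import torch.nn.functional as F

def corrective_loss(logits: torch.Tensor,
                    fg_annot: torch.Tensor,
                    bg_annot: torch.Tensor) -> torch.Tensor:
    """logits: (N, 2, H, W) network output; fg_annot / bg_annot: (N, H, W) bool masks."""
    defined = fg_annot | bg_annot                              # pixels the annotator corrected
    target = fg_annot.long()                                   # 1 where foreground was marked
    per_pixel = F.cross_entropy(logits, target, reduction="none")  # (N, H, W)
    if defined.sum() == 0:                                     # nothing annotated in this batch
        return logits.sum() * 0.0
    return per_pixel[defined].mean()                           # average over annotated pixels only
```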

    RootPainter3D: Interactive-machine-learning enables rapid and accurate contouring for radiotherapy

    Organ-at-risk contouring is still a bottleneck in radiotherapy, with many deep learning methods falling short of promised results when evaluated on clinical data. We investigate the accuracy and time savings resulting from the use of an interactive machine-learning method for an organ-at-risk contouring task. We compare the method to the Eclipse contouring software and find strong agreement with manual delineations, with a Dice score of 0.95. The annotations created using corrective annotation also take less time to create as more images are annotated, resulting in substantial time savings compared to manual methods: hearts take 2 minutes and 2 seconds to delineate on average after 923 images have been delineated, compared to 7 minutes and 1 second when delineating manually. Our experiment demonstrates that interactive machine learning with corrective annotation provides a fast and accessible way for non-computer-scientists to train deep-learning models to segment their own structures of interest as part of routine clinical workflows. Source code is available at https://github.com/Abe404/RootPainter3D
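    Agreement with manual delineations is reported here as a Dice score of 0.95. For readers unfamiliar with the metric, the sketch below shows how a Dice coefficient is computed between a predicted and a manual binary mask; the masks and function name are illustrative, not taken from the paper's code.

```python
# Minimal illustration of the Dice score used to quantify agreement between a
# predicted contour and a manual delineation (both as binary masks).
import numpy as np

def dice_score(pred: np.ndarray, manual: np.ndarray, eps: float = 1e-8) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks A and B."""
    pred = pred.astype(bool)
    manual = manual.astype(bool)
    intersection = np.logical_and(pred, manual).sum()
    return (2.0 * intersection) / (pred.sum() + manual.sum() + eps)

# Two overlapping synthetic masks as a toy example.
a = np.zeros((100, 100), dtype=np.uint8); a[20:60, 20:60] = 1
b = np.zeros((100, 100), dtype=np.uint8); b[25:65, 25:65] = 1
print(round(dice_score(a, b), 3))  # ≈ 0.766 for this toy pair
```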

    Automatic Asbestos Control Using Deep Learning Based Computer Vision System

    The paper discusses the results of the research and development of an innovative deep learning-based computer vision system for fully automatic estimation of asbestos content (productivity) in rock chunk (stone) veins in an open pit, within a time comparable to the work of specialists (about 10 min per open-pit processing location). The system is based on instance and semantic segmentation with artificial neural networks. A Mask R-CNN-based network architecture is applied to search for asbestos-containing rock chunks in images of an open pit. A U-Net-based network architecture is applied to segment asbestos veins in images of the selected rock chunks. The designed system automatically searches for and photographs asbestos rocks in an open pit in the near-infrared (NIR) range and processes the obtained images. The output of the system is an estimate of the average asbestos content (productivity) for each controlled open pit. Estimating asbestos content as the graduated average ratio of the vein area to the selected rock chunk area, both determined by the trained neural networks, is validated. Training, validation, and test datasets were collected for both neural network training tasks. The designed system demonstrates an error of about 0.4% under different weather conditions in an open pit when the asbestos content is about 1.5–4%. The obtained accuracy is sufficient to use the system as a geological service tool in place of the currently applied visual estimations. © 2021 by the authors. Licensee MDPI, Basel, Switzerland
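    The core quantity here is the ratio of segmented vein area to segmented rock-chunk area. The sketch below illustrates that ratio computed from pixel counts of the two masks and then averaged over the chunks seen at one processing location; it deliberately uses a plain mean rather than the paper's "graduated" averaging, and all names are illustrative assumptions.

```python
# Simplified sketch of the abstract's asbestos-content estimate: per-chunk ratio
# of asbestos-vein pixels (U-Net mask) to rock-chunk pixels (Mask R-CNN instance
# mask), averaged over all chunks from one open-pit processing place.
import numpy as np

def chunk_asbestos_ratio(vein_mask: np.ndarray, chunk_mask: np.ndarray) -> float:
    """Ratio of vein pixels to chunk pixels for one detected rock chunk."""
    vein_area = np.count_nonzero(vein_mask)
    chunk_area = np.count_nonzero(chunk_mask)
    return vein_area / chunk_area if chunk_area else 0.0

def average_content(pairs: list[tuple[np.ndarray, np.ndarray]]) -> float:
    """Plain mean of per-chunk ratios (the paper uses a graduated average)."""
    ratios = [chunk_asbestos_ratio(v, c) for v, c in pairs]
    return float(np.mean(ratios)) if ratios else 0.0
```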

    Uncovering natural variation in root system architecture and growth dynamics using a robotics-assisted phenomics platform

    The plant kingdom contains a stunning array of complex morphologies easily observed above-ground, but more challenging to visualize below-ground. Understanding the magnitude of diversity in root distribution within the soil, termed root system architecture (RSA), is fundamental in determining how this trait contributes to species adaptation in local environments. Roots are the interface between the soil environment and the shoot system and therefore play a key role in anchorage, resource uptake, and stress resilience. Previously, we presented the GLO-Roots (Growth and Luminescence Observatory for Roots) system to study the RSA of soil-grown Arabidopsis thaliana plants from germination to maturity (Rellán-Álvarez et al., 2015). In this study, we present the automation of GLO-Roots using robotics and the development of image analysis pipelines to examine the dynamic regulation of RSA and the broader natural variation of RSA in Arabidopsis over time. These datasets describe the developmental dynamics of two independent panels of accessions and reveal highly complex and polygenic RSA traits that show significant correlation with climate variables of the accessions' respective origins.

    New Interactive Machine Learning Tool for Marine Image Analysis

    We would like to thank the Lofoten Vesterålen Ocean Observatory, and specifically Geir Pedersen, for supplying much of the data used in this study. We would also like to express gratitude for the insightful comments made during the review of this manuscript and the efforts of the editorial team during its publication. Peer reviewed.

    Geometric Algorithms for Modeling Plant Roots from Images

    Roots, considered as the “hidden half of the plant”, are essential to a plant’s health and productivity. Understanding root architecture has the potential to enhance efforts towards improving crop yield. In this dissertation we develop geometric approaches to non-destructively characterize the full architecture of the root system from 3D imaging while making computational advances in topological optimization. First, we develop a global optimization algorithm to remove topological noise, with applications in both root imaging and computer graphics. Second, we use our topology simplification algorithm, other methods from computer graphics, and customized algorithms to develop a high-throughput pipeline for computing hierarchy and fine-grained architectural traits from 3D imaging of maize roots. Finally, we develop an algorithm for consistently simplifying the topology of nested shapes, with a motivating application in temporal root system analysis. Along the way, we contribute to the computer graphics community a pair of topological simplification algorithms, both for repairing a single 3D shape and for repairing a sequence of nested shapes.
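    The dissertation's topological-noise removal is a global optimization over the shape and is not reproduced here. As a toy illustration only of the kind of 3D binary data such algorithms clean up, the sketch below removes small disconnected components from a voxel volume, which is one very simple form of topological cleanup; function names and thresholds are assumptions.

```python
# Toy illustration: removing small disconnected components from a 3D binary
# volume. This is a much cruder operation than the dissertation's global
# topology-simplification algorithm; it only shows the input/output data shape.
import numpy as np
from scipy import ndimage

def remove_small_components(volume: np.ndarray, min_voxels: int = 50) -> np.ndarray:
    """Keep only connected components with at least `min_voxels` voxels."""
    labels, n = ndimage.label(volume)                     # label 3D connected components
    sizes = ndimage.sum(volume, labels, range(1, n + 1))  # voxel count per component
    keep_ids = np.where(sizes >= min_voxels)[0] + 1       # component labels to keep
    return volume * np.isin(labels, keep_ids)
```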

    Automatic Bone Structure Segmentation of Under-Sampled CT/FLT-PET Volumes for HSCT Patients

    In this thesis I present a pipeline for the instance segmentation of vertebral bodies from joint CT/FLT-PET image volumes that have been purposefully under-sampled along the axial direction to limit radiation exposure to vulnerable HSCT patients. The under-sampled image data makes the segmentation of individual vertebral bodies a challenging task, as the boundaries between the vertebrae in the thoracic and cervical spine regions are not well resolved in the CT modality, escaping detection by both humans and algorithms. I train a multi-view, multi-class U-Net to perform semantic segmentation of the vertebral body, sternum, and pelvis object classes. These bone structures contain marrow cavities that, when viewed in the FLT-PET modality, allow us to investigate hematopoietic cellular proliferation in HSCT patients non-invasively. The proposed convnet model achieves a Dice score of 0.9245 for the vertebral body object class and shows qualitatively similar performance on the pelvis and sternum object classes. The final instance segmentation is realized by combining the initial vertebral body semantic segmentation with the associated FLT-PET image data, where the vertebral boundaries become well resolved by the 28th day post-transplant. The vertebral boundary detection algorithm is a hand-crafted spatial filter that enforces vertebra span as an anatomical prior, and it performs similarly to a human for the detection of all but one vertebral boundary in the entirety of the HSCT patient dataset. In addition to the segmentation model, I propose, design, and test a “drop-in” replacement up-sampling module that allows state-of-the-art super-resolution convnets to be used for purely asymmetric upscaling tasks (tasks where only one image dimension is scaled while the other is held to unity). While the asymmetric SR convnet I develop falls short of the initial goal, where it was to be used to enhance the unresolved vertebral boundaries of the under-sampled CT image data, it does objectively upscale medical image data more accurately than naïve interpolation methods and may be useful as a pre-processing step for other medical imaging tasks involving anisotropic pixels or voxels.
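    The thesis measures its asymmetric super-resolution convnet against naïve interpolation, i.e., upscaling only the under-sampled axial dimension while leaving the in-plane dimensions untouched. A hedged sketch of that interpolation baseline is shown below; the axis order, scale factor, and use of scipy are assumptions for illustration, not details taken from the thesis.

```python
# Sketch of a naïve-interpolation baseline for purely asymmetric upscaling:
# the axial (z) dimension of an under-sampled CT volume is scaled while the
# in-plane dimensions are held at unity.
import numpy as np
from scipy.ndimage import zoom

def upscale_axial(volume: np.ndarray, factor: int = 4) -> np.ndarray:
    """Asymmetric upscaling: (Z, Y, X) -> (factor*Z, Y, X) via cubic spline interpolation."""
    return zoom(volume, (factor, 1, 1), order=3)  # interpolate along z only

ct = np.random.rand(32, 256, 256).astype(np.float32)  # toy under-sampled volume
print(upscale_axial(ct).shape)  # (128, 256, 256)
```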