
    Automatic quantification of mammary glands on non-contrast X-ray CT by using a novel segmentation approach

    This paper describes a new automatic segmentation method for quantifying the volume and density of mammary gland regions on non-contrast CT images. The proposed method uses two processing steps, (1) breast region localization and (2) breast region decomposition, to accomplish robust mammary gland segmentation on CT images. The first step detects the minimum bounding boxes of the left and right breast regions based on a machine-learning approach that adapts to the large variance in breast appearance across age groups. The second step divides the breast region on each side into mammary gland, fat tissue, and other regions using a spectral clustering technique that focuses on the intra-region similarities of each patient and aims to overcome the image variance caused by differing scan parameters. The whole approach is designed as a simple structure with a minimal number of parameters to gain robustness and computational efficiency in a real clinical setting. We applied this approach to a dataset of 300 CT scans, sampled in equal numbers from women aged 30 to 50 years. Compared with human annotations, the proposed approach successfully measures the volume and quantifies the distribution of CT numbers of mammary gland regions, and the experimental results demonstrate that it achieves results consistent with manual annotations. Through the proposed framework, an efficient, effective, and low-cost clinical screening scheme could easily be implemented to predict breast cancer risk, especially from already acquired scans.
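    The decomposition step hinges on clustering voxel intensities within each localized breast box rather than applying fixed Hounsfield-unit thresholds, which is what makes it tolerant of scan-parameter variation. As a rough illustration of that idea (not the paper's exact formulation), the following Python sketch clusters sampled CT values with scikit-learn's SpectralClustering; the affinity settings, sample size, and three-region split are assumptions made for the example.

        import numpy as np
        from sklearn.cluster import SpectralClustering

        def decompose_breast_roi(ct_values, n_regions=3, n_samples=2000, seed=0):
            # ct_values: 1-D array of Hounsfield units inside a localized breast box.
            rng = np.random.default_rng(seed)
            idx = rng.choice(ct_values.size, size=min(n_samples, ct_values.size),
                             replace=False)
            sample = ct_values[idx].reshape(-1, 1)
            # Group voxels by intra-patient intensity similarity into candidate
            # mammary-gland / fat / other clusters (mapping cluster ids to tissue
            # types would still need a post-hoc rule, e.g. mean cluster intensity).
            sc = SpectralClustering(n_clusters=n_regions, affinity="nearest_neighbors",
                                    n_neighbors=10, random_state=seed)
            return idx, sc.fit_predict(sample)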

    Vertebral Compression Fracture Detection With Novel 3D Localisation

    Vertebral compression fractures (VCF) often go undetected in radiology images, potentially leading to secondary fractures, permanent disability, or even death. The objective of this thesis is to develop a fully automated method for detecting VCF in incidental CT images acquired for other purposes, thereby facilitating better follow-up and treatment. The proposed approach is based on 3D localisation in CT images, followed by VCF detection in the localised regions. The 3D localisation algorithm combines deep reinforcement learning (DRL) with imitation learning (IL) to extract thoracic/lumbar spine regions from chest/abdomen CT scans. The algorithm generates six bounding boxes as regions of interest (ROI) using three different CNN models, with an average Jaccard Index (JI)/Dice Coefficient (DC) of 74.21%/84.71%. The extracted ROI were then divided into slices, and the slices into patches, to train four convolutional neural network (CNN) models for VCF detection at the patch level. The predictions from the patches were aggregated at bounding-box level, and majority voting was performed to decide on the presence/absence of VCF for a patient. The best-performing model was a six-layered CNN, which together with majority voting achieved a threefold cross-validation accuracy/F1 score of 85.95%/85.94% on 308 chest scans. The same model also achieved a fivefold cross-validation accuracy/F1 score of 86.67%/87.04% on 168 abdomen scans. Because of the success of the 3D localisation algorithm, it was also trained on other abdominal organs, namely the spleen and the left and right kidneys, with promising results. The 3D localisation algorithm was further enhanced to work with fused bounding boxes and in a semi-supervised mode to reduce the annotation burden on radiologists. Experiments using three different proportions of labelled and unlabelled data achieved fairly good performance, although not as good as the fully supervised equivalents. Finally, VCF detection in a weakly supervised multiple instance learning (MIL) setting was performed to further reduce radiologists' annotation time, again with majority voting over the six bounding boxes. The best-performing model was the six-layered CNN, which achieved a threefold cross-validation accuracy/F1 score of 81.05%/80.74% on 308 thoracic scans, and a fivefold cross-validation accuracy/F1 score of 85.45%/86.61% on 168 abdomen scans. Overall, the results are comparable to the state of the art, which used an order of magnitude more scans.
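    The two-stage voting scheme can be stated precisely: patch predictions are first reduced to a per-box decision, and the six box decisions are then reduced to a patient-level decision. The abstract does not spell out the exact rule, so the sketch below assumes a simple majority at both levels; the function name and threshold are illustrative.

        import numpy as np

        def patient_level_vote(patch_probs_per_box, threshold=0.5):
            # patch_probs_per_box: one 1-D array of patch-level fracture
            # probabilities per bounding box (six boxes in the thesis setup).
            box_votes = []
            for probs in patch_probs_per_box:
                probs = np.asarray(probs)
                # A box is positive if most of its patches are positive.
                box_votes.append((probs >= threshold).sum() > probs.size / 2)
            # Majority across boxes gives the patient-level label.
            return sum(box_votes) > len(box_votes) / 2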

    Deep Learning in Medical Image Analysis

    The accelerating power of deep learning in diagnosing disease will empower physicians and speed up decision making in clinical environments. The application of modern medical instruments and the digitalization of medical care have generated enormous amounts of medical images in recent years. In this big-data arena, new deep learning methods and computational models for efficient processing, analysis, and modeling of the generated data are crucially important for clinical applications and for understanding the underlying biological processes. This book presents and highlights novel algorithms, architectures, techniques, and applications of deep learning for medical image analysis.

    Frameworks in medical image analysis with deep neural networks

    In recent years, deep neural network based medical image analysis has become quite powerful, achieving performance comparable to that of human experts. Consequently, the integration of these tools into the clinical routine as clinical decision support systems is highly desirable. The benefits of automatic image analysis for clinicians are substantial, ranging from improved diagnostic and treatment quality to increased time-efficiency through automated structured reporting. However, implementations in the literature reveal a significant lack of standardization in pipeline building, resulting in low reproducibility, high complexity due to the extensive knowledge required to build state-of-the-art pipelines, and difficulties in applying them to clinical research. The main objective of this work is the standardization of pipeline building in deep neural network based medical image segmentation and classification. To this end, the Python frameworks MIScnn for medical image segmentation and AUCMEDI for medical image classification are proposed, which simplify the implementation process through intuitive building blocks and eliminate the need for time-consuming and error-prone implementation of common components from scratch. The proposed frameworks include state-of-the-art methodology, follow sound open-source principles such as extensive documentation and stability, offer rapid and simple application for deep learning experts and clinical researchers alike, and deliver cutting-edge performance competitive with the strongest implementations in the literature. As secondary objectives, this work presents more than a dozen in-house studies and discusses various external studies that utilize the proposed frameworks, in order to demonstrate the capabilities of standardized medical image analysis. The presented studies show excellent predictive capabilities in applications ranging from COVID-19 detection in computed tomography scans to integration into a clinical study workflow for Gleason grading of prostate cancer microscopy sections, and they advance the state of the art in medical image analysis by simplifying experimental setups for research. Furthermore, studies on increasing the reproducibility of performance assessment in medical image segmentation are presented, including an open-source metric library for standardized evaluation and a community guideline on proper metric usage. The contributions of this work improve the knowledge representation of the field, enable rapid and high-performing applications, facilitate further research, and strengthen the reproducibility of future studies.
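    To make the building-block idea concrete, the following outline follows the general shape of MIScnn's published quick-start for NIfTI-format data. The class names (Data_IO, Preprocessor, Neural_Network, NIFTI_interface) are recalled from the project's documentation, while the file pattern, data path, patch shape, and sample split are placeholders; verify everything against the installed version before relying on it.

        import miscnn
        from miscnn.data_loading.interfaces import NIFTI_interface

        # Data I/O block: single-channel CT volumes, three label classes (assumed).
        interface = NIFTI_interface(pattern="case_00[0-9]*", channels=1, classes=3)
        data_io = miscnn.Data_IO(interface, "/path/to/dataset")  # placeholder path

        # Preprocessor block: batching and patch-wise analysis are configuration,
        # not hand-written code.
        pp = miscnn.Preprocessor(data_io, batch_size=2, analysis="patchwise-crop",
                                 patch_shape=(80, 160, 160))

        # Neural network block with the framework's default U-Net-style model.
        model = miscnn.Neural_Network(preprocessor=pp)

        # Train on one split of the samples and predict on the remainder.
        samples = sorted(data_io.get_indiceslist())
        model.train(samples[:80], epochs=50)
        predictions = model.predict(samples[80:])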

    The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS)

    In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS), organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients, manually annotated by up to four raters, and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74%-85%), illustrating the difficulty of this task. We found that different algorithms worked best for different sub-regions (reaching performance comparable to human inter-rater variability), but that no single algorithm ranked among the top performers for all sub-regions simultaneously. Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvement. The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource.
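    The fusion step is easy to reproduce in outline: given several integer label maps over the same volume, count the per-voxel votes for each class and keep the winner. The sketch below is a minimal flat majority vote in NumPy; note that the benchmark's actual fusion was hierarchical over the nested tumor sub-regions, which this simplification does not capture.

        import numpy as np

        def majority_vote_fusion(segmentations):
            # Fuse label volumes of identical shape by per-voxel majority vote.
            labels = np.stack(segmentations, axis=0)   # (n_algorithms, ...)
            n_classes = int(labels.max()) + 1
            # For each class, count how many algorithms chose it at each voxel.
            votes = np.stack([(labels == c).sum(axis=0) for c in range(n_classes)])
            return votes.argmax(axis=0).astype(labels.dtype)

        # Example with three 2x2 "segmentations" over labels {0, 1}:
        fused = majority_vote_fusion([np.array([[0, 1], [1, 1]]),
                                      np.array([[0, 1], [0, 1]]),
                                      np.array([[1, 1], [0, 0]])])
        # fused == [[0, 1], [0, 1]]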

    On Improving Generalization of CNN-Based Image Classification with Delineation Maps Using the CORF Push-Pull Inhibition Operator

    Deployed image classification pipelines typically depend on images captured in real-world environments, which means the images may be affected by various sources of perturbation (e.g. sensor noise in low-light conditions). The main challenge is that image quality directly impacts the reliability and consistency of classification, a problem that has attracted wide interest within the computer vision community. We propose a transformation step that aims to enhance the generalization ability of CNN models in the presence of unseen noise in the test set. Concretely, the delineation maps of given images are computed using the CORF push-pull inhibition operator; this operation transforms an input image into a representation that is more robust to noise before it is processed by a CNN. We evaluated our approach on the Fashion MNIST data set with an AlexNet model. The proposed CORF-augmented pipeline achieved results on noise-free images comparable to those of a conventional AlexNet classification model without CORF delineation maps, but it consistently achieved significantly superior performance on test images perturbed with different levels of Gaussian and uniform noise.
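    The CORF operator models oriented contour responses with push-pull inhibition, and reproducing it faithfully is beyond a short snippet. The underlying noise-suppression principle, an excitatory "push" response inhibited by an opposite-polarity "pull" response, can however be sketched with plain Gaussian-derivative filters. Everything below (the Laplacian-of-Gaussian filter choice, the inhibition weight, the larger scale of the pull channel) is an illustrative assumption, not the published operator.

        import numpy as np
        from scipy import ndimage

        def push_pull_edge_map(image, sigma=2.0, inhibition=0.8, pull_scale=2.0):
            img = image.astype(float)
            # Push: positive part of a Laplacian-of-Gaussian edge response.
            push = np.maximum(ndimage.gaussian_laplace(img, sigma=sigma), 0.0)
            # Pull: positive part of the opposite-polarity response at a larger
            # scale; broadband noise drives both polarities while genuine
            # contours drive mostly one, so the pull flags noise-heavy regions.
            pull = np.maximum(-ndimage.gaussian_laplace(img, sigma=sigma * pull_scale), 0.0)
            # Inhibit: push minus a weighted pull, clipped at zero, yielding a
            # delineation-like map that attenuates noise-driven responses.
            return np.maximum(push - inhibition * pull, 0.0)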