
    Deep learning in structural and functional lung image analysis.

    The recent resurgence of deep learning (DL) has dramatically influenced the medical imaging field. Medical image analysis applications have been at the forefront of DL research efforts applied to multiple diseases and organs, including those of the lungs. The aims of this review are twofold: (i) to briefly overview DL theory as it relates to lung image analysis; (ii) to systematically review the DL research literature relating to the lung image analysis applications of segmentation, reconstruction, registration and synthesis. The review was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. 479 studies were initially identified from the literature search, with 82 studies meeting the eligibility criteria. Segmentation was the most common lung image analysis DL application (65.9% of papers reviewed). DL has shown impressive results when applied to segmentation of the whole lung and other pulmonary structures. DL has also shown great potential for applications in image registration, reconstruction and synthesis. However, the majority of published studies have been limited to structural lung imaging, with only 12.9% of reviewed studies employing functional lung imaging modalities, thus highlighting significant opportunities for further research in this field. Although the field of DL in lung image analysis is rapidly expanding, concerns over inconsistent validation and evaluation strategies, intersite generalisability, transparency of methodological detail and interpretability need to be addressed before widespread adoption in clinical lung imaging workflows.

    The State of Applying Artificial Intelligence to Tissue Imaging for Cancer Research and Early Detection

    Artificial intelligence represents a new frontier in human medicine that could save more lives and reduce costs, thereby increasing accessibility. As a consequence, the rate of advancement of AI in cancer medical imaging, and tissue pathology in particular, has exploded, opening it to ethical and technical questions that could impede its adoption into existing systems. In order to chart the path of AI in its application to cancer tissue imaging, we review current work and identify how it can improve cancer pathology diagnostics and research. In this review, we identify five core tasks for which models are developed: regression, classification, segmentation, generation, and compression. We address the benefits and challenges that such methods face, and how they can be adapted for use in cancer prevention and treatment. The studies examined in this paper represent the beginning of this field, and future experiments will build on the foundations that we highlight.

    Investigating Ensembles of Single-class Classifiers for Multi-class Classification

    Traditional methods of multi-class classification in machine learning involve the use of a monolithic feature extractor and classifier head trained on data from all of the classes at once. These architectures (especially the classifier head) are dependent on the number and types of classes, and are therefore rigid against changes to the class set. For best performance, one must retrain networks with these architectures from scratch, incurring a large cost in training time. These networks can also be biased towards classes with a large imbalance in training data relative to other classes. Instead, ensembles of so-called "single-class" classifiers can be used for multi-class classification by training an individual network for each class. We show that these ensembles of single-class classifiers are more flexible to changes to the class set than traditional models, and can be quickly retrained to accommodate small changes to the class set, such as adding, removing, splitting, or fusing classes. We also show that these ensembles are less biased towards classes with large imbalances in their training data than traditional models. In addition, we introduce a new, more powerful single-class classification architecture. These models are trained and tested on a plant disease dataset with high variance in the number of classes and the amount of data in each class, as well as on an Alzheimer's dataset with little data and a large imbalance in data between classes.
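    The ensemble idea the abstract describes can be illustrated with a toy sketch: one independently trained model per class, with the ensemble predicting whichever member scores highest. The centroid-based scorer below is a deliberately simple stand-in for the paper's per-class networks (the class structure and names here are illustrative, not the authors' implementation); the point is that classes can be added or removed without retraining the other members.

```python
import numpy as np

class OneClassScorer:
    """Toy stand-in for a per-class network: scores a sample by its
    (negative) distance to the centroid of that class's training data."""
    def fit(self, X):
        self.centroid = X.mean(axis=0)
        return self

    def score(self, X):
        return -np.linalg.norm(X - self.centroid, axis=1)

class SingleClassEnsemble:
    """Multi-class prediction via one independently trained model per class."""
    def __init__(self):
        self.models = {}

    def add_class(self, label, X):
        # Train only on this class's data; other members are untouched.
        self.models[label] = OneClassScorer().fit(X)

    def remove_class(self, label):
        # Dropping a class requires no retraining at all.
        del self.models[label]

    def predict(self, X):
        labels = list(self.models)
        scores = np.stack([self.models[l].score(X) for l in labels], axis=1)
        return [labels[i] for i in scores.argmax(axis=1)]

rng = np.random.default_rng(0)
ens = SingleClassEnsemble()
ens.add_class("a", rng.normal(0.0, 0.1, (50, 2)))
ens.add_class("b", rng.normal(5.0, 0.1, (50, 2)))
print(ens.predict(np.array([[0.1, 0.0], [4.9, 5.1]])))  # → ['a', 'b']
```

    Splitting or fusing classes, as the abstract mentions, reduces to removing one member and fitting one or two new ones on the regrouped data.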

    Adaptive Feature Medical Segmentation Network: an adaptable deep learning paradigm for high-performance 3D brain lesion segmentation in medical imaging

    Introduction: In neurological diagnostics, accurate detection and segmentation of brain lesions is crucial. Identifying these lesions is challenging due to their complex morphology, especially when using traditional methods, which are either computationally demanding with only marginal gains or sacrifice fine detail for computational efficiency. Balancing performance and precision in compute-intensive medical imaging therefore remains an active research topic. Methods: We introduce a novel encoder-decoder network architecture named the Adaptive Feature Medical Segmentation Network (AFMS-Net) with two encoder variants: the Single Adaptive Encoder Block (SAEB) and the Dual Adaptive Encoder Block (DAEB). A squeeze-and-excite mechanism is employed in the SAEB to identify significant features while disregarding peripheral detail. This approach is best suited for scenarios requiring quick and efficient segmentation, with an emphasis on identifying key lesion areas. In contrast, the DAEB utilizes an advanced channel-spatial attention strategy for fine-grained delineation and multi-class classification. Additionally, both architectures incorporate a Segmentation Path (SegPath) module between the encoder and decoder, refining segmentation, enhancing feature extraction, and improving model performance and stability. Results: AFMS-Net demonstrates exceptional performance across several notable datasets, including BraTS 2021, ATLAS 2021, and ISLES 2022. Its design aims to construct a lightweight architecture capable of handling complex segmentation challenges with high precision. Discussion: The proposed AFMS-Net addresses the critical balance between performance and computational efficiency in the segmentation of brain lesions. By introducing two tailored encoder variants, the network adapts to varying requirements for speed and feature granularity. This approach not only advances the state of the art in lesion segmentation but also provides a scalable framework for future research in medical image processing.
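    The squeeze-and-excite mechanism the SAEB employs is, in its standard form, a global pooling step followed by a small two-layer gating network that rescales each feature channel. The numpy sketch below shows that standard block; the shapes, weights, and reduction ratio are illustrative assumptions, not details taken from the AFMS-Net paper.

```python
import numpy as np

def squeeze_and_excite(x, w1, w2):
    """Standard squeeze-and-excite channel recalibration (numpy sketch).

    x:  feature map of shape (C, H, W)
    w1: bottleneck weights of shape (C, C // r), r = reduction ratio
    w2: expansion weights of shape (C // r, C)
    """
    z = x.mean(axis=(1, 2))              # squeeze: global average pool -> (C,)
    s = np.maximum(z @ w1, 0.0)          # excitation: bottleneck FC + ReLU
    s = 1.0 / (1.0 + np.exp(-(s @ w2)))  # sigmoid gate, one value per channel
    return x * s[:, None, None]          # rescale each channel of the input
```

    Because the gate lies in (0, 1), the block can only attenuate channels, which is how it emphasises significant features while suppressing peripheral detail.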

    The role of deep learning in structural and functional lung imaging

    Background: Structural and functional lung imaging are critical components of pulmonary patient care. Image analysis methods, such as image segmentation, applied to structural and functional lung images, have significant benefits for patients with lung pathologies, including the computation of clinical biomarkers. Traditionally, machine learning (ML) approaches, such as clustering, and computational modelling techniques, such as CT-ventilation imaging, have been used for segmentation and synthesis, respectively. Deep learning (DL) has shown promise in medical image analysis tasks, often outperforming alternative methods. Purpose: To address the hypothesis that DL can outperform conventional ML and classical image analysis methods for the segmentation and synthesis of structural and functional lung imaging via: i. development and comparison of 3D convolutional neural networks (CNNs) for the segmentation of ventilated lung using hyperpolarised (HP) gas MRI; ii. development of a generalisable, multi-centre CNN for segmentation of the lung cavity using 1H-MRI; iii. the proposal of a framework for estimating the lung cavity in the spatial domain of HP gas MRI; iv. development of a workflow to synthesise HP gas MRI from multi-inflation, non-contrast CT; v. the proposal of a framework for the synthesis of fully-volumetric HP gas MRI ventilation from a large, diverse dataset of non-contrast, multi-inflation 1H-MRI scans. Methods: i. A 3D CNN-based method for the segmentation of ventilated lung using HP gas MRI was developed, and CNN parameters such as architecture, loss function and pre-processing were optimised. ii. A 3D CNN trained on a multi-acquisition dataset and validated on data from external centres was compared with a 2D alternative for the segmentation of the lung cavity using 1H-MRI. iii. A dual-channel, multi-modal segmentation framework was compared to single-channel approaches for estimation of the lung cavity in the domain of HP gas MRI. iv. A hybrid data-driven and model-based approach for the synthesis of HP gas MRI ventilation from CT was compared to approaches utilising DL or computational modelling alone. v. A physics-constrained, multi-channel framework for the synthesis of fully-volumetric ventilation surrogates from 1H-MRI was validated using five-fold cross-validation and an external test dataset. Results: i. The 3D CNN, developed via parameterisation experiments, accurately segmented ventilation scans and outperformed conventional ML methods. ii. The 3D CNN produced more accurate segmentations than its 2D analogues for the segmentation of the lung cavity, exhibiting minimal variation in performance between centres, vendors and acquisitions. iii. Dual-channel, multi-modal approaches generated significant improvements compared to methods which use a single imaging modality for the estimation of the lung cavity. iv. The hybrid approach produced synthetic ventilation scans which correlate with HP gas MRI. v. The physics-constrained, 3D multi-channel synthesis framework outperformed approaches which did not integrate computational modelling, demonstrating generalisability to external data. Conclusion: DL approaches demonstrate the ability to segment and synthesise lung MRI across a range of modalities and pulmonary pathologies. These methods outperform computational modelling and classical ML approaches, reducing the time required to adequately edit segmentations and improving the modelling of synthetic ventilation, which may facilitate the clinical translation of DL in structural and functional lung imaging.
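    Segmentation comparisons like those in aims i-iii are conventionally scored by overlap with an expert segmentation using the Dice similarity coefficient. The abstract does not name its metric, so the sketch below shows standard practice rather than this thesis's exact evaluation code.

```python
import numpy as np

def dice_coefficient(pred, truth, eps=1e-7):
    """Dice similarity coefficient between two binary masks of any shape:
    2|A ∩ B| / (|A| + |B|), ranging from 0 (disjoint) to 1 (identical)."""
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

print(dice_coefficient([1, 1, 0, 0], [1, 0, 1, 0]))  # → 0.5 (half overlap)
```

    The same quantity, made differentiable over soft predictions, is also widely used as a training loss for segmentation CNNs.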

    ROBUST DEEP LEARNING METHODS FOR SOLVING INVERSE PROBLEMS IN MEDICAL IMAGING

    The medical imaging field has a long history of incorporating machine learning algorithms to address inverse problems in image acquisition and analysis. With the impressive successes of deep neural networks on natural images, we seek to answer the obvious question: do these successes also transfer to the medical image domain? The answer may seem straightforward on the surface. Tasks like image-to-image transformation, segmentation, detection, etc., have direct applications for medical images. For example, metal artifact reduction for Computed Tomography (CT) and reconstruction from undersampled k-space signal for Magnetic Resonance (MR) imaging can be formulated as image-to-image transformations; lesion/tumor detection and segmentation are obvious applications for higher-level vision tasks. While these tasks may be similar in formulation, many practical constraints and requirements exist in solving them for medical images. Patient data is highly sensitive and usually only accessible from individual institutions. This creates constraints on the available ground truth, dataset size, and computational resources in these institutions to train performant models. Due to the mission-critical nature of healthcare applications, requirements such as performance robustness and speed are also stringent. As such, the big-data, dense-computation, supervised learning paradigm in mainstream deep learning is often insufficient to address these situations. In this dissertation, we investigate ways to benefit from the powerful representational capacity of deep neural networks while still satisfying the above-mentioned constraints and requirements. The first part of this dissertation focuses on adapting supervised learning to account for variations such as different medical image modality, image quality, architecture designs, tasks, etc. The second part of this dissertation focuses on improving model robustness on unseen data through domain adaptation, which ameliorates performance degradation due to distribution shifts. The last part of this dissertation focuses on self-supervised learning and learning from synthetic data, with a focus on tomographic imaging; this is essential in many situations where the desired ground truth may not be accessible.
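    To make the MR example concrete: reconstruction from undersampled k-space is an inverse problem whose classical baseline is the zero-filled inverse FFT, and learned methods are trained to suppress the aliasing this baseline leaves behind. A minimal numpy sketch (k-space layout and masking conventions are assumptions for illustration, not the dissertation's pipeline):

```python
import numpy as np

def zero_filled_recon(kspace, mask):
    """Baseline MR reconstruction from undersampled k-space: zero out the
    unsampled frequencies (kspace * mask), then apply the inverse 2D FFT.
    Assumes kspace is centred (fftshift layout) and mask is binary.
    Learned reconstruction models start from, or improve on, this
    aliased estimate."""
    return np.abs(np.fft.ifft2(np.fft.ifftshift(kspace * mask)))
```

    With a full sampling mask this recovers the original image exactly; as lines of k-space are dropped, coherent aliasing appears, which is the distribution the network must learn to invert.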
