888 research outputs found

    Quantitative Analysis of Radiation-Associated Parenchymal Lung Change

    Radiation-induced lung damage (RILD) is a common consequence of thoracic radiotherapy (RT). We present here a novel classification of the parenchymal features of RILD. We developed a deep learning algorithm (DLA) to automate the delineation of 5 classes of parenchymal texture of increasing density. 200 scans were used to train and validate the network, and the remaining 30 scans were used as a hold-out test set. The DLA automatically labelled the data with Dice scores of 0.98, 0.43, 0.26, 0.47 and 0.92 for the 5 respective classes. Qualitative evaluation showed that the automated labels were acceptable in over 80% of cases for all tissue classes and achieved similar ratings to the manual labels. Lung registration was performed, and the effect of radiation dose on each tissue class and its correlation with respiratory outcomes were assessed. The change in volume of each tissue class over time, generated by manual and automated segmentation, was calculated. The 5 parenchymal classes showed distinct temporal patterns. We quantified the volumetric change in textures after radiotherapy and correlated these changes with radiotherapy dose and respiratory outcomes. The effect of local dose on tissue class revealed a strong dose-dependent relationship. We have developed a novel classification of parenchymal changes associated with RILD that shows a convincing dose relationship. The tissue classes are related to both global and local dose metrics, and have a distinct evolution over time. Although weaker, there is a relationship between the radiological texture changes we can measure and respiratory outcomes, particularly the MRC score, which directly represents a patient’s functional status. We have demonstrated the potential of using our approach to analyse and understand the morphological and functional evolution of RILD in greater detail than previously possible.
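    The per-class overlap figures reported above are Dice scores. As a minimal sketch (the mask arrays and the empty-mask convention here are illustrative assumptions, not taken from the paper), the metric can be computed for a pair of binary segmentation masks as:

```python
import numpy as np

def dice_score(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks; 1.0 means perfect overlap."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement (a common convention)
    return 2.0 * np.logical_and(pred, ref).sum() / denom
```

    In practice the score is computed once per tissue class, which is why sparse classes (such as the intermediate-density textures scored 0.26–0.47 above) tend to yield lower values than large, well-defined ones.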

    Deep Learning with Limited Labels for Medical Imaging

    Recent advancements in deep learning-based AI technologies provide an automatic tool to revolutionise medical image computing. Training a deep learning model requires a large amount of labelled data. Acquiring labels for medical images is extremely challenging due to the high cost in terms of both money and time, especially for the pixel-wise segmentation of volumetric medical scans. However, obtaining unlabelled medical scans is considerably easier than acquiring labels for those images. This work addresses the pervasive issue of limited labels in training deep learning models for medical imaging. It begins by exploring different strategies of entropy regularisation in the joint training of labelled and unlabelled data to reduce the time and cost associated with manual labelling for medical image segmentation. Of particular interest are consistency regularisation and pseudo labelling. Specifically, this work proposes a well-calibrated semi-supervised segmentation framework that utilises consistency regularisation on different morphological feature perturbations, representing a significant step towards safer AI in medical imaging. Furthermore, it reformulates pseudo labelling in semi-supervised learning as an Expectation-Maximisation framework. Building upon this new formulation, the work explains the empirical successes of pseudo labelling and introduces a generalisation of the technique, accompanied by variational inference to learn its true posterior distribution. The applications of pseudo labelling in segmentation tasks are also presented. Lastly, this work explores unsupervised deep learning for parameter estimation of diffusion MRI signals, employing a hierarchical variational clustering framework and representation learning.
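    The pseudo-labelling idea discussed above can be sketched in its simplest, confidence-thresholded form. This is a generic illustration, not the thesis's EM reformulation: real pipelines operate on network outputs, for which plain NumPy probability arrays stand in here, and the threshold value is an illustrative assumption.

```python
import numpy as np

def pseudo_labels(probs: np.ndarray, threshold: float = 0.9):
    """probs: (N, C) predicted class probabilities for unlabelled pixels.

    Returns (labels, mask): hard pseudo labels and a boolean mask selecting
    only the confident predictions, which then join the labelled training set.
    """
    labels = probs.argmax(axis=1)          # hard label = most probable class
    mask = probs.max(axis=1) >= threshold  # keep only confident predictions
    return labels, mask
```

    Retraining on the confident subset and repeating is the classic self-training loop; the thesis's contribution is to explain and generalise this heuristic within an Expectation-Maximisation framework rather than rely on a fixed threshold.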

    A Survey on Deep Learning in Medical Image Analysis

    Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks and provide concise overviews of studies per application area. Open challenges and directions for future research are discussed. Comment: Revised survey includes expanded discussion section and reworked introductory section on common deep architectures. Added missed papers from before Feb 1st 201

    Machine Learning And Quantitative Neuroimaging In Epilepsy And Low Field Mri

    Medical imaging plays a key role in the diagnosis and management of neurological disorders. Magnetic resonance imaging (MRI) has proven particularly useful, as it produces high-resolution images with excellent tissue contrast, permitting clinicians to identify lesions and select appropriate treatments. However, demand for MRI services has outpaced the availability of qualified experts to operate and maintain these devices and interpret their images. Radiologists often rely on time-consuming manual analyses, which further limits throughput. Moreover, a large portion of the world’s population cannot currently access MRI, and demand for medical imaging services will continue to increase as healthcare quality improves globally. To address these challenges, we must find innovative ways to automate medical image processing and produce lower-cost medical imaging devices. Recent advances in deep learning and low-field MRI hardware offer potential solutions, providing lower-cost methods for processing and collecting images, respectively. This thesis aims to develop and validate lower-cost methods for collecting and interpreting neuroimaging using machine learning algorithms and portable, low-field MRI technology. In the first section, I develop a deep learning algorithm that automatically segments resection cavities in epilepsy surgery patients and quantifies removed tissues. I also compare the impacts of epilepsy surgery on remote brain regions, demonstrating that more selective procedures minimize postoperative cortical thinning. In the second section, I explore and validate clinical applications for a new portable, low-field MRI device. Using open-source imaging and machine learning, I propose a low-cost method for simulating diagnostic performance for novel imaging devices when only sparse data are available. Additionally, I validate device performance in multiple sclerosis by directly comparing the low-field device to standard-of-care imaging using a range of manual and automated analyses. My hope is that machine learning and low-field MRI will increase medical imaging access and improve patient care worldwide.

    Advanced Computational Methods for Oncological Image Analysis

    Cancer is the second most common cause of death worldwide and encompasses highly variable clinical and biological scenarios. Some of the current clinical challenges are (i) early diagnosis of the disease and (ii) precision medicine, which allows for treatments targeted to specific clinical cases. The ultimate goal is to optimize the clinical workflow by combining accurate diagnosis with the most suitable therapies. Toward this, large-scale machine learning research can define associations among clinical, imaging, and multi-omics studies, making it possible to provide reliable diagnostic and prognostic biomarkers for precision oncology. Such reliable computer-assisted methods (i.e., artificial intelligence) together with clinicians’ unique knowledge can be used to properly handle typical issues in evaluation/quantification procedures (i.e., operator dependence and time-consuming tasks). These technical advances can significantly improve result repeatability in disease diagnosis and guide toward appropriate cancer care. Indeed, the need to apply machine learning and computational intelligence techniques has steadily increased to effectively perform image processing operations—such as segmentation, co-registration, classification, and dimensionality reduction—and multi-omics data integration.

    Functional Magnetic Resonance Imaging

    "Functional Magnetic Resonance Imaging - Advanced Neuroimaging Applications" is a concise book on applied methods of fMRI used in the assessment of cognitive functions in the brain and in neuropsychological evaluation using motor-sensory activities, language, and orthographic disabilities in children. The book will serve applied neuropsychological evaluation methods in research projects, as well as relatively experienced psychologists and neuroscientists. Chapters are arranged in order: the basic concepts of fMRI and the physiological basis of fMRI after an event-related stimulus in the first two chapters, followed by new concepts of fMRI applied in constraint-induced movement therapy; reliability analysis; refractory SMA epilepsy; consciousness states; rule-guided behavioral analysis; orthographic frequency neighbor analysis for phonological activation; and quantitative multimodal spectroscopic fMRI to evaluate different neuropsychological states.

    U-net and its variants for medical image segmentation: A review of theory and applications

    U-net is a convolutional encoder-decoder architecture developed primarily for image segmentation tasks; its symmetric design and skip connections allow it to produce accurate segmentations even from limited training data. These traits provide U-net with a high utility within the medical imaging community and have resulted in its extensive adoption as the primary tool for segmentation tasks in medical imaging. The success of U-net is evident in its widespread use across nearly all major imaging modalities, from CT scans and MRI to X-rays and microscopy. Furthermore, while U-net is largely a segmentation tool, it has also been applied in other tasks. Given that U-net’s potential is still increasing, this narrative literature review examines the numerous developments and breakthroughs in the U-net architecture and provides observations on recent trends. We also discuss the many innovations that have advanced deep learning and how these tools facilitate U-net. In addition, we review the different image modalities and application areas that have been enhanced by U-net.
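    The encoder-decoder-with-skips structure described above can be traced shape-only. This sketch deliberately omits the convolutions and uses plain NumPy pooling and nearest-neighbour upsampling, so it illustrates only how a U-net halves resolution, restores it, and concatenates encoder features with decoder features at the matching scale.

```python
import numpy as np

def downsample(x: np.ndarray) -> np.ndarray:
    """2x2 max pooling over an (N, C, H, W) tensor."""
    n, c, h, w = x.shape
    return x.reshape(n, c, h // 2, 2, w // 2, 2).max(axis=(3, 5))

def upsample(x: np.ndarray) -> np.ndarray:
    """Nearest-neighbour 2x upsampling."""
    return x.repeat(2, axis=2).repeat(2, axis=3)

def unet_shapes(x: np.ndarray) -> np.ndarray:
    skip = x                   # encoder feature map saved for the skip path
    x = downsample(x)          # encoder: halve spatial resolution
    x = upsample(x)            # decoder: restore spatial resolution
    # skip connection: concatenate along the channel axis, doubling channels
    return np.concatenate([skip, x], axis=1)
```

    A real U-net repeats this pattern over several scales with learned convolutions at each level; the channel concatenation shown here is what lets the decoder recover the fine spatial detail lost during pooling.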