178 research outputs found

    Quantitative Analysis of Radiation-Associated Parenchymal Lung Change

    Radiation-induced lung damage (RILD) is a common consequence of thoracic radiotherapy (RT). We present here a novel classification of the parenchymal features of RILD. We developed a deep learning algorithm (DLA) to automate the delineation of 5 classes of parenchymal texture of increasing density. 200 scans were used to train and validate the network and the remaining 30 scans were used as a hold-out test set. The DLA automatically labelled the data with Dice scores of 0.98, 0.43, 0.26, 0.47 and 0.92 for the 5 respective classes. Qualitative evaluation showed that the automated labels were acceptable in over 80% of cases for all tissue classes, and achieved similar ratings to the manual labels. Lung registration was performed, and the effect of radiation dose on each tissue class and its correlation with respiratory outcomes were assessed. The change in volume of each tissue class over time, generated by manual and automated segmentation, was calculated. The 5 parenchymal classes showed distinct temporal patterns. We quantified the volumetric change in textures after radiotherapy and correlated these changes with radiotherapy dose and respiratory outcomes. The effect of local dose on tissue class revealed a strong dose-dependent relationship. We have developed a novel classification of parenchymal changes associated with RILD that shows a convincing dose relationship. The tissue classes are related to both global and local dose metrics, and have a distinct evolution over time. Although less strong, there is a relationship between the radiological texture changes we can measure and respiratory outcomes, particularly the MRC score, which directly represents a patient's functional status. We have demonstrated the potential of using our approach to analyse and understand the morphological and functional evolution of RILD in greater detail than previously possible.
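    For reference, the per-class Dice scores reported above measure volumetric overlap between the automated and manual labels. Below is a minimal sketch of such a per-class computation (NumPy); the function names and the 5-class integer labelling convention are illustrative assumptions, not the authors' code.

```python
import numpy as np

def dice_score(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred, gt = pred_mask.astype(bool), gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / denom

def per_class_dice(pred_labels: np.ndarray, gt_labels: np.ndarray, n_classes: int = 5):
    """Dice score for each tissue class in a multi-class label map
    (classes assumed to be encoded as integers 0..n_classes-1)."""
    return [dice_score(pred_labels == c, gt_labels == c) for c in range(n_classes)]
```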

    Performance of image guided navigation in laparoscopic liver surgery – A systematic review

    Background: Compared to open surgery, minimally invasive liver resection has improved short-term outcomes. It is, however, technically more challenging. Navigated image guidance systems (IGS) are being developed to overcome these challenges. The aim of this systematic review is to provide an overview of their current capabilities and limitations. Methods: Medline, Embase and Cochrane databases were searched using free-text terms and corresponding controlled vocabulary. Titles and abstracts of retrieved articles were screened for inclusion criteria. Due to the heterogeneity of the retrieved data it was not possible to conduct a meta-analysis. Therefore, results are presented in tabulated and narrative format. Results: Out of 2015 articles, 17 pre-clinical and 33 clinical papers met the inclusion criteria. Data from 24 articles that reported on accuracy indicate that in recent years navigation accuracy has been in the range of 8–15 mm. Due to discrepancies in evaluation methods it is difficult to compare accuracy metrics between different systems. Surgeon feedback suggests that current state-of-the-art IGS may be useful as a supplementary navigation tool, especially for small liver lesions that are difficult to locate. They are, however, not able to reliably localise all relevant anatomical structures. Only one article investigated the impact of IGS on clinical outcomes. Conclusions: Further improvements in navigation accuracy are needed to enable reliable visualisation of tumour margins with the precision required for oncological resections. To enhance comparability between different IGS it is crucial to find a consensus on the assessment of navigation accuracy as a minimum reporting standard.

    On Medical Image Segmentation and on Modeling Long Term Dependencies

    The delineation (segmentation) of malignant tumours in medical images is important for cancer diagnosis, the planning of targeted treatments, and the tracking of cancer progression and treatment response. However, although manual segmentation of medical images is accurate, it is time consuming, requires expert operators, and is often impractical with large datasets. This motivates the need for automated segmentation. However, automated segmentation of tumours is particularly challenging due to variability in tumour appearance, image acquisition equipment and acquisition parameters, and variability across patients. Tumours vary in type, size, location, and quantity; the rest of the image varies due to anatomical differences between patients, prior surgery or ablative therapy, differences in contrast enhancement of tissues, and image artefacts. Furthermore, scanner acquisition protocols vary considerably between clinical sites and image characteristics vary according to the scanner model. Due to all of these variabilities, a segmentation model must be flexible enough to learn general features from the data. The advent of deep convolutional neural networks (CNNs) allowed for accurate and precise classification of highly variable images and, by extension, high-quality segmentation of images. However, these models must be trained on enormous quantities of labeled data. This constraint is particularly challenging in the context of medical image segmentation because the number of segmentations that can be produced is limited in practice by the need to employ expert operators to do such labeling. Furthermore, the variabilities of interest in medical images appear to follow a long-tailed distribution, meaning a particularly large amount of training data may be required to provide a sufficient sample of each type of variability to a CNN. This motivates the need to develop strategies for training these models with limited ground-truth segmentations available.

    Optimization of Decision Making in Personalized Radiation Therapy using Deformable Image Registration

    Cancer has become one of the dominant diseases worldwide, especially in western countries, and radiation therapy is one of the primary treatment options for 50% of all diagnosed patients. Radiation therapy involves radiation delivery and planning based on radiobiological models derived primarily from clinical trials. Since 2015, improvements in information technologies and data storage have allowed new models to be created using the large volumes of treatment data already available, correlating the actually delivered treatment with outcomes. The goals of this thesis are to 1) construct models of patient outcomes after receiving radiation therapy using available treatment and patient parameters and 2) provide a method to determine the real accumulated radiation dose, including the impact of registration uncertainties. In Chapter 2, models were developed predicting overall survival for patients with hepatocellular carcinoma or liver metastases receiving radiation therapy. These models show which patients benefit from curative radiation therapy based on liver function, and quantify the survival benefit of increased radiation dose. In Chapter 3, a method was developed to routinely evaluate deformable image registration (DIR) with computer-generated landmark pairs using the scale-invariant feature transform. The method presented in this chapter created landmark sets for comparing lung 4DCT images and provided the same evaluation of DIR as manual landmark sets. In Chapter 4, an investigation was performed on the impact of DIR error on dose accumulation using landmarked 4DCT images as the ground truth. The study demonstrated the relationship between dose gradient, DIR error, and dose accumulation error, and presented a method to determine error bars on the dose accumulation process. In Chapter 5, a method was presented to determine quantitatively when to update a treatment plan during the course of a multi-fraction radiation treatment of head and neck cancer. This method investigated the ability to use only the planned dose with deformable image registration to predict dose changes caused by anatomical deformations. This thesis presents the fundamental elements of a decision support system incorporating patient pre-treatment parameters and the actual delivered dose computed with DIR while accounting for registration uncertainties.
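    The landmark-based DIR evaluation described in Chapters 3 and 4 comes down to measuring target registration error (TRE) over matched landmark pairs. Below is a minimal sketch of that computation, not the thesis's actual code: it assumes a displacement field sampled on the fixed-image grid, simplifies interpolation to a nearest-voxel lookup, and all names are illustrative.

```python
import numpy as np

def target_registration_error(fixed_pts, moving_pts, dvf, spacing):
    """Mean and std of target registration error (mm) over landmark pairs.

    fixed_pts, moving_pts : (N, 3) landmark voxel coordinates in the two images
    dvf                   : (D, H, W, 3) displacement field (in voxels) that
                            maps fixed-image coordinates into the moving image
    spacing               : (3,) voxel size in mm
    """
    fixed_pts = np.asarray(fixed_pts, dtype=float)
    moving_pts = np.asarray(moving_pts, dtype=float)
    idx = np.round(fixed_pts).astype(int)        # nearest-voxel DVF lookup
    disp = dvf[idx[:, 0], idx[:, 1], idx[:, 2]]  # (N, 3) displacements
    mapped = fixed_pts + disp                    # where registration sends each landmark
    err_mm = np.linalg.norm((mapped - moving_pts) * np.asarray(spacing, float), axis=1)
    return err_mm.mean(), err_mm.std()
```

    A per-landmark error of this kind, weighted by the local dose gradient, is one way to approximate the dose accumulation error bars the abstract refers to.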

    Artificial intelligence and radiomics in magnetic resonance imaging of rectal cancer: a review

    Rectal cancer (RC) is one of the most common tumours worldwide in both males and females, with significant morbidity and mortality rates, and it accounts for approximately one-third of colorectal cancers (CRCs). Magnetic resonance imaging (MRI) has been demonstrated to be accurate in evaluating the tumour location and stage, mucin content, invasion depth, lymph node (LN) metastasis, extramural vascular invasion (EMVI), and involvement of the mesorectal fascia (MRF). However, these features alone remain insufficient to precisely guide treatment decisions. Therefore, new imaging biomarkers are necessary to define tumour characteristics for staging and restaging patients with RC. During the last few decades, RC evaluation via MRI-based radiomics and artificial intelligence (AI) tools has been a research hotspot. The aim of this review was to summarise the achievements of MRI-based radiomics and AI for the evaluation of staging, response to therapy, genotyping, prediction of high-risk factors, and prognosis in the field of RC. Moreover, future challenges and limitations of these tools that need to be solved to favour the transition from academic research to the clinical setting will be discussed.

    Artificial Intelligence Techniques in Medical Imaging: A Systematic Review

    This scientific review presents a comprehensive overview of medical imaging modalities and their diverse applications in artificial intelligence (AI)-based disease classification and segmentation. The paper begins by explaining the fundamental concepts of AI, machine learning (ML), and deep learning (DL). It provides a summary of their different types to establish a solid foundation for the subsequent analysis. The primary focus of this study is to conduct a systematic review of research articles that examine disease classification and segmentation in different anatomical regions using AI methodologies. The analysis includes a thorough examination of the results reported in each article, extracting important insights and identifying emerging trends. Moreover, the paper critically discusses the challenges encountered during these studies, including issues related to data availability and quality, model generalization, and interpretability. The aim is to provide guidance for optimizing technique selection. The analysis highlights the prominence of hybrid approaches, which seamlessly integrate ML and DL techniques, in achieving effective and relevant results across various disease types. The promising potential of these hybrid models opens up new opportunities for future research in the field of medical diagnosis. Additionally, addressing the challenges posed by the limited availability of annotated medical images through the incorporation of medical image synthesis and transfer learning techniques is identified as a crucial focus for future research efforts.

    Surface loss for medical image segmentation

    Recent decades have witnessed an unprecedented expansion of medical data in various large-scale and complex systems. While much success has been achieved on many complex medical problems, some challenges remain. Class imbalance is one of the common problems in medical image segmentation. It occurs mostly when there is a severely unequal class distribution, for instance, when the size of the target foreground region is several orders of magnitude smaller than the background region. In such problems, typical loss functions used for convolutional neural network (CNN) segmentation fail to deliver good performance. Widely used losses, e.g., Dice or cross-entropy, are based on regional terms. They assume that all classes are equally distributed. Thus, they tend to favor the majority class and misclassify the target class. To address this issue, the main objective of this work is to build a boundary loss, a distance-based measure on the space of contours rather than regions. We argue that a boundary loss can mitigate the problems of regional losses by introducing complementary distance-based information. Our loss is inspired by discrete (graph-based) optimization techniques for computing gradient flows of curve evolution. Following an integral approach for computing boundary variations, we express a non-symmetric L2 distance on the space of shapes as a regional integral, which completely avoids local differential computations. Our boundary loss is the sum of linear functions of the regional softmax probability outputs of the network. Therefore, it can easily be combined with standard regional losses and implemented with any existing deep network architecture for N-dimensional (N-D) segmentation. Experiments were carried out on three benchmark datasets corresponding to increasingly unbalanced segmentation problems: multi-modal brain tumor segmentation (BRATS17), ischemic stroke lesion segmentation (ISLES), and white matter hyperintensities (WMH). Used in conjunction with the region-based generalized Dice loss (GDL), our boundary loss improves performance significantly compared to GDL alone, reaching up to 8% improvement in Dice score and 10% improvement in Hausdorff score. It also yielded a more stable learning process.
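    Because the boundary loss reduces to a regional integral, implementing it is straightforward: precompute a level-set (signed distance) map of each ground-truth region, then take the pointwise product with the network's softmax output. Below is a minimal sketch under those assumptions (SciPy/PyTorch); the exact level-set convention and loss weighting are simplifications, not the authors' reference implementation.

```python
import numpy as np
import torch
from scipy.ndimage import distance_transform_edt

def signed_distance_map(gt_mask: np.ndarray) -> np.ndarray:
    """Level-set map of a binary ground-truth mask:
    negative inside the object, positive outside, ~zero on the boundary."""
    pos = gt_mask.astype(bool)
    if not pos.any():  # empty ground truth: no boundary term
        return np.zeros(gt_mask.shape, dtype=np.float32)
    dist = distance_transform_edt(~pos) - distance_transform_edt(pos)
    return dist.astype(np.float32)

def boundary_loss(fg_probs: torch.Tensor, dist_map: torch.Tensor) -> torch.Tensor:
    """Boundary loss: mean pointwise product of the foreground softmax
    probabilities and the precomputed signed distance map."""
    return (fg_probs * dist_map).mean()
```

    In the experiments summarised above, this term is combined with the generalized Dice loss, e.g. total = gdl + alpha * boundary, with the weight alpha of the boundary term typically increased as training progresses.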

    Liver segmentation using 3D CT scans.

    Master of Science in Computer Science. University of KwaZulu-Natal, Durban, 2018. Abstract available in PDF file.

    Automated Image-Based Procedures for Adaptive Radiotherapy
