
    Feature-driven Volume Visualization of Medical Imaging Data

    Direct volume rendering (DVR) is a volume visualization technique that has proven to be a very powerful tool in many scientific visualization domains. Diagnostic medical imaging is one such domain, in which DVR provides new capabilities for the analysis of complex cases and improves the efficiency of image interpretation workflows. However, the full potential of DVR in the medical domain has not yet been realized. A major obstacle to better integration of DVR in the medical domain is the time-consuming process of optimizing the rendering parameters needed to generate diagnostically relevant visualizations, in which the important features hidden in image volumes, such as the shape and spatial localization of tumors, their relationships with adjacent structures, and their changes over time, are clearly displayed. In current workflows, clinicians must manually specify the transfer function (TF), viewpoint (camera), clipping planes, and other visual parameters. Another obstacle to the adoption of DVR in the medical domain is the ever-increasing volume of imaging data. Advances in image acquisition techniques have led to a rapid expansion in data size, in the form of higher resolutions, temporal acquisitions to track treatment responses over time, and an increase in the number of imaging modalities used for a single procedure. The manual specification of rendering parameters under these circumstances is very challenging. This thesis proposes a set of innovative methods that visualize important features in multi-dimensional and multi-modality medical images by automatically or semi-automatically optimizing the rendering parameters.
Our methods enable visualizations necessary for the diagnostic procedure, in which a 2D slice of interest (SOI) can be augmented with 3D anatomical contextual information to provide accurate spatial localization of 2D features in the SOI; the rendering parameters are automatically computed to guarantee the visibility of 3D features; and changes in 3D features can be tracked in temporal data under the constraint of consistent contextual information. We also present a method for the efficient computation of visibility histograms (VHs) using adaptive binning, which allows our optimal DVR to be automated and visualized in real time. We evaluated our methods by producing visualizations for a variety of clinically relevant scenarios and imaging data sets, and we also examined the computational performance of our methods in these scenarios.
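The visibility-histogram idea the abstract relies on can be sketched in a few lines: along each viewing ray, a sample's visibility is its opacity weighted by the transmittance accumulated in front of it, and these contributions are accumulated per intensity bin. The function below is a minimal illustrative sketch, not the thesis implementation (it uses uniform bins rather than the adaptive binning described above, and `transfer_alpha` is a hypothetical scalar-to-opacity transfer function):

```python
def visibility_histogram(rays, transfer_alpha, n_bins=8):
    """rays: list of rays, each a list of scalar samples in [0, 1).
    transfer_alpha: maps a scalar value to an opacity in [0, 1].
    Returns a normalized histogram of per-bin visibility."""
    hist = [0.0] * n_bins
    for samples in rays:
        transmittance = 1.0  # fraction of light not yet absorbed along this ray
        for s in samples:
            alpha = transfer_alpha(s)
            # visible contribution = opacity attenuated by what is in front of it
            hist[int(s * n_bins)] += alpha * transmittance
            transmittance *= (1.0 - alpha)  # front-to-back compositing
    total = sum(hist)
    return [h / total for h in hist] if total else hist
```

A transfer function that leaves low-visibility bins near zero signals occluded features; an optimizer can then adjust the TF or camera until the bins of the features of interest carry sufficient visibility.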

    Stacked fully convolutional networks with multi-channel learning: application to medical image segmentation

    The automated segmentation of regions of interest (ROIs) in medical imaging is a fundamental requirement for deriving high-level semantics for image analysis in clinical decision support systems. Traditional segmentation approaches, such as region-based methods, depend heavily on hand-crafted features and the a priori knowledge of the user. As such, these methods are difficult to adopt within a clinical environment. Recently, methods based on fully convolutional networks (FCNs) have achieved great success in the segmentation of general images. FCNs leverage a large labeled dataset to hierarchically learn the features that best correspond to the shallow appearance as well as the deep semantics of the images. However, when applied to medical images, FCNs usually produce coarse ROI detection and poor boundary definitions, primarily due to the limited amount of labeled training data and the limited constraints on label agreement among neighboring similar pixels. In this paper, we propose a new stacked FCN architecture with multi-channel learning (SFCN-ML). We embed the FCN in a stacked architecture to learn the foreground ROI features and background non-ROI features separately, and then integrate these different channels to produce the final segmentation result. In contrast to traditional FCN methods, our SFCN-ML architecture enables the visual attributes and semantics derived from both the fore- and background channels to be iteratively learned and inferred. We conducted extensive experiments on three public datasets with a variety of visual challenges. Our results show that our SFCN-ML is more effective and robust than a routine FCN and its variants, as well as other state-of-the-art methods.
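The fore-/background channel integration described above can be illustrated with a toy fusion step. This is a hedged sketch, not the paper's SFCN-ML code: the two channel outputs are treated as per-pixel evidence maps and normalized against each other, and the function names are invented for illustration:

```python
def fuse_channels(fg, bg, eps=1e-8):
    """fg, bg: same-shape 2D lists of non-negative evidence scores from a
    foreground (ROI) channel and a background (non-ROI) channel.
    Returns the per-pixel foreground probability."""
    return [[f / (f + b + eps) for f, b in zip(fr, br)]
            for fr, br in zip(fg, bg)]

def segment(prob, threshold=0.5):
    """Binarize the fused probability map into the final ROI mask."""
    return [[1 if p > threshold else 0 for p in row] for row in prob]
```

Explicitly modeling the background as its own channel, rather than treating it as "everything that is not foreground", is what lets disagreements between the two channels be resolved at fusion time instead of being baked into a single decoder.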

    Deep Networks Based Energy Models for Object Recognition from Multimodality Images

    Object recognition has been extensively investigated in computer vision, since it is a fundamental and essential technique in many important applications, such as robotics, autonomous driving, automated manufacturing, and security surveillance. Object recognition mechanisms can be broadly categorized into object proposal and classification, eye-fixation prediction, and salient object detection. Object proposal aims to capture all potential objects in natural images and then classify them into predefined groups for image description and interpretation. For a given natural image, human perception is normally attracted to the most visually important regions and objects. Eye-fixation prediction therefore attempts to localize interesting points or small regions according to the human visual system (HVS). Starting from these points and small regions, salient object detection algorithms propagate the extracted information to achieve a refined segmentation of whole salient objects. In addition to natural images, object recognition also plays a critical role in clinical practice. The insights into the anatomy and function of the human body obtained from multimodality biomedical images, such as magnetic resonance imaging (MRI), transrectal ultrasound (TRUS), computed tomography (CT), and positron emission tomography (PET), facilitate precision medicine. Automated object recognition from biomedical images enables non-invasive diagnosis and treatment via automated tissue segmentation, tumor detection, and cancer staging. Conventional recognition methods normally rely on handcrafted features (such as oriented gradients, curvature, Haar features, Haralick texture features, Laws energy features, etc.) that depend on the image modality and object characteristics. It is therefore challenging to build a general model for object recognition.
Unlike handcrafted features, deep neural networks (DNNs) can extract self-adaptive features tailored to a specific task, and hence can be employed in general object recognition models. These DNN features are adjusted semantically and cognitively by tens of millions of parameters, in a manner inspired by the mechanisms of the human brain, and therefore lead to more accurate and robust results. Motivated by this, in this thesis we propose DNN-based energy models to recognize objects in multimodality images. The major contributions of this thesis can be summarized as follows. 1. We first proposed a new comprehensive autoencoder model to recognize the position and shape of the prostate in magnetic resonance images. Unlike most autoencoder-based methods, we focused on positive samples to train the model, so that the extracted features all come from the prostate. An image energy minimization scheme was then applied to further improve recognition accuracy. The proposed model was compared with three classic classifiers (support vector machine with a radial basis function kernel, random forest, and naive Bayes) and demonstrated significant superiority for prostate recognition in magnetic resonance images. We further extended the proposed autoencoder model to salient object detection in natural images, and experimental validation confirmed the accuracy and robustness of our model's detection results. 2. A general multi-context combined deep neural network (MCDN) model was then proposed for object recognition in natural and biomedical images. Within one uniform framework, our model operates in a multi-scale manner. It was applied to salient object detection in natural images as well as prostate recognition in magnetic resonance images, and experimental validation demonstrated that it is competitive with current state-of-the-art methods. 3. We designed a novel saliency image energy to finely segment salient objects on the basis of our MCDN model. Region priors are taken into account in the energy function to avoid trivial errors. Our method outperformed state-of-the-art algorithms on five benchmark datasets. In the experiments, we also demonstrated that our proposed saliency image energy can boost the results of other conventional saliency detection methods.
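The kind of saliency image energy described in contribution 3 can be sketched as a unary term driven by a DNN saliency map, a pairwise Potts smoothness term, and a region-prior penalty. This is an illustrative formulation under assumed weights (`lam`, `mu` are made-up parameters), not the thesis's exact energy:

```python
def energy(labels, saliency, region_prior, lam=1.0, mu=0.5):
    """labels: 2D 0/1 labeling (1 = salient); saliency: 2D map in [0, 1];
    region_prior: 2D 0/1 mask of the candidate salient region.
    Lower energy = better labeling."""
    h, w = len(labels), len(labels[0])
    e = 0.0
    for i in range(h):
        for j in range(w):
            l = labels[i][j]
            # unary: labeling a pixel salient costs (1 - saliency), else saliency
            e += (1.0 - saliency[i][j]) if l else saliency[i][j]
            # region prior: discourage salient labels outside the prior mask
            if l and not region_prior[i][j]:
                e += mu
            # pairwise Potts: penalize label changes between right/down neighbours
            if j + 1 < w and l != labels[i][j + 1]:
                e += lam
            if i + 1 < h and l != labels[i + 1][j]:
                e += lam
    return e
```

Minimizing such an energy (e.g. with graph cuts or iterated conditional modes) snaps the coarse DNN saliency map to a smooth segmentation while the region prior suppresses isolated false positives.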

    Medical Imaging Biomarker Discovery and Integration Towards AI-Based Personalized Radiotherapy.

    Intensity-modulated radiation therapy (IMRT) has been used to sculpt highly accurate physical dose distributions and to modulate different dose levels within the Gross Tumor Volume (GTV), Clinical Target Volume (CTV), and Planning Target Volume (PTV). GTV, CTV, and PTV can be prescribed at different dose levels; however, their dose distributions are expected to be uniform, despite the fact that most types of tumour are heterogeneous. With traditional radiomics and artificial intelligence (AI) techniques, we can identify a biological target volume from functional images, as opposed to the conventional GTV derived from anatomical imaging. Functional imaging, such as multi-parameter MRI and PET, can be used to implement dose painting, which achieves dose escalation by increasing doses in therapy-resistant areas of the GTV and reducing doses in less aggressive areas. In this review, we first discuss several quantitative functional imaging techniques, including PET-CT and multi-parameter MRI. We then provide theoretical and experimental comparisons of dose painting by contours (DPBC) and dose painting by numbers (DPBN), along with outcome analysis after dose painting. State-of-the-art AI-based biomarker diagnosis techniques are reviewed. Finally, we summarize major challenges and future directions for AI-based biomarkers to improve cancer diagnosis and radiotherapy treatment.
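The DPBN concept mentioned above is commonly realized as a voxel-wise linear mapping from functional-image intensity to prescribed dose. The sketch below illustrates that generic mapping; it is not taken from the review, and the parameter names and ranges are assumptions:

```python
def dpbn_dose(intensity, i_low, i_high, d_min, d_max):
    """Map a voxel's functional-image intensity to a prescribed dose (Gy):
    intensities at or below i_low get d_min, at or above i_high get d_max,
    and values in between are interpolated linearly."""
    t = (min(max(intensity, i_low), i_high) - i_low) / (i_high - i_low)
    return d_min + t * (d_max - d_min)
```

For example, with an assumed SUV window of [0, 10] and a dose range of [60, 80] Gy, a voxel at SUV 5 would be prescribed 70 Gy; DPBC instead applies a single boosted dose level to a whole contoured subvolume.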

    Development of computer-based algorithms for unsupervised assessment of radiotherapy contouring

    INTRODUCTION: Despite advances in radiotherapy treatment delivery, target volume delineation remains one of the greatest sources of error in the radiotherapy delivery process, which can lead to poor tumour control probability and impact clinical outcomes. Contouring assessments are performed to ensure high quality of target volume definition in clinical trials, but they can be subjective and labour-intensive. This project addresses the hypothesis that computational segmentation techniques, given a prior, can be used to develop an image-based tumour delineation process for contour assessments. This thesis focuses on exploring segmentation techniques to develop an automated method for generating reference delineations in the setting of advanced lung cancer. The novelty of this project lies in using the initial clinician outline as a prior for image segmentation. METHODS: Automated segmentation processes were developed for stage II and III non-small cell lung cancer using the IDEAL-CRT clinical trial dataset. Marker-controlled watershed segmentation, two active contour approaches (edge- and region-based), and graph-cut applied to superpixels were explored. k-nearest neighbour (k-NN) classification of tumour versus normal tissue based on texture features was also investigated. RESULTS: 63 cases were used for development and training. Segmentation and classification performance were evaluated on an independent test set of 16 cases. Edge-based active contour segmentation achieved the highest Dice similarity coefficient of 0.80 ± 0.06, followed by graph-cut at 0.76 ± 0.06, watershed at 0.72 ± 0.08, and region-based active contour at 0.71 ± 0.07, with mean computational times of 192 ± 102 s, 834 ± 438 s, 21 ± 5 s, and 45 ± 18 s per case, respectively. Errors in the accuracy of irregularly shaped lesions and segmentation leakages at the mediastinum were observed.
In distinguishing tumour from non-tumour regions, misclassification errors of 14.5% and 15.5% were achieved using 16- and 8-pixel regions of interest (ROIs), respectively. Higher misclassification errors of 24.7% and 26.9% for 16- and 8-pixel ROIs were obtained in the analysis of the tumour boundary. CONCLUSIONS: Conventional image-based segmentation techniques with the application of priors are useful for the automatic segmentation of tumours, although further development is required to improve their performance. Texture classification can be useful in distinguishing tumour from non-tumour tissue, but the segmentation task at the tumour boundary is more difficult. Future work with deep-learning segmentation approaches needs to be explored. Funded by the National Radiotherapy Trials Quality Assurance (RTTQA) group.
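For reference, the Dice similarity coefficient used to report the contour-overlap results above has the standard definition DSC = 2|A ∩ B| / (|A| + |B|); a minimal sketch over flattened binary masks:

```python
def dice(a, b):
    """a, b: flat lists of 0/1 voxel labels for two segmentations.
    Returns 2 * |intersection| / (|a| + |b|); 1.0 for two empty masks."""
    inter = sum(x and y for x, y in zip(a, b))
    size = sum(a) + sum(b)
    return 2.0 * inter / size if size else 1.0
```

A DSC of 0.80 therefore means the edge-based active contour and the reference delineation share 80% of their combined voxel mass, which is why it ranks above the graph-cut result at 0.76.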

    Implementation of a clinical dosimetry workflow to perform personalized dosimetry for internal radiotherapy

    Nuclear medicine is a medical specialty that studies the physiology of organs and the metabolism of various types of tumors. Nuclear medicine uses pharmaceuticals bound to a radioactive isotope. Molecular radiotherapy (MRT) is a specialty of nuclear medicine in which the vector is directed to targets, usually tumors, and the action of ionizing radiation is aimed at destroying those tumors. The follow-up and optimization of MRT require the evaluation of the irradiation delivered to the patient (dosimetry). There is a lack of standardization in internal dosimetry. This thesis provides a standardized approach with descriptive clinical dosimetry workflows. A software package named OpenDose3D, based on 3D-Slicer and implementing the proposed workflows, was developed, validated, and made publicly available as an open-source module. The module was used in clinical research within the MEDIRAD project.
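Organ-level dosimetry workflows of the kind this thesis standardizes typically follow the MIRD scheme: integrate the measured activity over time, then multiply by a precomputed S-value. The sketch below is a generic illustration of that scheme, not OpenDose3D code; the unit conventions and an exponential tail after the last measurement are assumptions:

```python
def time_integrated_activity(times_h, activities_mbq, lam_per_h):
    """Trapezoidal integration of activity (MBq) over the measured time
    points (hours), plus an analytic exponential tail A_last / lambda
    assumed after the last point. Returns MBq*h."""
    tia = 0.0
    for i in range(1, len(times_h)):
        tia += 0.5 * (activities_mbq[i - 1] + activities_mbq[i]) \
               * (times_h[i] - times_h[i - 1])
    return tia + activities_mbq[-1] / lam_per_h

def absorbed_dose(tia_mbq_h, s_value_mgy_per_mbq_h):
    """Organ-level MIRD estimate: dose = time-integrated activity * S-value."""
    return tia_mbq_h * s_value_mgy_per_mbq_h  # mGy
```

Standardizing exactly these steps, including how the curve tail is extrapolated and which S-values are applied, is the kind of workflow question the thesis addresses.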

    From Grey-levels to Numbers: Investigation of Radiomic Feature Robustness in CT Images of Lung Tumours
