    A review on methods to estimate a CT from MRI data in the context of MRI-alone RT

    Background: In recent years, Radiation Therapy (RT) has undergone many developments and made progress in the field of cancer treatment. However, dose optimisation at each treatment session exposes the patient to repeated X-ray doses from Computed Tomography (CT) scans, since this imaging modality is the reference for dose planning. Added to this are difficulties related to contour propagation. Approaches are therefore focusing on the use of MRI as the only modality in RT. In this paper, we review methods for creating pseudo-CT images from MRI data for MRI-alone RT. Each class of methods is explained, and the underlying works are presented in detail with performance results. We discuss the advantages and limitations of each class. Methods: We classified recent works on deriving a pseudo-CT from MR images into four classes: segmentation-based, intensity-based, atlas-based and hybrid methods, with the classification based on the general technique applied. Results: Most research has focused on the brain and pelvic regions. The mean absolute error ranged from 80 to 137 HU for the brain and from 36.4 to 74 HU for the pelvis. In addition, interest in the Dixon MR sequence is increasing, since it has the advantage of producing multiple contrast images from a single acquisition. Conclusion: Radiation therapy is moving towards the generalisation of MRI-only RT, thanks to advances in techniques for generating pseudo-CT images and to the development of specialised MR sequences favouring bone visualisation. However, a benchmark with common performance metrics needs to be established to assess the quality of the generated pseudo-CT and judge the efficiency of a given method.
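    The review reports performance as the mean absolute error (MAE) in Hounsfield units between a pseudo-CT and a reference CT. As a minimal illustration of that metric (not code from the paper; the array names and the optional body mask are assumptions), the MAE over co-registered volumes can be computed as follows:

```python
import numpy as np

def pseudo_ct_mae(pseudo_ct, reference_ct, mask=None):
    """Mean absolute error in HU between a pseudo-CT and a reference CT.

    Both inputs are co-registered 3D arrays of Hounsfield units; `mask`
    optionally restricts the comparison to body voxels, a common choice
    when reporting MAE for the brain or pelvis.
    """
    pseudo_ct = np.asarray(pseudo_ct, dtype=np.float64)
    reference_ct = np.asarray(reference_ct, dtype=np.float64)
    diff = np.abs(pseudo_ct - reference_ct)
    if mask is not None:
        diff = diff[np.asarray(mask, dtype=bool)]
    return diff.mean()

if __name__ == "__main__":
    # Synthetic volumes stand in for real co-registered scans.
    rng = np.random.default_rng(0)
    reference = rng.uniform(-1000, 1500, size=(32, 32, 32))
    pseudo = reference + rng.normal(0, 80, size=reference.shape)
    print(f"MAE: {pseudo_ct_mae(pseudo, reference):.1f} HU")
```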

    Development of registration methods for cardiovascular anatomy and function using advanced 3T MRI, 320-slice CT and PET imaging

    Different medical imaging modalities provide complementary anatomical and functional information. One increasingly important use of such information is in the clinical management of cardiovascular disease. Multi-modality data are helping to improve diagnostic accuracy and individualize treatment. The Clinical Research Imaging Centre at the University of Edinburgh has been involved in a number of cardiovascular clinical trials using longitudinal computed tomography (CT) and multi-parametric magnetic resonance (MR) imaging. The critical image processing technique that combines the information from all these different datasets is known as image registration, which is the topic of this thesis. Image registration, especially multi-modality and multi-parametric registration, remains a challenging field in medical image analysis. The new registration methods described in this work were all developed in response to genuine challenges in ongoing clinical studies and have been evaluated using data from these studies. In order to gain an insight into the building blocks of image registration methods, the thesis begins with a comprehensive literature review of state-of-the-art algorithms. This is followed by a description of the first registration method I developed to help track inflammation in abdominal aortic aneurysms. It registers multi-modality and multi-parametric images, with new contrast agents. The registration framework uses a semi-automatically generated region of interest around the aorta, and the aorta is aligned based on a combination of the centres of the regions of interest and intensity matching. The method achieved sub-voxel accuracy. The second clinical study involved cardiac data. The first framework failed to register many of these datasets because the cardiac data suffer from a common artefact of magnetic resonance images, namely intensity inhomogeneity. I therefore developed a new preprocessing technique that corrects the artefacts in the functional data using data from the anatomical scans. The registration framework, with this preprocessing step and a new particle swarm optimizer, achieved significantly improved registration results on the cardiac data, and was validated quantitatively using neuroimages from a clinical study of neonates. Although the new framework achieved accurate results on average, premature convergence of the optimizer remained a common problem when processing data corrupted by severe artefacts and noise. To overcome this, I developed a new optimization method that achieves more robust convergence by encoding prior knowledge of registration. The registration results from this new registration-oriented optimizer are more accurate than those from other general-purpose particle swarm optimization methods commonly applied to registration problems. In summary, this thesis describes a series of novel developments to an image registration framework, aimed at improving accuracy, robustness and speed. The resulting registration framework was applied to, and validated by, different types of images taken from several ongoing clinical trials. In the future, this framework could be extended to include more diverse transformation models, aided by new machine learning techniques. It may also be applied to the registration of other types and modalities of imaging data.
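    The thesis combines a registration framework with a particle swarm optimizer. The sketch below is a generic, minimal particle swarm search over a 2D translation that minimizes a sum-of-squared-differences cost; the toy images, cost function, and hyperparameters are illustrative assumptions and not the registration-oriented optimizer developed in the thesis:

```python
import numpy as np
from scipy import ndimage

def ssd_cost(fixed, moving, translation):
    """Sum of squared differences after shifting the moving image."""
    shifted = ndimage.shift(moving, translation, order=1, mode="nearest")
    return float(np.sum((fixed - shifted) ** 2))

def pso_register(fixed, moving, n_particles=20, n_iters=40, seed=0):
    """Estimate a 2D translation with a basic global-best particle swarm."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-10, 10, size=(n_particles, 2))   # candidate translations
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_cost = np.array([ssd_cost(fixed, moving, p) for p in pos])
    gbest = pbest[np.argmin(pbest_cost)].copy()
    w, c1, c2 = 0.7, 1.5, 1.5                            # inertia and acceleration weights
    for _ in range(n_iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        cost = np.array([ssd_cost(fixed, moving, p) for p in pos])
        improved = cost < pbest_cost
        pbest[improved], pbest_cost[improved] = pos[improved], cost[improved]
        gbest = pbest[np.argmin(pbest_cost)].copy()
    return gbest

if __name__ == "__main__":
    fixed = np.zeros((64, 64)); fixed[20:40, 25:45] = 1.0
    moving = ndimage.shift(fixed, (-3.0, 4.0), order=1)  # known misalignment
    print("estimated translation:", pso_register(fixed, moving))  # ~ (3, -4)
```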

    Multi-Modality Imaging: A Software Fusion and Image-Guided Therapy Perspective

    With the introduction of computers into medical imaging, popularized by the presentation of Hounsfield's ground-breaking work in 1971, numerical image reconstruction and analysis became a vital part of medical imaging research. While mathematical aspects of reconstruction dominated research in the beginning, a growing body of literature attests to the progress made over the past 30 years in image fusion. This article describes the historical development of non-deformable software-based image co-registration and its role in the context of hybrid imaging, and provides an outlook on future developments.

    Improvements in the registration of multimodal medical imaging : application to intensity inhomogeneity and partial volume corrections

    Alignment or registration of medical images plays a relevant role in clinical diagnostic and treatment decisions as well as in research settings. With the advent of new technologies for multimodal imaging, robust registration of functional and anatomical information is still a challenge, particularly in small-animal imaging, where certain anatomical parts, such as the brain, have less structural content than in humans. In addition, patient-dependent and acquisition artefacts that degrade the information content of the images further complicate registration, as is the case for the intensity inhomogeneities (IIH) seen in MRI and the partial volume effect (PVE) associated with PET imaging. Reference methods exist for accurate image registration, but their performance deteriorates severely in situations involving little image overlap. While several approaches to IIH and PVE correction exist, these methods either do not guarantee robust registration or rely on it being available. This thesis focuses on overcoming current limitations of registration to enable novel IIH and PVE correction methods.
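    Intensity inhomogeneity correction in MRI is commonly performed with N4 bias field correction. The following is a minimal sketch using SimpleITK's N4 filter (the file names, mask choice, and iteration settings are placeholder assumptions, and this standard filter stands in for, rather than reproduces, the registration-driven corrections developed in the thesis):

```python
import SimpleITK as sitk

# Load an MR volume (placeholder path) and build a rough foreground mask.
image = sitk.ReadImage("mr_volume.nii.gz", sitk.sitkFloat32)
mask = sitk.OtsuThreshold(image, 0, 1, 200)

# N4 estimates a smooth multiplicative bias field and divides it out.
corrector = sitk.N4BiasFieldCorrectionImageFilter()
corrector.SetMaximumNumberOfIterations([50] * 4)  # 50 iterations per fitting level
corrected = corrector.Execute(image, mask)

sitk.WriteImage(corrected, "mr_volume_n4.nii.gz")
```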

    Feature-driven Volume Visualization of Medical Imaging Data

    Direct volume rendering (DVR) is a volume visualization technique that has proven to be a very powerful tool in many scientific visualization domains. Diagnostic medical imaging is one such domain, in which DVR provides new capabilities for the analysis of complex cases and improves the efficiency of image interpretation workflows. However, the full potential of DVR in the medical domain has not yet been realized. A major obstacle to better integration of DVR in the medical domain is the time-consuming process of optimizing the rendering parameters needed to generate diagnostically relevant visualizations in which the important features hidden in image volumes are clearly displayed, such as the shape and spatial localization of tumors, their relationship with adjacent structures, and temporal changes in the tumors. In current workflows, clinicians must manually specify the transfer function (TF), viewpoint (camera), clipping planes, and other visual parameters. Another obstacle to the adoption of DVR in the medical domain is the ever-increasing volume of imaging data. Advances in image acquisition techniques have led to a rapid expansion in the size of the data, in the form of higher resolutions, temporal acquisitions to track treatment response over time, and an increase in the number of imaging modalities used for a single procedure. Manual specification of the rendering parameters under these circumstances is very challenging. This thesis proposes a set of innovative methods that visualize important features in multi-dimensional and multi-modality medical images by automatically or semi-automatically optimizing the rendering parameters. Our methods enable visualizations necessary for the diagnostic procedure, in which a 2D slice of interest (SOI) can be augmented with 3D anatomical contextual information to provide accurate spatial localization of 2D features in the SOI; the rendering parameters are automatically computed to guarantee the visibility of 3D features; and changes in 3D features can be tracked in temporal data under the constraint of consistent contextual information. We also present a method for the efficient computation of visibility histograms (VHs) using adaptive binning, which allows our optimal DVR to be automated and visualized in real time. We evaluated our methods by producing visualizations for a variety of clinically relevant scenarios and imaging data sets, and we examined the computational performance of our methods for these scenarios.
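    A visibility histogram weights each voxel's contribution by its opacity and by the transmittance accumulated in front of it along the viewing ray. Below is a minimal sketch of that computation, assuming parallel rays along one volume axis and a user-supplied opacity transfer function; it illustrates the basic idea rather than the adaptive-binning method proposed in the thesis:

```python
import numpy as np

def visibility_histogram(volume, opacity_tf, n_bins=64, axis=0):
    """Visibility histogram of a scalar volume for a given opacity transfer function.

    Rays are cast along `axis` in front-to-back order; each voxel contributes
    its opacity weighted by the transmittance accumulated in front of it, and
    the contributions are binned by voxel intensity.
    """
    vol = np.moveaxis(np.asarray(volume, dtype=np.float64), axis, 0)
    alpha = np.clip(opacity_tf(vol), 0.0, 1.0)

    # Map intensities to histogram bins.
    lo, hi = vol.min(), vol.max()
    bins = np.minimum((n_bins * (vol - lo) / (hi - lo + 1e-12)).astype(int), n_bins - 1)

    hist = np.zeros(n_bins)
    transmittance = np.ones(vol.shape[1:])       # per-ray transparency so far
    for depth in range(vol.shape[0]):            # front-to-back compositing
        visibility = transmittance * alpha[depth]
        np.add.at(hist, bins[depth], visibility)
        transmittance *= 1.0 - alpha[depth]
    return hist / max(hist.sum(), 1e-12)         # normalised histogram

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    vol = rng.random((32, 32, 32))
    tf = lambda v: np.where(v > 0.7, 0.4, 0.02)  # simple step opacity TF
    print(visibility_histogram(vol, tf)[:8])
```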

    Debris Tracking In A Semistable Background

    Object tracking plays a pivotal role in many computer vision applications, such as video surveillance, human gesture recognition, and object-based video compression standards such as MPEG-4. Automatic detection of a moving object and tracking of its motion have always been important topics in computer vision and robotics. This thesis deals with the problem of detecting the presence of debris or other unexpected objects in footage obtained during spacecraft launches, which poses a challenge because of the non-stationary background. When the background is stationary, moving objects can be detected by frame differencing; the background therefore needs to be stabilized before any moving object in the scene can be tracked. Two problems are considered here, both using footage from a Space Shuttle launch with the objective of tracking any debris falling from the Shuttle. The proposed method registers two consecutive frames using FFT-based image registration, in which the transformation parameters (translation, rotation) are computed automatically. This information is then passed to a Kalman filtering stage, which produces a mask image used to find high-intensity areas of potential interest.
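    FFT-based registration of consecutive frames can be illustrated with phase correlation, which recovers a pure translation from the peak of the inverse-transformed cross-power spectrum. The sketch below handles translation only (rotation estimation and the Kalman filtering stage are omitted), and the synthetic frames are illustrative assumptions:

```python
import numpy as np

def phase_correlation(frame_a, frame_b):
    """Estimate the integer translation of frame_b relative to frame_a.

    The normalised cross-power spectrum of the two frames has an inverse
    FFT that peaks at the translation offset (FFT-based registration).
    """
    Fa = np.fft.fft2(frame_a)
    Fb = np.fft.fft2(frame_b)
    cross_power = np.conj(Fa) * Fb
    cross_power /= np.abs(cross_power) + 1e-12
    correlation = np.abs(np.fft.ifft2(cross_power))
    peak = np.unravel_index(np.argmax(correlation), correlation.shape)
    # Peaks past the midpoint correspond to negative shifts (wrap-around).
    shifts = [p - s if p > s // 2 else p for p, s in zip(peak, correlation.shape)]
    return tuple(shifts)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    frame_a = rng.random((128, 128))
    frame_b = np.roll(frame_a, shift=(5, -7), axis=(0, 1))  # known circular shift
    print(phase_correlation(frame_a, frame_b))               # expected (5, -7)
```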