
    Automated Deformable Mapping Methods to Relate Corresponding Lesions in 3D X-ray and 3D Ultrasound Breast Images

    Mammography is the current standard imaging method for detecting breast cancer, using x-rays to produce 2D images of the breast. However, with mammography alone it is difficult to determine whether a lesion is benign or malignant, and sensitivity to lesions in dense breasts is reduced. Ultrasound imaging used in conjunction with mammography has proven valuable for lesion characterization by differentiating between solid and cystic lesions. Conventional breast ultrasound has high false-positive rates; however, it has shown improved ability to detect lesions in dense breasts. Breast ultrasound is typically performed freehand to produce anterior-to-posterior 2D images in a different geometry (supine) than mammography (upright). This difference in geometries is likely responsible for the finding that at least 10% of the time lesions found in ultrasound images do not correspond with lesions found in mammograms. To solve this problem, additional imaging techniques must be investigated to aid a radiologist in identifying corresponding lesions in the two modalities and ensure early detection of a potential cancer. This dissertation describes and validates automated deformable mapping methods to register and relate corresponding lesions between multi-modality images acquired using 3D mammography (Digital Breast Tomosynthesis (DBT) and dedicated breast Computed Tomography (bCT)) and 3D ultrasound (Automated Breast Ultrasound (ABUS)). The methodology uses finite element modeling and analysis to simulate the differences in compression and breast orientation, and thereby better align lesions imaged by these modalities. Preliminary studies were performed using several multimodality compressible breast phantoms to determine breast lesion registrations between: i) cranio-caudal (CC) and mediolateral oblique (MLO) DBT views and ABUS, ii) simulated bCT and DBT (CC and MLO views), and iii) simulated bCT and ABUS.
Distances between the centers of mass, dCOM, of corresponding lesions were used to assess the deformable mapping method. These phantom studies showed the potential to apply this technique to real breast lesions, with mean dCOM registration values as low as 4.9 ± 2.4 mm for DBT (CC view) mapped to ABUS, 9.3 ± 2.8 mm for DBT (MLO view) mapped to ABUS, 4.8 ± 2.4 mm for bCT mapped to ABUS, 5.0 ± 2.2 mm for bCT mapped to DBT (CC view), and 4.7 ± 2.5 mm for bCT mapped to DBT (MLO view). All of the phantom studies showed that using external fiducial markers helped improve the registration capability of the deformable mapping algorithm. An IRB-approved proof-of-concept study was performed with patient volunteers to validate the deformable registration method on 5 patient datasets, with up to 7 lesions in total, for DBT (CC and MLO views) to ABUS registration. Resulting dCOMs using the deformable method showed statistically significant improvements over rigid registration techniques, with a mean dCOM of 11.6 ± 5.3 mm for DBT (CC view) mapped to ABUS and 12.3 ± 4.8 mm for DBT (MLO view) mapped to ABUS. The present work demonstrates the potential of deformable registration techniques to relate corresponding lesions in 3D x-ray and 3D ultrasound images. This methodology should improve a radiologist's characterization of breast lesions, which can reduce patient callbacks, misdiagnoses, additional patient dose, and unnecessary biopsies. Additionally, this technique can save a radiologist time in navigating 3D image volumes, and the one-to-one lesion correspondence between modalities can aid in the early detection of breast malignancies.
PhD, Nuclear Engineering & Radiological Sciences, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/150042/1/canngree_1.pd
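The dCOM evaluation metric described above is straightforward to compute from segmented lesion volumes. A minimal sketch, assuming binary lesion masks on a common voxel grid after registration (the function names and voxel sizes are illustrative, not the dissertation's code):

```python
import numpy as np

def center_of_mass(mask: np.ndarray) -> np.ndarray:
    """Center of mass (in voxel coordinates) of a binary lesion mask."""
    coords = np.argwhere(mask)
    return coords.mean(axis=0)

def dcom(mask_a: np.ndarray, mask_b: np.ndarray,
         voxel_size=(1.0, 1.0, 1.0)) -> float:
    """Euclidean distance (mm) between lesion centers of mass
    in two registered volumes."""
    delta = (center_of_mass(mask_a) - center_of_mass(mask_b)) \
            * np.asarray(voxel_size)
    return float(np.linalg.norm(delta))

# Two toy cubic "lesions" offset by 2 voxels along z (0.5 mm voxels)
a = np.zeros((8, 8, 8), dtype=bool); a[3:5, 3:5, 3:5] = True
b = np.zeros((8, 8, 8), dtype=bool); b[3:5, 3:5, 5:7] = True
print(dcom(a, b, voxel_size=(0.5, 0.5, 0.5)))  # → 1.0 (2 voxels × 0.5 mm)
```

A registration is then judged by how small the dCOM is across the lesion set, as in the phantom results quoted above.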

    Automatic correspondence between 2D and 3D images of the breast

    Radiologists often need to localise corresponding findings in different images of the breast, such as Magnetic Resonance Images and X-ray mammograms. However, this is a difficult task, as one is a volume and the other a projection image, and the appearance of breast tissue structure can vary significantly between them. Some breast regions are often obscured in an X-ray, due to its projective nature and the superimposition of normal glandular tissue. Automatically determining correspondences between the two modalities could assist radiologists in the detection, diagnosis and surgical planning of breast cancer. This thesis addresses the problems associated with the automatic alignment of 3D and 2D breast images and presents a generic framework for registration that uses the structures within the breast for alignment, rather than surrogates based on the breast outline or nipple position. The proposed algorithm can incorporate different types of transformation model in order to capture the breast deformation between modalities. The framework was validated on clinical MRI and X-ray mammography cases using both simple geometrical models, such as the affine, and more complex ones based on biomechanical simulations. The results showed that the proposed framework with the affine transformation model can provide clinically useful accuracy (13.1 mm when tested on 113 registration tasks). The biomechanical transformation models provided further improvement when applied to a smaller dataset. Our technique was also tested on determining corresponding findings in multiple X-ray images (i.e., temporal, or CC to MLO) for a given subject, using the 3D information provided by the MRI. Quantitative results showed that this approach outperforms the 2D transformation models typically used for this task. The results indicate that this pipeline has the potential to provide a clinically useful tool for radiologists.
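The core geometric difficulty described above (a 3D volume versus a 2D projection) can be sketched as a two-step mapping: a 3D transformation model accounts for deformation, then projection along the X-ray beam collapses the result to 2D. This is a minimal illustration of that idea with an affine model; the function name and orthographic projection are simplifying assumptions, not the thesis's actual pipeline:

```python
import numpy as np

def map_mri_point_to_mammogram(point_3d, affine_4x4, beam_axis=2):
    # Step 1: a 3-D transformation model (here a 4x4 homogeneous affine)
    # captures the deformation between MRI and mammographic geometries.
    p = np.asarray(affine_4x4) @ np.append(point_3d, 1.0)
    # Step 2: project along the X-ray beam axis, collapsing 3-D to 2-D.
    return np.delete(p[:3], beam_axis)

# With the identity transform, projection simply drops the z coordinate
A = np.eye(4)
print(map_mri_point_to_mammogram([12.0, 34.0, 56.0], A))  # → [12. 34.]
```

Registration then amounts to optimizing the transformation parameters so that projected MRI structures line up with the mammogram; richer (e.g. biomechanical) models replace the affine in step 1.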

    Information Fusion of Magnetic Resonance Images and Mammographic Scans for Improved Diagnostic Management of Breast Cancer

    Medical imaging is critical to non-invasive diagnosis and treatment of a wide spectrum of medical conditions. However, different modalities of medical imaging employ different contrast mechanisms and, consequently, provide different depictions of bodily anatomy. As a result, there is a frequent problem where the same pathology can be detected by one type of medical imaging while being missed by others. This problem brings forward the importance of developing image processing tools for integrating the information provided by different imaging modalities via the process of information fusion. One particularly important example of a clinical application of such tools is in the diagnostic management of breast cancer, which is a prevailing cause of cancer-related mortality in women. Currently, the diagnosis of breast cancer relies mainly on X-ray mammography and Magnetic Resonance Imaging (MRI), which are both important throughout different stages of detection, localization, and treatment of the disease. The sensitivity of mammography, however, is known to be limited in the case of relatively dense breasts, while contrast-enhanced MRI tends to yield frequent 'false alarms' due to its high sensitivity. Given this situation, it is critical to find reliable ways of fusing the mammography and MRI scans in order to improve the sensitivity of the former while boosting the specificity of the latter. Unfortunately, fusing the above types of medical images is known to be a difficult computational problem. Indeed, while MRI scans are usually volumetric (i.e., 3-D), digital mammograms are always planar (2-D). Moreover, mammograms are invariably acquired under the force of compression paddles, thus making the breast anatomy undergo sizeable deformations. In the case of MRI, on the other hand, the breast is rarely constrained and is imaged in a pendulous state.
Finally, X-ray mammography and MRI exploit two completely different physical mechanisms, which produce distinct diagnostic contrasts that are related in a non-trivial way. Under such conditions, the success of information fusion depends on one's ability to establish spatial correspondences between mammograms and their related MRI volumes in a cross-modal cross-dimensional (CMCD) setting in the presence of spatial deformations (+SD). Solving the problem of information fusion in the CMCD+SD setting is a very challenging analytical/computational problem, still in need of efficient solutions. In the literature, there is a lack of a generic and consistent solution to the problem of fusing mammograms and breast MRIs and using their complementary information. Most of the existing MRI-to-mammogram registration techniques are based on a biomechanical approach which builds a specific model for each patient to simulate the effect of mammographic compression. The biomechanical model is not optimal, as it ignores the common characteristics of breast deformation across different cases. Breast deformation is essentially the planarization of a 3-D volume between two paddles, which is common to all patients. Regardless of the size, shape, or internal configuration of the breast tissue, one can predict the major part of the deformation by considering only the geometry of the breast tissue. In contrast with complex standard methods relying on patient-specific biomechanical modeling, we developed a new and relatively simple approach to estimate the deformation and find the correspondences. We consider the total deformation to consist of two components: a large-magnitude global deformation due to mammographic compression and a residual deformation of relatively smaller amplitude. We propose a much simpler way of predicting the global deformation, which compares favorably to FEM in terms of accuracy.
The residual deformation, on the other hand, is recovered in a variational framework using an elastic transformation model. The proposed algorithm provides a computational pipeline that takes breast MRIs and mammograms as inputs and returns the spatial transformation that establishes the correspondences between them. This spatial transformation can be applied in different applications, e.g., producing 'MRI-enhanced' mammograms (capable of improving the quality of surgical care) and correlating between different types of mammograms. We investigate the performance of our proposed pipeline on the application of enhancing mammograms by means of MRIs, and we show improvements over the state of the art.
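The two-component decomposition described above can be sketched in a few lines. The volume-preserving scaling used here for the global component, and the toy residual field, are illustrative assumptions only; the thesis predicts the global component geometrically and recovers the residual variationally:

```python
import numpy as np

def global_compression(p, thickness_before=60.0, thickness_after=45.0):
    # Illustrative global component: volume-preserving planarization of
    # the breast between two paddles -- compress along z, stretch equally
    # in x-y so that the product of the scale factors is 1.
    s = thickness_after / thickness_before
    x, y, z = p
    return np.array([x / np.sqrt(s), y / np.sqrt(s), z * s])

def total_deformation(p, residual_field):
    # Total deformation = large global compression followed by a
    # small-amplitude residual displacement.
    q = global_compression(np.asarray(p, dtype=float))
    return q + residual_field(q)

# Hypothetical smooth, small-amplitude residual field
residual = lambda q: 0.01 * np.sin(q)

point_mri = [10.0, 10.0, 30.0]
point_mammo = total_deformation(point_mri, residual)
```

Because the global component dominates, a crude geometric predictor of it already accounts for most of the displacement, leaving only a small residual for the elastic (variational) stage to recover.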

    Infective/inflammatory disorders


    The radiological investigation of musculoskeletal tumours : chairperson's introduction


    Medical Image Registration: Statistical Models of Performance in Relation to the Statistical Characteristics of the Image Data

    For image-guided interventions, the imaging task often pertains to registering preoperative and intraoperative images within a common coordinate system. While the accuracy of the registration is directly tied to the accuracy of targeting in the intervention (and presumably the success of the medical outcome), there is relatively little quantitative understanding of the fundamental factors that govern image registration accuracy. A statistical framework is presented that relates models of image noise and spatial resolution to the task of registration, giving theoretical limits on registration accuracy and providing guidance for the selection of image acquisition and post-processing parameters. The framework is further shown to model the confounding influence of soft-tissue deformation in rigid image registration, accurately predicting the reduction in registration accuracy and revealing similarity metrics that are robust against such effects. Furthermore, the framework provides conceptual guidance in the development of a novel CT-to-radiograph registration method that accounts for deformation. The work also examines a learning-based method for deformable registration, investigating how the statistical characteristics of the training data affect the ability of the model to generalize to test data with differing statistical characteristics. The analysis provides insight into the benefits of statistically diverse training data for the generalizability of a neural network, and is further applied to the development of a learning-based MR-to-CT synthesis method. Overall, the work yields a quantitative approach to theoretically and experimentally relate the accuracy of image registration to the statistical characteristics of the image data, providing a rigorous guide to the development of new registration methods.
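The central premise above, that image noise limits achievable registration accuracy, can be illustrated with a toy 1-D experiment: estimate a known translation by maximizing cross-correlation and watch the error as noise grows. The signal model and noise levels are illustrative assumptions, not the dissertation's statistical framework:

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_shift(fixed, moving):
    """Estimate an integer translation of `moving` relative to `fixed`
    by maximizing their cross-correlation."""
    corr = np.correlate(moving - moving.mean(), fixed - fixed.mean(),
                        mode="full")
    return np.argmax(corr) - (len(fixed) - 1)

x = np.linspace(-5, 5, 201)
template = np.exp(-x**2)   # a smooth 1-D "image" feature
true_shift = 7

errors = []
for sigma in (0.01, 0.5):  # low vs high image noise
    fixed = template + rng.normal(0, sigma, template.size)
    moving = np.roll(template, true_shift) + rng.normal(0, sigma, template.size)
    errors.append(abs(estimate_shift(fixed, moving) - true_shift))
```

At low noise the estimator recovers the shift exactly; as noise approaches the signal amplitude, spurious correlation peaks degrade the estimate, mirroring how noise and resolution enter the framework's predicted registration-accuracy limits.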