4 research outputs found
MRI to X-ray mammography registration using a volume-preserving affine transformation.
X-ray mammography is routinely used in national screening programmes and as a clinical diagnostic tool. Magnetic Resonance Imaging (MRI) is commonly used as a complementary modality, providing functional information about the breast and a 3D image that can overcome ambiguities caused by the superimposition of fibro-glandular structures associated with X-ray imaging. Relating findings between these modalities is a challenging task, however, due to the different imaging processes involved and the large deformation that the breast undergoes. In this work we present a registration method, targeted at clinical use, to determine spatial correspondence between pairs of MR and X-ray images of the breast. We propose a generic registration framework that incorporates a volume-preserving affine transformation model and validate its performance using routinely acquired clinical data. Experiments on simulated mammograms from 8 volunteers produced a mean registration error of 3.8 ± 1.6 mm for a mean of 12 manually identified landmarks per volume. When validated using 57 lesions identified on routine clinical CC and MLO mammograms (n = 113 registration tasks) from 49 subjects, the median registration error was 13.1 mm. When applied to the registration of an MR image to the CC and MLO mammograms of a patient with a localisation clip, the mean error was 8.9 mm. The results indicate that an intensity-based registration algorithm, using a relatively simple transformation model, can provide radiologists with a clinically useful tool for breast cancer diagnosis.
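The key constraint in the abstract above is that the affine transformation preserves breast volume. A minimal NumPy sketch of that idea (our own illustration, not the authors' implementation): the linear part A of an affine map x → Ax + t scales volume by |det(A)|, so rescaling A by det(A)^(-1/3) enforces det(A) = 1.

```python
import numpy as np

def volume_preserving_affine(A, t):
    """Rescale the linear part of a 3D affine transform so that det(A) = 1,
    i.e. the mapped volume equals the original volume."""
    d = np.linalg.det(A)
    if d <= 0:
        raise ValueError("linear part must be non-singular and orientation-preserving")
    A_vp = A / d ** (1.0 / 3.0)  # det(A / d^(1/3)) = d / d = 1
    return A_vp, t

# Example: a hypothetical anisotropic scaling that would otherwise shrink the volume.
A = np.diag([0.5, 1.2, 0.9])        # det = 0.54, so volume would shrink by 46%
t = np.array([10.0, -5.0, 2.0])     # translation is unaffected by the constraint
A_vp, t_vp = volume_preserving_affine(A, t)
print(np.linalg.det(A_vp))          # ~1.0: volume preserved
```

In an intensity-based registration, the optimiser would search over the remaining degrees of freedom of A and t while this constraint is re-imposed at each step.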
Automatic correspondence between 2D and 3D images of the breast
Radiologists often need to localise corresponding findings in different images of the breast, such as Magnetic Resonance (MR) images and X-ray mammograms. However, this is a difficult task, as one is a volume and the other a projection image. In addition, the appearance of breast tissue structure can vary significantly between them. Some breast regions are often obscured in an X-ray, due to its projective nature and the superimposition of normal glandular tissue. Automatically determining correspondences between the two modalities could assist radiologists in the detection, diagnosis and surgical planning of breast cancer. This thesis addresses the problems associated with the automatic alignment of 3D and 2D breast images and presents a generic framework for registration that uses the structures within the breast for alignment, rather than surrogates based on the breast outline or nipple position. The proposed algorithm can incorporate different types of transformation models, in order to capture the breast deformation between modalities. The framework was validated on clinical MRI and X-ray mammography cases using both simple geometrical models, such as the affine, and more complex ones based on biomechanical simulations. The results showed that the proposed framework with the affine transformation model can provide clinically useful accuracy (13.1 mm when tested on 113 registration tasks). The biomechanical transformation models provided further improvement when applied to a smaller dataset. Our technique was also tested on determining corresponding findings in multiple X-ray images (i.e. temporal or CC to MLO) for a given subject, using the 3D information provided by the MRI. Quantitative results showed that this approach outperforms the 2D transformation models typically used for this task. The results indicate that this pipeline has the potential to provide a clinically useful tool for radiologists.
Automated Morphometric Characterization of the Cerebral Cortex for the Developing and Ageing Brain
Morphometric characterisation of the cerebral cortex can provide information about patterns of brain development and ageing and may be relevant for diagnosis and estimation of the progression of diseases such as Alzheimer's, Huntington's, and schizophrenia. Therefore, understanding and describing the differences between populations in terms of structural volume, shape and thickness is of critical importance. Methodologically, due to data quality, presence of noise, partial volume (PV) effects, limited resolution and pathological variability, the automated, robust and time-consistent estimation of morphometric features is still an unsolved problem. This thesis focuses on the development of tools for robust cross-sectional and longitudinal morphometric characterisation of the human cerebral cortex. It describes techniques for tissue segmentation, structural and morphometric characterisation, and cross-sectional and longitudinal cortical thickness estimation from serial MR images in both adults and neonates. Two new probabilistic brain tissue segmentation techniques are introduced in order to accurately and robustly segment the brain of elderly and neonatal subjects, even in the presence of marked pathology. Two other algorithms based on the concept of multi-atlas segmentation propagation and fusion are also introduced in order to parcellate the brain into its constituent structures with the highest possible segmentation accuracy. Finally, we explore the use of the Khalimsky cubic complex framework for the extraction of topologically correct thickness measurements from probabilistic segmentations without explicit parametrisation of the edge. A longitudinal extension of this method is also proposed. The work presented in this thesis has been extensively validated on elderly and neonatal data from several scanners, sequences and protocols. The proposed algorithms have also been successfully applied to breast and heart MRI, neck and colon CT, and small animal imaging.
All the algorithms presented in this thesis are available as part of the open-source package NiftySeg.
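The abstract's "multi-atlas segmentation propagation and fusion" can be illustrated with its simplest instance, majority-vote label fusion (a toy sketch of the general idea; NiftySeg itself implements more sophisticated fusion strategies such as STAPLE-like weighting). Here each atlas is assumed to have already been registered to the target, so its label map lives on the target grid.

```python
import numpy as np

def majority_vote(propagated_labels):
    """Fuse per-atlas label maps of shape (n_atlases, *image_shape)
    by taking, at every voxel, the label with the most votes."""
    labels = np.asarray(propagated_labels)
    n_classes = labels.max() + 1
    # Count votes per class at every voxel, then pick the most-voted class.
    votes = np.stack([(labels == c).sum(axis=0) for c in range(n_classes)])
    return votes.argmax(axis=0)

# Three hypothetical atlases voting on a 1-D "image" of 5 voxels.
atlases = [[0, 1, 1, 2, 2],
           [0, 1, 2, 2, 2],
           [1, 1, 1, 2, 0]]
print(majority_vote(atlases))  # [0 1 1 2 2]
```

The same code works unchanged on 3-D volumes, since the vote counting is vectorised over all trailing (spatial) axes.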
Information Fusion of Magnetic Resonance Images and Mammographic Scans for Improved Diagnostic Management of Breast Cancer
Medical imaging is critical to non-invasive diagnosis and treatment of a wide spectrum
of medical conditions. However, different modalities of medical imaging employ
different contrast mechanisms and, consequently, provide different depictions of bodily
anatomy. As a result, there is a frequent problem where the same pathology can be
detected by one type of medical imaging while being missed by others. This problem brings
forward the importance of the development of image processing tools for integrating the
information provided by different imaging modalities via the process of information fusion.
One particularly important example of clinical application of such tools is in the diagnostic
management of breast cancer, which is a prevailing cause of cancer-related mortality in
women. Currently, the diagnosis of breast cancer relies mainly on X-ray mammography and
Magnetic Resonance Imaging (MRI), which are both important throughout different stages
of detection, localization, and treatment of the disease. The sensitivity of mammography,
however, is known to be limited in the case of relatively dense breasts, while contrast-enhanced
MRI tends to yield frequent 'false alarms' due to its high sensitivity. Given this
situation, it is critical to find reliable ways of fusing the mammography and MRI scans in
order to improve the sensitivity of the former while boosting the specificity of the latter.
Unfortunately, fusing the above types of medical images is known to be a difficult computational
problem. Indeed, while MRI scans are usually volumetric (i.e., 3-D), digital
mammograms are always planar (2-D). Moreover, mammograms are invariably acquired
under the force of compression paddles, thus making the breast anatomy undergo sizeable
deformations. In the case of MRI, on the other hand, the breast is rarely constrained and
imaged in a pendulous state. Finally, X-ray mammography and MRI exploit two completely
different physical mechanisms, which produce distinct diagnostic contrasts that
are related in a non-trivial way. Under such conditions, the success of information fusion
depends on one's ability to establish spatial correspondences between mammograms
and their related MRI volumes in a cross-modal cross-dimensional (CMCD) setting in the
presence of spatial deformations (+SD). Solving the problem of information fusion in the
CMCD+SD setting is a very challenging analytical/computational problem, still in need
of efficient solutions.
In the literature, there is a lack of a generic and consistent solution to the problem of
fusing mammograms and breast MRIs and using their complementary information. Most
of the existing MRI to mammogram registration techniques are based on a biomechanical
approach which builds a specific model for each patient to simulate the effect of mammographic
compression. The biomechanical model is not optimal as it ignores the common
characteristics of breast deformation across different cases. Breast deformation is essentially
the planarization of a 3-D volume between two paddles, which is common to all
patients. Regardless of the size, shape, or internal configuration of the breast tissue, one
can predict the major part of the deformation by considering only the geometry of the
breast tissue. In contrast with complex standard methods relying on patient-specific biomechanical
modeling, we developed a new and relatively simple approach to estimate the
deformation and find the correspondences. We consider the total deformation to consist of
two components: a large-magnitude global deformation due to mammographic compression
and a residual deformation of relatively smaller amplitude. We propose a much simpler
way of predicting the global deformation, which compares favorably to finite element modeling (FEM) in terms of
its accuracy. The residual deformation, on the other hand, is recovered in a variational
framework using an elastic transformation model.
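The two-component decomposition described above can be sketched in a few lines (an illustration under our own simplifying assumptions, not the thesis code): a global, volume-preserving "planarization" that compresses along the paddle direction while expanding in-plane, followed by a small residual displacement of the kind a variational elastic registration would recover.

```python
import numpy as np

def global_compression(points, ratio):
    """Compress along y by `ratio` (< 1) and expand x/z isotropically
    so that total volume is preserved: (1/sqrt(r))^2 * r = 1."""
    s = 1.0 / np.sqrt(ratio)          # in-plane expansion factor
    scale = np.array([s, ratio, s])
    return points * scale

def apply_residual(points, residual_field):
    """Add the small-amplitude residual displacement (here a given array;
    in practice it would come from an elastic registration step)."""
    return points + residual_field

pts = np.array([[10.0, 40.0, 20.0]])       # a landmark in the MR volume (mm)
compressed = global_compression(pts, ratio=0.5)   # paddle halves the thickness
residual = np.array([[0.8, -0.3, 0.1]])    # hypothetical residual (mm)
warped = apply_residual(compressed, residual)
```

The point of the decomposition is that the large-magnitude first component is predicted geometrically, without patient-specific FEM simulation, leaving only the small residual to be estimated from image intensities.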
The proposed algorithm provides us with a computational pipeline that takes breast
MRIs and mammograms as inputs and returns the spatial transformation which establishes
the correspondences between them. This spatial transformation can be applied in different
applications, e.g., producing 'MRI-enhanced' mammograms (which can improve
the quality of surgical care) and correlating between different types of mammograms.
We investigate the performance of our proposed pipeline on the application of enhancing
mammograms by means of MRI, and we show improvements over the state of the
art.