Automated Deformable Mapping Methods to Relate Corresponding Lesions in 3D X-ray and 3D Ultrasound Breast Images
Mammography is the current standard imaging method for detecting breast cancer, using x-rays to produce 2D images of the breast. However, mammography alone has difficulty determining whether a lesion is benign or malignant, and its sensitivity for detecting lesions in dense breasts is reduced. Ultrasound imaging used in conjunction with mammography has made valuable contributions to lesion characterization by differentiating between solid and cystic lesions. Conventional breast ultrasound has high false positive rates; however, it has shown improved ability to detect lesions in dense breasts. Breast ultrasound is typically performed freehand to produce anterior-to-posterior 2D images in a different geometry (supine) than mammography (upright). This difference in geometries is likely responsible for the finding that, at least 10% of the time, lesions found in the ultrasound images do not correspond with lesions found in mammograms. To solve this problem, additional imaging techniques must be investigated to aid a radiologist in identifying corresponding lesions in the two modalities and ensure early detection of a potential cancer.
This dissertation describes and validates automated deformable mapping methods to register and relate corresponding lesions between multi-modality images acquired using 3D mammography (Digital Breast Tomosynthesis (DBT) and dedicated breast Computed Tomography (bCT)) and 3D ultrasound (Automated Breast Ultrasound (ABUS)). The methodology uses finite element modeling and analysis to simulate the differences in compression and breast orientation and thereby better align lesions acquired from images from these modalities. Preliminary studies were performed using several multimodality compressible breast phantoms to determine breast lesion registrations between: i) cranio-caudal (CC) and mediolateral oblique (MLO) DBT views and ABUS, ii) simulated bCT and DBT (CC and MLO views), and iii) simulated bCT and ABUS. Distances between the centers of mass (dCOM) of corresponding lesions were used to assess the deformable mapping method.
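The dCOM metric can be sketched in a few lines of code. The following is a minimal illustration, not the dissertation's implementation, assuming the corresponding lesion is available as a binary mask in each registered volume with known voxel spacing:

```python
import numpy as np

def d_com(mask_a, mask_b, voxel_size=(1.0, 1.0, 1.0)):
    """Distance between the centers of mass of two binary lesion masks (mm).

    mask_a, mask_b: 3D boolean arrays marking the same lesion in two
    registered image volumes; voxel_size: (z, y, x) spacing in mm.
    """
    spacing = np.asarray(voxel_size, dtype=float)
    com_a = np.argwhere(mask_a).mean(axis=0) * spacing
    com_b = np.argwhere(mask_b).mean(axis=0) * spacing
    return float(np.linalg.norm(com_a - com_b))

# Two toy "lesions" whose centers are 3 voxels apart along x (1 mm voxels):
a = np.zeros((10, 10, 10), dtype=bool); a[4:6, 4:6, 2:4] = True
b = np.zeros((10, 10, 10), dtype=bool); b[4:6, 4:6, 5:7] = True
print(d_com(a, b))  # 3.0
```

A smaller dCOM after registration indicates better alignment of the corresponding lesions.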
These phantom studies showed the potential to apply this technique to real breast lesions, with mean dCOM registration values as low as 4.9 ± 2.4 mm for DBT (CC view) mapped to ABUS, 9.3 ± 2.8 mm for DBT (MLO view) mapped to ABUS, 4.8 ± 2.4 mm for bCT mapped to ABUS, 5.0 ± 2.2 mm for bCT mapped to DBT (CC view), and 4.7 ± 2.5 mm for bCT mapped to DBT (MLO view). All of the phantom studies showed that using external fiducial markers helped improve the registration capability of the deformable mapping algorithm. An IRB-approved proof-of-concept study was performed with patient volunteers to validate the deformable registration method on 5 patient datasets with a total of up to 7 lesions for DBT (CC and MLO views) to ABUS registration. The resulting dCOMs using the deformable method showed statistically significant improvements over rigid registration techniques, with a mean dCOM of 11.6 ± 5.3 mm for DBT (CC view) mapped to ABUS and a mean dCOM of 12.3 ± 4.8 mm for DBT (MLO view) mapped to ABUS.
The present work demonstrates the potential for using deformable registration techniques to relate corresponding lesions in 3D x-ray and 3D ultrasound images. This methodology should improve a radiologist's characterization of breast lesions, which can reduce patient callbacks, misdiagnoses, additional patient dose and unnecessary biopsies. Additionally, this technique can save a radiologist time in navigating 3D image volumes, and the one-to-one lesion correspondence between modalities can aid in the early detection of breast malignancies.
PhD, Nuclear Engineering & Radiological Sciences, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/150042/1/canngree_1.pd
Breast Cancer Detection on Automated 3D Ultrasound with Co-localized 3D X-ray.
X-ray mammography is the gold standard for detecting breast cancer, while B-mode ultrasound is employed as its diagnostic complement. This dissertation aimed to acquire a high quality, high-resolution 3D automated ultrasound image of the entire breast at current diagnostic frequencies, in the same geometry as mammography and its 3D equivalent, digital breast tomosynthesis, and to extend and help test its utility with co-localization. The first objective of this work was to engineer solutions to overcome some challenges inherent in acquiring complete automated ultrasound of the breast and minimizing patient motion during scans. Automated whole-breast ultrasound that can be registered to x-ray imaging eliminates the uncertainty associated with hand-held ultrasound. More than 170 subjects were imaged using superior coupling agents tested during the course of this study. At least one radiologist rated the usefulness of x-ray and ultrasound co-localization as high in the majority of our study cases. The second objective was to accurately register tomosynthesis image volumes of the breast, making the detection of tissue growth and deformation over time a realistic possibility. It was found, for the first time to our knowledge, that whole breast digital tomosynthesis image volumes can be spatially registered with an error tolerance of 2 mm, which is 10% of the average size of cancers in a screening population. The third and final objective involved the registration and fusion of 3D ultrasound image volumes acquired from opposite sides of the breast in the mammographic geometry, a novel technique that improves the volumetric resolution of high frequency ultrasound but poses unique problems. To improve the accuracy and speed of registration, direction-dependent artifacts should be eliminated. Further, it is necessary to identify other regions, usually at greater depths, that contain little or misleading information.
Machine learning, principal component analysis and speckle reducing anisotropic diffusion were tested in this context. We showed that machine learning classifiers can identify regions of corrupted data accurately on a custom breast-mimicking phantom, and also that they can identify specific artifacts in vivo. Initial registrations of phantom image sets with many regions of artifacts removed provided robust results as compared to the original datasets.
Ph.D., Biomedical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/78947/1/sumedha_1.pd
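As a rough illustration of the diffusion-based despeckling mentioned above, the following is a simplified Perona-Malik-style sketch in 2D, not the full speckle reducing anisotropic diffusion (SRAD) algorithm tested in the dissertation; parameter values are arbitrary assumptions, and edges are treated as periodic for brevity:

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=0.1, step=0.2):
    """Edge-preserving smoothing: diffuse where gradients are small (noise),
    damp diffusion where gradients are large (true edges)."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # finite differences toward the four neighbours (periodic boundaries)
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # conductance: close to 1 for small differences, near 0 across edges
        c = lambda d: np.exp(-(d / kappa) ** 2)
        u += step * (c(dn) * dn + c(ds) * ds + c(de) * de + c(dw) * dw)
    return u

# On a noisy constant patch, diffusion should reduce the noise variance.
rng = np.random.default_rng(0)
noisy = 0.5 + 0.05 * rng.standard_normal((64, 64))
smoothed = anisotropic_diffusion(noisy)
print(noisy.var() > smoothed.var())  # True
```

SRAD proper adapts the conductance to the local speckle statistics of ultrasound rather than using a fixed kappa, but the edge-preserving diffusion mechanism is the same.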
Investigating the role of machine learning and deep learning techniques in medical image segmentation
This work originates from the growing interest of the medical imaging community in the application of
machine learning and, more recently, deep learning techniques to improve the accuracy of cancer screening. The thesis
is structured into two different tasks.
In the first part, magnetic resonance images were analysed in order to support clinical experts in the
treatment of patients with brain tumour metastases (BM). The main aim of this study was to
investigate whether BM segmentation may be approached successfully by two supervised ML classifiers,
belonging to feature-based and deep learning approaches, respectively. An SVM and a V-Net convolutional neural
network model were selected from the literature as representatives of the two approaches.
The second task of this thesis was the development of a deep learning study aimed at processing
and classifying lesions in mammograms with the use of slender neural networks. Mammography has a central
role in the screening and diagnosis of breast lesions. Deep convolutional neural networks have shown great
potential to address the issue of early detection of breast cancer with an acceptable level of accuracy and
reproducibility. A traditional convolutional network was compared with a novel one obtained by making use of
much more efficient depthwise separable convolution layers.
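The efficiency gain from depthwise separable convolutions can be illustrated with a simple parameter count; the layer sizes below are hypothetical, not taken from the thesis:

```python
def conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution layer (bias terms omitted)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """One k x k depthwise filter per input channel, then a 1x1 pointwise
    convolution that mixes channels."""
    return k * k * c_in + c_in * c_out

# Hypothetical layer: 3x3 kernels, 128 input channels, 256 output channels.
std = conv_params(3, 128, 256)                  # 294912
sep = depthwise_separable_params(3, 128, 256)   # 33920
print(std, sep, round(std / sep, 1))            # roughly 8.7x fewer weights
```

Factoring spatial filtering and channel mixing into separate steps is what makes such "slender" networks cheaper to train and run while retaining most of the representational power.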
As a final goal, to integrate the developed systems into clinical practice, all of the medical
imaging and pattern recognition algorithmic solutions for both fields studied have been integrated into a MATLAB® software
package.
PhD, Informatica e Matematica del Calcolo. Gonella, Gloria
Case series of breast fillers and how things may go wrong: radiology point of view
INTRODUCTION: Breast augmentation is a procedure chosen by women to address sagging
breasts due to breastfeeding or aging, as well as small breast size. Recent years have seen the
emergence of a variety of injectable materials on the market as breast fillers. These injectable
breast fillers have swiftly gained popularity among women, given the minimal
invasiveness of the procedure, which removes the need for daunting surgery. Many patients are unaware
that the procedure may pose detrimental complications, while visualization of breast
parenchyma infiltrated by these fillers is also substandard, posing diagnostic
challenges. We present a case series of three patients with a prior history of hyaluronic acid and
collagen breast injections.
REPORT: The first patient is a 37-year-old lady who presented to casualty with worsening
shortness of breath, non-productive cough and central chest pain, associated with fever and chills
of 2 weeks' duration. The second patient is a 34-year-old lady who complained of cough, fever
and haemoptysis, associated with shortness of breath of 1 week's duration. CT in these cases
revealed non-thrombotic, wedge-shaped, peripheral air-space densities.
The third patient is a 37-year-old female with right breast pain, swelling and redness of 2
weeks' duration. A previous collagen breast injection performed 1 year earlier had impeded
sonographic visualization of the breast parenchyma. MRI of the breasts showed multiple non-
enhancing round and oval shaped lesions exhibiting fat intensity.
CONCLUSION: Radiologists should be familiar with the potential risks and hazards, as well
as the limitations of imaging posed by breast fillers, for which MRI may be required as a problem-solving
tool.
Characterization of alar ligament on 3.0T MRI: a cross-sectional study in IIUM Medical Centre, Kuantan
INTRODUCTION: The main purpose of the study is to compare the normal anatomy of the alar
ligament on MRI between males and females. The specific objectives are to assess the prevalence
of the alar ligament visualized on MRI, to describe its characteristics in terms of its course, shape and
signal homogeneity, and to find differences in alar ligament signal intensity between males and
females. This study also aims to determine the association between the heights of respondents
and alar ligament signal intensity and dimensions.
MATERIALS & METHODS: 50 healthy volunteers were studied on a 3.0T Siemens Magnetom
Spectra MR scanner using 2-mm proton density, T2 and fat-suppression sequences. The alar
ligament was depicted in 3 planes, and the visualization and variability of the ligament's courses,
shapes and signal intensity characteristics were determined. The alar ligament dimensions were
also measured.
RESULTS: The alar ligament was best depicted in the coronal plane, followed by the sagittal and axial
planes. The orientations were laterally ascending in most of the subjects (60%), the shape was predominantly
oval (54%), and 67% showed inhomogeneous signal. There was no significant difference in alar
ligament signal intensity between male and female respondents. No significant association was
found between the heights of the respondents and alar ligament signal intensity and dimensions.
CONCLUSION: Employing a 3.0T MR scanner, the alar ligament is best portrayed in the coronal
plane, followed by the sagittal and axial planes. However, the tremendous variability of the alar ligament
depicted in our data shows that caution needs to be exercised when evaluating the alar ligament,
especially in circumstances of injury.
Augmented Reality and Intraoperative C-arm Cone-Beam Computed Tomography for Image-Guided Robotic Surgery
Minimally-invasive robotic-assisted surgery is a rapidly-growing alternative to traditional open and laparoscopic procedures; nevertheless, challenges remain. The standard of care derives surgical strategies from preoperative volumetric data (i.e., computed tomography (CT) and magnetic resonance (MR) images) that benefit from the ability of multiple modalities to delineate different anatomical boundaries. However, preoperative images may not reflect a possibly highly deformed perioperative setup or intraoperative deformation. Additionally, in current clinical practice, matching preoperative plans to the surgical scene is conducted as a mental exercise; thus, the accuracy of this practice is highly dependent on the surgeon's experience and therefore subject to inconsistencies.
In order to address these fundamental limitations in minimally-invasive robotic surgery, this dissertation combines a high-end robotic C-arm imaging system and a modern robotic surgical platform into an integrated intraoperative image-guided system. We performed deformable registration of preoperative plans to a perioperative cone-beam computed tomography (CBCT) image, acquired after the patient is positioned for intervention. From the registered surgical plans, we overlaid critical information onto the primary intraoperative visual source, the robotic endoscope, by using augmented reality. Guidance afforded by this system not only uses augmented reality to fuse virtual medical information, but also provides tool localization and other dynamically updated intraoperative behavior in order to present enhanced depth feedback and information to the surgeon. These techniques in guided robotic surgery required a streamlined approach to creating intuitive and effective human-machine interfaces, especially in visualization.
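Overlaying registered plan geometry onto the endoscope view ultimately reduces to projecting 3D points through a calibrated camera model. The following is a generic pinhole-projection sketch with illustrative values, not the system's actual calibration or rendering pipeline:

```python
import numpy as np

def project_points(pts_world, K, R, t):
    """Project 3D points (e.g., from a CBCT-registered plan) into endoscope
    pixel coordinates with a pinhole camera model.

    pts_world: (N, 3) points; K: 3x3 camera intrinsics;
    R, t: rotation and translation taking world points into the camera frame.
    """
    cam = pts_world @ R.T + t          # world frame -> camera frame
    uvw = cam @ K.T                    # apply intrinsics (homogeneous)
    return uvw[:, :2] / uvw[:, 2:3]    # perspective divide -> pixels

# Illustrative intrinsics: 800-pixel focal length, principal point (320, 240).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R, t = np.eye(3), np.zeros(3)
pts = np.array([[0.0, 0.0, 100.0]])    # a point 100 mm along the optical axis
print(project_points(pts, K, R, t))    # [[320. 240.]]
```

In a real system, K comes from endoscope calibration and (R, t) from the CBCT-to-camera registration; lens distortion correction would be applied before the overlay is drawn.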
Our software design principles create an inherently information-driven modular architecture incorporating robotics and intraoperative imaging through augmented reality. The system's performance is evaluated using phantoms and preclinical in-vivo experiments for multiple applications, including transoral robotic surgery, robot-assisted thoracic interventions, and cochleostomy for cochlear implantation. The resulting functionality, proposed architecture, and implemented methodologies can be further generalized to other C-arm-based image guidance systems for additional extensions in robotic surgery.