Breast Cancer Detection on Automated 3D Ultrasound with Co-localized 3D X-ray

Abstract

X-ray mammography is the gold standard for detecting breast cancer, while B-mode ultrasound serves as its diagnostic complement. This dissertation aimed to acquire high-quality, high-resolution 3D automated ultrasound images of the entire breast at current diagnostic frequencies, in the same geometry as mammography and its 3D equivalent, digital breast tomosynthesis, and to extend and help test their utility through co-localization.

The first objective of this work was to engineer solutions to the challenges inherent in acquiring complete automated ultrasound of the breast while minimizing patient motion during scans. Automated whole-breast ultrasound that can be registered to X-ray imaging eliminates the spatial uncertainty associated with hand-held ultrasound. More than 170 subjects were imaged using superior coupling agents tested during the course of this study. At least one radiologist rated the usefulness of X-ray and ultrasound co-localization as high in the majority of our study cases.

The second objective was to accurately register tomosynthesis image volumes of the breast, making the detection of tissue growth and deformation over time a realistic possibility. It was found, for the first time to our knowledge, that whole-breast digital tomosynthesis image volumes can be spatially registered with an error tolerance of 2 mm, which is 10% of the average size of cancers in a screening population.

The third and final objective involved the registration and fusion of 3D ultrasound image volumes acquired from opposite sides of the breast in the mammographic geometry, a novel technique that improves the volumetric resolution of high-frequency ultrasound but poses unique problems. To improve the accuracy and speed of registration, direction-dependent artifacts should be eliminated. Further, it is necessary to identify other regions, usually at greater depths, that contain little or misleading information.
Machine learning, principal component analysis, and speckle reducing anisotropic diffusion were tested in this context. We showed that machine learning classifiers can accurately identify regions of corrupted data on a custom breast-mimicking phantom, and that they can also identify specific artifacts in vivo. Initial registrations of phantom image sets with many artifact regions removed yielded more robust results than registrations of the original datasets.

PhD
Biomedical Engineering
University of Michigan, Horace H. Rackham School of Graduate Studies
http://deepblue.lib.umich.edu/bitstream/2027.42/78947/1/sumedha_1.pd
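For readers unfamiliar with speckle reducing anisotropic diffusion (SRAD, Yu and Acton), the sketch below shows one simplified 2D iteration of the filter in NumPy. It is an illustrative approximation, not the dissertation's implementation: the discrete scheme, the speckle-scale parameter `q0`, and the time step `dt` are assumptions chosen for clarity, and a practical pipeline would estimate `q0` from a homogeneous speckle region at each iteration.

```python
import numpy as np

def srad_step(img, q0, dt=0.05):
    """One simplified iteration of speckle reducing anisotropic diffusion.

    img : 2-D float array of positive intensities (e.g. envelope-detected
          ultrasound); q0 : speckle scale in a homogeneous region.
    """
    # Neighbour images with replicated (clamped) borders.
    iN = np.roll(img, 1, axis=0); iN[0, :] = img[0, :]
    iS = np.roll(img, -1, axis=0); iS[-1, :] = img[-1, :]
    iW = np.roll(img, 1, axis=1); iW[:, 0] = img[:, 0]
    iE = np.roll(img, -1, axis=1); iE[:, -1] = img[:, -1]

    dN, dS = iN - img, iS - img
    dW, dE = iW - img, iE - img

    # Discrete approximation of the instantaneous coefficient of
    # variation q^2: high at edges, low in homogeneous speckle.
    g2 = (dN**2 + dS**2 + dW**2 + dE**2) / (img**2 + 1e-12)
    lap = (dN + dS + dW + dE) / (img + 1e-12)
    q2 = (0.5 * g2 - (lap / 4.0) ** 2) / (1.0 + lap / 4.0) ** 2

    # Diffusion coefficient: ~1 where q^2 ~ q0^2 (pure speckle,
    # smooth strongly), ~0 where q^2 >> q0^2 (edges, preserve).
    c = 1.0 / (1.0 + (q2 - q0**2) / (q0**2 * (1.0 + q0**2) + 1e-12))
    c = np.clip(c, 0.0, 1.0)

    # Divergence of c * grad(img), with c taken at the south/east
    # neighbours for the S/E differences (Yu & Acton-style scheme).
    cS = np.roll(c, -1, axis=0); cS[-1, :] = c[-1, :]
    cE = np.roll(c, -1, axis=1); cE[:, -1] = c[:, -1]
    div = c * dN + cS * dS + c * dW + cE * dE
    return img + (dt / 4.0) * div
```

Applied repeatedly to a speckled image, this update suppresses speckle variance in homogeneous regions while leaving strong boundaries largely intact, which is why SRAD is a natural preprocessing step before intensity-based registration of ultrasound volumes.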

Deep Blue Documents at the University of Michigan

Last time updated on 25/05/2012
