
    Semiautomated Multimodal Breast Image Registration

    Consideration of information from multiple modalities has been shown to increase diagnostic power in breast imaging. As a result, new techniques such as microwave imaging continue to be developed. Interpreting these novel image modalities is challenging, requiring comparison to established techniques such as the gold-standard X-ray mammography. However, because breast tissue is highly deformable, comparing 3D and 2D modalities is difficult. To enable this comparison, a registration technique was developed to map features from 2D mammograms to locations in the 3D image space. The technique was developed and tested using magnetic resonance (MR) images as the reference 3D modality, as MR breast imaging is an established technique in clinical practice. The algorithm was validated using a numerical phantom and then successfully tested on twenty-four image pairs. Dice's coefficient was used to measure the external goodness of fit, giving an excellent overall average of 0.94. Internal agreement was evaluated by examining internal features in consultation with a radiologist; this subjective assessment concluded that reasonable alignment was achieved.
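The Dice score reported above can be computed directly from two binary masks of the breast outline. A minimal sketch (the function name and example masks are illustrative, not from the paper):

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice's coefficient 2|A ∩ B| / (|A| + |B|) between two binary masks."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom
```

A value of 1.0 means the registered outline overlaps the target exactly, so an average of 0.94 indicates near-complete external overlap.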

    Automatic correspondence between 2D and 3D images of the breast

    Radiologists often need to localise corresponding findings in different images of the breast, such as magnetic resonance (MR) images and X-ray mammograms. However, this is a difficult task, as one is a volume and the other a projection image. In addition, the appearance of breast tissue structure can vary significantly between them. Some breast regions are often obscured in an X-ray, due to its projective nature and the superimposition of normal glandular tissue. Automatically determining correspondences between the two modalities could assist radiologists in the detection, diagnosis and surgical planning of breast cancer. This thesis addresses the problems associated with the automatic alignment of 3D and 2D breast images and presents a generic framework for registration that uses the structures within the breast for alignment, rather than surrogates based on the breast outline or nipple position. The proposed algorithm can adapt to incorporate different types of transformation model, in order to capture the breast deformation between modalities. The framework was validated on clinical MRI and X-ray mammography cases using both simple geometrical models, such as the affine, and also more complex ones based on biomechanical simulations. The results showed that the proposed framework with the affine transformation model can provide clinically useful accuracy (13.1 mm when tested on 113 registration tasks). The biomechanical transformation models provided further improvement when applied to a smaller dataset. Our technique was also tested on determining corresponding findings in multiple X-ray images (i.e. temporal, or CC to MLO) for a given subject using the 3D information provided by the MRI. Quantitative results showed that this approach outperforms the 2D transformation models that are typically used for this task. The results indicate that this pipeline has the potential to provide a clinically useful tool for radiologists.
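As a toy illustration of the volume-to-projection mapping discussed above, one can compose an affine transformation, the simplest model the thesis considers, with an orthographic projection. The projection model and all names here are illustrative assumptions; the real X-ray geometry is perspective and the thesis's parameterisation is not reproduced:

```python
import numpy as np

def map_3d_to_2d(points_3d, affine, projection_axis=2):
    """Apply a 4x4 homogeneous affine to 3D points, then project
    orthographically by dropping one axis (toy stand-in for X-ray geometry)."""
    pts = np.asarray(points_3d, dtype=float)
    homog = np.hstack([pts, np.ones((pts.shape[0], 1))])  # to homogeneous coords
    transformed = (affine @ homog.T).T[:, :3]             # affine-mapped 3D points
    keep = [i for i in range(3) if i != projection_axis]  # axes retained in 2D
    return transformed[:, keep]
```

With the identity affine this simply drops the projection axis; a full pipeline would optimise the affine parameters so that projected MRI structures match the mammogram.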

    Numerical methods for coupled reconstruction and registration in digital breast tomosynthesis.

    Digital Breast Tomosynthesis (DBT) provides an insight into the fine details of normal fibroglandular tissues and abnormal lesions by reconstructing a pseudo-3D image of the breast. In this respect, DBT overcomes a major limitation of conventional X-ray mammography by reducing the confounding effects caused by the superposition of breast tissue. In a breast cancer screening or diagnostic context, a radiologist is interested in detecting change, which might be indicative of malignant disease. To help automate this task, image registration is required to establish spatial correspondence between time points. Typically, images, such as MRI or CT, are first reconstructed and then registered. This approach can be effective if reconstructing using a complete set of data. However, for ill-posed, limited-angle problems such as DBT, estimating the deformation is complicated by the significant artefacts associated with the reconstruction, leading to severe inaccuracies in the registration. This paper presents a mathematical framework which couples the two tasks and jointly estimates both image intensities and the parameters of a transformation. Under this framework, we compare an iterative method and a simultaneous method, both of which tackle the problem of comparing DBT data by combining reconstruction of a pair of temporal volumes with their registration. We evaluate our methods using various computational digital phantoms, uncompressed breast MR images, and in-vivo DBT simulations. Firstly, we compare both iterative and simultaneous methods to the conventional, sequential method using an affine transformation model. We show that jointly estimating image intensities and parametric transformations gives superior results with respect to reconstruction fidelity and registration accuracy. Also, we incorporate a non-rigid B-spline transformation model into our simultaneous method. The results demonstrate a visually plausible recovery of the deformation with preservation of the reconstruction fidelity.
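The simultaneous method can be sketched on a toy problem: a single unknown intensity vector, a small linear operator A standing in for the DBT projection, and a circular shift standing in for the transformation. The shift model, grid search, and all names are illustrative assumptions, not the paper's parameterisation:

```python
import numpy as np

def simultaneous_solve(y1, y2, A, shifts):
    """Jointly estimate intensities x and a circular shift s minimising
    ||A x - y1||^2 + ||A R_s x - y2||^2, by solving the stacked
    least-squares system for each candidate shift and keeping the best."""
    n = A.shape[1]
    y = np.concatenate([y1, y2])
    best_cost, best_x, best_s = np.inf, None, None
    for s in shifts:
        R = np.roll(np.eye(n), s, axis=0)  # permutation: R @ x == np.roll(x, s)
        M = np.vstack([A, A @ R])          # stacked forward model for both times
        x = np.linalg.lstsq(M, y, rcond=None)[0]
        cost = np.linalg.norm(M @ x - y)
        if cost < best_cost:
            best_cost, best_x, best_s = cost, x, s
    return best_x, best_s
```

The point of the stacking is that both acquisitions constrain the same intensities, which is what couples reconstruction to registration; the paper optimises continuous transformation parameters rather than searching a grid.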

    A Survey on Deep Learning in Medical Image Analysis

    Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks and provide concise overviews of studies per application area. Open challenges and directions for future research are discussed.
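The convolutional networks this survey focuses on are built from one core operation. A minimal "valid" 2D cross-correlation (the operation deep learning frameworks call convolution), written here as a plain loop for clarity rather than efficiency:

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2D cross-correlation: no padding, stride 1.
    Output is (H - kh + 1) x (W - kw + 1)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.empty((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # dot product of the kernel with one image patch
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out
```

A convolutional layer applies many such kernels (with learned weights) followed by a nonlinearity; stacking layers yields the feature hierarchies the survey describes for classification, detection, and segmentation.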

    Numerical Approaches for Solving the Combined Reconstruction and Registration of Digital Breast Tomosynthesis

    Heavy demands on the development of medical imaging modalities for breast cancer detection have been witnessed over the last three decades, in an attempt to reduce the mortality associated with the disease. Recently, Digital Breast Tomosynthesis (DBT) has shown promise for early diagnosis, when lesions are still small. In particular, it offers potential benefits over X-ray mammography, the current modality of choice for breast screening, of increased sensitivity and specificity for comparable X-ray dose, speed, and cost. An important feature of DBT is that it provides a pseudo-3D image of the breast. This is of particular relevance for the heterogeneous dense breasts of young women, which can inhibit the detection of cancer using conventional mammography. In the same way that it is difficult to see a bird from the edge of the forest, detecting cancer in a conventional 2D mammogram is a challenging task. Three-dimensional DBT, however, enables us to step through the forest, i.e., the breast, reducing the confounding effect of superimposed tissue and so (potentially) increasing the sensitivity and specificity of cancer detection. The workflow in which DBT would be used clinically involves two key tasks: reconstruction, to generate a 3D image of the breast, and registration, to enable images from different visits to be compared, as is routinely performed by radiologists working with conventional mammograms. Conventional approaches proposed in the literature separate these steps, solving each task independently. This can be effective if reconstructing using a complete set of data. However, for ill-posed, limited-angle problems such as DBT, estimating the deformation is difficult because of the significant artefacts associated with DBT reconstructions, leading to severe inaccuracies in the registration. The aim of my work is to find and evaluate methods capable of coupling these two tasks, which will enhance the performance of each process as a result.
    Consequently, I demonstrate that the processes of reconstruction and registration in DBT are not independent but reciprocal. This thesis proposes innovative numerical approaches that combine the reconstruction of a pair of temporal DBT acquisitions with their registration, both iteratively and simultaneously. To evaluate the performance of my methods I use synthetic images, breast MRI, and DBT simulations with in-vivo breast compressions. I show that, compared to the conventional sequential method, jointly estimating image intensities and transformation parameters gives superior results with respect to both reconstruction fidelity and registration accuracy.
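The iterative variant of this idea can be sketched on a toy problem by alternating a registration update with a reconstruction update (a gradient step on the joint least-squares cost). The circular-shift transformation, the sequential-style initialisation, and all names are illustrative assumptions, not the thesis's actual models:

```python
import numpy as np

def iterative_solve(y1, y2, A, shifts, n_iter=5, lr=0.5):
    """Alternate registration (grid search over a circular shift) and
    reconstruction (gradient step on the joint least-squares cost)."""
    # initialise by reconstructing time point 1 alone, sequential-style
    x = np.linalg.lstsq(A, y1, rcond=None)[0]
    s = 0
    for _ in range(n_iter):
        # registration step: best shift for the current reconstruction
        s = min(shifts, key=lambda c: np.linalg.norm(A @ np.roll(x, c) - y2))
        # reconstruction step: gradient of
        # 0.5*||A x - y1||^2 + 0.5*||A R_s x - y2||^2 with respect to x
        g1 = A.T @ (A @ x - y1)
        g2 = np.roll(A.T @ (A @ np.roll(x, s) - y2), -s)
        x = x - lr * (g1 + g2)
    return x, s
```

The contrast with the simultaneous approach is that here the transformation and the intensities are updated in turn, each holding the other fixed, rather than being estimated in one joint solve.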

    An open environment CT-US fusion for tissue segmentation during interventional guidance.

    Therapeutic ultrasound (US) can be noninvasively focused to activate drugs, ablate tumors and deliver drugs beyond the blood brain barrier. However, well-controlled guidance of US therapy requires fusion with a navigational modality, such as magnetic resonance imaging (MRI) or X-ray computed tomography (CT). Here, we developed and validated tissue characterization using a fusion between US and CT. The performance of the CT/US fusion was quantified by the calibration error, target registration error and fiducial registration error. Met-1 tumors in the fat pads of 12 female FVB mice provided a model of developing breast cancer with which to evaluate CT-based tissue segmentation. Hounsfield units (HU) within the tumor and surrounding fat pad were quantified, validated with histology and segmented for parametric analysis (fat: -300 to 0 HU, protein-rich: 1 to 300 HU, and bone: HU > 300). Our open source CT/US fusion system differentiated soft tissue, bone and fat with a spatial accuracy of ∼1 mm. Region of interest (ROI) analysis of the tumor and surrounding fat pad using a 1 mm² ROI resulted in mean HU of 68±44 within the tumor and -97±52 within the fat pad adjacent to the tumor (p < 0.005). The tumor area measured by CT and histology was correlated (r² = 0.92), while the area designated as fat decreased with increasing tumor size (r² = 0.51). Analysis of CT and histology images of the tumor and surrounding fat pad revealed an average percentage of fat of 65.3% vs. 75.2%, 36.5% vs. 48.4%, and 31.6% vs. 38.5% for tumors <75 mm³, 75-150 mm³ and >150 mm³, respectively. Further, CT mapped bone-soft tissue interfaces near the acoustic beam during real-time imaging. Combined CT/US is a feasible method for guiding interventions by tracking the acoustic focus within a pre-acquired CT image volume and characterizing tissues proximal to and surrounding the acoustic focus.
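The parametric HU analysis described above maps directly onto a simple threshold segmentation. A minimal sketch using the thresholds quoted in the abstract (the function name and label encoding are illustrative):

```python
import numpy as np

def segment_hu(ct):
    """Label a CT volume by Hounsfield-unit ranges, per the study's
    thresholds: fat -300..0 HU, protein-rich 1..300 HU, bone > 300 HU.
    Voxels outside all ranges (e.g. air) are left as 0."""
    ct = np.asarray(ct)
    labels = np.zeros(ct.shape, dtype=np.uint8)
    labels[(ct >= -300) & (ct <= 0)] = 1   # fat
    labels[(ct >= 1) & (ct <= 300)] = 2    # protein-rich soft tissue
    labels[ct > 300] = 3                   # bone
    return labels
```

Per-tissue statistics such as the reported mean HU then reduce to masking the CT with one label and averaging.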

    Medical imaging analysis with artificial neural networks

    Given that neural networks have been widely reported in the medical imaging research community, we provide a focused literature survey on recent neural network developments in computer-aided diagnosis; in medical image segmentation and edge detection towards visual content analysis; and in medical image registration, including its pre-processing and post-processing. The aims are to increase awareness of how neural networks can be applied to these areas and to provide a foundation for further research and practical development. Representative techniques and algorithms are explained in detail to provide inspiring examples illustrating: (i) how a known neural network with fixed structure and training procedure could be applied to resolve a medical imaging problem; (ii) how medical images could be analysed, processed, and characterised by neural networks; and (iii) how neural networks could be expanded further to resolve problems relevant to medical imaging. In the concluding section, a highlight of comparisons among many neural network applications is included to provide a global view on computational intelligence with neural networks in medical imaging.