
    Automated segmentation of radiodense tissue in digitized mammograms using a constrained Neyman-Pearson classifier

    Breast cancer is the second leading cause of cancer-related mortality among American women. Mammography screening has emerged as a reliable non-invasive technique for early detection of breast cancer. The radiographic appearance of the female breast consists of radiolucent (dark) regions and radiodense (light) regions due to connective and epithelial tissue. It has been established that the percentage of radiodense tissue in a patient's breast can be used as a marker for predicting breast cancer risk. This thesis presents the design, development and validation of a novel automated algorithm for estimating the percentage of radiodense tissue in a digitized mammogram. The technique involves determining a dynamic threshold for segmenting radiodense indications in mammograms. Both the mammographic image and the threshold are modeled as Gaussian random variables, and a constrained Neyman-Pearson criterion has been developed for segmenting radiodense tissue. Promising results have been obtained using the proposed technique. Mammograms were obtained from an existing cohort of women enrolled in the Family Risk Analysis Program at Fox Chase Cancer Center (FCCC). The proposed technique has been validated against a set of ten images whose percentages of radiodense tissue were estimated by a trained radiologist using previously established methods. This work is intended to support a concurrent study at the FCCC exploring the association between dietary patterns and breast cancer risk.
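
    The thesis itself is not reproduced here, but the core idea of thresholding Gaussian-modeled intensities under a Neyman-Pearson criterion can be sketched. The snippet below is a minimal illustration rather than the author's algorithm: it assumes a flat array `breast_pixels` of intensities inside the segmented breast and a hypothetical false-alarm level `alpha`; the constrained formulation and the modeling of the threshold itself as a random variable are omitted.

```python
import numpy as np
from scipy.stats import norm

def neyman_pearson_threshold(breast_pixels, alpha=0.05):
    """Illustrative Neyman-Pearson-style threshold: model non-dense tissue
    intensity as Gaussian and cap the false-alarm rate at alpha."""
    mu, sigma = breast_pixels.mean(), breast_pixels.std()
    # Solve P(X > t | non-dense) = alpha  =>  t = mu + sigma * Phi^{-1}(1 - alpha)
    return mu + sigma * norm.ppf(1.0 - alpha)

def percent_radiodense(breast_pixels, alpha=0.05):
    """Fraction of breast pixels classified as radiodense, as a percentage."""
    t = neyman_pearson_threshold(breast_pixels, alpha)
    return 100.0 * np.mean(breast_pixels > t)
```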

    Semi-automated and fully automated mammographic density measurement and breast cancer risk prediction

    The task of breast density quantification is becoming increasingly relevant due to its association with breast cancer risk. In this work, a semi-automated and a fully automated tool to assess breast density from full-field digitized mammograms are presented. The first tool is based on a supervised interactive thresholding procedure for segmenting dense from fatty tissue and is used with a twofold goal: to assess mammographic density (MD) in a more objective and accurate way than via visual-based methods, and to label the mammograms that are later employed to train the fully automated tool. Although most automated methods rely on supervised approaches based on a global labeling of the mammogram, the proposed method relies on pixel-level labeling, allowing better tissue classification and density measurement on a continuous scale. The fully automated method combines a classification scheme based on local features with thresholding operations that improve the performance of the classifier. A dataset of 655 mammograms was used to test the concordance of both approaches in measuring MD. Three expert radiologists measured MD in each of the mammograms using the semi-automated tool (DM-Scan). MD was then measured by the fully automated system and the correlation between the two methods was computed. The relation between MD and breast cancer was then analyzed using a case-control dataset consisting of 230 mammograms. The Intraclass Correlation Coefficient (ICC) was used to compute reliability among raters and agreement between techniques. The results showed an average ICC of 0.922 among raters when using the semi-automated tool, while the average correlation between the semi-automated and automated measures was ICC = 0.838. In the case-control study, Odds Ratios (OR) of 1.38 and 1.50 per 10% increase in MD were obtained when using the semi-automated and fully automated approaches, respectively. It can therefore be concluded that the automated and semi-automated MD assessments present a good correlation. Both methods also found an association between MD and breast cancer risk, which supports the use of the proposed tools for breast cancer risk prediction and clinical decision making. A full version of DM-Scan is freely available. (C) 2014 Elsevier Ireland Ltd. All rights reserved. This work was supported by research grants from the Gent per Gent Fund (EDEMAC Project); Spain's Health Research Fund (Fondo de Investigacion Sanitaria) (PI060386 & FIS PS09/00790); Spanish MICINN grants TIN2009-14205-C04-02 and Consolider-Ingenio 2010: MIPRCV (CSD2007-00018); and the Spanish Federation of Breast Cancer Patients (Federacion Espanola de Cancer de Mama) (FECMA 485 EPY 1170-10). The English revision of this paper was funded by the Universitat Politecnica de Valencia, Spain. Llobet Azpitarte, R.; Pollán, M.; Antón Guirao, J.; Miranda-García, J.; Casals El Busto, M.; Martinez Gomez, I.; Ruiz Perales, F.... (2014). Semi-automated and fully automated mammographic density measurement and breast cancer risk prediction. Computer Methods and Programs in Biomedicine, 116(2), 105-115. https://doi.org/10.1016/j.cmpb.2014.01.021
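
    The two quantities the study reports, percent density for a given threshold and inter-rater reliability via ICC, can be illustrated with a short sketch. This is a simplified stand-in under stated assumptions: the DM-Scan interface is not reproduced, and a one-way ICC(1,1) is used for brevity, which may differ from the exact ICC variant used in the paper.

```python
import numpy as np

def percent_density(image, breast_mask, threshold):
    """MD (%) = pixels above the chosen threshold, relative to breast pixels."""
    dense = (image > threshold) & breast_mask
    return 100.0 * dense.sum() / breast_mask.sum()

def icc_oneway(ratings):
    """One-way random-effects ICC(1,1) for an (n_subjects, n_raters) array."""
    r = np.asarray(ratings, dtype=float)
    n, k = r.shape
    row_means = r.mean(axis=1)
    msb = k * np.sum((row_means - r.mean()) ** 2) / (n - 1)      # between-subject mean square
    msw = np.sum((r - row_means[:, None]) ** 2) / (n * (k - 1))  # within-subject mean square
    return (msb - msw) / (msb + (k - 1) * msw)

# Toy usage: three raters scoring MD (%) on five mammograms
print(icc_oneway([[10, 12, 11], [35, 33, 36], [50, 52, 49], [22, 20, 21], [70, 68, 71]]))
```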

    Analyzing the breast tissue in mammograms using deep learning

    Mammographic breast density (MBD) reflects the amount of fibroglandular breast tissue area that appears white and bright on mammograms, commonly referred to as breast percent density (PD%). MBD is a risk factor for breast cancer and a risk factor for masking tumors. However, accurate MBD estimation by visual assessment remains a challenge due to faint contrast and significant variations in the background fatty tissue of mammograms. In addition, correctly interpreting mammographic images requires highly trained medical experts: it is difficult, time-consuming, expensive, and error-prone. Dense breast tissue can both make it harder to identify breast cancer and be associated with an increased risk of the disease. For example, it has been reported that women with high breast density have a four- to six-fold greater risk of developing the disease than women with low breast density. The key to breast density computation and classification is to correctly detect the dense tissue in mammographic images. Many methods have been proposed for breast density estimation; however, most are not automated. Moreover, they are strongly affected by the low signal-to-noise ratio and by the variability of dense tissue in appearance and texture. It would be more helpful to have a computer-aided diagnosis (CAD) system to assist the physician by analyzing and diagnosing mammograms automatically. Current developments in deep learning methods motivate us to improve current breast density analysis systems. The main focus of the present thesis is to develop a system that automates breast density analysis (breast density segmentation (BDS), breast density percentage (BDP), and breast density classification (BDC)) using deep learning techniques, and to apply it to temporal mammograms acquired after treatment in order to analyze breast density changes and flag risky, suspicious patients.
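
    As a concrete illustration of the kind of component such a system needs, the sketch below defines a tiny fully convolutional network for pixel-wise dense/fatty labeling and derives a BDP value from its output. It is a hypothetical architecture written for illustration only; the thesis's actual BDS/BDP/BDC networks are not described here in enough detail to reproduce.

```python
import torch
import torch.nn as nn

class TinyDensitySegmenter(nn.Module):
    """Minimal fully convolutional sketch for per-pixel dense-tissue logits."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),  # one logit per pixel: dense vs. non-dense
        )

    def forward(self, x):
        return self.net(x)

def breast_density_percentage(logits, breast_mask):
    """Derive BDP from the predicted dense-tissue mask, restricted to the breast."""
    dense = (torch.sigmoid(logits) > 0.5) & breast_mask
    return 100.0 * dense.sum() / breast_mask.sum()

# Toy usage on a random patch standing in for a mammogram crop
model = TinyDensitySegmenter()
img = torch.rand(1, 1, 256, 256)
mask = torch.ones(1, 1, 256, 256, dtype=torch.bool)
print(breast_density_percentage(model(img), mask))
```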

    A deep learning system to obtain the optimal parameters for a threshold-based breast and dense tissue segmentation

    Background and Objective: Breast cancer is the most frequent cancer in women. The Spanish healthcare network established population-based screening programs in all Autonomous Communities, where mammograms of asymptomatic women are taken for early-diagnosis purposes. Breast density assessed from digital mammograms is a biomarker known to be related to a higher risk of developing breast cancer. It is thus crucial to provide a reliable method to measure breast density from mammograms. Furthermore, complete automation of this segmentation process is becoming essential as the number of mammograms increases every day. Important challenges are related to the differences between images from different devices and the lack of an objective gold standard. This paper presents a fully automated framework based on deep learning to estimate breast density. The framework covers breast detection, pectoral muscle exclusion, and fibroglandular tissue segmentation. Methods: A multi-center study was conducted, comprising 1785 women whose "for presentation" mammograms were segmented by two experienced radiologists. A total of 4992 of the 6680 mammograms were used as the training corpus and the remaining 1688 formed the test corpus. The paper presents a histogram normalization step that smooths the differences between acquisition devices, a regression architecture that learns segmentation parameters as intrinsic image features, and a loss function based on the DICE score. Results: The level of concordance (DICE score) reached by the two radiologists (0.77) was also achieved by the automated framework when compared to the closest breast segmentation from the radiologists. For images acquired with the highest-quality device, the DICE score reached 0.84, while the concordance between radiologists was 0.76. Conclusions: An automatic breast density estimator based on deep learning exhibits performance similar to that of two experienced radiologists, suggesting that the system could be used to support radiologists and ease their workload. This work was partially funded by Generalitat Valenciana through I+D IVACE (Valencian Institute of Business Competitiveness) and GVA (European Regional Development Fund) support under the project IMAMCN/2019/1, and by the Carlos III Institute of Health under the project DTS15/00080. Perez-Benito, FJ.; Signol, F.; Perez-Cortes, J.; Fuster Bagetto, A.; Pollan, M.; Pérez-Gómez, B.; Salas-Trejo, D.... (2020). A deep learning system to obtain the optimal parameters for a threshold-based breast and dense tissue segmentation. Computer Methods and Programs in Biomedicine, 195, 123-132. https://doi.org/10.1016/j.cmpb.2020.105668
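
    The DICE score that drives both the loss function and the evaluation above can be computed directly from two binary masks. The sketch below is a generic implementation of the usual definition, plus a differentiable "soft" variant of the kind commonly used as a segmentation loss; it is not the paper's exact loss code.

```python
import numpy as np

def dice_score(mask_a, mask_b, eps=1e-7):
    """DICE similarity between two binary masks (e.g., predicted dense tissue
    vs. a radiologist's reference); 1.0 means perfect overlap."""
    a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + eps)

def soft_dice_loss(probs, target, eps=1e-7):
    """Differentiable DICE-based loss on predicted probabilities in [0, 1]."""
    p, t = np.asarray(probs, float), np.asarray(target, float)
    return 1.0 - (2.0 * (p * t).sum() + eps) / (p.sum() + t.sum() + eps)
```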

    Fully automated breast boundary and pectoral muscle segmentation in mammograms

    Breast and pectoral muscle segmentation is an essential pre-processing step for the subsequent processes in Computer Aided Diagnosis (CAD) systems. Estimating the breast and pectoral boundaries is a difficult task, especially in mammograms, due to artifacts, homogeneity between the pectoral and breast regions, and low contrast along the skin-air boundary. In this paper, a breast boundary and pectoral muscle segmentation method for mammograms is proposed. For breast boundary estimation, we determine the initial breast boundary via thresholding and employ Active Contour Models without edges to search for the actual boundary. A post-processing technique is proposed to correct the overestimated boundary caused by artifacts. The pectoral muscle boundary is estimated using Canny edge detection, and a pre-processing technique is proposed to remove noisy edges. Subsequently, we identify five edge features to find the edge with the highest probability of being the initial pectoral contour and search for the actual boundary via contour growing. The segmentation results of the proposed method are compared with manual segmentations using 322, 208 and 100 mammograms from the Mammographic Image Analysis Society (MIAS), INBreast and Breast Cancer Digital Repository (BCDR) databases, respectively. Experimental results show that the breast boundary and pectoral muscle estimation methods achieved Dice similarity coefficients of 98.8% and 97.8% (MIAS), 98.9% and 89.6% (INBreast) and 99.2% and 91.9% (BCDR), respectively.
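
    The two building blocks named above, an initial threshold refined by an "active contours without edges" model and a Canny edge map for the pectoral muscle, are available off the shelf. The sketch below uses scikit-image as a hedged stand-in for the paper's pipeline; the artifact-correction post-processing, the five edge features, and the contour-growing step are omitted.

```python
import numpy as np
from skimage import feature, filters, segmentation, util

def rough_breast_mask(mammo):
    """Initial breast/background split via Otsu thresholding, refined with
    morphological Chan-Vese (an 'active contours without edges' variant)."""
    img = util.img_as_float(mammo)
    init = img > filters.threshold_otsu(img)   # coarse initial boundary
    refined = segmentation.morphological_chan_vese(img, 50, init_level_set=init)
    return refined.astype(bool)

def pectoral_edge_candidates(mammo, sigma=2.0):
    """Canny edge map from which candidate pectoral-muscle contours would be
    selected; the edge-feature scoring of the paper is not reproduced."""
    return feature.canny(util.img_as_float(mammo), sigma=sigma)
```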

    Spatially varying threshold models for the automated segmentation of radiodense tissue in digitized mammograms

    The percentage of radiodense (bright) tissue in a mammogram has been correlated with an increased risk of breast cancer. This thesis presents an automated method to quantify the amount of radiodense tissue found in a digitized mammogram. The algorithm employs a radial basis function neural network to segment the breast tissue region from the remainder of the X-ray. A spatially varying Neyman-Pearson threshold is used to calculate the percentage of radiodense tissue and to compensate for the effects of tissue compression that occur during a mammography procedure. The efficacy of the technique is demonstrated by exercising the algorithm on two separate sets of mammograms: one obtained from Brigham Women's Hospital, Harvard Medical School, and the other obtained from Fox Chase Cancer Center and digitized at Rowan University. The results of the algorithm compare favorably with a previously established manual segmentation technique.
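
    Building on the global threshold sketched earlier, a spatially varying threshold can be illustrated by fitting a separate Gaussian in bands of the breast defined by distance from the skin line, so that thinner, more compressed peripheral regions receive their own cut-off. The banding scheme below is a hypothetical stand-in for the thesis's compression model, and the RBF-network breast segmentation is assumed to be given as a boolean `breast_mask`.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt
from scipy.stats import norm

def spatially_varying_threshold(image, breast_mask, n_bands=8, alpha=0.05):
    """Illustrative spatially varying Neyman-Pearson threshold: one Gaussian
    fit (and hence one threshold) per distance band inside the breast."""
    dist = distance_transform_edt(breast_mask)            # distance to breast edge
    edges = np.linspace(0.0, dist.max() + 1e-6, n_bands + 1)
    thresh = np.full(image.shape, np.inf)                 # inf outside the breast
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = breast_mask & (dist >= lo) & (dist < hi)
        if band.any():
            mu, sigma = image[band].mean(), image[band].std()
            thresh[band] = mu + sigma * norm.ppf(1.0 - alpha)
    return thresh  # radiodense pixels: image > thresh

def percent_radiodense_local(image, breast_mask, **kwargs):
    """Percentage of breast pixels above their local threshold."""
    dense = image > spatially_varying_threshold(image, breast_mask, **kwargs)
    return 100.0 * dense[breast_mask].mean()
```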

    Information Fusion of Magnetic Resonance Images and Mammographic Scans for Improved Diagnostic Management of Breast Cancer

    Medical imaging is critical to non-invasive diagnosis and treatment of a wide spectrum of medical conditions. However, different modalities of medical imaging employ different contrast mechanisms and, consequently, provide different depictions of bodily anatomy. As a result, there is a frequent problem where the same pathology can be detected by one type of medical imaging while being missed by others. This problem brings forward the importance of developing image processing tools for integrating the information provided by different imaging modalities via the process of information fusion. One particularly important example of a clinical application of such tools is the diagnostic management of breast cancer, which is a prevailing cause of cancer-related mortality in women. Currently, the diagnosis of breast cancer relies mainly on X-ray mammography and Magnetic Resonance Imaging (MRI), which are both important throughout different stages of detection, localization, and treatment of the disease. The sensitivity of mammography, however, is known to be limited in the case of relatively dense breasts, while contrast-enhanced MRI tends to yield frequent 'false alarms' due to its high sensitivity. Given this situation, it is critical to find reliable ways of fusing the mammography and MRI scans in order to improve the sensitivity of the former while boosting the specificity of the latter. Unfortunately, fusing the above types of medical images is known to be a difficult computational problem. Indeed, while MRI scans are usually volumetric (i.e., 3-D), digital mammograms are always planar (2-D). Moreover, mammograms are invariably acquired under the force of compression paddles, thus making the breast anatomy undergo sizeable deformations. In the case of MRI, on the other hand, the breast is rarely constrained and is imaged in a pendulous state. Finally, X-ray mammography and MRI exploit two completely different physical mechanisms, which produce distinct diagnostic contrasts that are related in a non-trivial way. Under such conditions, the success of information fusion depends on one's ability to establish spatial correspondences between mammograms and their related MRI volumes in a cross-modal cross-dimensional (CMCD) setting in the presence of spatial deformations (+SD). Solving the problem of information fusion in the CMCD+SD setting is a very challenging analytical/computational problem, still in need of efficient solutions. In the literature, there is a lack of a generic and consistent solution to the problem of fusing mammograms and breast MRIs and using their complementary information. Most of the existing MRI-to-mammogram registration techniques are based on a biomechanical approach which builds a specific model for each patient to simulate the effect of mammographic compression. The biomechanical model is not optimal, as it ignores the common characteristics of breast deformation across different cases. Breast deformation is essentially the planarization of a 3-D volume between two paddles, which is common to all patients. Regardless of the size, shape, or internal configuration of the breast tissue, one can predict the major part of the deformation only by considering the geometry of the breast tissue. In contrast with complex standard methods relying on patient-specific biomechanical modeling, we developed a new and relatively simple approach to estimate the deformation and find the correspondences.
    We consider the total deformation to consist of two components: a large-magnitude global deformation due to mammographic compression and a residual deformation of relatively smaller amplitude. We propose a much simpler way of predicting the global deformation, which compares favorably to finite element modeling (FEM) in terms of its accuracy. The residual deformation, on the other hand, is recovered in a variational framework using an elastic transformation model. The proposed algorithm provides us with a computational pipeline that takes breast MRIs and mammograms as inputs and returns the spatial transformation that establishes the correspondences between them. This spatial transformation can be applied in different applications, e.g., producing 'MRI-enhanced' mammograms (which is capable of improving the quality of surgical care) and correlating between different types of mammograms. We investigate the performance of our proposed pipeline on the application of enhancing mammograms by means of MRIs, and we show improvements over the state of the art.
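
    The global component described above, planarization of the 3-D breast volume between two compression paddles, can be caricatured with a purely geometric model. The sketch below is a toy stand-in under that assumption: it flattens a point cloud to the paddle gap and expands it in-plane to roughly preserve volume. The thesis's actual global model and the variational recovery of the residual elastic deformation are not reproduced, and all names here are hypothetical.

```python
import numpy as np

def planarize(points_xyz, paddle_gap):
    """Toy geometric 'planarization': compress the z-extent of a 3-D breast
    point cloud to the paddle gap and expand it in-plane so that volume is
    roughly preserved."""
    pts = np.asarray(points_xyz, dtype=float)
    z_extent = pts[:, 2].max() - pts[:, 2].min()
    squash = paddle_gap / z_extent          # compression factor along z
    stretch = 1.0 / np.sqrt(squash)         # isotropic in-plane expansion
    center = pts.mean(axis=0)
    out = pts - center
    out[:, :2] *= stretch                   # spread in x-y
    out[:, 2] *= squash                     # flatten between paddles
    return out + center

# Example: a random breast-like point cloud compressed to a 55 mm paddle gap
cloud = np.random.randn(1000, 3) * [40, 40, 30]
compressed = planarize(cloud, paddle_gap=55.0)
```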