
    Automatic quantification of mammary glands on non-contrast X-ray CT by using a novel segmentation approach

    ABSTRACT This paper describes a new automatic segmentation method for quantifying the volume and density of mammary gland regions on non-contrast CT images. The proposed method uses two processing steps: (1) breast region localization and (2) breast region decomposition, to accomplish robust mammary gland segmentation on CT images. The first step detects the minimum bounding boxes of the left and right breast regions based on a machine-learning approach that adapts to the large variance of breast appearance across age groups. The second step divides the breast region on each side into mammary gland, fat tissue, and other regions using a spectral clustering technique that focuses on the intra-region similarities of each patient and aims to overcome image variance caused by different scan parameters. The whole approach is designed as a simple structure with a minimal number of parameters to gain robustness and computational efficiency in real clinical settings. We applied this approach to a dataset of 300 CT scans sampled in equal numbers from women aged 30 to 50 years. Compared with human annotations, the proposed approach successfully measures the volume and quantifies the distribution of CT numbers in mammary gland regions, and the experimental results demonstrate that it achieves results consistent with manual annotations. Through the proposed framework, an efficient, effective, and low-cost clinical screening scheme could be implemented to predict breast cancer risk, especially on already acquired scans.
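
    A minimal sketch of the decomposition step described above, assuming the breast bounding box has already been extracted as a NumPy CT sub-volume and using scikit-learn's SpectralClustering; the single-intensity feature, the cluster count, and the subsampling size are illustrative choices rather than the authors' exact configuration:

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def decompose_breast_region(ct_roi, n_clusters=3, n_samples=2000, seed=0):
    """Cluster voxels of one breast ROI (CT numbers) into gland / fat / other regions."""
    rng = np.random.default_rng(seed)
    voxels = ct_roi.reshape(-1, 1).astype(float)

    # Fit spectral clustering on a random voxel subsample to keep the affinity matrix small.
    idx = rng.choice(voxels.shape[0], size=min(n_samples, voxels.shape[0]), replace=False)
    sc = SpectralClustering(n_clusters=n_clusters, affinity="nearest_neighbors",
                            n_neighbors=10, random_state=seed)
    sample_labels = sc.fit_predict(voxels[idx])

    # Assign every remaining voxel to the nearest cluster centroid (in CT numbers).
    centroids = np.array([voxels[idx][sample_labels == k].mean() for k in range(n_clusters)])
    labels = np.argmin(np.abs(voxels - centroids[None, :]), axis=1)
    return labels.reshape(ct_roi.shape)
```

    Spectral clustering builds a full pairwise affinity matrix, so the sketch fits it on a voxel subsample and then assigns the remaining voxels to the nearest cluster centroid.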

    Comparative Analysis of Segment Anything Model and U-Net for Breast Tumor Detection in Ultrasound and Mammography Images

    In this study, the main objective is to develop an algorithm capable of identifying and delineating tumor regions in breast ultrasound (BUS) and mammographic images. The technique employs two advanced deep learning architectures, namely U-Net and a pretrained Segment Anything Model (SAM), for tumor segmentation. The U-Net model is specifically designed for medical image segmentation and leverages its deep convolutional neural network framework to extract meaningful features from input images. On the other hand, the pretrained SAM architecture incorporates a mechanism to capture spatial dependencies and generate segmentation results. Evaluation is conducted on a diverse dataset containing annotated tumor regions in BUS and mammographic images, covering both benign and malignant tumors. This dataset enables a comprehensive assessment of the algorithm's performance across different tumor types. Results demonstrate that the U-Net model outperforms the pretrained SAM architecture in accurately identifying and segmenting tumor regions in both BUS and mammographic images. The U-Net exhibits superior performance in challenging cases involving irregular shapes, indistinct boundaries, and high tumor heterogeneity. In contrast, the pretrained SAM architecture exhibits limitations in accurately identifying tumor areas, particularly for malignant tumors and objects with weak boundaries or complex shapes. These findings highlight the importance of selecting appropriate deep learning architectures tailored for medical image segmentation. The U-Net model showcases its potential as a robust and accurate tool for tumor detection, while the results for the pretrained SAM architecture suggest the need for further improvements to enhance segmentation performance.
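
    The comparison described here amounts to running both segmenters over the same annotated test set and scoring them with an overlap metric such as the Dice coefficient. A minimal evaluation harness along those lines, where unet_predict and sam_predict are hypothetical wrappers around a trained U-Net forward pass and a prompted SAM call:

```python
import numpy as np

def dice(pred, gt, eps=1e-7):
    """Dice similarity coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

def compare_models(images, gt_masks, unet_predict, sam_predict):
    """Run both segmenters over the same test set and report mean Dice per model.

    unet_predict / sam_predict: callables mapping an image to a binary tumor mask,
    e.g. a trained U-Net forward pass or SAM prompted with a bounding box.
    """
    scores = {"unet": [], "sam": []}
    for img, gt in zip(images, gt_masks):
        scores["unet"].append(dice(unet_predict(img), gt))
        scores["sam"].append(dice(sam_predict(img), gt))
    return {name: float(np.mean(vals)) for name, vals in scores.items()}
```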

    Preclinical evaluation of nanoparticle enhanced breast cancer diagnosis and radiation therapy

    Triple negative breast cancer (TNBC) is an aggressive type of cancer that makes up 15-20% of all newly diagnosed cases and lacks the main target molecules for tumor-specific treatment. Surgery or systemic chemotherapy are frequently used in the clinic and combined with radiation therapy to improve locoregional control in breast cancer patients after surgery. Given the poor prognosis, there is a clear need to explore new treatment options for TNBC. The aim of this PhD project was to evaluate the feasibility of enhancing the biological effect of radiation therapy and increasing tumor contrast for diagnosis by applying an in vivo microCT imaging system in combination with barium nanoparticles (BaNPs) in a pH8N8 WAP-T-NP8 mouse model for TNBC. Characterization of the BaNPs revealed strong x-ray attenuation and no toxic effects in different cancer and normal cell lines. Furthermore, irradiation of cancer cells using low energy x-rays in the keV range with a microCT resulted in a significant reduction in colony formation capability. In vitro, this low energy irradiation effect on clonogenic tumor cell survival was enhanced in the presence of BaNPs. Next, a subcutaneous lung cancer mouse model in immunodeficient mice and an orthotopic syngeneic mouse model of breast cancer were used for further in vivo evaluation. Once the treatment plan was optimized with respect to the applied x-ray doses and the frequency of irradiation, low energy radiation therapy within a classical in vivo microCT significantly reduced tumor growth, or even resulted in shrinkage of the tumors, without visible side effects or weight loss in comparison to untreated controls. However, the intratumoral application of BaNPs did not increase the irradiation effect on tumor growth kinetics. This might be in part due to the inhomogeneous distribution of BaNPs within the tumor observed by microCT imaging. K-edge subtraction imaging as well as x-ray fluorescence of explanted tumor samples confirmed these findings. To localize the BaNPs in 3D to specific sites within the tumor environment and to detect morphological alterations within the tumor due to irradiation in proximity to BaNPs, an ex vivo imaging-based analytic platform was established, utilizing co-registration of microCT and histology data. This imaging approach co-localized BaNPs with CD68-positive phagocytic cells and revealed a non-uniform distribution of the BaNPs within the tumor, but with no signs of locally enhanced radiation effects. Furthermore, antibody-functionalized BaNPs were generated for systemic application. Analysis of biodistribution revealed that EpCAM-labeled BaNPs did not reach the tumor after intravenous administration but accumulated in the liver and spleen, as demonstrated by strong CT contrast within these organs. In summary, I showed that low energy radiation therapy using an in vivo microCT significantly reduced tumor volumes in comparison to untreated tumors in a syngeneic breast cancer mouse model resembling TNBC. However, while BaNPs enhanced the effectiveness of irradiation on tumor cells in vitro, they did not improve the irradiation effect on tumor growth in vivo.
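
    The K-edge subtraction imaging mentioned above exploits the sharp jump in barium attenuation just above its K-edge (around 37 keV): soft tissue attenuation changes little between two scans bracketing the edge, so their difference is dominated by the contrast agent. A minimal NumPy sketch of that idea, assuming two co-registered reconstructed volumes are already available; the threshold is an illustrative noise cutoff:

```python
import numpy as np

def k_edge_subtraction(mu_above, mu_below, threshold=0.0):
    """Highlight barium-containing voxels by K-edge subtraction.

    mu_above, mu_below: co-registered attenuation volumes reconstructed from
    scans acquired just above and just below the Ba K-edge. Soft tissue changes
    little between the two energies, while barium attenuation jumps sharply
    above the edge, so the difference image is dominated by the contrast agent.
    """
    diff = np.asarray(mu_above, dtype=float) - np.asarray(mu_below, dtype=float)
    barium_map = np.clip(diff, threshold, None)  # suppress negative / noise voxels
    return barium_map
```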

    Differently stained whole slide image registration technique with landmark validation

    Abstract. One of the most significant tasks in digital pathology is to visually compare and fuse successive, differently stained tissue sections, also called slides. Doing so requires aligning the different images to a common frame, a ground truth. Current sample scanning tools can create images containing informative layers of digitized tissue, stored at high resolution as whole slide images. However, only a limited number of automatic alignment tools handle such large images precisely within acceptable processing time. The idea of this study is to propose a deep learning solution for histopathology image registration. The main focus is on understanding landmark validation and the impact of stain augmentation on differently stained histopathology images. The developed registration method is also compared with state-of-the-art algorithms that utilize whole slide images in the field of digital pathology. Previous studies on histopathology, digital pathology, whole slide imaging, image registration, color staining, data augmentation, and deep learning are referenced in this study. The goal is to develop a learning-based registration framework specifically for high-resolution histopathology image registration. Whole slide tissue sample images with magnification of up to 40x are used. The images are organized into sets of consecutive, differently dyed sections, and the aim is to register the images based only on the visible tissue, ignoring the background. Significant structures in the tissue are marked with landmarks. The quality measures include, for example, the relative target registration error (rTRE), the structural similarity index metric (SSIM), visual evaluation, landmark-based evaluation, matching points, and image details. These results are comparable and can also be used in future research and in the development of new tools. Moreover, the results are expected to show how theory and practice come together in whole slide image registration challenges. The DeepHistReg algorithm is studied as the basis for the stain color feature augmentation-based image registration tool developed in this study. MATLAB and Aperio ImageScope are used to annotate and validate the images, and Python is used to develop the algorithm of the new registration tool. As cancer is a serious disease globally, regardless of age or lifestyle, it is important to develop systems that experts can use when working with patient data. There is still much to improve in the field of digital pathology, and this study is one step toward that goal.
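
    The relative target registration error mentioned above is typically computed as the Euclidean distance between each warped source landmark and its annotated counterpart on the target slide, normalized by the target image diagonal so that slides of different resolution remain comparable. A minimal sketch of that metric; the landmark arrays and image size in the usage comment are hypothetical:

```python
import numpy as np

def relative_tre(warped_landmarks, target_landmarks, image_shape):
    """Relative target registration error (rTRE) per landmark pair.

    warped_landmarks:  (N, 2) source landmarks after applying the estimated transform.
    target_landmarks:  (N, 2) corresponding landmarks annotated on the target slide.
    image_shape:       (height, width) of the target image; errors are normalized
                       by the image diagonal.
    """
    warped = np.asarray(warped_landmarks, dtype=float)
    target = np.asarray(target_landmarks, dtype=float)
    diagonal = np.hypot(image_shape[0], image_shape[1])
    tre = np.linalg.norm(warped - target, axis=1)
    return tre / diagonal

# Example: median rTRE over one slide pair (hypothetical landmark arrays).
# median_rtre = np.median(relative_tre(warped_pts, target_pts, (40000, 60000)))
```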

    Pattern classification approaches for breast cancer identification via MRI: state‐of‐the‐art and vision for the future

    Mining algorithms for Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI) of breast tissue are discussed. The algorithms are based on recent advances in multidimensional signal processing and aim to advance current state-of-the-art computer-aided detection and analysis of breast tumours when these are observed at various stages of development. The topics discussed include image feature extraction, information fusion using radiomics, multi-parametric computer-aided classification and diagnosis using information fusion of tensorial datasets, as well as Clifford algebra-based classification approaches and convolutional neural network deep learning methodologies. The discussion also extends to semi-supervised deep learning and self-supervised strategies, as well as generative adversarial networks and algorithms using generated confrontational learning approaches. In order to address the problem of weakly labelled tumour images, generative adversarial deep learning strategies are considered for the classification of different tumour types. The proposed data fusion approaches provide a novel Artificial Intelligence (AI) based framework for more robust image registration that can potentially advance the early identification of heterogeneous tumour types, even when the associated imaged organs are registered as separate entities embedded in more complex geometric spaces. Finally, the general structure of a high-dimensional medical imaging analysis platform that is based on multi-task detection and learning is proposed as a way forward. The proposed algorithm makes use of novel loss functions that form the building blocks for a generated confrontation learning methodology that can be used for tensorial DCE-MRI. Since some of the approaches discussed are also based on time-lapse imaging, conclusions on the rate of proliferation of the disease become possible. The proposed framework can potentially reduce the costs associated with the interpretation of medical images by providing automated, faster and more consistent diagnosis.
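
    As a concrete baseline for the feature extraction and classification topics surveyed here, a classical DCE-MRI pipeline reduces each lesion's time-intensity curve to a few kinetic features (for example wash-in and wash-out slopes) and feeds them to a conventional classifier. The sketch below illustrates only that baseline, not the tensorial or adversarial methods discussed; the data layout and the random forest choice are assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def tic_features(dce_series, baseline_idx=0):
    """Wash-in and wash-out slopes of a lesion's mean time-intensity curve.

    dce_series: array of shape (T, n_voxels) with signal intensity over the
    dynamic acquisition for each voxel in a segmented lesion.
    """
    s = np.asarray(dce_series, dtype=float)
    mean_curve = s.mean(axis=1)                       # average TIC over the lesion
    peak_idx = int(np.argmax(mean_curve))
    wash_in = (mean_curve[peak_idx] - mean_curve[baseline_idx]) / max(peak_idx - baseline_idx, 1)
    wash_out = (mean_curve[-1] - mean_curve[peak_idx]) / max(len(mean_curve) - 1 - peak_idx, 1)
    return np.array([wash_in, wash_out])

# Hypothetical training step over lesions with known benign/malignant labels.
# X = np.stack([tic_features(series) for series in lesion_series_list])
# clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
```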

    Multiparametric Magnetic Resonance Imaging Artificial Intelligence Pipeline for Oropharyngeal Cancer Radiotherapy Treatment Guidance

    Oropharyngeal cancer (OPC) is a widespread disease and one of the few domestic cancers that is rising in incidence. Radiographic images are crucial for assessment of OPC and aid in radiotherapy (RT) treatment. However, RT planning with conventional imaging approaches requires operator-dependent tumor segmentation, which is the primary source of treatment error. Further, OPC expresses differential tumor/node mid-RT response (rapid response) rates, resulting in significant differences between planned and delivered RT dose. Finally, clinical outcomes for OPC patients can also be variable, which warrants the investigation of prognostic models. Multiparametric MRI (mpMRI) techniques that incorporate simultaneous anatomical and functional information coupled to artificial intelligence (AI) approaches could improve clinical decision support for OPC by providing immediately actionable clinical rationale for adaptive RT planning. If tumors could be reproducibly segmented, rapid response could be classified, and prognosis could be reliably determined, overall patient outcomes would be optimized to improve the therapeutic index as a function of more risk-adapted RT volumes. Consequently, there is an unmet need for automated and reproducible imaging that can simultaneously segment tumors and provide predictive value for actionable RT adaptation. This dissertation primarily seeks to explore and optimize image processing, tumor segmentation, and patient outcomes in OPC through a combination of advanced imaging techniques and AI algorithms. In the first specific aim of this dissertation, we develop and evaluate mpMRI pre-processing techniques for use in downstream segmentation, response prediction, and outcome prediction pipelines. Various MRI intensity standardization and registration approaches were systematically compared and benchmarked. Moreover, synthetic image algorithms were developed to decrease MRI scan time in an effort to optimize our AI pipelines. We demonstrated that proper intensity standardization and image registration can improve mpMRI quality for use in AI algorithms, and developed a novel method to decrease mpMRI acquisition time. Subsequently, in the second specific aim of this dissertation, we investigated underlying questions regarding the implementation of RT-related auto-segmentation. Firstly, we quantified interobserver variability for an unprecedentedly large number of observers for various radiotherapy structures in several disease sites (with a particular emphasis on OPC) using a novel crowdsourcing platform. We then trained an AI algorithm on a series of extant matched mpMRI datasets to segment OPC primary tumors. Moreover, we validated and compared our best model's performance to clinical expert observers. We demonstrated that AI-based mpMRI OPC tumor auto-segmentation offers decreased variability and comparable accuracy to clinical experts, and certain mpMRI input channel combinations could further improve performance. Finally, in the third specific aim of this dissertation, we predicted OPC primary tumor mid-therapy (rapid) treatment response and prognostic outcomes. Using co-registered pre-therapy and mid-therapy primary tumor manual segmentations of OPC patients, we generated and characterized treatment-sensitive and treatment-resistant pre-RT sub-volumes. These sub-volumes were used to train an AI algorithm to predict individual voxel-wise treatment resistance. 
Additionally, we developed an AI algorithm to predict OPC patient progression-free survival using pre-therapy imaging from an international data science competition (in which we ranked 1st), and then translated these approaches to mpMRI data. We demonstrated that AI models could be used to predict rapid response and prognostic outcomes using pre-therapy imaging, which could help guide treatment adaptation, though further work is needed. In summary, the completion of these aims facilitates the development of an image-guided, fully automated OPC clinical decision support tool. The resultant deliverables from this project will positively impact patients by enabling optimized therapeutic interventions in OPC. Future work should consider investigating additional imaging timepoints, imaging modalities, uncertainty quantification, perceptual and ethical considerations, and prospective studies for eventual clinical implementation. A dynamic version of this dissertation is publicly available and assigned a digital object identifier through Figshare (doi: 10.6084/m9.figshare.22141871).
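
    The intensity standardization benchmarked in the first aim can, in its simplest form, be a per-volume z-score normalization restricted to a tissue mask before the mpMRI channels are stacked as network input. The sketch below shows only that simple variant under assumed array inputs, not the specific standardization algorithms compared in the dissertation:

```python
import numpy as np

def zscore_standardize(volume, mask=None):
    """Z-score intensity standardization of one MRI volume.

    volume: 3D array of voxel intensities (one mpMRI channel, e.g. T2w or ADC).
    mask:   optional boolean array restricting the statistics to tissue voxels,
            so background air does not skew the mean and standard deviation.
    """
    vol = np.asarray(volume, dtype=float)
    region = vol[mask] if mask is not None else vol
    mu, sigma = region.mean(), region.std()
    return (vol - mu) / (sigma + 1e-8)

# Hypothetical usage: standardize each mpMRI channel before stacking them
# as input channels for a segmentation network.
# channels = [zscore_standardize(v, body_mask) for v in (t2w, adc, dce)]
# model_input = np.stack(channels, axis=0)
```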

    Automatic Breast Density Classification on Tomosynthesis Images

    Breast cancer (BC) is the type of cancer that most greatly affects women globally; hence, its early detection is essential to guarantee effective treatment. Although digital mammography (DM) is the main method of BC detection, it has low sensitivity, with about 30% of positive cases undetected due to the superimposition of breast tissue when crossed by the X-ray beam. Digital breast tomosynthesis (DBT) does not share this limitation, allowing the visualization of individual breast slices due to its image acquisition system. Consequently, DBT was the object of this study as a means of determining one of the main risk factors for BC: breast density (BD). This thesis aimed to develop an algorithm that, taking advantage of the 3D nature of DBT images, automatically classifies them in terms of BD. Thus, a quantitative, objective, and reproducible classification was obtained, which will contribute to ascertaining the risk of BC. The algorithm was developed in MATLAB and later transferred to a user interface that was compiled into an executable application. Using 350 images from the VICTRE database for the first classification phase, group 1 (ACR1+ACR2) versus group 2 (ACR3+ACR4), the highest AUC value obtained was 0.9797. For the classification within groups 1 and 2, the AUC values obtained were 0.7461 and 0.6736, respectively. The algorithm attained an accuracy of 82% for these images. Sixteen exams provided by Hospital da Luz were also evaluated, with an overall accuracy of 62.5%. Therefore, a user-friendly and intuitive application was created that prioritizes the use of DBT as a diagnostic method and allows an objective classification of BD. This study is a first step towards preparing medical institutions for mandatory BD assessment, at a time when BC is still a very present pathology that shortens the lives of thousands of people.
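
    A volumetric density classification of the kind developed here can be framed as computing the fraction of breast voxels above a fibroglandular-intensity threshold across the DBT slices and mapping that fraction to a density group. The sketch below illustrates the idea; the threshold and group cutoffs are placeholders, not the calibrated values from the thesis:

```python
import numpy as np

def percent_dense_volume(dbt_volume, breast_mask, dense_threshold):
    """Fraction of breast voxels classified as fibroglandular (dense) tissue.

    dbt_volume:      3D DBT reconstruction (slices, rows, cols).
    breast_mask:     boolean mask of the breast region across all slices.
    dense_threshold: intensity above which a voxel counts as dense; in practice
                     this would be calibrated per acquisition.
    """
    breast_voxels = dbt_volume[breast_mask]
    return float((breast_voxels > dense_threshold).sum()) / breast_voxels.size

def density_group(pdv, cutoff_low=0.25, cutoff_mid=0.50, cutoff_high=0.75):
    """Map percent dense volume to a coarse four-level density class.
    The cutoffs here are illustrative only."""
    if pdv < cutoff_low:
        return "ACR1"
    if pdv < cutoff_mid:
        return "ACR2"
    if pdv < cutoff_high:
        return "ACR3"
    return "ACR4"
```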

    Review of Different Methods of Abnormal Mass Detection in Digital Mammograms

    Images in massive image databases carry inherent, embedded information and patterns that can be extracted explicitly from the images. Such images may help the community with initial breast cancer self-screening, and primary health care can introduce this method to the community. This study aimed to review the different methods of abnormal mass detection in digital mammograms. Digital mammography is one of the best methods for detecting breast malignancy at a nascent stage. Mammograms with excellent image quality have high resolution, which enables computationally intensive image processing. The wide availability of medical images on computers is one of the main factors that helps radiologists make diagnoses. Image preprocessing highlights the relevant portion of computerized mammograms after extraction and arrangement. Moreover, the future scope of this examination could pave the way for major advances in computer-aided diagnosis (CAD) for mammograms in the coming years. The review also covers CAD-assisted strategies for mass identification in detail. However, methods for identifying structural deviations in mammograms are complicated in real-life scenarios. These methods will benefit public health programs if they can be introduced into primary health care screening systems, and a decision should be made as to which type of technology fits the level of the primary health care system.
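
    Most of the CAD pipelines covered by such a review begin with a preprocessing stage that enhances contrast and isolates the breast region before mass detection. A minimal sketch of one such stage using scikit-image; the CLAHE clip limit and minimum object size are illustrative values, not parameters taken from the reviewed methods:

```python
import numpy as np
from skimage import exposure, filters, morphology

def preprocess_mammogram(image):
    """Basic mammogram preprocessing before mass detection.

    image: 2D grayscale mammogram as a float array in [0, 1].
    Returns the contrast-enhanced image and a rough breast-region mask.
    """
    # Contrast-limited adaptive histogram equalization to highlight tissue detail.
    enhanced = exposure.equalize_adapthist(image, clip_limit=0.02)

    # Otsu threshold to separate the breast from the dark background,
    # then drop small bright specks such as labels and artifacts.
    mask = enhanced > filters.threshold_otsu(enhanced)
    mask = morphology.remove_small_objects(mask, min_size=5000)
    return enhanced, mask
```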