
    Comparative Analysis of Segment Anything Model and U-Net for Breast Tumor Detection in Ultrasound and Mammography Images

    In this study, the main objective is to develop an algorithm capable of identifying and delineating tumor regions in breast ultrasound (BUS) and mammographic images. The technique employs two advanced deep learning architectures, U-Net and the pretrained Segment Anything Model (SAM), for tumor segmentation. The U-Net model is specifically designed for medical image segmentation and leverages its deep convolutional neural network framework to extract meaningful features from input images. The pretrained SAM architecture, in contrast, incorporates a mechanism for capturing spatial dependencies and generating segmentation results. Evaluation is conducted on a diverse dataset containing annotated tumor regions in BUS and mammographic images, covering both benign and malignant tumors. This dataset enables a comprehensive assessment of the algorithm's performance across different tumor types. Results demonstrate that the U-Net model outperforms the pretrained SAM architecture in accurately identifying and segmenting tumor regions in both BUS and mammographic images. The U-Net exhibits superior performance in challenging cases involving irregular shapes, indistinct boundaries, and high tumor heterogeneity, whereas the pretrained SAM architecture shows limitations in accurately identifying tumor areas, particularly for malignant tumors and objects with weak boundaries or complex shapes. These findings highlight the importance of selecting deep learning architectures appropriate for medical image segmentation. The U-Net model shows potential as a robust and accurate tool for tumor detection, while the pretrained SAM architecture requires further improvements to enhance its segmentation performance.
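    The abstract does not state which overlap metrics were used, but segmentation comparisons of this kind are conventionally scored with measures such as the Dice coefficient and intersection-over-union (IoU). The following is a minimal sketch of that kind of scoring for binary tumor masks; the function names and example data are illustrative, not taken from the study.

```python
# Illustrative overlap scoring for binary segmentation masks (NumPy).
# Standard Dice / IoU definitions; not the paper's evaluation code.
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

def iou(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Intersection-over-union (Jaccard index) between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return (inter + eps) / (union + eps)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    truth = rng.random((256, 256)) > 0.7   # stand-in annotated tumor mask
    pred = truth.copy()
    pred[:16] = ~pred[:16]                 # simulate boundary errors in the prediction
    print(f"Dice: {dice_coefficient(pred, truth):.3f}, IoU: {iou(pred, truth):.3f}")
```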

    Development and Validation of Mechatronic Systems for Image-Guided Needle Interventions and Point-of-Care Breast Cancer Screening with Ultrasound (2D and 3D) and Positron Emission Mammography

    The successful intervention of breast cancer relies on effective early detection and definitive diagnosis. While conventional screening mammography has markedly reduced breast cancer mortality, substantial challenges persist for women with dense breasts. Additionally, complex interrelated risk factors and healthcare disparities contribute to breast cancer-related inequities, which restrict accessibility, impose cost constraints, and reduce inclusivity in high-quality healthcare. These limitations predominantly stem from the inadequate sensitivity and clinical utility of currently available approaches in increased-risk populations, including women with dense breasts and underserved or vulnerable populations. This PhD dissertation describes the development and validation of alternative, cost-effective, robust, and high-resolution systems for point-of-care (POC) breast cancer screening and image-guided needle interventions. Specifically, 2D and 3D ultrasound (US) and positron emission mammography (PEM) were employed to improve detection, independent of breast density, in conjunction with mechatronic and automated approaches for accurate image acquisition and precise interventional workflow. First, a mechatronic guidance system for US-guided biopsy under high-resolution PEM localization was developed to improve spatial sampling of early-stage breast cancers. Validation and phantom studies showed accurate needle positioning and 3D spatial sampling under simulated PEM localization. Subsequently, a whole-breast spatially-tracked 3DUS system for point-of-care screening was developed, optimized, and validated within a clinically-relevant workspace and in healthy volunteer studies. To improve robust image acquisition and adaptability to diverse patient populations, an alternative, cost-effective, portable, and patient-dedicated 3D automated breast (AB) US system for point-of-care screening was developed. Validation showed accurate geometric reconstruction, a feasible clinical workflow, and proof-of-concept utility across healthy volunteers and acquisition conditions. Lastly, an orthogonal acquisition and 3D complementary breast (CB) US generation approach was described and experimentally validated to improve spatial resolution uniformity by recovering poor out-of-plane resolution. The systems developed and described throughout this dissertation show promise as alternative, cost-effective, robust, and high-resolution approaches for improving early detection and definitive diagnosis. Consequently, these contributions may advance equity in breast cancer care and improve outcomes in increased-risk populations and limited-resource settings.

    Generative Adversarial Network (GAN) for Medical Image Synthesis and Augmentation

    Medical image processing aided by artificial intelligence (AI) and machine learning (ML) significantly improves medical diagnosis and decision making. However, the difficulty of accessing well-annotated medical images has become one of the main constraints on further improving this technology. The generative adversarial network (GAN) is a deep neural network framework for data synthesis, which provides a practical solution for medical image augmentation and translation. In this study, we first perform a quantitative survey of the published studies on GANs for medical image processing since 2017. Then a novel adaptive cycle-consistent adversarial network (Ad CycleGAN) is proposed. We test the new Ad CycleGAN on a malaria blood cell dataset (19,578 images) and a COVID-19 chest X-ray dataset (2,347 images). The quantitative metrics include mean squared error (MSE), root mean squared error (RMSE), peak signal-to-noise ratio (PSNR), universal image quality index (UIQI), spatial correlation coefficient (SCC), spectral angle mapper (SAM), visual information fidelity (VIF), Fréchet inception distance (FID), and the classification accuracy of the synthetic images. The CycleGAN and the variational autoencoder (VAE) are also implemented and evaluated for comparison. The experimental results on malaria blood cell images indicate that the Ad CycleGAN generates more valid images than CycleGAN or VAE, and the synthetic images produced by Ad CycleGAN or CycleGAN have better quality than those produced by VAE. The synthetic images by Ad CycleGAN achieve the highest classification accuracy of 99.61%. In the experiment on COVID-19 chest X-rays, the synthetic images by Ad CycleGAN or CycleGAN again have higher quality than those generated by the VAE. However, the synthetic images generated through the homogeneous image augmentation process have better quality than those synthesized through the image translation process, and the synthetic images by Ad CycleGAN reach a higher accuracy (95.31%) than those by CycleGAN (93.75%). In conclusion, the proposed Ad CycleGAN provides a new path to synthesizing medical images with desired diagnostic or pathological patterns, and can be regarded as a new form of conditional GAN with effective control over the synthetic image domain. The findings offer a new path to improving deep neural network performance in medical image processing.
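    Several of the quality metrics listed above have standard closed-form definitions. As an illustration, the sketch below computes MSE, RMSE, and PSNR for a synthetic image against its reference using those standard formulas; it is not the paper's evaluation code, and the example arrays are stand-ins.

```python
# Standard full-reference image-quality metrics (NumPy); illustrative only.
import numpy as np

def mse(ref: np.ndarray, syn: np.ndarray) -> float:
    """Mean squared error between reference and synthetic images."""
    return float(np.mean((ref.astype(np.float64) - syn.astype(np.float64)) ** 2))

def rmse(ref: np.ndarray, syn: np.ndarray) -> float:
    """Root mean squared error."""
    return float(np.sqrt(mse(ref, syn)))

def psnr(ref: np.ndarray, syn: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    err = mse(ref, syn)
    return float("inf") if err == 0 else float(10.0 * np.log10(max_val ** 2 / err))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    ref = rng.integers(0, 256, (128, 128)).astype(np.float64)  # stand-in reference
    syn = ref + rng.normal(0, 5, ref.shape)                    # stand-in synthetic image
    print(f"MSE={mse(ref, syn):.2f}  RMSE={rmse(ref, syn):.2f}  PSNR={psnr(ref, syn):.2f} dB")
```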

    Experimental Investigation for Detecting Mitotic Cells in Medical Image using an Automated Algorithm

    Cancer of the breast is a malignant tumour that originates in the cells of the breast tissue. It is by far the most common kind of cancer found in females around the world, with a projected 2.3 million new cases in 2020 alone. It is projected that one in eight women will be diagnosed with breast cancer at some point in their lives, and breast cancer can also occur in men. Breast cancer is a complex condition that can arise from a diverse set of factors, express itself in a variety of ways, and be treated in a variety of ways. Ductal carcinoma in situ, invasive ductal carcinoma, and invasive lobular carcinoma are all distinct subtypes, and both the available treatment options and the expected outcome are highly variable depending on the particular subtype of the illness. Breast cancer risk factors include alcohol consumption, insufficient exercise, older age, a family history of the disease, genetic mutations, and exposure to estrogens. Having risk factors, however, does not necessarily mean a woman will develop breast cancer. The prognosis and treatment options for breast cancer are highly dependent on the stage of the disease at the time of diagnosis. During staging, the extent to which the cancer has spread throughout the body and how far it has progressed are both measured. The TNM system, the IAFCM system, the ACM system, and the MPIG system are just a few of the staging systems used to classify breast cancer; these systems consider not only the size of the tumor but also whether lymph nodes are involved and whether distant metastases are present. The severity of breast cancer symptoms can vary widely, depending not only on the subtype of the disease but also on how far it has progressed. Alterations in the size or shape of the breast, discharge from the nipple, and changes in the skin of the breast (such as redness or dimpling) are all common indications. However, not all cases of breast cancer present visibly, and mammography and other forms of routine screening may detect some of these cases. Options for treating breast cancer vary depending on the patient's condition, the stage of the disease, the patient's overall health, and their preferences regarding therapy. Common medical interventions include surgery, radiotherapy, chemotherapy, hormone therapy, and targeted therapy, and in certain cases more than one form of treatment may be appropriate.

    Towards a trustworthy data-driven clinical decision support system: breast cancer use-case

    Artificial Intelligence (AI) research has emerged as a powerful tool for health-related applications. With the increasing shortage of radiologists and oncologists around the world, developing an end-to-end AI-based Clinical Decision Support (CDS) system for fatal disease diagnosis and survivability prediction can have a significant impact on healthcare professionals as well as patients. Such a system uses machine learning algorithms to analyze medical images and clinical data to detect cancer, estimate its survivability, and aid in treatment planning. The CDS system can be broken down into three main components: Computer-Aided Diagnosis (CAD), Computer-Aided Prognosis (CAP), and Computer-Aided Treatment Planning (CATP). The lack of trustworthiness of these subsystems is still considered a challenge that needs to be addressed in order to increase their adoption and usefulness in real-world applications. In this thesis, using the breast cancer use case, we propose new methods and frameworks to address existing challenges and research gaps in different components of the system, to pave the way toward its usage in clinical practice. In cancer CAD systems, the first and most important step is to analyze medical images to identify potential tumors in a specific organ. In dense prediction problems like mass segmentation, preserving the input image resolution plays a crucial role in achieving good performance. However, this resolution is often reduced in the Convolutional Neural Networks (CNNs) that are commonly repurposed for this task. In Chapter 3, we propose a double-dilated convolution module that preserves spatial resolution while maintaining a large receptive field. The proposed module is applied to the tumor segmentation task in breast cancer mammograms as a proof of concept. To address the pixel-level class imbalance problem in mammogram screening, different loss functions (i.e., binary cross-entropy, weighted cross-entropy, Dice loss, and Tversky loss) are evaluated. We address the lack of transparency in current medical image segmentation models by employing and quantitatively evaluating different explainability methods (i.e., Grad-CAM, Occlusion Sensitivity, and Activation Visualization) for the image segmentation task. Our experimental analysis shows the effectiveness of the proposed model in increasing the similarity score and decreasing the missed-detection rate. [...]
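    Among the loss functions evaluated, the Tversky loss merits a concrete illustration: it generalizes Dice loss by weighting false positives and false negatives separately, which is the property that makes it attractive for pixel-level class imbalance. The sketch below follows the standard formulation; the alpha/beta values are illustrative assumptions, not the thesis settings.

```python
# Standard Tversky loss for binary segmentation (PyTorch); illustrative settings.
import torch

def tversky_loss(pred: torch.Tensor, target: torch.Tensor,
                 alpha: float = 0.3, beta: float = 0.7,
                 eps: float = 1e-7) -> torch.Tensor:
    """pred: sigmoid probabilities; target: binary mask of the same shape.
    alpha weights false positives, beta weights false negatives;
    alpha = beta = 0.5 recovers Dice loss."""
    pred, target = pred.reshape(-1), target.reshape(-1)
    tp = (pred * target).sum()                 # soft true positives
    fp = (pred * (1 - target)).sum()           # soft false positives
    fn = ((1 - pred) * target).sum()           # soft false negatives
    tversky_index = (tp + eps) / (tp + alpha * fp + beta * fn + eps)
    return 1.0 - tversky_index

# Example: beta > alpha penalizes missed tumor pixels more heavily.
probs = torch.rand(1, 1, 64, 64)                       # stand-in predictions
mask = (torch.rand(1, 1, 64, 64) > 0.9).float()        # stand-in sparse mask
print(tversky_loss(probs, mask))
```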

    WiFi-Based Human Activity Recognition Using Attention-Based BiLSTM

    Recently, significant efforts have been made to explore human activity recognition (HAR) techniques that use information gathered by existing indoor wireless infrastructures through WiFi signals, without requiring the monitored subject to carry a dedicated device. The key intuition is that different activities introduce different multi-paths in WiFi signals and generate different patterns in the time series of channel state information (CSI). In this paper, we propose and evaluate a full pipeline for a CSI-based human activity recognition framework covering 12 activities in three different spatial environments, using two deep learning models: ABiLSTM and CNN-ABiLSTM. Evaluation experiments have demonstrated that the proposed models outperform state-of-the-art models. The experiments also show that the proposed models can be applied to other environments with different configurations, albeit with some caveats. The proposed ABiLSTM model achieves overall accuracies of 94.03%, 91.96%, and 92.59% across the three target environments, while the proposed CNN-ABiLSTM model reaches 98.54%, 94.25%, and 95.09% across those same environments.
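    The exact ABiLSTM architecture is not given in the abstract, so the following is a hypothetical PyTorch sketch of an attention-based BiLSTM classifier over CSI windows: a bidirectional LSTM encodes the packet sequence, an additive attention layer pools over time, and a linear head outputs the 12 class logits. All layer sizes and the input shape are assumptions for illustration.

```python
# Hypothetical attention-based BiLSTM (ABiLSTM) over CSI time series.
import torch
import torch.nn as nn

class ABiLSTM(nn.Module):
    def __init__(self, n_subcarriers: int = 90, hidden: int = 128, n_classes: int = 12):
        super().__init__()
        self.bilstm = nn.LSTM(n_subcarriers, hidden,
                              batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)   # scores each time step
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, subcarriers) CSI amplitude sequence
        h, _ = self.bilstm(x)                          # (batch, time, 2*hidden)
        weights = torch.softmax(self.attn(h), dim=1)   # attention over time steps
        context = (weights * h).sum(dim=1)             # weighted temporal pooling
        return self.fc(context)                        # class logits

model = ABiLSTM()
logits = model(torch.randn(4, 200, 90))  # 4 windows of 200 packets, 90 subcarriers
print(logits.shape)                      # torch.Size([4, 12])
```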

    Developing Novel Computer Aided Diagnosis Schemes for Improved Classification of Mammography Detected Masses

    Mammography imaging is a population-based breast cancer screening tool that has greatly aided the decrease in breast cancer mortality over time. Although mammography is the most frequently employed breast imaging modality, its performance is often unsatisfactory, with low sensitivity and high false positive rates, because reading and interpreting mammography images remains difficult due to the heterogeneity of breast tumors and dense overlapping fibroglandular tissue. To help overcome these clinical challenges, researchers have made great efforts to develop computer-aided detection and/or diagnosis (CAD) schemes to provide radiologists with decision-making support tools. In this dissertation, I investigate several novel methods for improving the performance of a CAD system in distinguishing between malignant and benign masses. In the first study, we test the hypothesis that handcrafted radiomics features and deep learning features contain complementary information, and therefore that fusing these two types of features will increase the feature representation of each mass and improve the performance of the CAD system in distinguishing malignant from benign masses. Regions of interest (ROIs) surrounding suspicious masses are extracted and two types of features are computed. The first set consists of 40 radiomics features and the second set includes deep learning (DL) features computed from a pretrained VGG16 network. DL features are extracted from two pseudo-color image sets, producing a total of three feature vectors after feature extraction, namely: handcrafted, DL-stacked, and DL-pseudo. Linear support vector machines (SVMs) are trained using each feature set alone and in combinations. Results show that the fusion CAD system significantly outperforms the systems using either feature type alone (AUC = 0.756±0.042, p < 0.05). This study demonstrates that both handcrafted and DL features contain useful complementary information and that fusing these two types of features increases CAD classification performance. In the second study, we expand upon the first study and develop a novel CAD framework that fuses information extracted from ipsilateral views of bilateral mammograms using both DL and radiomics feature extraction methods. Each case in this study is represented by four images, comprising the craniocaudal (CC) and mediolateral oblique (MLO) views of the left and right breasts. First, we extract matching ROIs from each of the four views using an ipsilateral matching and bilateral registration scheme to ensure masses are appropriately matched. Next, the handcrafted radiomics features and VGG16 model-generated features are extracted from each ROI, resulting in eight feature vectors. Then, after reducing feature dimensionality and quantifying the bilateral asymmetry, we test four fusion methods. Results show that multi-view CAD systems significantly outperform single-view systems (AUC = 0.876±0.031 vs. AUC = 0.817±0.026 for the CC view and 0.792±0.026 for the MLO view, p < 0.001). The study demonstrates that the shift from single-view CAD to four-view CAD, together with the inclusion of both deep transfer learning and radiomics features, increases the feature representation of the mass and thus improves CAD performance in distinguishing between malignant and benign breast lesions.
In the third study, we build upon the first and second studies and investigate the effects of pseudo-color image generation in classifying suspicious mammography-detected breast lesions as malignant or benign using deep transfer learning in a multi-view CAD scheme. Seven pseudo-color image sets are created through combinations of the original grayscale image, a histogram-equalized image, a bilaterally filtered image, and a segmented mass image. Using the multi-view CAD framework developed in the previous study, we observe that the two pseudo-color sets created using a segmented mass in one of the three image channels perform significantly better than all other pseudo-color sets (AUC = 0.882 and AUC = 0.889, p < 0.05 for all comparisons). The results of this study support our hypothesis that pseudo-color images generated with a segmented mass optimize the mammogram image feature representation by providing increased complementary information to the CADx scheme, which increases performance in classifying suspicious mammography-detected breast lesions as malignant or benign. In summary, each of the studies presented in this dissertation aims to increase the accuracy of a CAD system in classifying suspicious mammography-detected masses, and each takes a novel approach to increasing the feature representation of the mass to be classified. The results of each study demonstrate the potential utility of these CAD schemes as an aid to radiologists in the clinical workflow.
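    The feature-level fusion idea of the first study can be illustrated compactly: concatenate the handcrafted radiomics vector with the DL feature vector and train a linear SVM, scoring with AUC. The sketch below uses scikit-learn with randomly generated stand-in features (the 40-feature radiomics dimension follows the abstract; the 512-dimensional deep features and all data are placeholders, not the dissertation's pipeline).

```python
# Illustrative radiomics + deep-feature fusion with a linear SVM.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n_cases = 200
radiomics = rng.normal(size=(n_cases, 40))   # 40 handcrafted features per mass
deep = rng.normal(size=(n_cases, 512))       # stand-in VGG16-derived features
labels = rng.integers(0, 2, n_cases)         # 0 = benign, 1 = malignant

fused = np.hstack([radiomics, deep])         # feature-level fusion by concatenation
X_tr, X_te, y_tr, y_te = train_test_split(fused, labels, test_size=0.3,
                                          random_state=0, stratify=labels)

svm = SVC(kernel="linear", probability=True).fit(X_tr, y_tr)
scores = svm.predict_proba(X_te)[:, 1]
print(f"AUC on synthetic stand-in data: {roc_auc_score(y_te, scores):.3f}")
```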

    Deep Learning in Breast Cancer Imaging: A Decade of Progress and Future Directions

    Breast cancer has had the highest incidence rate worldwide among all malignancies since 2020. Breast imaging plays a significant role in early diagnosis and intervention to improve the outcomes of breast cancer patients. In the past decade, deep learning has shown remarkable progress in breast cancer imaging analysis, holding great promise for interpreting the rich information and complex context of breast imaging modalities. Considering the rapid improvement of deep learning technology and the increasing severity of breast cancer, it is critical to summarize past progress and identify the future challenges to be addressed. In this paper, we provide an extensive survey of deep learning-based breast cancer imaging research, covering studies on mammography, ultrasound, magnetic resonance imaging, and digital pathology images over the past decade. The major deep learning methods, publicly available datasets, and applications in imaging-based screening, diagnosis, treatment response prediction, and prognosis are described in detail. Drawing on the findings of this survey, we present a comprehensive discussion of the challenges and potential avenues for future research in deep learning-based breast cancer imaging.

    Intraoperative Quantification of Bone Perfusion in Lower Extremity Injury Surgery

    Orthopaedic surgery is one of the most common surgical categories. In particular, lower extremity injuries sustained from trauma can be complex and life-threatening and are addressed through orthopaedic trauma surgery. Timely evaluation and surgical debridement following lower extremity injury are essential, because devitalized bone and tissue result in high surgical site infection rates. However, the current clinical judgment of what constitutes "devitalized tissue" is subjective and dependent on surgeon experience, so it is necessary to develop imaging techniques for guiding surgical debridement, in order to control infection rates and improve patient outcomes. In this thesis work, computational models of fluorescence-guided debridement in lower extremity injury surgery are developed by quantifying bone perfusion intraoperatively with a dynamic contrast-enhanced fluorescence imaging (DCE-FI) system. Perfusion is an important factor in tissue viability, and quantifying perfusion is therefore essential for fluorescence-guided debridement. In Chapters 3-7 of this thesis, we explore the performance of DCE-FI in quantifying perfusion from benchtop to translation. We propose a modified fluorescent microsphere quantification technique using a cryomacrotome in an animal model; this technique can measure periosteal and endosteal bone perfusion separately, and therefore validates bone perfusion measurements obtained by DCE-FI. We develop a pre-clinical rodent contaminated fracture model to correlate DCE-FI with infection risk and compare it with multi-modality scanning. Furthermore, in clinical studies, we investigate first-pass kinetic parameters of DCE-FI and arterial input functions for characterizing perfusion changes during lower limb amputation surgery. We conduct the first in-human use of dynamic contrast-enhanced texture analysis for orthopaedic trauma classification, suggesting that spatiotemporal features from DCE-FI can classify bone perfusion intraoperatively with high accuracy and sensitivity. Finally, we establish a clinical machine learning model for predicting infection risk in open fracture surgery, producing pixel-scale predictions of infection risk. In conclusion, pharmacokinetic and spatiotemporal patterns of dynamic contrast-enhanced imaging show great potential for quantifying bone perfusion and prognosing bone infection. This thesis work aims to decrease surgical site infection risk and improve the success rates of lower extremity injury surgery.
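    The abstract does not define its first-pass kinetic parameters, but simple curve features such as time-to-peak, peak intensity, and maximum ingress slope are commonly read off a dynamic contrast-enhanced time-intensity curve. The sketch below computes these generic features on a synthetic bolus-like curve; it is an assumption-laden illustration, not the thesis's kinetic model.

```python
# Generic first-pass curve features from a time-intensity curve (NumPy).
import numpy as np

def first_pass_parameters(t: np.ndarray, intensity: np.ndarray) -> dict:
    """t in seconds, intensity in arbitrary fluorescence units."""
    peak_idx = int(np.argmax(intensity))
    slopes = np.gradient(intensity, t)            # dI/dt along the curve
    return {
        "time_to_peak_s": float(t[peak_idx]),
        "peak_intensity": float(intensity[peak_idx]),
        "max_ingress_slope": float(slopes[: peak_idx + 1].max()),
    }

# Synthetic gamma-variate-like bolus curve as a stand-in for a pixel's
# measured fluorescence over time.
t = np.linspace(0, 120, 600)
curve = (t / 20) ** 2 * np.exp(-(t - 20) / 15)
print(first_pass_parameters(t, curve))
```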