7 research outputs found

    Photoacoustic Imaging, Feature Extraction, and Machine Learning Implementation for Ovarian and Colorectal Cancer Diagnosis

    Among all cancers of the female reproductive system, ovarian cancer has the highest mortality rate. Pelvic examination, transvaginal ultrasound (TVUS), and blood testing for cancer antigen 125 (CA-125) are the conventional screening tools for ovarian cancer, but they offer very low specificity. Other tools, such as magnetic resonance imaging (MRI), computed tomography (CT), and positron emission tomography (PET), also have limitations in detecting small lesions. In the USA, considering men and women separately, colorectal cancer is the third most common cause of cancer-related death; for men and women combined, it is the second leading cause of cancer deaths. An estimated 52,980 deaths from this cancer were expected in 2021. The common screening tools for colorectal cancer diagnosis include colonoscopy, biopsy, endoscopic ultrasound (EUS), optical imaging, pelvic MRI, CT, and PET, all of which have specific limitations. In this dissertation, we first discuss in-vivo ovarian cancer diagnosis using our coregistered photoacoustic tomography and ultrasound (PAT/US) system. The application of this system to ex-vivo colorectal cancer diagnosis is also explored. Finally, we discuss the capability of our photoacoustic microscopy (PAM) system, complemented by machine learning algorithms, in distinguishing cancerous rectums from normal ones. The dissertation begins with our low-cost phantom construction procedure for pre-clinical experiments and quantitative PAT. This phantom has ultrasound and photoacoustic properties similar to those of human tissue, making it a good candidate for photoacoustic imaging experiments. In-vivo ovarian cancer diagnosis using our PAT/US system is then discussed. We demonstrate extraction of spectral, image, and functional features from our PAT data. These features are then used to distinguish malignant ovaries (n=12) from benign ovaries (n=27). An AUC of 0.93 is achieved using our developed SVM classifier. 
We then explain a sliding multi-pixel method that mitigates the effect of noise on the estimation of functional features from PAT data. This method is tested on 13 malignant and 36 benign ovaries. Next, we demonstrate our two-step optimization method for unmixing the optical absorption coefficient (μa) of the tissue from the system response (C) and the Grüneisen parameter (Γ) in quantitative PAT (QPAT). Using this method, we calculate the absorption coefficients and functional parameters of five blood tubes with sO2 values ranging from 24.9% to 97.6%. We then demonstrate the capability of our PAT/US system in monitoring colorectal cancer treatment as well as in classifying 13 malignant and 17 normal colon samples. Using PAT features to distinguish these two types of samples, our classifier achieves an AUC of 0.93. Finally, we demonstrate the capability of our coregistered photoacoustic microscopy and ultrasound (PAM/US) system in distinguishing normal from malignant colorectal tissue. A convolutional neural network (CNN) is shown to significantly outperform a generalized linear model (GLM) in distinguishing these two types of lesions.
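
The AUC values quoted here can be computed directly from raw classifier scores: the AUC equals the probability that a randomly chosen malignant case scores higher than a randomly chosen benign one (ties counting half). A minimal sketch, using hypothetical scores rather than the dissertation's data:

```python
def roc_auc(scores_pos, scores_neg):
    # Rank-statistic form of the AUC: fraction of (positive, negative)
    # pairs in which the positive case receives the higher score.
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5  # ties count as half a win
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical malignant vs. benign classifier scores:
print(roc_auc([0.9, 0.8, 0.4], [0.3, 0.5, 0.2]))
```

This pairwise form is equivalent to integrating the ROC curve and is convenient for the small sample sizes (tens of ovaries) reported above.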

    Ultrasound-enhanced Unet model for quantitative photoacoustic tomography of ovarian lesions

    Quantitative photoacoustic tomography (QPAT) is a valuable tool for characterizing ovarian lesions for accurate diagnosis. However, accurately reconstructing a lesion's optical absorption distribution from photoacoustic signals measured at multiple wavelengths is challenging because it involves an ill-posed inverse problem with three unknowns: the Grüneisen parameter …

    Rectal cancer treatment management: Deep-learning neural network based on photoacoustic microscopy image outperforms histogram-feature-based classification

    We have developed a novel photoacoustic microscopy/ultrasound (PAM/US) endoscope to image post-treatment rectal cancer for surgical management of residual tumor after radiation and chemotherapy. Paired with a deep-learning convolutional neural network (CNN), the PAM images accurately differentiated pathological complete responders (pCR) from incomplete responders. However, the role of CNNs compared with traditional histogram-feature-based classifiers needs further exploration. In this work, we compare the performance of the CNN models to generalized linear models (GLM) across 2 …
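
For binary classification, the GLM baseline referenced here is typically a logistic regression fitted on scalar image features. A self-contained sketch on one hypothetical histogram feature, trained with plain stochastic gradient descent on toy data (not the study's):

```python
import math

def train_glm(X, y, lr=0.1, epochs=500):
    # Logistic-regression GLM: P(y=1|x) = sigmoid(w . x + b).
    # w is stored as feature weights plus a trailing bias term.
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + w[-1]
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - yi  # gradient of the log-loss w.r.t. z
            for j in range(len(xi)):
                w[j] -= lr * g * xi[j]
            w[-1] -= lr * g
    return w

def predict(w, xi):
    z = sum(wj * xj for wj, xj in zip(w, xi)) + w[-1]
    return 1.0 / (1.0 + math.exp(-z))

# Toy 1-D feature: low values labeled 0, high values labeled 1.
w = train_glm([[0.0], [0.2], [0.8], [1.0]], [0, 0, 1, 1])
```

A CNN, by contrast, learns its features from the raw PAM image rather than from hand-picked histogram statistics, which is the comparison the abstract sets up.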

    Photoacoustic tomography reconstruction using lag-based delay multiply and sum with a coherence factor improves in vivo ovarian cancer diagnosis

    Ovarian cancer is the fifth most common cause of cancer death, and it is the deadliest of all gynecological cancers. Diagnosing ovarian cancer via conventional photoacoustic delay-and-sum (DAS) beamforming presents several challenges, such as poor image resolution and low lesion-to-background tissue contrast. To address these concerns, we propose an improved beamformer named lag-based delay multiply and sum combined with a coherence factor (DMAS-LAG-CF). Simulations and phantom experiments demonstrate that, compared with conventional DAS, the proposed algorithm provides 1.39 times better resolution and 10.77 dB higher contrast. For patient data, similar performance on contrast ratios has been observed. However, since the diagnostic accuracy between the cancer and benign/normal groups is a significant measure, we have also extracted photoacoustic histogram features of mean, kurtosis, and skewness. DMAS-LAG-CF improves cancer diagnosis, achieving an AUC of 0.91 for distinguishing malignant vs. benign ovarian lesions when mean and skewness are used as features.
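
The building blocks behind these beamformers can be sketched per image pixel, after receive delays have already been applied to the channel data. This is an illustrative simplification of plain DAS, plain DMAS, and the coherence factor, not the full lag-based DMAS-LAG-CF algorithm proposed above:

```python
import math

def das(ch):
    # Delay-and-sum: after delaying, simply sum across channels.
    return sum(ch)

def dmas(ch):
    # Delay-multiply-and-sum: combine every channel pair's product,
    # using a signed square root to preserve sign and dimensionality.
    out = 0.0
    for i in range(len(ch)):
        for j in range(i + 1, len(ch)):
            prod = ch[i] * ch[j]
            out += math.copysign(math.sqrt(abs(prod)), prod)
    return out

def coherence_factor(ch):
    # CF = coherent power / (N * incoherent power); equals 1 when all
    # channels agree, approaches 0 for uncorrelated noise. Weighting the
    # beamformed pixel by CF suppresses off-axis clutter.
    num = sum(ch) ** 2
    den = len(ch) * sum(s * s for s in ch)
    return num / den if den else 0.0
```

Because DMAS rewards inter-channel correlation and the CF penalizes incoherent energy, their combination raises contrast over DAS, which is the effect the 10.77 dB figure quantifies.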