32 research outputs found

    Domain generalization for prostate segmentation in transrectal ultrasound images: A multi-center study

    Prostate biopsy and image-guided treatment procedures are often performed under the guidance of ultrasound fused with magnetic resonance images (MRI). Accurate image fusion relies on accurate segmentation of the prostate on ultrasound images. Yet, the reduced signal-to-noise ratio and artifacts (e.g., speckle and shadowing) in ultrasound images limit the performance of automated prostate segmentation techniques, and generalizing these methods to new image domains is inherently difficult. In this study, we address these challenges by introducing a novel 2.5D deep neural network for prostate segmentation on ultrasound images. Our approach addresses the limitations of transfer learning and finetuning methods (i.e., the drop in performance on the original training data when the model weights are updated) by combining a supervised domain adaptation technique with a knowledge distillation loss. The knowledge distillation loss allows the preservation of previously learned knowledge and reduces the performance drop after model finetuning on new datasets. Furthermore, our approach relies on an attention module that incorporates feature position information to improve segmentation accuracy. We trained our model on 764 subjects from one institution and finetuned it using only ten subjects from subsequent institutions. We analyzed the performance of our method on three large datasets encompassing 2067 subjects from three different institutions. Our method achieved an average Dice Similarity Coefficient (Dice) of 94.0±0.03 and a 95th-percentile Hausdorff Distance (HD95) of 2.28 mm in an independent set of subjects from the first institution. Moreover, our model generalized well in the studies from the other two institutions (Dice: 91.0±0.03; HD95: 3.7 mm and Dice: 82.0±0.03; HD95: 7.1 mm). We introduced an approach that successfully segmented the prostate on ultrasound images in a multi-center study, suggesting its clinical potential to facilitate the accurate fusion of ultrasound and MRI images to drive biopsy and image-guided treatments.
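    The abstract above combines a supervised loss on the new institution's data with a knowledge distillation term that anchors the finetuned model to the predictions of the original (teacher) model. Below is a minimal PyTorch sketch of that idea; it is not the paper's implementation, and the loss weight `alpha` and temperature `T` are assumed hyperparameters.

```python
import torch
import torch.nn.functional as F

def finetune_loss(student_logits, teacher_logits, target, alpha=0.5, T=2.0):
    """Supervised segmentation loss plus a knowledge distillation penalty.

    student_logits, teacher_logits: (N, C, H, W) raw network outputs
    target: (N, H, W) integer class labels for the new-institution data
    alpha: trade-off between fitting new data and preserving old knowledge
    T: softmax temperature for distillation
    """
    # Standard supervised loss on the new dataset
    ce = F.cross_entropy(student_logits, target)
    # KL divergence between softened teacher and student predictions;
    # this discourages forgetting what was learned on the original data
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return (1 - alpha) * ce + alpha * kd
```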

    A review of artificial intelligence in prostate cancer detection on imaging

    A multitude of studies have explored the role of artificial intelligence (AI) in providing diagnostic support to radiologists, pathologists, and urologists in prostate cancer detection, risk-stratification, and management. This review provides a comprehensive overview of relevant literature regarding the use of AI models in (1) detecting prostate cancer on radiology images (magnetic resonance and ultrasound imaging), (2) detecting prostate cancer on histopathology images of prostate biopsy tissue, and (3) assisting in supporting tasks for prostate cancer detection (prostate gland segmentation, MRI-histopathology registration, MRI-ultrasound registration). We discuss both the potential of these AI models to assist in the clinical workflow of prostate cancer diagnosis and their current limitations, including variability in training data sets, algorithms, and evaluation criteria. We also discuss ongoing challenges and what is needed to bridge the gap between academic research on AI for prostate cancer and commercial solutions that improve routine clinical care.

    Robust Resolution-Enhanced Prostate Segmentation in Magnetic Resonance and Ultrasound Images through Convolutional Neural Networks

    Prostate segmentations are required for an ever-increasing number of medical applications, such as image-based lesion detection, fusion-guided biopsy and focal therapies. However, obtaining accurate segmentations is laborious, requires expertise and, even then, the inter-observer variability remains high. In this paper, a robust, accurate and generalizable model for Magnetic Resonance (MR) and three-dimensional (3D) Ultrasound (US) prostate image segmentation is proposed. It uses a DenseNet-ResNet-based Convolutional Neural Network (CNN) combined with techniques such as deep supervision, checkpoint ensembling and Neural Resolution Enhancement. The MR prostate segmentation model was trained with five challenging and heterogeneous MR prostate datasets (and two US datasets), with segmentations from many different experts with varying segmentation criteria. The model achieves a consistently strong performance in all datasets independently (mean Dice Similarity Coefficient (DSC) above 0.91 for all datasets except one), significantly outperforming the inter-expert variability in MR (mean DSC of 0.9099 vs. 0.8794). When evaluated on the publicly available PROMISE12 challenge dataset, it attains a performance similar to the best entries. In summary, the model has the potential for a significant impact on current prostate procedures, reducing, and even eliminating, the need for manual segmentations through improvements in robustness, generalizability and output resolution. This work has been partially supported by a doctoral grant of the Spanish Ministry of Innovation and Science, with reference FPU17/01993. Pellicer-Valero, OJ.; GonzĂĄlez-PĂ©rez, V.; Casanova RamĂłn-Borja, JL.; MartĂ­n GarcĂ­a, I.; Barrios Benito, M.; Pelechano GĂłmez, P.; Rubio-Briones, J.... (2021). Robust Resolution-Enhanced Prostate Segmentation in Magnetic Resonance and Ultrasound Images through Convolutional Neural Networks. Applied Sciences, 11(2), 1-17. https://doi.org/10.3390/app11020844
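    For reference on the metric and ensembling technique named above, here is a minimal NumPy sketch of the Dice Similarity Coefficient on binary masks, together with the simplest form of checkpoint ensembling (averaging probability maps from several saved checkpoints before thresholding). Both are generic illustrations under assumed inputs, not the authors' code.

```python
import numpy as np

def dice_coefficient(pred, truth):
    """DSC = 2*|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

def ensemble_prediction(prob_maps, threshold=0.5):
    """Checkpoint ensembling in its simplest form: average the probability
    maps produced by several checkpoints of one model, then threshold once."""
    return np.mean(prob_maps, axis=0) >= threshold
```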

    Boundary-RL: Reinforcement Learning for Weakly-Supervised Prostate Segmentation in TRUS Images

    We propose Boundary-RL, a novel weakly supervised segmentation method that utilises only patch-level labels for training. We envision segmentation as a boundary detection problem, rather than a pixel-level classification as in previous works. This outlook on segmentation may allow for boundary delineation under challenging scenarios, such as where noise artefacts are present within the region-of-interest (ROI) boundaries, where traditional pixel-level classification-based weakly supervised methods may not be able to effectively segment the ROI. Of particular interest, ultrasound images, where intensity values represent acoustic impedance differences between boundaries, may also benefit from the boundary delineation approach. Our method uses reinforcement learning to train a controller function to localise boundaries of ROIs using a reward derived from a pre-trained boundary-presence classifier. The classifier indicates when an object boundary is encountered within a patch as the controller modifies the patch location in a sequential Markov decision process. The classifier itself is trained using only binary patch-level labels of object presence, which are the only labels used during training of the entire boundary delineation framework, and serves as a weak signal to inform the boundary delineation. The use of a controller function ensures that a sliding window over the entire image is not necessary. It also prevents possible false-positive or -negative cases by minimising the number of patches passed to the boundary-presence classifier. We evaluate our proposed approach on the clinically relevant task of prostate gland segmentation in trans-rectal ultrasound images. We show improved performance compared to other tested weakly supervised methods using the same labels, e.g., multiple instance learning. Comment: Accepted to the MICCAI workshop MLMI 2023 (14th International Conference on Machine Learning in Medical Imaging).
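    To make the sequential decision process above concrete, here is a schematic sketch of one rollout: a controller policy moves a patch window over the image while a pre-trained boundary-presence classifier supplies the weak reward signal. The `controller` and `classifier` callables, the action set, and the step size are all assumptions for illustration, not the authors' design.

```python
import numpy as np

# Hypothetical discrete action space: shift the patch window by 8 pixels
ACTIONS = {0: (-8, 0), 1: (8, 0), 2: (0, -8), 3: (0, 8)}

def rollout(image, controller, classifier, start, patch=32, steps=50):
    """Move a patch window around the image in a sequential Markov decision
    process; reward the controller whenever the boundary-presence classifier
    fires on the current patch."""
    y, x = start
    trajectory, rewards = [], []
    h, w = image.shape
    for _ in range(steps):
        # Keep the window inside the image
        y = int(np.clip(y, 0, h - patch))
        x = int(np.clip(x, 0, w - patch))
        window = image[y:y + patch, x:x + patch]
        action = controller(window)         # policy picks the next shift
        reward = float(classifier(window))  # weak signal: boundary present?
        dy, dx = ACTIONS[action]
        y, x = y + dy, x + dx
        trajectory.append((y, x))
        rewards.append(reward)
    return trajectory, rewards
```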

    Deep learning enables prostate mri segmentation: a large cohort evaluation with inter-rater variability analysis

    Whole-prostate gland (WPG) segmentation plays a significant role in prostate volume measurement, treatment, and biopsy planning. This study evaluated a previously developed automatic WPG segmentation, deep attentive neural network (DANN), on a large, continuous patient cohort to test its feasibility in a clinical setting. With IRB approval and HIPAA compliance, the study cohort included 3,698 3T MRI scans acquired between 2016 and 2020. In total, 335 MRI scans were used to train the model, and 3,210 and 100 were used to conduct the qualitative and quantitative evaluation of the model, respectively. In addition, the DANN-enabled prostate volume estimation was evaluated using 50 MRI scans in comparison with manual prostate volume estimation. For qualitative evaluation, visual grading by two abdominal radiologists was used to assess the WPG segmentation, and DANN demonstrated either acceptable or excellent performance in over 96% of the testing cohort on the WPG and each prostate sub-portion (apex, midgland, or base). The two radiologists reached substantial agreement on WPG and midgland segmentation (Îș = 0.75 and 0.63) and moderate agreement on apex and base segmentation (Îș = 0.56 and 0.60). For quantitative evaluation, DANN demonstrated a Dice similarity coefficient of 0.93±0.02, significantly higher than baseline methods such as DeepLab v3+ and UNet (both p values < 0.05). For volume measurement, 96% of the evaluation cohort achieved differences between the DANN-enabled and manual volume measurements within the 95% limits of agreement. In conclusion, the study showed that DANN achieved sufficient and consistent WPG segmentation on a large, continuous study cohort, demonstrating its great potential to serve as a tool to measure prostate volume.
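    The volume comparison above is a standard Bland-Altman analysis. Below is a minimal NumPy sketch of the 95% limits of agreement, with paired automatic and manual volumes as assumed inputs.

```python
import numpy as np

def limits_of_agreement(auto_vol, manual_vol):
    """Bland-Altman analysis: bias (mean difference) and the 95% limits of
    agreement, bias ± 1.96 * SD of the paired differences."""
    diff = np.asarray(auto_vol, dtype=float) - np.asarray(manual_vol, dtype=float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    lo, hi = bias - 1.96 * sd, bias + 1.96 * sd
    # Percentage of cases whose difference falls inside the limits
    within = np.mean((diff >= lo) & (diff <= hi)) * 100
    return bias, (lo, hi), within
```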

    Software and Hardware-based Tools for Improving Ultrasound Guided Prostate Brachytherapy

    Minimally invasive procedures for prostate cancer diagnosis and treatment, including biopsy and brachytherapy, rely on medical imaging such as two-dimensional (2D) and three-dimensional (3D) transrectal ultrasound (TRUS) and magnetic resonance imaging (MRI) for critical tasks such as target definition and diagnosis, treatment guidance, and treatment planning. Use of these imaging modalities introduces challenges including time-consuming manual prostate segmentation, poor needle tip visualization, and variable MR-US cognitive fusion. The objective of this thesis was to develop, validate, and implement software- and hardware-based tools specifically designed for minimally invasive prostate cancer procedures to overcome these challenges. First, a deep learning-based automatic 3D TRUS prostate segmentation algorithm was developed and evaluated using a diverse dataset of clinical images acquired during prostate biopsy and brachytherapy procedures. The algorithm significantly outperformed state-of-the-art fully 3D CNNs trained on the same dataset, while a segmentation time of 0.62 s demonstrated a significant reduction compared to manual segmentation. Second, the impact of dataset size, image quality, and image type on segmentation performance using this algorithm was examined. Using smaller training datasets, segmentation accuracy was shown to plateau with as few as 1000 training images, supporting the use of deep learning approaches even when data is scarce. The development of an image quality grading scale specific to 3D TRUS images will allow for easier comparison between algorithms trained using different datasets. Third, a power Doppler (PD) US-based needle tip localization method was developed and validated in both phantom and clinical cases, demonstrating reduced tip error and variation for obstructed needles compared to conventional US. Finally, a surface-based MRI-3D TRUS deformable image registration algorithm was developed and implemented clinically, demonstrating improved registration accuracy compared to manual rigid registration and reduced variation compared to the current clinical standard of physician cognitive fusion. These generalizable and easy-to-implement tools have the potential to improve workflow efficiency and accuracy for minimally invasive prostate procedures.
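    For context on the registration comparison above, the rigid baseline can be illustrated by a least-squares (Kabsch/Procrustes) alignment of corresponding surface points. This sketch assumes point correspondences are given; it shows only the rigid component that deformable surface-based methods are typically compared against, not the thesis's algorithm.

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (rotation R, translation t) mapping
    corresponding 3D surface points src -> dst (Kabsch algorithm)."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    # Cross-covariance and its SVD give the optimal rotation
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```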

    End-to-end Prostate Cancer Detection in bpMRI via 3D CNNs: Effects of Attention Mechanisms, Clinical Priori and Decoupled False Positive Reduction

    We present a multi-stage 3D computer-aided detection and diagnosis (CAD) model for automated localization of clinically significant prostate cancer (csPCa) in bi-parametric MR imaging (bpMRI). Deep attention mechanisms drive its detection network, targeting salient structures and highly discriminative feature dimensions across multiple resolutions. Its goal is to accurately discriminate csPCa lesions from indolent cancer and the wide range of benign pathology that can afflict the prostate gland. Simultaneously, a decoupled residual classifier is used to achieve consistent false positive reduction without sacrificing high sensitivity or computational efficiency. In order to guide model generalization with domain-specific clinical knowledge, a probabilistic anatomical prior is used to encode the spatial prevalence and zonal distinction of csPCa. Using a large dataset of 1950 prostate bpMRI scans paired with radiologically-estimated annotations, we hypothesize that such CNN-based models can be trained to detect biopsy-confirmed malignancies in an independent cohort. For 486 institutional testing scans, the 3D CAD system achieves 83.69±5.22% and 93.19±2.96% detection sensitivity at 0.50 and 1.46 false positive(s) per patient, respectively, with 0.882±0.030 AUROC in patient-based diagnosis, significantly outperforming four state-of-the-art baseline architectures (U-SEResNet, UNet++, nnU-Net, Attention U-Net) from recent literature. For 296 external biopsy-confirmed testing scans, the ensembled CAD system shows moderate agreement with a consensus of expert radiologists (76.69%; Îș = 0.51±0.04) and independent pathologists (81.08%; Îș = 0.56±0.06), demonstrating strong generalization to histologically-confirmed csPCa diagnosis. Comment: Accepted to MedIA: Medical Image Analysis. This manuscript incorporates and expands upon our 2020 Medical Imaging Meets NeurIPS Workshop paper (arXiv:2011.00263).
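    Two evaluation views are reported above: lesion-level sensitivity at a fixed number of false positives per patient (an FROC operating point) and patient-level AUROC. The sketch below shows how each can be computed; the candidate format is an assumption, and each true lesion is assumed to appear at most once among the candidates.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def froc_point(candidates, n_patients, n_lesions, threshold):
    """candidates: list of (score, is_true_positive) detections pooled over
    all patients. Returns (false positives per patient, lesion sensitivity)
    at the given score threshold."""
    kept = [(s, hit) for s, hit in candidates if s >= threshold]
    tp = sum(1 for _, hit in kept if hit)      # assumes one candidate per lesion
    fp = sum(1 for _, hit in kept if not hit)
    return fp / n_patients, tp / n_lesions

def patient_auroc(patient_labels, patient_scores):
    """Patient-based diagnosis: AUROC over one score per patient
    (e.g., the maximum lesion score)."""
    return roc_auc_score(patient_labels, patient_scores)
```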