152 research outputs found

    Measurement Variability in Treatment Response Determination for Non-Small Cell Lung Cancer: Improvements using Radiomics

    Multimodality imaging measurements of treatment response are critical for clinical practice, oncology trials, and the evaluation of new treatment modalities. The current standard for determining treatment response in non-small cell lung cancer (NSCLC) is based on tumor size using the RECIST criteria. Molecular targeted agents and immunotherapies, however, often cause morphological change without reduction of tumor size, making therapeutic response difficult to evaluate with conventional methods. Radiomics is the quantitative study of cancer imaging features, extracted with machine learning together with semantic features. This approach can provide comprehensive information on tumor phenotypes and can be used to assess therapeutic response in this new age of immunotherapy. Delta radiomics, which evaluates the longitudinal changes in radiomics features, shows potential in gauging treatment response in NSCLC. It is well known that quantitative measurement methods may be subject to substantial variability due to differences in technical factors and require standardization. In this review, we describe measurement variability in the evaluation of NSCLC and the emerging role of radiomics. © 2019 Wolters Kluwer Health, Inc. All rights reserved.
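The delta-radiomics idea described above can be sketched in a few lines: features extracted at baseline and at follow-up are compared as relative changes per feature. This is an illustrative sketch, not a method from the review; the feature names and values are invented.

```python
# Hypothetical delta-radiomics computation: for each feature extracted at
# baseline and follow-up, compute the relative (delta) change. Feature names
# and values below are illustrative only.

def delta_features(baseline: dict, followup: dict) -> dict:
    """Relative change of each radiomics feature between two timepoints."""
    deltas = {}
    for name, base_value in baseline.items():
        if name in followup and base_value != 0:
            deltas[name] = (followup[name] - base_value) / base_value
    return deltas

baseline = {"volume_mm3": 12000.0, "glcm_entropy": 4.2, "mean_hu": 35.0}
followup = {"volume_mm3": 10800.0, "glcm_entropy": 3.9, "mean_hu": 30.0}

deltas = delta_features(baseline, followup)
print(deltas)  # volume shrank by 10%; texture and attenuation also decreased
```

In a real delta-radiomics study, the same feature-extraction pipeline (scanner protocol, segmentation, binning) must be applied at both timepoints, precisely because of the measurement variability the review emphasizes.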

    Radiomics in Lung Cancer from Basic to Advanced: Current Status and Future Directions

    Ideally, radiomics features and radiomics signatures can be used as imaging biomarkers for diagnosis, staging, prognosis, and prediction of tumor response. Accordingly, the number of published radiomics studies is increasing exponentially, leading to a myriad of new radiomics-based evidence for lung cancer. Consequently, it is challenging for radiologists to keep up with the development of radiomics features and their clinical applications. In this article, we review radiomics in lung cancer from the basics to advanced topics to guide young researchers who are eager to start exploring radiomics investigations. We also cover technical issues, because knowledge of the technical aspects of radiomics supports a well-informed interpretation of its use in lung cancer. Copyright © 2020 The Korean Society of Radiology.

    Oncologic Imaging and Radiomics: A Walkthrough Review of Methodological Challenges

    Imaging plays a crucial role in the management of oncologic patients, from initial diagnosis to staging and treatment response monitoring. Recently, it has been suggested that its importance could be further increased by accessing a new layer of previously hidden quantitative data at the pixel level. Using a multi-step process, radiomics extracts potential biomarkers from medical images that could power decision support tools. Despite the growing interest and rising number of research articles being published, radiomics is still far from fulfilling its promise of guiding oncologic imaging toward personalized medicine. This is, at least partly, due to the heterogeneous methodological quality of radiomics research, caused by the complexity of the analysis pipelines. In this review, we aim to disentangle this complexity with a stepwise approach. Specifically, we focus on the challenges faced during image preprocessing and segmentation, how to handle imbalanced classes and avoid information leaks, as well as strategies for the proper validation of findings.
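One of the pitfalls named above, information leakage, can be made concrete with a small sketch: normalization statistics must be estimated on the training fold only, never on the full dataset before splitting. The data and fold indices below are synthetic stand-ins, not from the review.

```python
import numpy as np

# Illustrative sketch of avoiding information leakage in a radiomics pipeline:
# feature scaling statistics (mean/std) must come from the training fold only.
# Data here is random noise standing in for extracted radiomics features.

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))          # 100 patients x 5 radiomics features

def leaky_scaling(X, train_idx, test_idx):
    # WRONG: statistics estimated on all samples, so test data leaks in.
    mu, sd = X.mean(axis=0), X.std(axis=0)
    return (X[train_idx] - mu) / sd, (X[test_idx] - mu) / sd

def safe_scaling(X, train_idx, test_idx):
    # RIGHT: statistics estimated on the training fold only.
    mu, sd = X[train_idx].mean(axis=0), X[train_idx].std(axis=0)
    return (X[train_idx] - mu) / sd, (X[test_idx] - mu) / sd

train_idx, test_idx = np.arange(80), np.arange(80, 100)
Xtr, Xte = safe_scaling(X, train_idx, test_idx)
# Training fold is exactly zero-mean per feature; test fold only approximately.
print(Xtr.shape, Xte.shape)
```

The same rule extends to feature selection and class-rebalancing: any step that looks at labels or feature distributions must be fit inside each cross-validation fold.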

    Prostate Segmentation and Regions of Interest Detection in Transrectal Ultrasound Images

    The early detection of prostate cancer plays a significant role in the success of treatment and outcome. Detection relies on imaging modalities such as TransRectal UltraSound (TRUS) and Magnetic Resonance Imaging (MRI). MRI images are more comprehensible than TRUS images, which are corrupted by noise such as speckle and shadowing. However, MRI screening is costly, often unavailable in many community hospitals, time consuming, and requires more patient preparation time. Therefore, TRUS is more popular for screening and biopsy guidance for prostate cancer. For these reasons, TRUS images are chosen in this research. Radiologists first segment the prostate from the ultrasound image and then identify the hypoechoic regions, which are more likely to exhibit cancer and should be considered for biopsy. This thesis focuses on prostate segmentation and on Regions of Interest (ROI) segmentation. First, the extraneous tissues surrounding the prostate gland are eliminated. Consequently, the process of detecting the cancerous regions is focused on the prostate gland only, and the diagnostic process is significantly shortened. Segmentation techniques such as thresholding, region growing, classification, clustering, Markov random field models, artificial neural networks (ANNs), atlas-guided methods, and deformable models are investigated. In this dissertation, the deformable model technique is selected because it is capable of segmenting difficult images such as ultrasound images. Deformable models are classified as either parametric or geometric. For the prostate segmentation, one of the parametric deformable models, the Gradient Vector Flow (GVF) deformable contour, is adopted because it is capable of segmenting the prostate gland even if the initial contour is not close to the prostate boundary. The manual segmentation of ultrasound images not only consumes much time and effort, but also leads to operator-dependent results.
Therefore, a fully automatic prostate segmentation algorithm is proposed based on knowledge-based rules. The new algorithm's results are evaluated against manual outlining by using distance-based and area-based metrics. The novel technique is also compared with two well-known semi-automatic algorithms to illustrate its superiority; hypothesis testing shows that the proposed algorithm is statistically superior to both. The newly developed algorithm is operator-independent and capable of accurately segmenting a prostate gland of any shape and orientation from the ultrasound image. The focus of the second part of the research is to locate the regions that are more prone to cancer. Although the parametric dynamic contour technique can readily segment a single region, it is not conducive to segmenting multiple regions, as required in the regions of interest (ROI) segmentation part. Since the number of regions is not known beforehand, the problem is stated as a 3D one by using a level set approach to handle topology changes such as splitting and merging of contours. For the proposed ROI segmentation algorithm, one of the geometric deformable models, active contours without edges, is used. This technique is capable of segmenting regions with weak edges, or even no edges at all. The results of the proposed ROI segmentation algorithm are compared with the two experts' manual markings. The results are also compared with the common regions manually marked by both experts and with the total regions marked by either expert. The proposed ROI segmentation algorithm is also evaluated by using region-based and pixel-based strategies. The evaluation results indicate that the proposed algorithm produces results similar to the experts' manual markings, but with the added advantages of being fast and reliable. This novel algorithm also detects some regions that have been missed by one expert but confirmed by the other.
In conclusion, the two newly devised algorithms can assist experts in segmenting the prostate image and detecting the suspicious abnormal regions that should be considered for biopsy. This leads to a reduction in the number of biopsies, early detection of diseased regions, proper management, and a possible reduction in prostate cancer-related deaths.
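The distance-based and area-based evaluation mentioned above can be illustrated with a minimal area-overlap metric comparing an automatic segmentation against an expert's manual outline. The masks below are toy examples; the Dice coefficient is a standard metric, not taken verbatim from the thesis.

```python
import numpy as np

# Illustrative area-based evaluation of a segmentation against a manual
# outline, using the standard Dice similarity coefficient on binary masks.

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

auto = np.zeros((10, 10), dtype=bool)
auto[2:8, 2:8] = True                  # toy automatic segmentation
manual = np.zeros((10, 10), dtype=bool)
manual[3:9, 3:9] = True                # toy expert outline, slightly shifted

print(round(dice(auto, manual), 3))    # partial overlap between the two masks
```

Distance-based metrics (e.g. mean or maximum boundary distance) complement such overlap scores by penalizing boundary deviations that barely change the overlapping area.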

    Artificial intelligence in cancer imaging: Clinical challenges and applications

    Judgement, as one of the core tenets of medicine, relies upon the integration of multilayered data with nuanced decision making. Cancer offers a unique context for medical decisions given not only its variegated forms and evolution of disease but also the need to take into account the individual condition of patients, their ability to receive treatment, and their responses to treatment. Challenges remain in the accurate detection, characterization, and monitoring of cancers despite improved technologies. Radiographic assessment of disease most commonly relies upon visual evaluations, the interpretations of which may be augmented by advanced computational analyses. In particular, artificial intelligence (AI) promises to make great strides in the qualitative interpretation of cancer imaging by expert clinicians, including volumetric delineation of tumors over time, extrapolation of the tumor genotype and biological course from its radiographic phenotype, prediction of clinical outcome, and assessment of the impact of disease and treatment on adjacent organs. AI may automate processes in the initial interpretation of images and shift the clinical workflow of radiographic detection, management decisions on whether or not to administer an intervention, and subsequent observation to a yet-to-be-envisioned paradigm. Here, the authors review the current state of AI as applied to medical imaging of cancer and describe advances in 4 tumor types (lung, brain, breast, and prostate) to illustrate how common clinical problems are being addressed. Although most studies evaluating AI applications in oncology to date have not been rigorously validated for reproducibility and generalizability, the results do highlight increasingly concerted efforts to push AI technology toward clinical use and to impact future directions in cancer care.

    Robust Resolution-Enhanced Prostate Segmentation in Magnetic Resonance and Ultrasound Images through Convolutional Neural Networks

    [EN] Prostate segmentations are required for an ever-increasing number of medical applications, such as image-based lesion detection, fusion-guided biopsy and focal therapies. However, obtaining accurate segmentations is laborious, requires expertise and, even then, the inter-observer variability remains high. In this paper, a robust, accurate and generalizable model for Magnetic Resonance (MR) and three-dimensional (3D) Ultrasound (US) prostate image segmentation is proposed. It uses a densenet-resnet-based Convolutional Neural Network (CNN) combined with techniques such as deep supervision, checkpoint ensembling and Neural Resolution Enhancement. The MR prostate segmentation model was trained with five challenging and heterogeneous MR prostate datasets (and two US datasets), with segmentations from many different experts with varying segmentation criteria. The model achieves consistently strong performance in all datasets independently (mean Dice Similarity Coefficient -DSC- above 0.91 for all datasets except one), significantly outperforming the inter-expert variability in MR (mean DSC of 0.9099 vs. 0.8794). When evaluated on the publicly available Promise12 challenge dataset, it attains performance similar to the best entries. In summary, the model has the potential to significantly impact current prostate procedures, reducing, and even eliminating, the need for manual segmentations through improvements in robustness, generalizability and output resolution. This work has been partially supported by a doctoral grant of the Spanish Ministry of Innovation and Science, with reference FPU17/01993.
    Pellicer-Valero, O. J.; González-Pérez, V.; Casanova Ramón-Borja, J. L.; Martín García, I.; Barrios Benito, M.; Pelechano Gómez, P.; Rubio-Briones, J.; … (2021). Robust Resolution-Enhanced Prostate Segmentation in Magnetic Resonance and Ultrasound Images through Convolutional Neural Networks. Applied Sciences, 11(2), 1-17. https://doi.org/10.3390/app11020844
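The checkpoint ensembling technique mentioned in the abstract can be sketched simply: probability maps predicted by several training checkpoints are averaged before thresholding into a binary mask. The tiny "checkpoint" outputs below are synthetic arrays standing in for real CNN predictions; this is a sketch of the general technique, not the paper's implementation.

```python
import numpy as np

# Minimal sketch of checkpoint ensembling for segmentation: average the
# per-checkpoint probability maps, then binarise. Arrays below are synthetic
# stand-ins for network outputs on a 2x2 image.

def ensemble_segmentation(prob_maps, threshold=0.5):
    """Average probability maps from several checkpoints and binarise."""
    mean_prob = np.mean(prob_maps, axis=0)
    return mean_prob >= threshold

ckpt_a = np.array([[0.9, 0.4], [0.2, 0.7]])
ckpt_b = np.array([[0.8, 0.3], [0.1, 0.6]])
ckpt_c = np.array([[0.7, 0.2], [0.3, 0.8]])

mask = ensemble_segmentation([ckpt_a, ckpt_b, ckpt_c])
print(mask)
```

Averaging over checkpoints from late training epochs tends to smooth out the per-checkpoint noise in boundary voxels, which is one reason ensembling helps the robustness the authors report.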

    Methodology for Extensive Evaluation of Semiautomatic and Interactive Segmentation Algorithms Using Simulated Interaction Models

    The performance of semiautomatic and interactive segmentation (SIS) algorithms is usually evaluated by employing a small number of human operators to segment the images. The human operators typically provide the approximate location of objects of interest and their boundaries in an interactive phase, followed by an automatic phase where the segmentation is performed under the constraints of the operator-provided guidance. The segmentation results produced from this small set of interactions do not represent the true capability and potential of the algorithm being evaluated. For example, due to inter-operator variability, human operators may make choices that yield either overestimated or underestimated results. Moreover, their choices may not be realistic compared to how the algorithm is used in the field, since interaction may be influenced by operator fatigue and lapses in judgement. Other drawbacks of using human operators to assess SIS algorithms include human error, the lack of available expert users, and the expense. A methodology for evaluating segmentation performance is proposed here which uses simulated interaction models to programmatically generate large numbers of interactions, ensuring the presence of interactions throughout the object region. These interactions are used to segment the objects of interest, and the resulting segmentations are then analysed using statistical methods. The large number of interactions generated by simulated interaction models captures the variability in the set of user interactions by considering every pixel inside the entire object region as a potential location for an interaction, with equal probability.
Due to the practical limitation imposed by the amount of computation required for the enormous number of possible interactions, uniform sampling of interactions at regular intervals is used to generate a subset of all possible interactions that can still represent the diverse pattern of the entire set. Categorization of interactions into different groups, based on the position of the interaction inside the object region and the texture properties of the image region where the interaction is located, enables fine-grained analysis of algorithm performance along these two criteria. Application of statistical hypothesis testing makes the analysis more accurate, scientific, and reliable than conventional evaluation of semiautomatic segmentation algorithms. The proposed methodology has been demonstrated in two case studies through the implementation of seven different algorithms with three different types of interaction modes, making a total of nine segmentation applications, to assess its efficacy. Application of this methodology has revealed in-depth, fine details about the performance of the segmentation algorithms that currently existing methods could not achieve, due to the absence of a large, unbiased set of interactions. Practical application of the methodology to a number of algorithms and diverse interaction modes has shown its feasibility and generality, establishing it as an appropriate methodology. Its development into a tool for automatic evaluation of the performance of SIS algorithms looks very promising for users of image segmentation.
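The uniform-sampling step described above can be sketched directly: instead of human operators, candidate seed-point interactions are generated at a fixed stride across the object mask, so the whole object region is covered. The mask shape, stride, and function name below are illustrative choices, not taken from the thesis.

```python
import numpy as np

# Illustrative sketch of simulated-interaction generation: sample candidate
# seed-point interactions at regular intervals inside the object region,
# approximating the "every pixel with equal probability" ideal tractably.

def sample_interactions(mask: np.ndarray, step: int):
    """Uniformly sample pixel locations inside the object at a fixed stride."""
    points = []
    for r in range(0, mask.shape[0], step):
        for c in range(0, mask.shape[1], step):
            if mask[r, c]:
                points.append((r, c))
    return points

obj = np.zeros((12, 12), dtype=bool)
obj[2:10, 3:11] = True                     # synthetic object region

seeds = sample_interactions(obj, step=2)   # every 2nd pixel inside the object
print(len(seeds))
```

Each sampled seed would then be fed to the SIS algorithm as a simulated interaction, and the resulting segmentations compared statistically, as the methodology proposes.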