
    Robust Resolution-Enhanced Prostate Segmentation in Magnetic Resonance and Ultrasound Images through Convolutional Neural Networks

    [EN] Prostate segmentations are required for an ever-increasing number of medical applications, such as image-based lesion detection, fusion-guided biopsy and focal therapies. However, obtaining accurate segmentations is laborious, requires expertise and, even then, inter-observer variability remains high. In this paper, a robust, accurate and generalizable model for Magnetic Resonance (MR) and three-dimensional (3D) Ultrasound (US) prostate image segmentation is proposed. It uses a DenseNet-ResNet-based Convolutional Neural Network (CNN) combined with techniques such as deep supervision, checkpoint ensembling and Neural Resolution Enhancement. The MR prostate segmentation model was trained on five challenging and heterogeneous MR prostate datasets (and two US datasets), with segmentations from many different experts with varying segmentation criteria. The model achieves consistently strong performance on each dataset independently (mean Dice Similarity Coefficient -DSC- above 0.91 for all datasets except one), significantly outperforming the inter-expert variability in MR (mean DSC of 0.9099 vs. 0.8794). When evaluated on the publicly available PROMISE12 challenge dataset, it attains performance similar to the best entries. In summary, the model has the potential to significantly impact current prostate procedures, reducing, and even eliminating, the need for manual segmentations through improvements in robustness, generalizability and output resolution. This work has been partially supported by a doctoral grant of the Spanish Ministry of Innovation and Science, with reference FPU17/01993.
    Pellicer-Valero, OJ.; González-Pérez, V.; Casanova Ramón-Borja, JL.; Martín García, I.; Barrios Benito, M.; Pelechano Gómez, P.; Rubio-Briones, J.... (2021). Robust Resolution-Enhanced Prostate Segmentation in Magnetic Resonance and Ultrasound Images through Convolutional Neural Networks. Applied Sciences. 11(2):1-17. https://doi.org/10.3390/app11020844
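    The reported Dice Similarity Coefficient (DSC) measures voxel overlap between a predicted mask and a reference mask. As a point of reference only (this is not the authors' code), a minimal NumPy sketch of how DSC is typically computed between two binary segmentation volumes:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice Similarity Coefficient between two binary masks of identical shape.

    DSC = 2|A & B| / (|A| + |B|); eps guards against two empty masks.
    """
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

# Toy 3D volumes standing in for prostate segmentations
a = np.zeros((8, 8, 8), dtype=bool); a[2:6, 2:6, 2:6] = True
b = np.zeros((8, 8, 8), dtype=bool); b[3:7, 2:6, 2:6] = True
print(f"DSC = {dice_coefficient(a, b):.4f}")  # DSC = 0.7500
```

    Under this metric, the abstract's numbers say the model's masks agree with expert masks (mean DSC 0.9099) more closely than the experts agree with each other (0.8794).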

    PWC-Net: CNNs for Optical Flow Using Pyramid, Warping, and Cost Volume

    We present a compact but effective CNN model for optical flow, called PWC-Net. PWC-Net has been designed according to simple and well-established principles: pyramidal processing, warping, and the use of a cost volume. Cast in a learnable feature pyramid, PWC-Net uses the current optical flow estimate to warp the CNN features of the second image. It then uses the warped features and the features of the first image to construct a cost volume, which is processed by a CNN to estimate the optical flow. PWC-Net is 17 times smaller and easier to train than the recent FlowNet2 model. Moreover, it outperforms all published optical flow methods on the MPI Sintel final pass and KITTI 2015 benchmarks, running at about 35 fps on Sintel-resolution (1024x436) images. Our models are available at https://github.com/NVlabs/PWC-Net.
    Comment: CVPR 2018 camera-ready version (with GitHub link to Caffe and PyTorch code)
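    The abstract names the two operations at the core of each pyramid level: backward-warping the second image's features by the current flow estimate, and correlating the result with the first image's features to build a cost volume. A minimal PyTorch sketch of those two steps, assuming a standard correlation layer (function names and the `max_disp` search range are illustrative, not taken from the released code):

```python
import torch
import torch.nn.functional as F

def warp(feat2: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Backward-warp feat2 (B,C,H,W) by a pixel-displacement flow (B,2,H,W)."""
    b, _, h, w = feat2.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys)).float().to(feat2.device)   # (2,H,W), (x,y) order
    coords = base.unsqueeze(0) + flow                       # absolute sampling positions
    gx = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0           # normalize to [-1, 1]
    gy = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)                    # (B,H,W,2) for grid_sample
    return F.grid_sample(feat2, grid, align_corners=True)

def cost_volume(feat1: torch.Tensor, warped: torch.Tensor, max_disp: int = 4) -> torch.Tensor:
    """Correlate feat1 with the warped features over a (2*max_disp+1)^2 window."""
    b, c, h, w = feat1.shape
    padded = F.pad(warped, [max_disp] * 4)                  # pad left/right/top/bottom
    costs = []
    for dy in range(2 * max_disp + 1):
        for dx in range(2 * max_disp + 1):
            shifted = padded[:, :, dy:dy + h, dx:dx + w]
            costs.append((feat1 * shifted).mean(dim=1, keepdim=True))
    return torch.cat(costs, dim=1)                          # (B,(2*max_disp+1)^2,H,W)
```

    A small CNN then decodes the cost volume (together with the first image's features and the upsampled flow) into a refined flow estimate, and the same pattern repeats at the next, finer pyramid level.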

    A discussion on the validation tests employed to compare human action recognition methods using the MSR Action3D dataset

    This paper aims to determine which is the best human action recognition method based on features extracted from RGB-D devices, such as the Microsoft Kinect. A review of all the papers that reference MSR Action3D, the most widely used dataset that includes depth information acquired from an RGB-D device, has been performed. We found that the validation method used by each work differs from the others, so a direct comparison among works cannot be made. However, almost all the works present their results in comparison with others without taking this issue into account. Therefore, we present different rankings according to the methodology used for validation, in order to clarify the existing confusion.
    Comment: 16 pages and 7 tables
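    To make the comparability problem concrete: reported accuracies on MSR Action3D depend heavily on which subjects end up in the training set. A toy sketch of the widely used cross-subject protocol (train on odd-numbered subjects, test on the rest); the record format here is hypothetical:

```python
from typing import List, Sequence, Tuple

Record = Tuple[str, int, int]  # (clip_id, subject_id, action_label)

def cross_subject_split(samples: Sequence[Record],
                        train_subjects: Tuple[int, ...] = (1, 3, 5, 7, 9)
                        ) -> Tuple[List[Record], List[Record]]:
    """Split clips by subject; other papers use different subject sets or
    cross-validation schemes, which is exactly why published numbers diverge."""
    train = [s for s in samples if s[1] in train_subjects]
    test = [s for s in samples if s[1] not in train_subjects]
    return train, test

# Hypothetical records: 3 actions x 10 subjects, one repetition each
records = [(f"a{a:02d}_s{s:02d}_e01", s, a) for a in range(1, 4) for s in range(1, 11)]
train, test = cross_subject_split(records)
print(len(train), len(test))  # 15 15
```

    Changing the subject sets, or switching to a cross-validation scheme, yields a different test set and therefore a different accuracy, which is why the paper presents a separate ranking per validation methodology.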

    STV-based Video Feature Processing for Action Recognition

    In comparison to still image-based processes, video features can provide rich and intuitive information about dynamic events occurring over a period of time, such as human actions, crowd behaviours, and other subject pattern changes. Although substantial progress has been made in the last decade on image processing, with successful applications in face matching and object recognition, video-based event detection still remains one of the most difficult challenges in computer vision research due to its complex continuous or discrete input signals, arbitrary dynamic feature definitions, and often ambiguous analytical methods. In this paper, a Spatio-Temporal Volume (STV) and region intersection (RI) based 3D shape-matching method is proposed to facilitate the definition and recognition of human actions recorded in videos. The distinctive characteristics and the performance gain of the devised approach stem from a coefficient factor-boosted 3D region intersection and matching mechanism developed in this research. This paper also reports an investigation into techniques for efficient STV data filtering to reduce the number of voxels (volumetric pixels) that need to be processed in each operational cycle of the implemented system. The encouraging features and improvements in operational performance registered in the experiments are discussed at the end of the paper.
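    As a hedged illustration of the two ingredients the abstract describes, the sketch below scores the 3D region intersection of two boolean spatio-temporal volumes and filters an STV by block-wise downsampling; the paper's actual boosting coefficient and filtering technique are not given in the abstract, so both are stand-ins:

```python
import numpy as np

def stv_intersection_score(stv_a: np.ndarray, stv_b: np.ndarray,
                           coeff: float = 1.0) -> float:
    """Voxel-level overlap (IoU) between two boolean (x, y, t) volumes,
    scaled by a placeholder for the paper's boosting coefficient."""
    inter = np.logical_and(stv_a, stv_b).sum()
    union = np.logical_or(stv_a, stv_b).sum()
    return coeff * float(inter) / float(union) if union else 0.0

def downsample_stv(stv: np.ndarray, factor: int = 2) -> np.ndarray:
    """Cut the voxel count by pooling each factor^3 block (one possible filter)."""
    x, y, t = (d // factor * factor for d in stv.shape)
    v = stv[:x, :y, :t]
    return v.reshape(x // factor, factor, y // factor, factor,
                     t // factor, factor).any(axis=(1, 3, 5))

# Toy action volumes: matching an observed STV against a template
template = np.zeros((16, 16, 8), dtype=bool); template[4:12, 4:12, :] = True
observed = np.zeros((16, 16, 8), dtype=bool); observed[5:13, 4:12, :] = True
print(stv_intersection_score(downsample_stv(template), downsample_stv(observed)))
```

    Matching against downsampled volumes trades some boundary precision for a large reduction in the voxels processed per cycle, which is the trade-off the filtering investigation targets.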