
    Comprehensive Framework for Computer-Aided Prostate Cancer Detection in Multi-Parametric MRI

    Prostate cancer is the most commonly diagnosed cancer in men and one of the leading causes of cancer death, but survival rates are relatively high when it is diagnosed sufficiently early. The current clinical model for initial prostate cancer screening is invasive and prone to overdiagnosis. As such, magnetic resonance imaging (MRI) has recently grown in popularity as a non-invasive, imaging-based prostate cancer screening method. In particular, the use of high-volume quantitative radiomic features extracted from multi-parametric MRI is gaining traction for automatic detection of prostate tumours, since it provides a wealth of mineable data that can be used for both detection and prognosis of prostate cancer. Current image-based cancer detection methods, however, face notable challenges, including noise in MR images, variability between different MRI modalities, weak contrast, and non-homogeneous texture patterns, which make it difficult for diagnosticians to identify tumour candidates. In this thesis, a comprehensive framework for computer-aided prostate cancer detection using multi-parametric MRI is introduced. The framework consists of two parts: i) a saliency-based method for identifying suspicious regions in multi-parametric MR prostate images based on statistical texture distinctiveness, and ii) automatic prostate tumour candidate detection using a radiomics-driven conditional random field (RD-CRF). The framework was evaluated on real clinical multi-parametric prostate MRI data from 20 patients, and both parts were compared against state-of-the-art approaches. The suspicious-region detection method achieved a 1.5% increase in sensitivity and a 10% increase in specificity and accuracy over the state-of-the-art method, indicating its potential for more visually meaningful identification of suspicious tumour regions. The RD-CRF was shown to improve tumour candidate detection by suppressing sparsely distributed candidates and refining the detected candidates via spatial consistency and radiomic feature relationships. Thus, the developed framework shows potential for helping medical professionals perform more efficient and accurate computer-aided prostate cancer detection.
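    The abstract's RD-CRF combines per-voxel radiomic evidence with spatial consistency between neighbouring candidates. A minimal sketch of that general idea, using a generic Potts-style CRF energy refined by iterated conditional modes (ICM) over a 2D probability map; the energy terms, the `beta` smoothness weight, and the ICM inference are illustrative assumptions, not the thesis's actual RD-CRF formulation:

    ```python
    import numpy as np

    def crf_refine(unary_prob, beta=2.0, iters=3):
        """Refine per-pixel tumour probabilities with a Potts-style CRF via ICM.

        unary_prob: 2D array of hypothetical per-pixel tumour scores in [0, 1].
        beta: weight of the spatial-consistency (pairwise) term (assumed).
        """
        eps = 1e-6
        # Unary energies for labels {0: background, 1: tumour}
        u1 = -np.log(unary_prob + eps)
        u0 = -np.log(1.0 - unary_prob + eps)
        labels = (unary_prob > 0.5).astype(int)
        h, w = labels.shape
        for _ in range(iters):
            for i in range(h):
                for j in range(w):
                    # Collect 4-connected neighbour labels
                    nbrs = []
                    if i > 0: nbrs.append(labels[i - 1, j])
                    if i < h - 1: nbrs.append(labels[i + 1, j])
                    if j > 0: nbrs.append(labels[i, j - 1])
                    if j < w - 1: nbrs.append(labels[i, j + 1])
                    nbrs = np.array(nbrs)
                    # Energy for each candidate label: unary + Potts disagreement penalty
                    e0 = u0[i, j] + beta * np.sum(nbrs != 0)
                    e1 = u1[i, j] + beta * np.sum(nbrs != 1)
                    labels[i, j] = int(e1 < e0)
        return labels
    ```

    On a toy probability map, a coherent high-probability block survives while an isolated high-probability pixel is suppressed, mirroring the abstract's point about mitigating sparsely distributed tumour candidates.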

    CMS-RCNN: Contextual Multi-Scale Region-based CNN for Unconstrained Face Detection

    Robust face detection in the wild is a key building block for many face-related problems, e.g. unconstrained face recognition, facial periocular recognition, facial landmarking and pose estimation, facial expression recognition, 3D facial model construction, etc. Although the face detection problem has been intensely studied for decades and has various commercial applications, it still fails in some real-world scenarios due to numerous challenges, e.g. heavy facial occlusion, extremely low resolution, strong illumination, extreme pose variations, image or video compression artifacts, etc. In this paper, we present a face detection approach named Contextual Multi-Scale Region-based Convolutional Neural Network (CMS-RCNN) to robustly address these problems. Like other region-based CNNs, our proposed network consists of a region proposal component and a region-of-interest (RoI) detection component. Unlike those networks, however, our proposed network makes two main contributions that play a significant role in achieving state-of-the-art face detection performance. First, multi-scale information is grouped both in the region proposal and the RoI detection stages to deal with tiny face regions. Second, our proposed network allows explicit body contextual reasoning, inspired by the human visual system. The proposed approach is benchmarked on two recent challenging face detection databases: the WIDER FACE Dataset, which contains a high degree of variability, and the Face Detection Dataset and Benchmark (FDDB). The experimental results show that our approach, trained on the WIDER FACE Dataset, outperforms strong baselines on WIDER FACE by a large margin and consistently achieves competitive results on FDDB against recent state-of-the-art face detection methods.
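    Grouping multi-scale information typically means pooling RoI features from several conv layers and fusing them; because activation magnitudes differ across layers, each scale is usually L2-normalised before concatenation. A small illustrative sketch of that fusion step (the exact fusion in CMS-RCNN may differ; this is a generic multi-scale trick, not the paper's verified implementation):

    ```python
    import numpy as np

    def fuse_multiscale(feats):
        """Fuse RoI features pooled from several conv layers.

        Each scale's feature vector is L2-normalised before concatenation so
        that no single layer's magnitude dominates the fused descriptor.
        feats: list of arrays, one per conv layer (hypothetical pooled features).
        """
        normed = []
        for f in feats:
            f = np.asarray(f, dtype=float).ravel()
            n = np.linalg.norm(f)
            normed.append(f / n if n > 0 else f)  # guard against all-zero features
        return np.concatenate(normed)
    ```

    For example, fusing a small-magnitude and a large-magnitude feature vector yields a descriptor in which each scale contributes a unit-norm segment.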

    Occlusion Coherence: Detecting and Localizing Occluded Faces

    The presence of occluders significantly impacts object recognition accuracy. However, occlusion is typically treated as an unstructured source of noise, and explicit models for occluders have lagged behind those for object appearance and shape. In this paper we describe a hierarchical deformable part model for face detection and landmark localization that explicitly models part occlusion. The proposed model structure makes it possible to augment positive training data with large numbers of synthetically occluded instances. This allows us to easily incorporate the statistics of occlusion patterns in a discriminatively trained model. We test the model on several benchmarks for landmark localization and detection, including challenging new data sets featuring significant occlusion. We find that the addition of an explicit occlusion model yields a detection system that outperforms existing approaches on occluded instances while maintaining competitive accuracy in detection and landmark localization for unoccluded instances.
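    The synthetic occlusion augmentation described above can be approximated very simply: paste an opaque region onto a positive training crop and record where it landed, so the occlusion pattern is known at training time. A rough sketch under the assumption of rectangular occluders (the paper models occlusion at the part level, so this is only an illustration of the data-augmentation idea):

    ```python
    import numpy as np

    def synth_occlude(img, rng, max_frac=0.4):
        """Paste a random opaque rectangle onto an image crop.

        A simple stand-in for synthetic occlusion augmentation: rectangle
        occluders and the max_frac size cap are assumptions for illustration.
        Returns the occluded copy and the (y, x, h, w) box of the occluder.
        """
        h, w = img.shape[:2]
        oh = rng.integers(1, max(2, int(h * max_frac)))  # occluder height
        ow = rng.integers(1, max(2, int(w * max_frac)))  # occluder width
        y = rng.integers(0, h - oh + 1)
        x = rng.integers(0, w - ow + 1)
        out = img.copy()
        out[y:y + oh, x:x + ow] = 0  # opaque occluder
        return out, (y, x, oh, ow)
    ```

    The returned box is exactly the structured occlusion label that makes such synthetic instances usable for discriminative training.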

    Multi-Path Region-Based Convolutional Neural Network for Accurate Detection of Unconstrained "Hard Faces"

    Large-scale variations still pose a challenge in unconstrained face detection. To the best of our knowledge, no current face detection algorithm can detect a face as large as 800 x 800 pixels while simultaneously detecting another as small as 8 x 8 pixels within a single image with equally high accuracy. We propose a two-stage cascaded face detection framework, the Multi-Path Region-based Convolutional Neural Network (MP-RCNN), that seamlessly combines a deep neural network with a classic learning strategy to tackle this challenge. The first stage is a Multi-Path Region Proposal Network (MP-RPN) that proposes faces at three different scales. It simultaneously utilizes three parallel outputs of the convolutional feature maps to predict multi-scale candidate face regions. The "atrous" convolution trick (convolution with up-sampled filters) and a newly proposed sampling layer for "hard" examples are embedded in MP-RPN to further boost its performance. The second stage is a Boosted Forests classifier, which utilizes deep facial features pooled from inside the candidate face regions as well as deep contextual features pooled from a larger region surrounding them. This step further removes hard negative samples. Experiments show that this approach achieves state-of-the-art face detection performance on the "hard" partition of the WIDER FACE dataset, outperforming the previous best result by 9.6% in Average Precision.
    Comment: 11 pages, 7 figures, to be presented at CRV 201
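    The "atrous" convolution trick mentioned above spaces the filter taps apart, which is equivalent to convolving with a filter that has zeros inserted between its taps: the receptive field grows without adding parameters. A minimal 1D sketch of this equivalence (the function name and 1D setting are illustrative; MP-RPN applies it to 2D feature maps):

    ```python
    import numpy as np

    def atrous_conv1d(x, w, rate):
        """1D 'atrous' (dilated) cross-correlation.

        The taps of filter w are spaced `rate` samples apart, enlarging the
        receptive field from len(w) to (len(w) - 1) * rate + 1 samples
        without adding parameters.
        """
        k = len(w)
        span = (k - 1) * rate + 1  # effective filter extent
        out = np.empty(len(x) - span + 1)
        for i in range(len(out)):
            out[i] = sum(w[t] * x[i + t * rate] for t in range(k))
        return out
    ```

    The same result is obtained by ordinary correlation with the filter "up-sampled" by inserting rate - 1 zeros between taps, which is exactly the up-sampled-filters view quoted in the abstract.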