
    Imaging and pathology findings after an initial negative MRI-US fusion-guided and 12-core extended sextant prostate biopsy session

    PURPOSE: A magnetic resonance imaging-ultrasonography (MRI-US) fusion-guided prostate biopsy increases detection rates compared to an extended sextant biopsy. This study describes the imaging characteristics and pathology outcomes of subsequent biopsies in patients with initially negative MRI-US fusion biopsies.
    MATERIALS AND METHODS: We reviewed 855 biopsy sessions of 751 patients (June 2007 to March 2013). The fusion biopsy consisted of two cores per lesion identified on multiparametric MRI (mpMRI) plus a 12-core extended sextant transrectal US (TRUS) biopsy. Inclusion criteria were at least two fusion biopsy sessions, with a negative first biopsy and an mpMRI before each.
    RESULTS: The detection rate on the initial fusion biopsy was 55.3%; 336 patients had negative findings. Forty-one patients had follow-up fusion biopsies, but only 34 of these were preceded by a repeat mpMRI. The median interval between biopsies was 15 months. Fourteen patients (41%) were positive for cancer on the repeat MRI-US fusion biopsy. Age, prostate-specific antigen (PSA), prostate volume, PSA density, digital rectal exam findings, lesion diameter, and changes on imaging were comparable between patients with negative and positive rebiopsies. Of the patients with positive rebiopsies, 79% had a positive TRUS biopsy before referral (P = 0.004). Ten patients had Gleason 3+3 disease, three had 3+4 disease, and one had 4+4 disease.
    CONCLUSION: In patients with a negative MRI-US fusion prostate biopsy and indications for repeat biopsy, the detection rate of the follow-up sessions was lower than the initial detection rate. Of the prostate cancers subsequently found, 93% were low grade (≤3+4). In this low-risk group of patients, increasing the follow-up interval should be considered in the appropriate clinical setting.
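    Among the clinical variables compared above, PSA density is a simple derived quantity: serum PSA divided by the MRI-estimated prostate volume. A minimal sketch of that calculation, with hypothetical example values (the 0.15 ng/mL/cc cut-off is a commonly cited suspicion threshold, not a figure from this study):

```python
def psa_density(psa_ng_ml: float, prostate_volume_cc: float) -> float:
    """PSA density = serum PSA (ng/mL) / prostate volume (cc)."""
    if prostate_volume_cc <= 0:
        raise ValueError("prostate volume must be positive")
    return psa_ng_ml / prostate_volume_cc

# Hypothetical example: PSA 6.0 ng/mL in a 40 cc gland gives 0.15 ng/mL/cc,
# right at the commonly cited 0.15 threshold for elevated suspicion.
density = psa_density(6.0, 40.0)
```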

    Is Visual Registration Equivalent to Semiautomated Registration in Prostate Biopsy?

    In magnetic resonance imaging- (MRI-) ultrasound (US) guided biopsy, suspicious lesions are identified on MRI, registered on US, and targeted during biopsy. The registration can be performed either by a human operator (visual registration) or by fusion software. Previous studies have shown that software registration locates suspicious lesions fairly accurately and helps improve the cancer detection rate. Here, the performance of visual registration was examined for its ability to locate suspicious lesions defined on MRI. This study comprised 45 patients. Two operators with differing levels of experience (<1 and 18 years) performed visual registration. The overall spatial difference across the two operators' 72 measurements was 10.6 ± 6.0 mm. The operators showed spatial differences of 9.4 ± 5.1 mm (experienced; 39 lesions) and 12.1 ± 6.6 mm (inexperienced; 33 lesions), respectively. In a head-to-head comparison of the same 16 lesions from 12 patients, the spatial differences were 9.7 ± 4.9 mm (experienced) and 13.4 ± 7.4 mm (inexperienced). There were significant differences between the two operators (unpaired, P = 0.042; paired, P = 0.044). The substantial differences between the two operators suggest that visual registration could inaccurately target many tumors, potentially leading to missed diagnoses or false characterization on pathology.
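    The spatial differences reported above are Euclidean distances between an MRI-defined lesion position and the operator's visually registered target in a shared coordinate frame. A minimal sketch of that measurement, with hypothetical coordinates (the function name and values are illustrative, not from the study):

```python
import math

def spatial_difference(mri_xyz, us_xyz):
    """Euclidean distance (mm) between an MRI-defined lesion centre
    and the operator's visually registered target on ultrasound,
    assuming both are expressed in the same coordinate frame."""
    return math.dist(mri_xyz, us_xyz)

# Hypothetical lesion centre vs. registered target (mm): offset (3, 4, 0)
# gives a 5.0 mm spatial difference.
d = spatial_difference((10.0, 22.0, 5.0), (13.0, 26.0, 5.0))  # 5.0 mm
```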

    Fully automated prostate whole gland and central gland segmentation on MRI using holistically nested networks with short connections

    © 2019 Society of Photo-Optical Instrumentation Engineers (SPIE). Accurate and automated prostate whole gland and central gland segmentation on MR images is essential for aiding any prostate cancer diagnosis system. Our work presents a 2-D orthogonal deep learning method to automatically segment the whole prostate and central gland from T2-weighted axial-only MR images. The proposed method can generate high-density 3-D surfaces from low-resolution (z axis) MR images. Most prior methods have focused on axial images alone, e.g., 2-D segmentation of the prostate from each axial slice. Such methods tend to over- or under-segment the prostate at the apex and base, which contributes substantially to segmentation error. The proposed method leverages orthogonal context to effectively reduce apex and base segmentation ambiguities. It also avoids the jittering or stair-step surface artifacts that arise when constructing a 3-D surface from 2-D segmentations or from direct 3-D segmentation approaches such as 3-D U-Net. The experimental results demonstrate that the proposed method achieves a Dice similarity coefficient (DSC) of 92.4% ± 3% for the prostate and 90.1% ± 4.6% for the central gland, without trimming any ending contours at the apex and base. The experiments illustrate the feasibility and robustness of the 2-D holistically nested networks with short connections method for MR prostate and central gland segmentation. The proposed method achieves segmentation results on par with the current literature.
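    One plausible reading of the orthogonal-context idea is that per-voxel probability maps predicted along the three orthogonal slice directions (each restacked to the full 3-D grid) are averaged before thresholding, so a base/apex error in any single view is outvoted by the other two. The fusion rule below is an illustrative assumption, not the paper's exact method:

```python
import numpy as np

def fuse_orthogonal_probs(axial, sagittal, coronal, threshold=0.5):
    """Average per-voxel probabilities predicted from three orthogonal
    slice directions (each already restacked to the same 3-D grid) and
    threshold to a binary prostate mask. A sketch of the multi-view
    fusion idea; the actual paper's combination rule may differ."""
    mean_prob = (axial + sagittal + coronal) / 3.0
    return mean_prob >= threshold

# Toy 2x2x2 probability volumes standing in for restacked network outputs:
# per-voxel mean is (0.9 + 0.6 + 0.2) / 3 ~ 0.567, so the whole mask is True.
ax = np.full((2, 2, 2), 0.9)
sa = np.full((2, 2, 2), 0.6)
co = np.full((2, 2, 2), 0.2)
mask = fuse_orthogonal_probs(ax, sa, co)
```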

    Automatic magnetic resonance prostate segmentation by deep learning with holistically nested networks

    © 2017 Society of Photo-Optical Instrumentation Engineers (SPIE). Accurate automatic segmentation of the prostate in magnetic resonance images (MRI) is a challenging task due to the high variability of prostate anatomic structure. Artifacts such as noise, and the similar signal intensity of tissues around the prostate boundary, prevent traditional segmentation methods from achieving high accuracy. We investigate both patch-based and holistic (image-to-image) deep-learning methods for segmentation of the prostate. First, we introduce a patch-based convolutional network that refines an initial prostate contour. Second, we propose a method for end-to-end prostate segmentation that integrates holistically nested edge detection with fully convolutional networks. Holistically nested networks (HNN) automatically learn a hierarchical representation that can improve prostate boundary detection. Quantitative evaluation is performed on the MRI scans of 250 patients in fivefold cross-validation. The proposed enhanced HNN model achieves a mean ± standard deviation Dice similarity coefficient (DSC) of 89.77% ± 3.29% and a mean Jaccard similarity coefficient (IoU) of 81.59% ± 5.18%, computed without trimming any end slices. The proposed holistic model significantly (p < 0.001) outperforms a patch-based AlexNet model by 9% in DSC and 13% in IoU. Overall, the method achieves state-of-the-art performance compared with other MRI prostate segmentation methods in the literature.
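    The two overlap metrics quoted above have standard definitions on binary masks: DSC = 2|A∩B| / (|A| + |B|) and IoU = |A∩B| / |A∪B|. A minimal sketch of computing both (the toy masks are illustrative, not study data):

```python
import numpy as np

def dice_and_iou(pred, gt):
    """Dice similarity coefficient and Jaccard index (IoU) for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    dice = 2.0 * inter / (pred.sum() + gt.sum())
    iou = inter / np.logical_or(pred, gt).sum()
    return dice, iou

# Toy 1-D masks: intersection 2, sizes 3 and 3, union 4
# -> Dice = 2*2/6 = 2/3, IoU = 2/4 = 0.5.
p = np.array([1, 1, 1, 0, 0], dtype=bool)
g = np.array([0, 1, 1, 1, 0], dtype=bool)
dice, iou = dice_and_iou(p, g)
```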

    Deep Learning Based Staging of Bone Lesions From Computed Tomography Scans

    In this study, we formulated an efficient deep learning-based classification strategy for characterizing metastatic bone lesions using computed tomography (CT) scans of prostate cancer patients. For this purpose, 2,880 annotated bone lesions from CT scans of 114 patients diagnosed with prostate cancer were used for training, validation, and final evaluation. The annotations comprised full lesion segmentation, lesion type, and a benign or malignant label. In this work, we present our approach to developing a state-of-the-art model for classifying bone lesions as benign or malignant, in which (1) we introduce a valuable dataset addressing a clinically important problem; (2) we increase the reliability of our model by patient-level stratification of the dataset, following a lesion-aware distribution in each of the training, validation, and test splits; (3) we explore the impact of lesion texture, morphology, size, location, and volumetric information on classification performance; and (4) we investigate lesion classification using different algorithms, including lesion-based average 2D ResNet-50, lesion-based average 2D ResNeXt-50, 3D ResNet-18, 3D ResNet-50, and an ensemble of 2D ResNet-50 and 3D ResNet-18. We employed a 75%/12%/13% train/validation/test split, with several data augmentation methods applied to the training set to avoid overfitting and increase reliability. We achieved 92.2% accuracy for classifying benign vs. malignant bone lesions in the test set using an ensemble of lesion-based average 2D ResNet-50 and 3D ResNet-18, with texture, volumetric information, and morphology having the greatest discriminative power, in that order. To the best of our knowledge, this is the highest lesion-level accuracy reported to date on such a comprehensive dataset for this clinically important problem. This level of classification performance in the early stages of metastasis development bodes well for clinical translation of this strategy.
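    The patient-level stratification mentioned above prevents lesions from the same patient leaking across splits, which would inflate test accuracy. A simplified sketch of such a split (it assigns whole patients to splits but omits the lesion-aware balancing the study describes; field names and proportions are illustrative):

```python
import random

def patient_level_split(lesions, train=0.75, val=0.12, seed=0):
    """Split lesion records by patient so no patient spans two splits.

    lesions: list of dicts, each with a 'patient_id' key.
    Remaining patients after the train and val fractions go to test.
    """
    patients = sorted({l["patient_id"] for l in lesions})
    random.Random(seed).shuffle(patients)
    n = len(patients)
    n_train, n_val = int(n * train), int(n * val)
    group = {p: "train" for p in patients[:n_train]}
    group.update({p: "val" for p in patients[n_train:n_train + n_val]})
    group.update({p: "test" for p in patients[n_train + n_val:]})
    return {s: [l for l in lesions if group[l["patient_id"]] == s]
            for s in ("train", "val", "test")}
```

    Because assignment happens at the patient level, every lesion from a given patient lands in exactly one of the three splits.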
