
    Atlas-Based Prostate Segmentation Using an Hybrid Registration

    Purpose: This paper presents the preliminary results of a semi-automatic method for prostate segmentation of Magnetic Resonance Images (MRI), which aims to be incorporated into a navigation system for prostate brachytherapy. Methods: The method is based on the registration of an anatomical atlas, computed from a population of 18 MRI exams, onto a patient image. A hybrid registration framework, which couples an intensity-based registration with a robust point-matching algorithm, is used for both atlas building and atlas registration. Results: The method has been validated, using the leave-one-out method, on the same dataset as the one used to construct the atlas. Results give a mean error of 3.39 mm and a standard deviation of 1.95 mm with respect to expert segmentations. Conclusions: We think that this segmentation tool may be a very valuable help to the clinician for routine quantitative image exploitation. Comment: International Journal of Computer Assisted Radiology and Surgery (2008) 000-99
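
    The leave-one-out validation described above is straightforward to express in code. The sketch below assumes hypothetical build_atlas and hybrid_register helpers standing in for the paper's atlas construction and hybrid intensity/point-matching registration, and uses a standard mean surface distance as the error measure; it illustrates the evaluation protocol, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def boundary(mask: np.ndarray) -> np.ndarray:
    """Surface voxels: mask voxels with at least one background neighbour."""
    return mask & ~binary_erosion(mask)

def mean_surface_distance(pred_mask, expert_mask, spacing) -> float:
    """Mean distance (mm) from predicted surface voxels to the expert surface."""
    # Distance (in mm) to the nearest expert boundary voxel, at every voxel.
    dist_to_expert = distance_transform_edt(~boundary(expert_mask), sampling=spacing)
    return float(dist_to_expert[boundary(pred_mask)].mean())

def leave_one_out_error(exams, build_atlas, hybrid_register):
    """exams: objects with .image, .expert_mask (bool array) and .spacing (mm).
    build_atlas and hybrid_register are hypothetical stand-ins for the
    paper's atlas construction and hybrid registration steps."""
    errors = []
    for i, exam in enumerate(exams):          # 18 MRI exams in the paper
        atlas = build_atlas([e for j, e in enumerate(exams) if j != i])
        pred_mask = hybrid_register(atlas, exam.image)   # propagated atlas labels
        errors.append(mean_surface_distance(pred_mask, exam.expert_mask, exam.spacing))
    return np.mean(errors), np.std(errors)
```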

    Topological MRI Prostate Segmentation Method

    The main aim of this paper is to advance the state of the art in automated prostate segmentation using T2-weighted MR images, by introducing a hybrid topological MRI prostate segmentation method based on a set of pre-labeled MR atlas images. The proposed method has been experimentally tested on a set of 30 T2-weighted MR images. For evaluation, the automated segmentations of the proposed scheme were compared with manual segmentations using the average Dice Similarity Coefficient (DSC). The quantitative results obtained show that the automated segmentations closely approximate the manual ones.
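
    The DSC used for evaluation here is the standard overlap measure 2|A ∩ B| / (|A| + |B|); a minimal NumPy implementation of the metric itself (the paper's evaluation harness is not described in the abstract):

```python
import numpy as np

def dice_similarity_coefficient(auto_mask: np.ndarray, manual_mask: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap."""
    a = auto_mask.astype(bool)
    b = manual_mask.astype(bool)
    denom = a.sum() + b.sum()
    # Two empty masks agree perfectly by convention.
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```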

    COMPREHENSIVE AUTOENCODER FOR PROSTATE RECOGNITION ON MR IMAGES


    A coarse-to-fine approach to prostate boundary segmentation in ultrasound images

    BACKGROUND: In this paper a novel method for prostate segmentation in transrectal ultrasound images is presented. METHODS: A segmentation procedure consisting of four main stages is proposed. In the first stage, a locally adaptive contrast enhancement method is used to generate a well-contrasted image. In the second stage, this enhanced image is thresholded to extract an area containing the prostate (or large portions of it). Morphological operators are then applied to obtain a point inside this area. Afterwards, a Kalman estimator is employed to distinguish the boundary from irrelevant parts (usually caused by shadow) and to generate a coarsely segmented version of the prostate. In the third stage, dilation and erosion operators are applied to extract outer and inner boundaries from the coarse estimate. Fuzzy membership functions describing regional and gray-level information are then employed to selectively enhance the contrast within the prostate region. In the last stage, the prostate boundary is extracted using strong edges obtained from the selectively enhanced image and information from the vicinity of the coarse estimate. RESULTS: A total average similarity of 98.76% (±0.68) with gold standards was achieved. CONCLUSION: The proposed method is a robust and accurate approach to prostate segmentation.
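
    As an illustration of the early stages, the sketch below approximates stages one and two with off-the-shelf scikit-image operators: CLAHE stands in for the paper's locally adaptive contrast enhancement, and Otsu thresholding plus morphological opening stand in for its thresholding and morphology. The assumption that the prostate appears hypoechoic (dark) sets the threshold direction; the Kalman boundary estimator and fuzzy enhancement of the original method are not reproduced here.

```python
import numpy as np
from scipy import ndimage
from skimage import exposure, filters, morphology

def coarse_prostate_region(us_image: np.ndarray):
    """Return a coarse region mask and a point inside it (stages 1-2 only)."""
    # Stage 1: locally adaptive contrast enhancement (CLAHE as a stand-in).
    enhanced = exposure.equalize_adapthist(us_image)
    # Stage 2: threshold to extract an area containing the prostate, assuming
    # the gland is hypoechoic (dark), then clean up with morphological opening.
    mask = enhanced < filters.threshold_otsu(enhanced)
    mask = morphology.binary_opening(mask, morphology.disk(5))
    # Keep the largest connected component and take its centroid as a point
    # inside the area (assumes a roughly convex region).
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask, None
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    largest = labels == (np.argmax(sizes) + 1)
    seed = tuple(np.array(np.nonzero(largest)).mean(axis=1).astype(int))
    return largest, seed
```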

    A novel approach for automatic segmentation of prostate and its lesion regions on magnetic resonance imaging

    Objective: To develop an accurate and automatic segmentation model based on a convolutional neural network to segment the prostate and its lesion regions. Methods: Of 180 subjects in total, 122 healthy individuals and 58 patients with prostate cancer were included. For each subject, all slices of the prostate were included in the DWIs. A novel DCNN is proposed to automatically segment the prostate and its lesion regions. The model is inspired by the U-Net model, with the encoding-decoding path as the backbone, incorporating dense blocks, attention mechanisms, and group normalization with Atrous Spatial Pyramid Pooling. Data augmentation was used to avoid overfitting in training. In the experimental phase, the data set was randomly divided into a training set (70%) and a testing set (30%), and four-fold cross-validation was used to obtain results for each metric. Results: For the prostate, the proposed model achieved an IoU of 86.82%, a Dice score of 93.90%, an accuracy of 94.11%, a sensitivity of 93.8%, and a 95% Hausdorff distance of 7.84; for the lesion region, the corresponding values were 79.2%, 89.51%, 88.43%, 89.31%, and 8.39. Compared to the state-of-the-art models FCN, U-Net, U-Net++, and ResU-Net, the segmentation model achieved more promising results. Conclusion: The proposed model yielded excellent performance in accurate and automatic segmentation of the prostate and lesion regions, revealing that the novel deep convolutional neural network could be used in clinical disease treatment and diagnosis.
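
    Below is a minimal PyTorch sketch of a group-norm Atrous Spatial Pyramid Pooling block of the kind the abstract names; the channel counts, dilation rates, and group size are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class GNASPP(nn.Module):
    """Parallel atrous (dilated) convolutions at several rates, each followed
    by GroupNorm + ReLU, fused with a 1x1 convolution."""
    def __init__(self, in_ch: int, out_ch: int, rates=(1, 6, 12, 18), groups: int = 8):
        super().__init__()
        # out_ch must be divisible by `groups` for GroupNorm.
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False),
                nn.GroupNorm(groups, out_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        )
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Dilated 3x3 convs with padding == dilation preserve spatial size.
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

# Example: GNASPP(256, 128)(torch.randn(1, 256, 32, 32)) -> shape (1, 128, 32, 32)
```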

    AUTOMATIC PROSTATE BOUNDARY SEGMENTATION FOR 2D ULTRASOUND IMAGES

    Segmenting the prostate boundary is essential in determining the dose plan needed for a successful brachytherapy procedure, an effective and commonly used treatment for prostate cancer. However, manual segmentation is time-consuming and can introduce inter- and intra-operator variability. In this thesis, we describe an algorithm for segmenting the prostate from two-dimensional ultrasound (2D US) images, which can be either semi-automatic, requiring only one user input, or fully-automatic, under some assumptions about image acquisition. Segmentation begins with the user inputting the approximate centre of the prostate for the semi-automatic version of the algorithm, or with the centre of the prostate assumed to be at the centre of the image for the fully-automatic version. The image is then filtered with a Laplacian of Gaussian (LoG) filter that identifies prostate edge candidates. The next step removes most of the false edges (those not on the prostate boundary) while keeping as many true edges (those on the boundary) as possible. Then, domain knowledge is used to remove any prostate boundary candidates that are probably false edge pixels. The image is then scanned along radial lines and only the first-detected boundary candidates are kept. The final step removes some remaining false edge pixels by fitting a polynomial to the image points, removing the point with the maximum distance from the fit, and repeating the process until this maximum distance is less than 4 mm. The resulting candidate edges form an initial model that is then deformed using the Discrete Dynamic Contour (DDC) model to obtain a closed contour of the prostate boundary. The accuracy of the prostate boundary produced by both versions of the algorithm was evaluated by comparing it to a contour manually outlined by an expert radiologist. We segmented 51 2D transrectal ultrasound (TRUS) prostate images using both versions of the algorithm and found that the mean distance between the contours produced by our algorithm and the manual outlines was 0.7 ± 0.3 mm for the semi-automatic version and 0.8 ± 0.4 mm for the fully-automatic version. The accuracy and the sensitivity of the algorithm with respect to area measurements were (94.3 ± 4.2)% and (92.1 ± 3.6)%, respectively, for the semi-automatic version, and (92.9 ± 6.9)% and (91.2 ± 5.1)%, respectively, for the fully-automatic version.
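
    The final false-edge removal step is well specified in the abstract (fit a polynomial, drop the farthest point, repeat until the maximum distance is under 4 mm) and is easy to sketch. Fitting y as a polynomial in x and the polynomial degree are simplifying assumptions here, since the thesis operates on radially scanned candidates around a closed contour.

```python
import numpy as np

def prune_edge_candidates(x, y, mm_per_pixel: float, degree: int = 4,
                          tol_mm: float = 4.0):
    """Iteratively drop the candidate farthest from a polynomial fit until
    all remaining candidates lie within tol_mm of the fit."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    while len(x) > degree + 1:                 # polyfit needs degree+1 points
        coeffs = np.polyfit(x, y, degree)
        residual_mm = np.abs(y - np.polyval(coeffs, x)) * mm_per_pixel
        worst = np.argmax(residual_mm)
        if residual_mm[worst] < tol_mm:        # all candidates within 4 mm
            break
        x, y = np.delete(x, worst), np.delete(y, worst)
    return x, y
```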

    Validation Strategies Supporting Clinical Integration of Prostate Segmentation Algorithms for Magnetic Resonance Imaging

    Segmentation of the prostate in medical images is useful for prostate cancer diagnosis and therapy guidance. However, manual segmentation of the prostate is laborious and time-consuming, with inter-observer variability. The focus of this thesis was on accuracy, reproducibility, and procedure-time measurement for prostate segmentation on T2-weighted endorectal magnetic resonance imaging, and on assessment of the potential of a computer-assisted segmentation technique to be translated to clinical practice for prostate cancer management. We collected an image data set from prostate cancer patients, with prostate borders manually delineated by one observer on all the images and by two other observers on a subset of images. We used a complementary set of error metrics to measure the different types of observed segmentation errors. We compared expert manual segmentation as well as semi-automatic and automatic segmentation approaches before and after manual editing by expert physicians, recording the time needed for user interaction to initialize the semi-automatic algorithm, algorithm execution, and manual editing as necessary. The measured errors for the algorithms compared favourably with the observed differences between manual segmentations. The measured average editing times for the computer-assisted segmentations were lower than the fully manual segmentation time, and the algorithms reduced the inter-observer variability compared to manual segmentation. The accuracy of the computer-assisted approaches was near to or within the range of observed variability in manual segmentation. The recorded procedure time for prostate segmentation was reduced using computer-assisted segmentation followed by manual editing, compared with the time required for fully manual segmentation.
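
    The idea of a complementary metric set is that overlap and boundary-distance measures are sensitive to different error types. Below is a sketch of two standard boundary metrics that could sit alongside an overlap measure such as the DSC; the thesis's exact metric set is not listed in the abstract.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.spatial.distance import directed_hausdorff

def boundary_errors(pred_pts: np.ndarray, ref_pts: np.ndarray) -> dict:
    """pred_pts, ref_pts: (N, 3) arrays of boundary coordinates in mm."""
    d_pred_to_ref, _ = cKDTree(ref_pts).query(pred_pts)
    d_ref_to_pred, _ = cKDTree(pred_pts).query(ref_pts)
    return {
        # Mean absolute boundary distance, symmetrised over both surfaces.
        "MAD_mm": (d_pred_to_ref.mean() + d_ref_to_pred.mean()) / 2,
        # Hausdorff distance: worst-case disagreement between the surfaces.
        "HD_mm": max(directed_hausdorff(pred_pts, ref_pts)[0],
                     directed_hausdorff(ref_pts, pred_pts)[0]),
    }
```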

    Deep Networks Based Energy Models for Object Recognition from Multimodality Images

    Object recognition has been extensively investigated in the computer vision area, since it is a fundamental and essential technique in many important applications, such as robotics, autonomous driving, automated manufacturing, and security surveillance. According to the selection criteria, object recognition mechanisms can be broadly categorized into object proposal and classification, eye fixation prediction, and salient object detection. Object proposal tends to capture all potential objects from natural images and then classify them into predefined groups for image description and interpretation. For a given natural image, human perception is normally attracted to the most visually important regions/objects; eye fixation prediction therefore attempts to localize interesting points or small regions according to the human visual system (HVS). Based on these interesting points and small regions, salient object detection algorithms propagate the extracted information to achieve a refined segmentation of the whole salient objects.

    In addition to natural images, object recognition also plays a critical role in clinical practice. The informative insights into the anatomy and function of the human body obtained from multimodality biomedical images, such as magnetic resonance imaging (MRI), transrectal ultrasound (TRUS), computed tomography (CT), and positron emission tomography (PET), facilitate precision medicine. Automated object recognition from biomedical images empowers non-invasive diagnosis and treatment via automated tissue segmentation, tumor detection, and cancer staging. Conventional recognition methods normally utilize handcrafted features (such as oriented gradients, curvature, Haar features, Haralick texture features, Laws energy features, etc.) that depend on the image modalities and object characteristics, so it is challenging to build a general model for object recognition. Superior to handcrafted features, deep neural networks (DNN) can extract self-adaptive features corresponding to the specific task, and hence can be employed in general object recognition models. These DNN features are adjusted semantically and cognitively by tens of millions of parameters, loosely corresponding to the mechanism of the human brain, and therefore lead to more accurate and robust results. Motivated by this, in this thesis we propose DNN-based energy models to recognize objects in multimodality images. The major contributions of this thesis can be summarized as follows:

    1. We first proposed a new comprehensive autoencoder model to recognize the position and shape of the prostate from magnetic resonance images. Unlike most autoencoder-based methods, we focused on positive samples to train the model, so that the extracted features all come from the prostate. After that, an image energy minimization scheme was applied to further improve the recognition accuracy. The proposed model was compared with three classic classifiers (i.e., support vector machine with radial basis function kernel, random forest, and naive Bayes) and demonstrated significant superiority for prostate recognition on magnetic resonance images. We further extended the proposed autoencoder model to salient object detection on natural images, and the experimental validation confirmed the accurate and robust detection results of our model.

    2. A general multi-context combined deep neural network (MCDN) model was then proposed for object recognition from natural images and biomedical images. Under one uniform framework, our model operates in a multi-scale manner. It was applied to salient object detection from natural images as well as prostate recognition from magnetic resonance images, and our experimental validation demonstrated that the proposed model is competitive with current state-of-the-art methods.

    3. We designed a novel saliency image energy to finely segment salient objects on the basis of our MCDN model. Region priors are taken into account in the energy function to avoid trivial errors. Our method outperformed state-of-the-art algorithms on five benchmark datasets. In the experiments, we also demonstrated that the proposed saliency image energy can boost the results of other conventional saliency detection methods.
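
    A minimal PyTorch sketch of the positive-sample idea in contribution 1: train an autoencoder only on prostate (positive) patches, then score new patches by reconstruction error, on the premise that tissue unlike the training distribution reconstructs poorly. The architecture and patch size are illustrative assumptions, and the thesis's energy minimization refinement is not reproduced.

```python
import torch
import torch.nn as nn

class PatchAutoencoder(nn.Module):
    """Tiny fully-connected autoencoder for intensity patches in [0, 1].
    Trained only on prostate patches (positive samples)."""
    def __init__(self, patch: int = 32, hidden: int = 128):
        super().__init__()
        d = patch * patch
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(d, hidden), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(hidden, d), nn.Sigmoid())

    def forward(self, x):                      # x: (B, 1, patch, patch)
        return self.decoder(self.encoder(x)).view_as(x)

def recognition_score(model: PatchAutoencoder, patches: torch.Tensor) -> torch.Tensor:
    """Per-patch reconstruction error; low error suggests the patch resembles
    the prostate tissue the model was trained on."""
    model.eval()
    with torch.no_grad():
        return ((model(patches) - patches) ** 2).mean(dim=(1, 2, 3))
```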