
    Atlas-Based Prostate Segmentation Using an Hybrid Registration

    Purpose: This paper presents the preliminary results of a semi-automatic method for prostate segmentation of Magnetic Resonance Images (MRI), which is intended to be incorporated into a navigation system for prostate brachytherapy. Methods: The method is based on the registration of an anatomical atlas, computed from a population of 18 MRI exams, onto a patient image. A hybrid registration framework that couples an intensity-based registration with a robust point-matching algorithm is used for both atlas building and atlas registration. Results: The method was validated on the same dataset used to construct the atlas, using the leave-one-out method. Results give a mean error of 3.39 mm with a standard deviation of 1.95 mm with respect to expert segmentations. Conclusions: We believe that this segmentation tool may be a valuable aid to the clinician for routine quantitative image exploitation. Comment: International Journal of Computer Assisted Radiology and Surgery (2008) 000-99
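
    The reported error is a distance in millimetres with respect to expert segmentations; the exact metric is not specified in the abstract, so the sketch below uses a plain mean surface distance as an assumed stand-in, computed between two binary segmentation volumes.

```python
import numpy as np
from scipy import ndimage

def mean_surface_error_mm(seg_pred, seg_expert, spacing):
    """Mean distance (mm) from the predicted surface to the expert surface.

    seg_pred, seg_expert: boolean 3D arrays; spacing: voxel size (dz, dy, dx) in mm.
    """
    def surface(mask):
        # Surface voxels = mask minus its erosion.
        return mask & ~ndimage.binary_erosion(mask)

    # Euclidean distance (in mm) of every voxel to the expert surface.
    dist_to_expert = ndimage.distance_transform_edt(~surface(seg_expert), sampling=spacing)
    return float(dist_to_expert[surface(seg_pred)].mean())
```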

    Active Mean Fields for Probabilistic Image Segmentation: Connections with Chan-Vese and Rudin-Osher-Fatemi Models

    Segmentation is a fundamental task for extracting semantically meaningful regions from an image. The goal of segmentation algorithms is to accurately assign object labels to each image location. However, image noise, shortcomings of algorithms, and image ambiguities cause uncertainty in label assignment. Estimating the uncertainty in label assignment is important in multiple application domains, such as segmenting tumors from medical images for radiation treatment planning. One way to estimate these uncertainties is through the computation of posteriors of Bayesian models, which is computationally prohibitive for many practical applications. On the other hand, most computationally efficient methods fail to estimate label uncertainty. In this paper we therefore propose the Active Mean Fields (AMF) approach, a technique based on Bayesian modeling that uses a mean-field approximation to efficiently compute a segmentation and its corresponding uncertainty. Based on a variational formulation, the resulting convex model combines any label-likelihood measure with a prior on the length of the segmentation boundary. A specific implementation of that model is the Chan-Vese segmentation model (CV), in which the binary segmentation task is defined by a Gaussian likelihood and a prior regularizing the length of the segmentation boundary. Furthermore, the Euler-Lagrange equations derived from the AMF model are equivalent to those of the popular Rudin-Osher-Fatemi (ROF) model for image denoising. Solutions to the AMF model can thus be implemented by directly utilizing highly efficient ROF solvers on log-likelihood ratio fields. We qualitatively assess the approach on synthetic data as well as on real natural and medical images. For a quantitative evaluation, we apply our approach to the icgbench dataset.
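
    The link to ROF described above suggests a compact recipe: form a voxelwise log-likelihood ratio, smooth it with a total-variation (ROF-type) solver, and read off soft label probabilities, where values near 0.5 flag uncertain pixels. The sketch below assumes a two-class Gaussian likelihood in the spirit of Chan-Vese; the means, variance, and TV weight are illustrative choices, not the authors' settings.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def amf_like_segmentation(image, mu_fg, mu_bg, sigma=0.1, tv_weight=0.2):
    """image: 2D array with intensities in [0, 1]; mu_fg/mu_bg: class means."""
    # Pointwise log-likelihood ratio between foreground and background Gaussians.
    llr = ((image - mu_bg) ** 2 - (image - mu_fg) ** 2) / (2.0 * sigma ** 2)
    # ROF / total-variation smoothing of the ratio field (boundary-length prior).
    smoothed = denoise_tv_chambolle(llr, weight=tv_weight)
    # Soft foreground probabilities; values near 0.5 indicate uncertain pixels.
    return 1.0 / (1.0 + np.exp(-smoothed))
```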

    V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation

    Convolutional Neural Networks (CNNs) have recently been employed to solve problems from both the computer vision and medical image analysis fields. Despite their popularity, most approaches are only able to process 2D images, while most medical data used in clinical practice consists of 3D volumes. In this work we propose an approach to 3D image segmentation based on a volumetric, fully convolutional neural network. Our CNN is trained end-to-end on MRI volumes depicting the prostate, and learns to predict the segmentation for the whole volume at once. We introduce a novel objective function, optimised during training, based on the Dice coefficient. In this way we can deal with situations where there is a strong imbalance between the number of foreground and background voxels. To cope with the limited number of annotated volumes available for training, we augment the data by applying random non-linear transformations and histogram matching. We show in our experimental evaluation that our approach achieves good performance on challenging test data while requiring only a fraction of the processing time needed by previous methods.
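
    A Dice-based objective of this kind is usually written as a differentiable "soft Dice" loss over the predicted probabilities; the PyTorch snippet below is an illustrative formulation (squared terms in the denominator, small epsilon for stability), not the authors' reference implementation.

```python
import torch

def soft_dice_loss(pred, target, eps=1e-6):
    """pred: predicted foreground probabilities, target: binary ground truth,
    both of shape (batch, 1, depth, height, width)."""
    dims = tuple(range(1, pred.dim()))
    intersection = (pred * target).sum(dims)
    denom = (pred * pred).sum(dims) + (target * target).sum(dims)
    dice = (2.0 * intersection + eps) / (denom + eps)
    return 1.0 - dice.mean()  # minimising the loss maximises the Dice overlap
```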

    Validation Strategies Supporting Clinical Integration of Prostate Segmentation Algorithms for Magnetic Resonance Imaging

    Segmentation of the prostate in medical images is useful for prostate cancer diagnosis and therapy guidance. However, manual segmentation of the prostate is laborious and time-consuming, and subject to inter-observer variability. The focus of this thesis was on accuracy, reproducibility, and procedure-time measurement for prostate segmentation on T2-weighted endorectal magnetic resonance imaging, and on assessment of the potential of a computer-assisted segmentation technique to be translated to clinical practice for prostate cancer management. We collected an image data set from prostate cancer patients, with prostate borders manually delineated by one observer on all of the images and by two other observers on a subset of images. We used a complementary set of error metrics to measure the different types of observed segmentation errors. We compared expert manual segmentation as well as semi-automatic and automatic segmentation approaches before and after manual editing by expert physicians. We recorded the time needed for user interaction to initialize the semi-automatic algorithm, for algorithm execution, and for manual editing as necessary. The measured errors for the algorithms compared favourably with the observed differences between manual segmentations. The measured average editing times for computer-assisted segmentation were lower than the fully manual segmentation time, and the algorithms reduced inter-observer variability compared to manual segmentation. The accuracy of the computer-assisted approaches was near or within the range of observed variability in manual segmentation. The recorded procedure time for prostate segmentation was reduced using computer-assisted segmentation followed by manual editing, compared to the time required for fully manual segmentation.
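
    The abstract does not list the individual error metrics, so as a hedged illustration the snippet below measures inter-observer variability as the mean pairwise Dice overlap between the observers' manual delineations.

```python
import itertools
import numpy as np

def mean_pairwise_dice(segmentations):
    """segmentations: list of boolean arrays, one manual delineation per observer."""
    scores = []
    for a, b in itertools.combinations(segmentations, 2):
        intersection = np.logical_and(a, b).sum()
        scores.append(2.0 * intersection / (a.sum() + b.sum()))
    return float(np.mean(scores))
```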

    Intraoperative Organ Motion Models with an Ensemble of Conditional Generative Adversarial Networks

    In this paper, we describe how a patient-specific, ultrasound-probe-induced prostate motion model can be directly generated from a single preoperative MR image. Our motion model allows for sampling from the conditional distribution of dense displacement fields, is encoded by a generative neural network conditioned on a medical image, and accepts random noise as additional input. The generative network is trained by a minimax optimisation with a second, discriminative neural network, tasked to distinguish generated samples from training motion data. In this work, we propose that 1) jointly optimising a third, conditioning neural network that pre-processes the input image can effectively extract patient-specific features for conditioning; and 2) combining multiple generative models trained separately with heuristically pre-disjointed training data sets can adequately mitigate the problem of mode collapse. Trained with diagnostic T2-weighted MR images from 143 real patients and 73,216 3D dense displacement fields from finite element simulations of intraoperative prostate motion due to transrectal ultrasound probe pressure, the proposed models produced physically plausible, patient-specific motion of prostate glands. The ability to capture biomechanically simulated motion was evaluated using two errors representing the generalisability and specificity of the model. The median values, calculated from a 10-fold cross-validation, were 2.8 ± 0.3 mm and 1.7 ± 0.1 mm, respectively. We conclude that the introduced approach demonstrates the feasibility of applying state-of-the-art machine learning algorithms to generate organ motion models from patient images, and shows significant promise for future research. Comment: Accepted to MICCAI 201
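
    As a rough sketch of the conditioning idea described above, the PyTorch code below encodes the MR image with a small conditioning network and maps its features, concatenated with a broadcast noise vector, to a three-component dense displacement field; the layer sizes are illustrative and are not the architecture used in the paper.

```python
import torch
import torch.nn as nn

class ConditioningNet(nn.Module):
    """Pre-processes the MR volume into patient-specific feature maps."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, image):
        return self.encoder(image)

class MotionGenerator(nn.Module):
    """Maps (image features, noise) to a dense 3-component displacement field."""
    def __init__(self, noise_dim=64):
        super().__init__()
        self.fuse = nn.Conv3d(16 + noise_dim, 32, 3, padding=1)
        self.head = nn.Sequential(
            nn.ReLU(),
            nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(16, 3, 4, stride=2, padding=1),
        )

    def forward(self, features, noise):
        # Broadcast the noise vector over the spatial grid and concatenate.
        n = noise[:, :, None, None, None].expand(-1, -1, *features.shape[2:])
        return self.head(self.fuse(torch.cat([features, n], dim=1)))
```

    Drawing different noise vectors for the same image then yields different plausible displacement fields, which is the sampling behaviour the abstract describes.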

    An Adaptive Algorithm to Identify Ambiguous Prostate Capsule Boundary Lines for Three-Dimensional Reconstruction and Quantitation

    Currently there are few parameters that are used to compare the efficiency of different methods of cancerous prostate surgical removal. An accurate assessment of the percentage and depth of extra-capsular soft tissue removed with the prostate by the various surgical techniques can help surgeons determine the appropriateness of surgical approaches. Additionally, an objective assessment can allow a particular surgeon to compare individual performance against a standard. In order to facilitate 3D reconstruction and objective analysis, and thus provide more accurate quantitation results when analyzing specimens, it is essential to automatically identify the capsule line that separates the prostate gland tissue from its extra-capsular tissue. However, the prostate capsule is sometimes unrecognizable due to the naturally occurring intrusion of muscle and connective tissue into the prostate gland. At the regions where the capsule disappears, its contour can be reconstructed by drawing a continuing contour line based on the natural shape of the prostate gland. Presented here is a mathematical model that can be used to infer the missing parts of the capsule. The model approximates the missing parts of the capsule with a standard shape, using a Generalized Hough Transform (GHT) approach to detect the prostate capsule. We also present an algorithm based on a least-squares curve fitting technique that uses a prostate shape equation to merge the previously detected capsule parts with the curve equation, producing an approximated curve that represents the prostate capsule. We have tested our algorithms using three shapes on 13 prostate slices cut at different distances from the apex, and the results are promising.
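
    Where the capsule contour is interrupted, a closed-curve model can be fitted to the detected boundary points and then evaluated over the missing arc. The paper's own prostate shape equation and GHT detection step are not given in the abstract, so the snippet below fits a general conic (ellipse) by least squares as an assumed stand-in.

```python
import numpy as np

def fit_conic(x, y):
    """Least-squares fit of a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0
    to detected capsule boundary points (x, y); returns the coefficients."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    design = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    # The right singular vector with the smallest singular value gives the
    # coefficients (up to scale) minimising the algebraic residual.
    _, _, vt = np.linalg.svd(design, full_matrices=False)
    return vt[-1]
```

    The fitted curve can then be sampled over the angular range where the capsule is missing in order to close the contour.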