
    Accurate Segmentation of CT Pelvic Organs via Incremental Cascade Learning and Regression-based Deformable Models

    Accurate segmentation of male pelvic organs from computed tomography (CT) images is important in image guided radiotherapy (IGRT) of prostate cancer. The efficacy of radiation treatment depends highly on the segmentation accuracy of planning and treatment CT images. Clinically, manual delineation is still generally performed in most hospitals. However, it is time-consuming and suffers from large inter-operator variability due to the low tissue contrast of CT images. To reduce manual effort and improve the consistency of segmentation, it is desirable to develop an automatic method for rapid and accurate segmentation of pelvic organs from planning and treatment CT images. This dissertation marries machine learning and medical image analysis to address two fundamental yet challenging segmentation problems in image guided radiotherapy of prostate cancer. Planning-CT Segmentation. Deformable models are popular methods for planning-CT segmentation. However, they are well known to be sensitive to initialization and ineffective in segmenting organs with complex shapes. To address these limitations, this dissertation investigates a novel deformable model named the regression-based deformable model (RDM). Instead of locally deforming the shape model, in RDM the deformation at each model point is explicitly estimated from local image appearance and used to guide deformable segmentation. As the estimated deformation can be long-distance and is spatially adaptive to each model point, RDM is insensitive to initialization and more flexible than conventional deformable models. These properties make it well suited for CT pelvic organ segmentation, where initialization is difficult to obtain and organs may have complex shapes. Treatment-CT Segmentation. Most existing methods have two limitations when applied to treatment-CT segmentation. First, they have limited accuracy because they overlook the availability of patient-specific data in the IGRT workflow. Second, they are time-consuming and may take minutes or even longer per segmentation. To improve both accuracy and efficiency, this dissertation combines incremental learning with anatomical landmark detection for fast localization of the prostate in treatment CT images. Specifically, cascade classifiers are learned from a population to automatically detect several anatomical landmarks in the image. Based on these landmarks, the prostate is quickly localized by aligning and then fusing previously segmented prostate shapes of the same patient. To improve the performance of landmark detection, a novel learning scheme named "incremental learning with selective memory" is proposed to personalize the population-based cascade classifiers to the patient under treatment. Extensive experiments on a large dataset show that the proposed method achieves accuracy comparable to state-of-the-art methods while substantially reducing runtime from minutes to just 4 seconds.
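
    As a rough illustration of the regression-based deformable model idea described above, the sketch below predicts a long-range 3D displacement for every surface vertex from its local image appearance and iteratively deforms the shape. The patch feature, the random-forest regressor, and all parameter values are illustrative assumptions, not the dissertation's exact design.

        # Minimal RDM-style sketch: each vertex regresses its own displacement
        # from local appearance (assumes vertices stay away from image borders).
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        def extract_patch(image, point, size=7):
            """Flatten a cubic intensity patch centred at a (z, y, x) point."""
            z, y, x = np.round(point).astype(int)
            r = size // 2
            return image[z - r:z + r + 1, y - r:y + r + 1, x - r:x + r + 1].ravel()

        def train_displacement_regressor(images, shapes, gt_shapes):
            """Learn appearance -> displacement that moves each vertex of the
            current shape onto the corresponding ground-truth vertex."""
            X, Y = [], []
            for img, shape, gt in zip(images, shapes, gt_shapes):
                for p, q in zip(shape, gt):
                    X.append(extract_patch(img, p))
                    Y.append(q - p)                       # target 3D displacement
            return RandomForestRegressor(n_estimators=50).fit(X, Y)

        def deform(image, shape, regressor, n_iters=5, step=0.5):
            """Iteratively move every vertex along its predicted displacement."""
            shape = shape.astype(float).copy()
            for _ in range(n_iters):
                feats = np.stack([extract_patch(image, p) for p in shape])
                shape += step * regressor.predict(feats)  # long-range, per-vertex
                # a full implementation would also project onto a shape model here
            return shape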

    Deformable MR Prostate Segmentation via Deep Feature Learning and Sparse Patch Matching

    Automatic and reliable segmentation of the prostate is an important but difficult task for various clinical applications such as prostate cancer radiotherapy. The main challenges for accurate MR prostate localization lie in two aspects: (1) inhomogeneous and inconsistent appearance around the prostate boundary, and (2) large shape variation across different patients. To tackle these two problems, we propose a new deformable MR prostate segmentation method that unifies deep feature learning with sparse patch matching. First, instead of directly using handcrafted features, we propose to learn a latent feature representation from prostate MR images with a stacked sparse auto-encoder (SSAE). Since the deep learning algorithm learns the feature hierarchy from the data, the learned features are often more concise and effective than handcrafted features in describing the underlying data. To improve the discriminability of the learned features, we further refine the feature representation in a supervised fashion. Second, based on the learned features, a sparse patch matching method is proposed to infer a prostate likelihood map by transferring the prostate labels from multiple atlases to the new prostate MR image. Finally, a deformable segmentation is used to integrate a sparse shape model with the prostate likelihood map to achieve the final segmentation. The proposed method has been extensively evaluated on a dataset containing 66 T2-weighted prostate MR images. Experimental results show that the deep-learned features are more effective than handcrafted features in guiding MR prostate segmentation. Moreover, our method outperforms other state-of-the-art segmentation methods.
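
    A minimal sketch of the sparse patch matching step described above: a target voxel's (learned) feature vector is reconstructed as a sparse, non-negative combination of atlas patch features, and the same weights transfer the atlas labels into a prostate likelihood. Using scikit-learn's Lasso solver and the alpha value here are assumptions standing in for the paper's sparse coding formulation.

        import numpy as np
        from sklearn.linear_model import Lasso

        def sparse_patch_label_fusion(target_feature, atlas_features, atlas_labels,
                                      alpha=0.01):
            """
            target_feature : (d,)   feature vector of the target voxel/patch
            atlas_features : (n, d) features of candidate atlas patches
            atlas_labels   : (n,)   prostate labels (0/1) of the atlas patch centres
            Returns a prostate likelihood in [0, 1].
            """
            lasso = Lasso(alpha=alpha, positive=True, max_iter=5000)
            lasso.fit(atlas_features.T, target_feature)   # columns = atlas patches
            w = lasso.coef_
            if w.sum() <= 0:
                return 0.0                                # no patch selected
            w = w / w.sum()                               # normalise sparse weights
            return float(w @ atlas_labels)                # weighted label transfer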

    Multiatlas-Based Segmentation Editing With Interaction-Guided Patch Selection and Label Fusion

    We propose a novel multi-atlas based segmentation method to address the segmentation editing scenario, where an incomplete segmentation is given along with a set of existing reference label images (used as atlases). Unlike previous multi-atlas based methods, which depend solely on appearance features, we incorporate interaction-guided constraints to find appropriate atlas label patches in the reference label set and derive their weights for label fusion. Specifically, user interactions provided on the erroneous parts are first divided into multiple local combinations. For each combination, the atlas label patches well matched with both the interactions and the previous segmentation are identified. Then, the segmentation is updated through voxel-wise label fusion of the selected atlas label patches, with their weights derived from the distances of each underlying voxel to the interactions. Since the atlas label patches matched with different local combinations are used in the fusion step, our method can consider various local shape variations during the segmentation update, even with only limited atlas label images and user interactions. Moreover, since our method does not depend on either image appearance or sophisticated learning steps, it can be easily applied to general editing problems. To demonstrate the generality of our method, we apply it to editing segmentations of the CT prostate, CT brainstem, and MR hippocampus. Experimental results show that our method outperforms existing editing methods on all three datasets.
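
    The sketch below illustrates the fusion step described above: the selected atlas label patches are combined voxel-wise, with each patch weighted by the underlying voxel's distance to the user interaction it was matched against. The Gaussian distance weighting and parameter values are illustrative assumptions rather than the paper's exact weighting scheme.

        import numpy as np

        def fuse_label_patches(label_patches, interaction_points, patch_origin,
                               sigma=5.0):
            """
            label_patches      : (k, dz, dy, dx) selected atlas label patches,
                                 one per local interaction combination
            interaction_points : (k, 3) interaction location paired with each patch
            patch_origin       : (3,) image-space position of the patch corner
            Returns the fused (dz, dy, dx) label patch.
            """
            k, dz, dy, dx = label_patches.shape
            zz, yy, xx = np.meshgrid(np.arange(dz), np.arange(dy), np.arange(dx),
                                     indexing="ij")
            voxels = np.stack([zz, yy, xx], axis=-1) + patch_origin  # (dz,dy,dx,3)

            fused = np.zeros((dz, dy, dx))
            weights = np.zeros((dz, dy, dx))
            for patch, point in zip(label_patches, interaction_points):
                d2 = np.sum((voxels - point) ** 2, axis=-1)
                w = np.exp(-d2 / (2 * sigma ** 2))   # nearer the interaction,
                fused += w * patch                   # larger the fusion weight
                weights += w
            return fused / np.maximum(weights, 1e-8)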

    Collaborative regression-based anatomical landmark detection

    Anatomical landmark detection plays an important role in medical image analysis, e.g., for registration, segmentation and quantitative analysis. Among the various existing methods for landmark detection, regression-based methods have recently drawn much attention due to their robustness and efficiency. In such methods, landmarks are localized through voting from all image voxels, which is completely different from classification-based methods that use voxel-wise classification to detect landmarks. Despite their robustness, the accuracy of regression-based landmark detection methods is often limited due to 1) the inclusion of uninformative image voxels in the voting procedure, and 2) the lack of effective ways to incorporate inter-landmark spatial dependency into the detection step. In this paper, we propose a collaborative landmark detection framework to address these limitations. The concept of collaboration is reflected in two aspects. 1) Multi-resolution collaboration. A multi-resolution strategy is proposed to hierarchically localize landmarks by gradually excluding uninformative votes from faraway voxels. Moreover, for the informative voxels near the landmark, a spherical sampling strategy is designed in the training stage to improve their prediction accuracy. 2) Inter-landmark collaboration. A confidence-based landmark detection strategy is proposed to improve the detection accuracy of “difficult-to-detect” landmarks by using spatial guidance from “easy-to-detect” landmarks. To evaluate our method, we conducted extensive experiments on three datasets for detecting prostate landmarks and head & neck landmarks in computed tomography (CT) images, as well as dental landmarks in cone beam computed tomography (CBCT) images. The results show the effectiveness of our collaborative landmark detection framework in improving landmark detection accuracy compared to other state-of-the-art methods.
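
    A minimal sketch of the regression-voting and multi-resolution ideas above: every candidate voxel casts a vote by adding its regressed offset to its position, and successive passes keep only votes from voxels within a shrinking radius of the current estimate, excluding faraway, uninformative voters. The offset regressor itself is assumed to be trained elsewhere, and the radius schedule is an illustrative choice.

        import numpy as np

        def vote_for_landmark(voxel_coords, predicted_offsets,
                              n_passes=3, radius=40.0, shrink=0.5):
            """
            voxel_coords      : (n, 3) coordinates of candidate voxels
            predicted_offsets : (n, 3) regressed offsets from voxel to landmark
            Returns the estimated landmark position (3,).
            """
            votes = voxel_coords + predicted_offsets
            estimate = votes.mean(axis=0)                 # coarse initial estimate
            for _ in range(n_passes):
                # keep only votes cast by voxels close to the current estimate
                keep = np.linalg.norm(voxel_coords - estimate, axis=1) < radius
                if not keep.any():
                    break
                estimate = votes[keep].mean(axis=0)
                radius *= shrink                          # hierarchical refinement
            return estimate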

    Detecting Anatomical Landmarks for Fast Alzheimer’s Disease Diagnosis

    Structural magnetic resonance imaging (MRI) is a very popular and effective technique used to diagnose Alzheimer’s disease (AD). The success of computer-aided diagnosis methods using structural MRI data largely depends on two time-consuming steps: 1) nonlinear registration across subjects, and 2) brain tissue segmentation. To overcome this limitation, we propose a landmark-based feature extraction method that requires neither nonlinear registration nor tissue segmentation. In the training stage, in order to distinguish AD subjects from healthy controls (HCs), group comparisons based on local morphological features are first performed to identify brain regions that have significant group differences. In general, the centers of the identified regions become landmark locations (or AD landmarks for short) capable of differentiating AD subjects from HCs. In the testing stage, using the learned AD landmarks, the corresponding landmarks are detected in a testing image using an efficient technique based on a shape-constrained regression-forest algorithm. To improve detection accuracy, an additional set of salient and consistent landmarks is also identified to guide the AD landmark detection. Based on the identified AD landmarks, morphological features are extracted to train a support vector machine (SVM) classifier that is capable of predicting the AD condition. In the experiments, our method is evaluated on landmark detection and AD classification sequentially. Specifically, the landmark detection error (manually annotated versus automatically detected) of the proposed landmark detector is 2.41 mm, and our landmark-based AD classification accuracy is 83.7%. Lastly, the AD classification performance of our method is comparable to, or even better than, that achieved by existing region-based and voxel-based methods, while the proposed method is approximately 50 times faster.
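
    As a rough sketch of the classification stage described above, simple morphological statistics can be pooled from small patches around the detected AD landmarks and fed to an SVM. The specific features used here (patch mean, standard deviation, and mean gradient magnitude) and the RBF kernel are illustrative assumptions, not the paper's exact morphological features.

        import numpy as np
        from sklearn.svm import SVC

        def landmark_features(image, landmarks, half=4):
            """Pool simple statistics from a patch around each landmark."""
            feats = []
            for z, y, x in np.round(landmarks).astype(int):
                patch = image[z - half:z + half, y - half:y + half,
                              x - half:x + half].astype(float)
                grad = np.mean(np.abs(np.gradient(patch)))
                feats.extend([patch.mean(), patch.std(), grad])
            return np.array(feats)

        def train_ad_classifier(images, landmark_sets, labels):
            """labels: 1 for AD, 0 for healthy control."""
            X = np.stack([landmark_features(img, lm)
                          for img, lm in zip(images, landmark_sets)])
            return SVC(kernel="rbf", probability=True).fit(X, labels)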

    A learning-based CT prostate segmentation method via joint transductive feature selection and regression

    In recent years, there has been great interest in prostate segmentation, which is an important and challenging task for CT image guided radiotherapy. In this paper, a learning-based segmentation method via joint transductive feature selection and transductive regression is presented, which incorporates the physician’s simple manual specification (taking only a few seconds) to aid accurate segmentation, especially for cases with large irregular prostate motion. More specifically, for the current treatment image, an experienced physician is first asked to manually assign labels for a small subset of prostate and non-prostate voxels, especially in the first and last slices of the prostate region. Then, the proposed method follows two steps: in the prostate-likelihood estimation step, two novel algorithms, tLasso and wLapRLS, are sequentially employed for transductive feature selection and transductive regression, respectively, to generate the prostate-likelihood map; in the multi-atlas based label fusion step, the final segmentation result is obtained from the corresponding prostate-likelihood map and the previous images of the same patient. The proposed method has been substantially evaluated on a real prostate CT dataset of 24 patients with 330 CT images, and compared with several state-of-the-art methods. Experimental results show that the proposed method outperforms the state-of-the-art methods in terms of higher Dice ratio, higher true positive fraction, and lower centroid distance. The results also demonstrate that simple manual specification can help improve segmentation performance, which is clinically feasible in real practice.
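
    To give a feel for the transductive regression step described above, the sketch below fits a linear prostate-likelihood regressor on the few physician-labelled voxels while a graph Laplacian built over both labelled and unlabelled voxels keeps predictions smooth on the treatment image. This is a generic Laplacian-regularised least squares (LapRLS) formulation used only to illustrate the role of wLapRLS; it is not the paper's tLasso or wLapRLS algorithm, and all parameters are assumptions.

        import numpy as np
        from sklearn.neighbors import kneighbors_graph

        def laprls(X_labeled, y_labeled, X_all, lam_ridge=1e-2, lam_lap=1e-2, k=10):
            """Return a linear weight vector w so that X @ w approximates the
            prostate likelihood; X_all stacks labelled and unlabelled voxels."""
            # k-NN graph Laplacian over all (labelled + unlabelled) voxel features
            W = kneighbors_graph(X_all, n_neighbors=k, mode="connectivity")
            W = 0.5 * (W + W.T)
            L = np.diag(np.asarray(W.sum(axis=1)).ravel()) - W.toarray()

            d = X_all.shape[1]
            A = (X_labeled.T @ X_labeled           # data fit on labelled voxels
                 + lam_ridge * np.eye(d)           # ridge regularisation
                 + lam_lap * X_all.T @ L @ X_all)  # smoothness over the graph
            b = X_labeled.T @ y_labeled
            return np.linalg.solve(A, b)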

    7T-guided super-resolution of 3T MRI

    High-resolution MR images can depict rich details of brain anatomical structures and show subtle changes in longitudinal data. 7T MRI scanners can acquire MR images with higher resolution and better tissue contrast than routine 3T MRI scanners. However, 7T MRI scanners are currently more expensive and less available in clinical and research centers. To this end, we propose a method to generate super-resolution 3T MRI that resembles 7T MRI, which is referred to as a 7T-like MR image in this paper.

    Alzheimer's Disease Diagnosis Using Landmark-Based Features From Longitudinal Structural MR Images

    Structural magnetic resonance imaging (MRI) has been proven to be an effective tool for Alzheimer’s disease (AD) diagnosis. While conventional MRI-based AD diagnosis typically uses images acquired at a single time point, a longitudinal study is more sensitive in detecting early pathological changes of AD, making it more favorable for accurate diagnosis. In general, two challenges are faced in MRI-based diagnosis. First, extracting features from structural MR images requires time-consuming nonlinear registration and tissue segmentation, and a longitudinal study, with more scans involved, further exacerbates the computational cost. Second, inconsistent longitudinal scans (i.e., different scanning time points and different total numbers of scans) hinder the extraction of unified feature representations in longitudinal studies. In this paper, we propose a landmark-based feature extraction method for AD diagnosis using longitudinal structural MR images, which requires neither nonlinear registration nor tissue segmentation in the application stage and is also robust to inconsistencies among longitudinal scans. Specifically, 1) discriminative landmarks are first automatically discovered from the whole brain using training images, and then efficiently localized in testing images using a fast landmark detection method, without any nonlinear registration or tissue segmentation; 2) high-level statistical spatial features and contextual longitudinal features are then extracted based on the detected landmarks, characterizing spatial structural abnormalities and longitudinal landmark variations. Using these spatial and longitudinal features, a linear support vector machine (SVM) is finally adopted to distinguish AD subjects or mild cognitive impairment (MCI) subjects from healthy controls (HCs). Experimental results on the ADNI database demonstrate the superior performance and efficiency of the proposed method, with classification accuracies of 88.30% for AD vs. HC and 79.02% for MCI vs. HC.
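
    The sketch below illustrates one simple way to obtain fixed-length features from an inconsistent number of longitudinal scans, as required above: per-landmark feature vectors are pooled over whatever visits are available (baseline values plus mean and range across visits) and passed to a linear SVM. These pooled statistics are an illustrative assumption, not the paper's exact spatial and contextual longitudinal features.

        import numpy as np
        from sklearn.svm import LinearSVC

        def longitudinal_features(per_visit_landmark_feats):
            """
            per_visit_landmark_feats : list over visits of (n_landmarks, d) arrays;
            the number of visits may differ between subjects.
            """
            stack = np.stack(per_visit_landmark_feats)       # (t, n_landmarks, d)
            spatial = stack[0].ravel()                       # baseline (spatial)
            longitudinal = np.concatenate([
                stack.mean(axis=0).ravel(),                  # average over visits
                (stack.max(axis=0) - stack.min(axis=0)).ravel()])  # variation
            return np.concatenate([spatial, longitudinal])   # fixed length

        def train_longitudinal_classifier(subjects, labels):
            """subjects: list of per-subject visit feature lists; labels: 0/1."""
            X = np.stack([longitudinal_features(s) for s in subjects])
            return LinearSVC(C=1.0).fit(X, labels)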

    Region-Adaptive Deformable Registration of CT/MRI Pelvic Images via Learning-Based Image Synthesis

    Registration of pelvic CT and MRI is highly desired as it can facilitate effective fusion of the two modalities for prostate cancer radiation therapy, i.e., using CT for dose planning and MRI for accurate organ delineation. However, due to the large inter-modality appearance gap and the high shape/appearance variation of pelvic organs, pelvic CT/MRI registration is highly challenging. In this paper, we propose a region-adaptive deformable registration method for multi-modal pelvic image registration. Specifically, to handle the large appearance gap, we first perform both CT-to-MRI and MRI-to-CT image synthesis by a multi-target regression forest (MT-RF). Then, to use the complementary anatomical information in the two modalities for steering the registration, we automatically select key points from both modalities and use them together to guide correspondence detection in a region-adaptive fashion. That is, we mainly use CT to establish correspondences for bone regions, and MRI to establish correspondences for soft tissue regions. The number of key points is increased gradually during the registration to hierarchically guide the symmetric estimation of the deformation fields. Experiments on both intra-subject and inter-subject deformable registration show improved performance compared with state-of-the-art multi-modal registration methods, demonstrating the potential of our method for routine prostate cancer radiation therapy.
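
    As a rough sketch of the region-adaptive correspondence idea above, key-point matches taken from CT in bone regions and from (synthesised) MRI in soft-tissue regions can be pooled and interpolated into a dense deformation with a thin-plate-spline radial basis function. The interpolator choice is an assumption, and the paper's hierarchical, symmetric estimation of the deformation fields is not reproduced here.

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        def dense_deformation(bone_src, bone_dst, soft_src, soft_dst, query_points):
            """
            *_src, *_dst : (k, 3) matched key points (bone pairs from CT,
                           soft-tissue pairs from MRI)
            query_points : (m, 3) voxel coordinates where the deformation is needed
            Returns (m, 3) displacement vectors.
            """
            src = np.vstack([bone_src, soft_src])   # pool region-adaptive matches
            dst = np.vstack([bone_dst, soft_dst])
            rbf = RBFInterpolator(src, dst - src, kernel="thin_plate_spline")
            return rbf(query_points)                # dense displacement estimate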

    STRAINet: Spatially Varying sTochastic Residual AdversarIal Networks for MRI Pelvic Organ Segmentation

    Accurate segmentation of pelvic organs is important for prostate radiation therapy. Modern radiation therapy is starting to use magnetic resonance imaging (MRI) as an alternative to computed tomography because of its superior soft tissue contrast and freedom from radiation exposure. However, segmentation of pelvic organs from MRI is a challenging problem due to inconsistent organ appearance across patients and large intra-patient anatomical variations across treatment days. To address these challenges, we propose a novel deep network architecture, called "Spatially varying sTochastic Residual AdversarIal Network" (STRAINet), to delineate pelvic organs from MRI in an end-to-end fashion. Compared to traditional fully convolutional networks (FCNs), the proposed architecture makes two main contributions: 1) inspired by the recent success of residual learning, we propose an evolutionary version of the residual unit, i.e., the stochastic residual unit, and use it in place of the plain convolutional layers in the FCN; we further propose long-range stochastic residual connections to pass features from shallow layers to deep layers; and 2) we integrate three previously proposed network strategies to form a new network for better medical image segmentation: a) we apply dilated convolution in the smallest-resolution feature maps, so that we can gain a larger receptive field without overly losing spatial information; b) we propose a spatially varying convolutional layer that adapts convolutional filters to different regions of interest; and c) an adversarial network is proposed to further correct the segmented organ structures. Finally, STRAINet is used to iteratively refine the segmentation probability maps in an auto-context manner. Experimental results show that STRAINet achieves state-of-the-art segmentation accuracy. Further analysis also indicates that the proposed network components contribute most to this performance.
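
    The sketch below shows one plausible reading of the stochastic residual unit described above, in PyTorch: during training the residual branch is randomly dropped (as in stochastic depth), and at test time it is always kept but scaled by its survival probability. The layer configuration inside the branch, the survival probability, and the use of dilation are assumptions; the long-range stochastic connections, spatially varying convolution, and adversarial refinement are not reproduced here.

        import torch
        import torch.nn as nn

        class StochasticResidualUnit(nn.Module):
            def __init__(self, channels, p_survive=0.8, dilation=1):
                super().__init__()
                self.p_survive = p_survive
                self.branch = nn.Sequential(
                    nn.Conv3d(channels, channels, 3, padding=dilation,
                              dilation=dilation),             # dilated conv keeps a
                    nn.BatchNorm3d(channels),                  # large receptive field
                    nn.ReLU(inplace=True),
                    nn.Conv3d(channels, channels, 3, padding=dilation,
                              dilation=dilation),
                    nn.BatchNorm3d(channels))

            def forward(self, x):
                if self.training:
                    if torch.rand(1).item() < self.p_survive:
                        return torch.relu(x + self.branch(x))  # keep residual path
                    return torch.relu(x)                       # drop it stochastically
                # expected value of the stochastic branch at test time
                return torch.relu(x + self.p_survive * self.branch(x))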