
    Clustering and Shifting of Regional Appearance for Deformable Model Segmentation

    Automated medical image segmentation is a challenging task that benefits from the use of effective image appearance models. An appearance model describes the grey-level intensity information relative to the object being segmented. Previous models that compare the target against a single template image or that assume a very small-scale correspondence fail to capture the variability seen in the target cases. In this dissertation I present novel appearance models to address these deficiencies, and I show their efficacy in segmentation via deformable models. The models developed here use clustering and shifting of the object-relative appearance to capture the true variability in appearance. They all learn their parameters from training sets of previously segmented images. The first model uses clustering on cross-boundary intensity profiles in the training set to determine profile types, and then it builds a template of optimal types that reflects the various edge characteristics seen around the boundary. The second model uses clustering on local regional image descriptors to determine large-scale regions relative to the boundary. The method then partitions the object boundary according to region type and captures the intensity variability per region type. The third and fourth models allow shifting of the image model on the boundary to reflect knowledge of the variable regional conformations seen in training. I evaluate the appearance models by considering their efficacy in segmentation of the kidney, bladder, and prostate in abdominal and male pelvis CT. I compare the automatically generated segmentations using these models against expert manual segmentations of the target cases and against automatically generated segmentations using previous models.
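    The first model's pipeline (sample cross-boundary intensity profiles, cluster them into profile types, then record the dominant type at each boundary point) can be illustrated with a short sketch. The sketch below assumes profiles have already been sampled along boundary normals in the training images; the array layout, the use of k-means, and names such as `build_profile_template` are illustrative assumptions, not the dissertation's implementation.

    ```python
    # Minimal sketch (assumed, not the thesis' code): cluster cross-boundary
    # intensity profiles into profile types and build a per-boundary-point
    # template of the type seen most often across training cases.
    import numpy as np
    from sklearn.cluster import KMeans

    def build_profile_template(profiles, n_types=5, random_state=0):
        """profiles: array of shape (n_cases, n_boundary_points, profile_len)."""
        n_cases, n_points, profile_len = profiles.shape

        # Cluster all training profiles jointly to discover profile types
        # (e.g., strong edge, weak edge, bright-to-dark transitions).
        flat = profiles.reshape(-1, profile_len)
        km = KMeans(n_clusters=n_types, n_init=10, random_state=random_state).fit(flat)
        labels = km.labels_.reshape(n_cases, n_points)

        # For each boundary point, store the profile type that occurs most
        # often across the training cases at that point.
        template = np.array([np.bincount(labels[:, p], minlength=n_types).argmax()
                             for p in range(n_points)])
        return template, km.cluster_centers_
    ```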

    Refinement of Object-Based Segmentation

    Automated object-based segmentation methods calculate the shape and pose of anatomical structures of interest. These methods require modeling both the geometry and object-relative image intensity patterns of target structures. Many object-based segmentation methods minimize a non-convex function and risk failure due to convergence to a local minimum. This dissertation presents three refinements to existing object-based segmentation methods. The first refinement mitigates the risk of local minima by initializing the segmentation close to the correct answer. The initialization searches pose- and shape-spaces for the object that best matches user-specified points on three designated image slices. Thus-initialized m-rep-based segmentations of the bladder from CT are frequently better than segmentations reported elsewhere. The second refinement is a statistical test on object-relative intensity patterns that allows estimation of the local credibility of a segmentation. This test effectively identifies regions with local segmentation errors in m-rep-based segmentations of the bladder and prostate from CT. The third refinement is a method for shape interpolation that is based on changes in the position and orientation of samples and that tends to be more shape-preserving than a competing linear method. This interpolation can be used with dynamic structures and to understand changes between segmentations of an object in atlas and target images. Together, these refinements aid in the segmentation of a dense collection of targets via a hybrid of object-based and atlas-based methods. The first refinement increases the probability of successful object-based segmentations of the subset of targets for which such methods are appropriate, the second increases the user's confidence that those object-based segmentations are correct, and the third is used to transfer the object-based segmentations to an atlas-based method that will be used to segment the remainder of the targets.
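    The second refinement, a statistical test on object-relative intensity patterns, can be sketched as follows: summarize the intensity pattern in each boundary region, estimate the distribution of that summary over training cases, and flag regions of a new segmentation whose summary is unlikely under that distribution. The use of a Mahalanobis distance with a chi-square threshold, the array layout, and the name `flag_noncredible_regions` are assumptions made for illustration, not taken from the dissertation.

    ```python
    # Minimal sketch (assumed form of the credibility test): flag boundary
    # regions whose regional intensity summary is unlikely under the
    # training distribution, marking candidate local segmentation errors.
    import numpy as np
    from scipy.stats import chi2

    def flag_noncredible_regions(train_features, target_features, alpha=0.05):
        """
        train_features: (n_cases, n_regions, n_feats) regional intensity
                        summaries (e.g., intensity quantiles) from training.
        target_features: (n_regions, n_feats) summaries for the new case.
        Returns a boolean array marking regions with low local credibility.
        """
        n_cases, n_regions, n_feats = train_features.shape
        threshold = chi2.ppf(1 - alpha, df=n_feats)

        flags = np.zeros(n_regions, dtype=bool)
        for r in range(n_regions):
            mu = train_features[:, r].mean(axis=0)
            # Small ridge keeps the covariance invertible for small training sets.
            cov = np.cov(train_features[:, r], rowvar=False) + 1e-6 * np.eye(n_feats)
            diff = target_features[r] - mu
            d2 = diff @ np.linalg.solve(cov, diff)  # squared Mahalanobis distance
            flags[r] = d2 > threshold
        return flags
    ```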

    Automatic Segmentation of the Bladder Using Deformable Models
