13 research outputs found

    Lung Segmentation in 4D CT Volumes Based on Robust Active Shape Model Matching

    Dynamic and longitudinal lung CT imaging produces 4D lung image data sets, enabling applications such as radiation treatment planning or assessment of response to treatment of lung diseases. In this paper, we present a 4D lung segmentation method that mutually utilizes all individual CT volumes to derive segmentations for each CT data set. Our approach is based on a 3D robust active shape model and extends it to fully utilize 4D lung image data sets. This yields an initial segmentation for the 4D volume, which is then refined by a 4D optimal surface finding algorithm. The approach was evaluated on a diverse set of 152 CT scans of normal and diseased lungs, consisting of total lung capacity and functional residual capacity scan pairs. In addition, a comparison to a 3D segmentation method and a registration-based 4D lung segmentation approach was performed. The proposed 4D method obtained an average Dice coefficient of 0.9773 ± 0.0254, which was statistically significantly better (p ≪ 0.001) than the 3D method (0.9659 ± 0.0517). Compared to the registration-based 4D method, our method obtained better or similar performance but was 58.6% faster. Moreover, the method can easily be extended to process 4D CT data sets consisting of several volumes.
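The Dice coefficient reported above measures volume overlap between a computed segmentation and a reference standard. A minimal sketch of that metric for binary masks (the function name and convention for two empty masks are illustrative assumptions, not from the paper):

```python
import numpy as np

def dice_coefficient(seg: np.ndarray, ref: np.ndarray) -> float:
    """Dice overlap between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    seg = seg.astype(bool)
    ref = ref.astype(bool)
    denom = seg.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement (a convention)
    return 2.0 * np.logical_and(seg, ref).sum() / denom
```

A value of 1.0 indicates identical masks; the 0.9773 average above therefore reflects near-complete overlap with the reference segmentations.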

    Robust Initialization of Active Shape Models for Lung Segmentation in CT Scans: A Feature-Based Atlas Approach

    Model-based segmentation methods have the advantage of incorporating a priori shape information into the segmentation process but suffer from the drawback that the model must be initialized sufficiently close to the target. We propose a novel approach for initializing an active shape model (ASM) and apply it to 3D lung segmentation in CT scans. Our method constructs an atlas consisting of a set of representative lung features and an average lung shape. The ASM pose parameters are found by transforming the average lung shape based on an affine transform computed from matching features between the new image and representative lung features. Our evaluation on a diverse set of 190 images showed an average Dice coefficient of 0.746 ± 0.068 for initialization and 0.974 ± 0.017 for subsequent segmentation, based on an independent reference standard. The mean absolute surface distance error was 0.948 ± 1.537 mm. The initialization as well as the segmentation results showed a statistically significant improvement compared to four other approaches. The proposed initialization method can be generalized to other applications employing ASM-based segmentation.
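The core of the initialization described above is an affine transform estimated from feature correspondences between the new image and the atlas. A minimal least-squares sketch of that step, assuming point correspondences are already established (the function name and interface are illustrative, not from the paper):

```python
import numpy as np

def estimate_affine_3d(src_pts, dst_pts):
    """Least-squares 3D affine transform (A, t) mapping src_pts to dst_pts.

    src_pts, dst_pts: (N, 3) arrays of corresponding points, N >= 4.
    Returns A (3x3 matrix) and t (3-vector) such that dst ≈ src @ A.T + t.
    """
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    n = src.shape[0]
    X = np.hstack([src, np.ones((n, 1))])       # homogeneous coordinates
    # Solve X @ M = dst for the 4x3 parameter matrix M in the least-squares sense
    M, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return M[:3].T, M[3]                        # A is 3x3, t is length-3
```

The recovered (A, t) would then be applied to the average lung shape to place the ASM in the new image before matching begins.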

    Building a model for a 3D object class in a low-dimensional space for object detection

    Modeling 3D object classes requires accounting for intra-class variations in an object's appearance under different viewpoints, scale and illumination conditions. Therefore, detecting instances of 3D object classes in the presence of background clutter is difficult. This thesis presents a novel approach to model generic 3D object classes and an algorithm to detect multiple instances of an object class in an arbitrary image. Motivated by the parts-based representation, the proposed approach divides the object into different spatial regions. Each spatial region is associated with an object part whose appearance is represented by a dense set of overlapping SIFT features. The distribution of these features is then described in a lower dimensional space using supervised Locally Linear Embedding. Each object part is essentially represented by a spatial cluster in the embedding space. For viewpoint invariance, the view-sphere comprising the 3D object is divided into a discrete number of view segments. Several spatial clusters represent the object in each view segment. This thesis provides a framework for representing these clusters in either single or multiple embedding spaces. A novel aspect of the proposed approach is that all object parts and the background class are represented in the same lower dimensional space. Thus the detection algorithm can explicitly label features in an image as belonging to an object part or background. Additionally, spatial relationships between object parts are established and employed during the detection stage to localize instances of the object class in a novel image. It is shown that detecting objects based on measuring spatial consistency between object parts is superior to a bag-of-words model that ignores all spatial information. Since generic object classes can be characterized by shape or appearance, this thesis has formulated a method to combine these attributes to enhance the object model. 
Class-specific local contour featur
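A central step in the thesis abstract above is projecting dense local descriptors into a lower-dimensional space with Locally Linear Embedding so that object parts form spatial clusters. A minimal sketch of that projection, assuming scikit-learn is available and using its standard unsupervised LLE in place of the supervised variant the thesis describes; the random descriptors stand in for dense SIFT features:

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

rng = np.random.default_rng(0)
# Stand-in for dense SIFT descriptors: 200 samples, 128-D (SIFT's native length)
descriptors = rng.normal(size=(200, 128))

# Project to a low-dimensional embedding space; the thesis uses a supervised
# LLE variant, sketched here with scikit-learn's unsupervised implementation
lle = LocallyLinearEmbedding(n_neighbors=10, n_components=3)
embedded = lle.fit_transform(descriptors)
print(embedded.shape)  # (200, 3)
```

In the thesis's setting, clustering would then be performed in this embedded space, with object parts and the background class represented together so detected features can be labeled as part or background.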