
    Localizing Region-Based Active Contours

    DOI: 10.1109/TIP.2008.2004611
    In this paper, we propose a natural framework that allows any region-based segmentation energy to be reformulated in a local way. We consider local rather than global image statistics and evolve a contour based on local information. Localized contours are capable of segmenting objects with heterogeneous feature profiles that would be difficult to capture correctly using a standard global method. The presented technique is versatile enough to be used with any global region-based active contour energy and instill in it the benefits of localization. We describe this framework and demonstrate the localization of three well-known energies in order to illustrate how our framework can be applied to any energy. We then compare each localized energy to its global counterpart to show the improvements that can be achieved. Next, an in-depth study of the behaviors of these energies in response to the degree of localization is given. Finally, we show results on challenging images to illustrate the robust and accurate segmentations that are possible with this new class of active contour models.
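
The core idea above, replacing global region statistics with statistics gathered in a ball around each contour point, can be sketched in a few lines. The following is a minimal 2D illustration of a localized Chan-Vese-style force, assuming a signed level set `phi` that is negative inside the contour; it is a sketch of the concept, not the authors' published implementation.

```python
import numpy as np

def localized_cv_force(image, phi, x, y, r):
    """Evaluate a localized Chan-Vese-style force at contour point (x, y):
    interior/exterior means are computed only inside a ball of radius r,
    so heterogeneous objects are modelled piecewise rather than globally."""
    h, w = image.shape
    y0, y1 = max(0, y - r), min(h, y + r + 1)
    x0, x1 = max(0, x - r), min(w, x + r + 1)
    yy, xx = np.mgrid[y0:y1, x0:x1]
    ball = (yy - y) ** 2 + (xx - x) ** 2 <= r * r
    img, lvl = image[y0:y1, x0:x1], phi[y0:y1, x0:x1]
    inside, outside = ball & (lvl < 0), ball & (lvl >= 0)
    u = img[inside].mean() if inside.any() else 0.0   # local interior mean
    v = img[outside].mean() if outside.any() else 0.0 # local exterior mean
    # Move the interface toward the better-fitting local region model
    return (image[y, x] - u) ** 2 - (image[y, x] - v) ** 2
```

In a full evolution this force would be evaluated along the narrow band of the zero level set and used to update `phi`; here only the per-point energy comparison is shown.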

    Computational Modeling for Abnormal Brain Tissue Segmentation, Brain Tumor Tracking, and Grading

    This dissertation proposes novel texture-feature-based computational models for quantitative analysis of abnormal tissues in two neurological disorders: brain tumor and stroke. Brain tumors are masses of cells growing uncontrollably in brain tissue and are among the major causes of cancer death. Brain strokes, on the other hand, occur when the blood supply is suddenly interrupted, damaging normal brain tissue and frequently causing death or persistent disability. Clinical management of brain tumors and stroke lesions critically depends on robust quantitative analysis using different imaging modalities, including Magnetic Resonance (MR) and Digital Pathology (DP) images. Due to uncontrolled growth and infiltration into the surrounding tissues, tumor regions appear with significant texture variation in static MRI volumes and in longitudinal imaging studies. Consequently, this study develops computational models using novel texture features to segment abnormal brain tissues (tumors and stroke lesions), track the change of tumor volume in longitudinal images, and grade tumors in MR images. Manual delineation and analysis of these abnormal tissues at large scale is tedious, error-prone, and often suffers from inter-observer variability. Therefore, efficient computational models for robust segmentation of different abnormal tissues are required to support the diagnosis and analysis processes. In this study, brain tissues are characterized with novel computational modeling of multifractal texture features for multi-class brain tumor tissue segmentation (BTS), and the method is extended to ischemic stroke lesions in MRI. The robustness of the proposed segmentation methods is evaluated on a large collection of private and public-domain clinical data and offers competitive performance compared with state-of-the-art methods.
    Further, I analyze the dynamic texture behavior of tumor volume in longitudinal imaging and develop a post-processing framework using three-dimensional (3D) texture features. These post-processing methods are shown to reduce false positives in the BTS results and improve the overall segmentation in longitudinal imaging. Furthermore, using these improved segmentation results, the change in tumor volume is quantified into three categories (stable, progression, and shrinkage) based on the volumetric changes of different tumor tissues in longitudinal images. This study also investigates, for the first time in the literature, a novel non-invasive glioma grading scheme that uses structural MRI only. Such non-invasive glioma grading may be useful before an invasive biopsy is recommended. This study further develops an automatic glioma grading scheme using invasive cell nuclei morphology in DP images for cross-validation with the same patients. In summary, the texture-based computational models proposed in this study are expected to facilitate the clinical management of patients with brain tumors and strokes by automating large-scale imaging data analysis, reducing human error and inter-observer variability, and producing repeatable brain tumor quantitation and grading.
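
One standard building block behind fractal texture features is the box-counting estimate of fractal dimension: count occupied boxes at several scales and fit the slope of log(count) against log(1/scale). The sketch below does this for a 2D binary mask; the scale set and the single-exponent fit are simplifying assumptions, and the dissertation's multifractal pipeline is more elaborate.

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8)):
    """Box-counting fractal dimension of a square binary mask:
    at each box side s, count s-by-s boxes containing foreground,
    then fit the slope of log(count) vs log(1/s)."""
    n = mask.shape[0]
    counts = []
    for s in sizes:
        # tile the (cropped) mask into s-by-s boxes and count occupied ones
        boxed = mask[:n - n % s, :n - n % s].reshape(n // s, s, n // s, s)
        counts.append(boxed.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope
```

For a filled 2D region the estimate approaches 2; irregular tumor boundaries yield non-integer values, which is what makes the quantity useful as a texture descriptor.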

    Ball-Scale Based Hierarchical Multi-Object Recognition in 3D Medical Images

    This paper investigates, using prior shape models and the concept of ball scale (b-scale), ways of automatically recognizing objects in 3D images without performing elaborate searches or optimization. That is, the goal is to place the model in a single shot close to the right pose (position, orientation, and scale) in a given image so that the model boundaries fall in the close vicinity of object boundaries in the image. This is achieved via the following set of key ideas: (a) a semi-automatic way of constructing a multi-object shape model assembly; (b) a novel strategy of encoding, via b-scale, the pose relationship between objects in the training images and their intensity patterns captured in b-scale images; (c) a hierarchical mechanism of positioning the model, in a one-shot way, in a given image from knowledge of the learnt pose relationship and the b-scale image of the given image to be segmented. The evaluation results on a set of 20 routine clinical abdominal female and male CT data sets indicate the following: (1) incorporating a large number of objects improves the recognition accuracy dramatically; (2) the recognition algorithm can be thought of as a hierarchical framework in which quick placement of the model assembly constitutes coarse recognition and delineation itself constitutes the finest recognition; (3) scale yields useful information about the relationship between the model assembly and any given image, such that recognition places the model close to the actual pose without any elaborate searches or optimization; (4) effective object recognition can make delineation more accurate.
    Comment: This paper was published and presented in SPIE Medical Imaging 201
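
The b-scale notion can be illustrated in 2D: for each pixel, find the largest ball around it whose intensities all stay within a tolerance of the center value. This is a deliberately simplified sketch (the paper works with 3D images and a fraction-based homogeneity criterion), with `tol` and `r_max` as assumed parameters rather than values from the paper.

```python
import numpy as np

def b_scale(image, tol=10.0, r_max=8):
    """Per-pixel ball scale: the largest radius r such that every pixel in
    the ball of radius r around p differs from image[p] by at most tol.
    Scale is large deep inside homogeneous regions, small near boundaries."""
    h, w = image.shape
    scale = np.zeros((h, w), dtype=int)
    for y in range(h):
        for x in range(w):
            for r in range(1, r_max + 1):
                y0, y1 = max(0, y - r), min(h, y + r + 1)
                x0, x1 = max(0, x - r), min(w, x + r + 1)
                yy, xx = np.mgrid[y0:y1, x0:x1]
                ball = (yy - y) ** 2 + (xx - x) ** 2 <= r * r
                if np.abs(image[y0:y1, x0:x1][ball] - image[y, x]).max() > tol:
                    break
                scale[y, x] = r
    return scale
```

The resulting scale map carries the kind of pose-and-homogeneity information the paper exploits: it drops to zero at object boundaries and grows with distance from them.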

    Segmentation of optic disc in retinal images for glaucoma diagnosis by saliency level set with enhanced active contour model

    Glaucoma is an ophthalmic disease that is among the chief causes of visual impairment across the globe. Clear delineation of the optic disc (OD) is crucial for recognizing glaucoma. Since existing methods are unable to successfully integrate multi-view information derived from shape and appearance to precisely describe the OD for segmentation, this paper proposes a saliency-based level set with an enhanced active contour method (SL-EACM), a modified locally statistical active contour model, and entropy-based optic disc localization. The significant contributions are that: i) the SL-EACM is introduced to address the frequently observed problem of intensity inhomogeneity caused by defects in imaging equipment or fluctuations in lighting; ii) to prevent the integrity of the OD structures from being compromised by pathological alterations and artery blockage, local image probability data from a multi-dimensional feature space around the region of interest is included in the model; and iii) the model incorporates prior shape information to enhance the accuracy of identifying OD structures against surrounding regions. Public databases such as CHASE_DB, DRIONS-DB, and Drishti-GS are used to evaluate the proposed model. The findings from numerous trials demonstrate that the proposed model outperforms state-of-the-art approaches in terms of qualitative and quantitative outcomes.
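
As a rough illustration of an entropy-based localization step, the sketch below scans windows over a grayscale image (values in [0, 1]) and returns the window with maximal histogram entropy; the window size, stride, and bin count are all assumptions for the sketch, not values from the paper.

```python
import numpy as np

def entropy_localize(image, win=15, bins=16):
    """Return the top-left corner of the sliding window whose intensity
    histogram has maximal Shannon entropy. Bright, high-variation regions
    such as the optic disc tend to maximize this score."""
    h, w = image.shape
    best, best_pos = -1.0, (0, 0)
    for y in range(0, h - win + 1, win // 2):
        for x in range(0, w - win + 1, win // 2):
            hist, _ = np.histogram(image[y:y + win, x:x + win],
                                   bins=bins, range=(0.0, 1.0))
            p = hist / hist.sum()
            p = p[p > 0]
            ent = -(p * np.log2(p)).sum()  # Shannon entropy in bits
            if ent > best:
                best, best_pos = ent, (y, x)
    return best_pos
```

In a full pipeline the located window would seed the level-set contour; only the localization step is sketched here.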

    Segmentation of Infant Brain Using Nonnegative Matrix Factorization

    This study develops an atlas-based automated framework for segmenting infants' brains from magnetic resonance imaging (MRI). For the accurate segmentation of different structures of an infant's brain at the isointense age (6-12 months), our framework integrates features of diffusion tensor imaging (DTI) (e.g., the fractional anisotropy (FA)). A brain diffusion tensor (DT) image and its region map are considered samples of a Markov-Gibbs random field (MGRF) that jointly models visual appearance, shape, and spatial homogeneity of a goal structure. The visual appearance is modeled with an empirical distribution of the probability of the DTI features, fused by their nonnegative matrix factorization (NMF) and allocation to data clusters. Projecting an initial high-dimensional feature space onto a low-dimensional space of the significant fused features with the NMF allows for better separation of the goal structure and its background. The cluster centers in the latter space are determined at the training stage by K-means clustering. In order to adapt to large infant brain inhomogeneities and segment the brain images more accurately, appearance descriptors of both the first order and second order are taken into account in the fused NMF feature space. Additionally, a second-order MGRF model is used to describe the appearance based on the voxel intensities and their pairwise spatial dependencies. An adaptive shape prior that is spatially variant is constructed from a training set of co-aligned images, forming an atlas database. Moreover, the spatial homogeneity of the shape is described with a spatially uniform second-order 3D MGRF for region labels. In vivo experiments on nine infant datasets showed promising results in terms of accuracy, which was computed using three metrics: the 95-percentile modified Hausdorff distance (MHD), the Dice similarity coefficient (DSC), and the absolute volume difference (AVD).
    Both the quantitative and visual assessments confirm that integrating the proposed NMF-fused DTI feature and intensity MGRF models of visual appearance, the adaptive shape prior, and the shape homogeneity MGRF model is promising for segmenting the infant brain in DTI.
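
The NMF step used for feature fusion can be sketched with plain Lee-Seung multiplicative updates under the Frobenius norm; this is a generic stand-in for the dimensionality-reduction step, not the study's exact pipeline, and the rank `k` and iteration count are assumptions.

```python
import numpy as np

def nmf(V, k, iters=500, seed=0):
    """Multiplicative-update NMF (Lee-Seung, Frobenius objective):
    factor a nonnegative matrix V (features x voxels) into W (features x k)
    times H (k x voxels), both nonnegative. The rows of H act as the
    low-dimensional fused features."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + 0.1
    H = rng.random((k, n)) + 0.1
    for _ in range(iters):
        # Updates preserve nonnegativity and monotonically reduce ||V - WH||_F
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H
```

Projecting voxels onto the columns of W (i.e., reading off H) gives the low-dimensional space in which the K-means cluster centers would then be estimated.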

    Learning to segment clustered amoeboid cells from brightfield microscopy via multi-task learning with adaptive weight selection

    Detecting and segmenting individual cells from microscopy images is critical to various life science applications. Traditional cell segmentation tools are often ill-suited for applications in brightfield microscopy due to poor contrast and intensity heterogeneity, and only a small subset are applicable to segmenting cells in a cluster. In this regard, we introduce a novel supervised technique for cell segmentation in a multi-task learning paradigm. A combination of a multi-task loss, based on region and cell boundary detection, is employed for improved prediction efficiency of the network. The learning problem is posed in a novel min-max framework which enables adaptive estimation of the hyper-parameters in an automatic fashion. The region and cell boundary predictions are combined via morphological operations and an active contour model to segment individual cells. The proposed methodology is particularly suited to segmenting touching cells from brightfield microscopy images without manual intervention. Quantitatively, we observe an overall Dice score of 0.93 on the validation set, an improvement of over 15.9% on a recent unsupervised method, and outperform the popular supervised U-net algorithm by at least 5.8% on average.
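
The adaptive min-max weighting can be illustrated with one common closed form: when the inner maximization over task weights on the probability simplex is entropy-regularized, the optimal weights are a softmax of the per-task losses, so the harder task automatically receives more weight. The temperature `tau` and this particular formulation are assumptions for the sketch; the paper's exact scheme may differ.

```python
import numpy as np

def adaptive_weights(losses, tau=1.0):
    """Adversarial (max-step) weights for a min-max multi-task loss:
    with an entropy regularizer, the inner maximum over the simplex is
    w_i proportional to exp(loss_i / tau). Computed with the max-shift
    trick for numerical stability."""
    losses = np.asarray(losses, dtype=float)
    z = np.exp((losses - losses.max()) / tau)
    return z / z.sum()
```

During training, the total loss would then be `sum(w * losses)` with `w` treated as constant in the minimization step, alternating the two updates.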