160 research outputs found

    A Statistical Modeling Approach to Computer-Aided Quantification of Dental Biofilm

    Biofilm is a formation of microbial material on tooth substrata. Several methods to quantify dental biofilm coverage have recently been reported in the literature, but at best they provide a semi-automated approach to quantification that requires significant input from a human grader and carries the grader's bias about what constitutes foreground, background, biofilm, and tooth. Additionally, human assessment indices limit the resolution of the quantification scale; most commercial scales use five levels of quantification for biofilm coverage (0%, 25%, 50%, 75%, and 100%). On the other hand, current state-of-the-art techniques in automatic plaque quantification fail to make their way into practical applications owing to their inability to incorporate human input to handle misclassifications. This paper proposes a new interactive method for biofilm quantification in Quantitative Light-induced Fluorescence (QLF) images of canine teeth that is independent of the perceptual bias of the grader. The method partitions a QLF image into segments of uniform texture and intensity called superpixels; every superpixel is statistically modeled as a realization of a single 2D Gaussian Markov random field (GMRF) whose parameters are estimated; the superpixel is then assigned to one of three classes (background, biofilm, tooth substratum) based on a training set of data. The quantification results show a high degree of consistency and precision. At the same time, the proposed method gives pathologists full control to post-process the automatic quantification by flipping misclassified superpixels to a different state (background, tooth, biofilm) with a single click, providing greater usability than simply marking the boundaries of biofilm and tooth as done by current state-of-the-art methods.
    Comment: 10 pages, 7 figures, Journal of Biomedical and Health Informatics, 2014. Keywords: Biomedical imaging; Calibration; Dentistry; Estimation; Image segmentation; Manuals; Teeth. http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6758338&isnumber=636350
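    To make the pipeline concrete, here is a minimal sketch of the superpixel-plus-GMRF idea, assuming SLIC as a stand-in superpixel generator, a least-squares estimate of first-order GMRF interaction coefficients on each superpixel's bounding-box patch, and nearest-prototype class assignment; `class_means` stands for class prototypes (background, biofilm, tooth) learned from the labeled training set. It illustrates the general approach, not the authors' implementation.

```python
# Hedged sketch: SLIC superpixels + least-squares GMRF parameters + nearest-prototype
# assignment. `class_means` (dict: class name -> 5-vector prototype) is assumed to
# come from labeled training superpixels; none of this is the paper's exact code.
import numpy as np
from skimage.segmentation import slic

def gmrf_params(patch):
    """Least-squares estimate of 4-neighbour GMRF coefficients plus residual variance."""
    c = patch[1:-1, 1:-1].ravel()                        # centre pixels
    nbrs = np.stack([patch[:-2, 1:-1].ravel(),           # up
                     patch[2:, 1:-1].ravel(),            # down
                     patch[1:-1, :-2].ravel(),           # left
                     patch[1:-1, 2:].ravel()], axis=1)   # right
    theta, *_ = np.linalg.lstsq(nbrs, c, rcond=None)
    resid = c - nbrs @ theta
    return np.append(theta, resid.var())                 # 5-D feature per superpixel

def classify_superpixels(image, class_means, n_segments=400):
    """image: 2-D grayscale QLF image; returns the superpixel map and a class per superpixel."""
    labels = slic(image, n_segments=n_segments, channel_axis=None)
    assignment = {}
    for sp in np.unique(labels):
        ys, xs = np.where(labels == sp)
        patch = image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
        if min(patch.shape) < 3:                          # too small for neighbour terms
            continue
        f = gmrf_params(patch)
        dists = {cls: np.linalg.norm(f - mu) for cls, mu in class_means.items()}
        assignment[sp] = min(dists, key=dists.get)        # nearest class prototype
    return labels, assignment
```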

    Doctor of Philosophy in Computing

    Image segmentation is the problem of partitioning an image into disjoint segments that are perceptually or semantically homogeneous. As one of the most fundamental computer vision problems, image segmentation is used as a primary step for high-level vision tasks, such as object recognition and image understanding, and has even wider applications in interdisciplinary areas, such as longitudinal brain image analysis. Hierarchical models have gained popularity as a key component in image segmentation frameworks. By imposing structure, a hierarchical model can efficiently utilize features from larger image regions and make optimal inference for the final segmentation feasible. We develop a hierarchical merge tree (HMT) model for image segmentation. Motivated by the application to large-scale segmentation of neuronal structures in electron microscopy (EM) images, our model provides a compact representation of region merging hypotheses and utilizes higher-order information for efficient segmentation inference. Taking advantage of supervised learning, our model is free from parameter tuning and outperforms previous state-of-the-art methods on both two-dimensional (2D) and three-dimensional (3D) EM image data sets without any change. We also extend HMT to the hierarchical merge forest (HMF) model. By identifying region correspondences, HMF utilizes inter-section information to correct intra-section errors and improves 2D EM segmentation accuracy. HMT is a generic segmentation model. We demonstrate this by applying it to natural image segmentation problems. We propose a constrained conditional model formulation with a globally optimal inference algorithm for HMT, and an iterative merge tree sampling algorithm that significantly improves its performance. Experimental results show our approach achieves state-of-the-art accuracy for object-independent image segmentation. Finally, we propose a semi-supervised HMT (SSHMT) model to reduce the high demand for labeled data by supervised learning. We introduce a differentiable unsupervised loss term that enforces consistent boundary predictions and develop a Bayesian learning model that combines supervised and unsupervised information. We show that with a very small amount of labeled data, SSHMT consistently performs close to the supervised HMT with full labeled data sets and significantly outperforms HMT trained with the same labeled subsets.
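    As an illustration of the central data structure, the sketch below builds a merge tree by greedily merging adjacent regions of an initial oversegmentation. It is only a schematic: `boundary_score` is a hypothetical callable, region ids are assumed to be integers in a networkx region adjacency graph, and the actual HMT scores merge hypotheses with a trained classifier and runs globally optimal inference over the tree instead of committing to one greedy merge order.

```python
# Hedged sketch of merge-tree construction from a region adjacency graph (RAG).
# `rag` is a networkx.Graph with integer region ids; `boundary_score(u, v)` is a
# hypothetical function where lower values mean "merge earlier".
import heapq
import networkx as nx

def build_merge_tree(rag, boundary_score):
    tree = nx.DiGraph()                                   # parent nodes point to merged children
    heap = [(boundary_score(u, v), u, v) for u, v in rag.edges]
    heapq.heapify(heap)
    alive = set(rag.nodes)
    next_id = max(rag.nodes) + 1                          # ids for internal (merged) nodes
    while heap:
        _, u, v = heapq.heappop(heap)
        if u not in alive or v not in alive:
            continue                                      # stale entry: one side was already merged
        parent = next_id; next_id += 1
        tree.add_edge(parent, u); tree.add_edge(parent, v)
        alive -= {u, v}
        nbrs = (set(rag[u]) | set(rag[v])) - {u, v}       # merged region inherits its children's neighbours
        rag.add_node(parent)
        for n in nbrs:
            if n in alive:
                rag.add_edge(parent, n)
                heapq.heappush(heap, (boundary_score(parent, n), parent, n))
        alive.add(parent)
    return tree                                           # leaves = initial regions, internal nodes = merges
```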

    Automatic segmentation of overlapping cervical smear cells based on local distinctive features and guided shape deformation

    Automated segmentation of cells from cervical smears poses a great challenge to biomedical image analysis because of the noisy and complex background, poor cytoplasmic contrast, and the presence of fuzzy and overlapping cells. In this paper, we propose an automated segmentation method for the nucleus and cytoplasm in a cluster of cervical cells based on distinctive local features and guided sparse shape deformation. Our proposed approach is performed in two stages: segmentation of nuclei and cellular clusters, and segmentation of overlapping cytoplasm. In the first stage, a set of local discriminative shape and appearance cues of image superpixels is incorporated and classified by a Support Vector Machine (SVM) to segment the image into nuclei, cellular clusters, and background. In the second stage, a robust shape deformation framework is proposed, based on Sparse Coding (SC) theory and guided by representative shape features, to construct the cytoplasmic shape of each overlapping cell. Then, the obtained shape is refined by the Distance Regularized Level Set Evolution (DRLSE) model. We evaluated our approach using the ISBI 2014 challenge dataset, which has 135 synthetic cell images for a total of 810 cells. Our results show that our approach outperformed existing approaches in segmenting overlapping cells and obtaining accurate nuclear boundaries.
    Keywords: overlapping cervical smear cells, feature extraction, sparse coding, shape deformation, distance regularized level set
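    A rough sketch of the first stage (superpixel feature extraction followed by SVM labeling) is given below; the intensity and shape features are generic stand-ins rather than the exact discriminative cues used in the paper, and SLIC is assumed as the superpixel generator.

```python
# Hedged sketch of stage one: generic superpixel features + an SVM that labels each
# superpixel as nucleus, cellular cluster, or background. Feature choices are
# illustrative placeholders, not the paper's exact shape/appearance cues.
import numpy as np
from skimage.segmentation import slic
from sklearn.svm import SVC

def superpixel_features(gray, labels):
    ids = np.unique(labels)
    feats = []
    for sp in ids:
        mask = labels == sp
        vals = gray[mask]
        ys, xs = np.nonzero(mask)
        h, w = ys.ptp() + 1, xs.ptp() + 1
        feats.append([vals.mean(), vals.std(), vals.min(), vals.max(),
                      mask.sum(), mask.sum() / float(h * w)])   # intensity stats, area, extent
    return ids, np.asarray(feats)

# Training (X_train, y_train built from superpixels of annotated smears):
#   clf = SVC(kernel="rbf").fit(X_train, y_train)
# Prediction on a new smear image `gray`:
#   labels = slic(gray, n_segments=600, channel_axis=None)
#   ids, X = superpixel_features(gray, labels)
#   y_pred = clf.predict(X)   # one of {nucleus, cluster, background} per superpixel
```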

    A Multimodal Feature Selection Method for Remote Sensing Data Analysis Based on Double Graph Laplacian Diagonalization

    When dealing with multivariate remotely sensed records collected by multiple sensors, an accurate selection of information at the data, feature, or decision level is instrumental in improving the characterization of the scenes. This also enhances the system's efficiency and provides more detail for modeling the physical phenomena occurring on the Earth's surface. In this article, we introduce a flexible and efficient method based on graph Laplacians for information selection at different levels of data fusion. The proposed approach combines data structure and information content to address the limitations of existing graph-Laplacian-based methods in dealing with heterogeneous datasets. Moreover, it adapts the selection to each homogeneous area of the considered images according to its underlying properties. Experimental tests carried out on several multivariate remote sensing datasets show the consistency of the proposed approach.
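    As a simple point of reference for how a graph Laplacian can drive feature selection, the sketch below scores features with the classic Laplacian Score computed on a k-NN sample graph; the article's double graph Laplacian diagonalization is a different, multi-sensor formulation, so this serves only as background for the general idea.

```python
# Hedged sketch: Laplacian Score feature selection (He et al.), used purely to
# illustrate graph-Laplacian-based selection; it is not the article's method.
import numpy as np
from sklearn.neighbors import kneighbors_graph

def laplacian_score(X, k=5):
    """X: (n_samples, n_features). Lower score = feature better preserves graph locality."""
    W = kneighbors_graph(X, n_neighbors=k, mode="connectivity", include_self=False)
    W = 0.5 * (W + W.T).toarray()             # symmetrise the k-NN graph
    d = W.sum(axis=1)
    D = np.diag(d)
    L = D - W                                 # unnormalised graph Laplacian
    scores = []
    for f in X.T:
        f_t = f - (f @ d) / d.sum()           # remove the degree-weighted mean
        scores.append((f_t @ L @ f_t) / (f_t @ D @ f_t + 1e-12))
    return np.asarray(scores)

# keep the n_keep most locality-preserving features:
#   selected = np.argsort(laplacian_score(X))[:n_keep]
```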

    Enhanced K-means Color Clustering Based on SLIC Superpixels Merging incorporated within the Entomology Software: AInsectID

    Superpixel-based segmentation is an important pre-processing step for the simplification of image processing. The subjective nature of determining the optimal number of clusters in segmentation algorithms can result in either under- or over-segmentation, depending on the image type. Insect wings, with their intricate color patterns, pose significant challenges for clustering algorithms that assume a spherical, isotropic cluster distribution to accurately capture color diversity. This paper introduces a hybrid approach for color clustering in insect wings, integrating the Simple Linear Iterative Clustering (SLIC) method to generate the initial superpixels with a Delta E 2000 color-difference function for the precise, discriminative merging of superpixels. Color differences between superpixels serve as the measure of homogeneity during the merging process. The proposed algorithm demonstrates enhanced segmentation, overcoming both over-segmentation and under-segmentation, as evidenced by results derived from Boundary Recall, the Rand Index, Under-segmentation Error, and the Bhattacharyya distance against ground truth data. The Silhouette score and Dunn Index are also used to quantitatively evaluate the efficacy of the proposed clustering technique.
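    A minimal sketch of the SLIC-plus-merging step follows: superpixels are generated, a mean Lab color is computed per superpixel, and adjacent superpixels whose CIEDE2000 difference falls below a threshold are merged with a small union-find. The threshold and the 4-neighbour adjacency handling are illustrative choices rather than the paper's exact merging rule, and the subsequent K-means color clustering stage is omitted.

```python
# Hedged sketch: SLIC superpixels merged by CIEDE2000 (Delta E 2000) colour difference.
# `threshold` is an illustrative value, not the paper's tuned setting.
import numpy as np
from skimage.color import rgb2lab, deltaE_ciede2000
from skimage.segmentation import slic

def merge_similar_superpixels(rgb, n_segments=300, threshold=8.0):
    labels = slic(rgb, n_segments=n_segments, start_label=0)
    lab = rgb2lab(rgb)
    n = labels.max() + 1
    means = np.array([lab[labels == i].mean(axis=0) for i in range(n)])  # mean Lab per superpixel

    pairs = set()                                         # 4-neighbour superpixel adjacency
    for a, b in ((labels[:, :-1], labels[:, 1:]), (labels[:-1, :], labels[1:, :])):
        pairs |= {tuple(sorted(p)) for p in zip(a.ravel(), b.ravel()) if p[0] != p[1]}

    parent = list(range(n))                               # union-find over superpixel ids
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in pairs:
        if deltaE_ciede2000(means[i], means[j]) < threshold:
            parent[find(i)] = find(j)                     # merge perceptually similar neighbours
    return np.vectorize(find)(labels)                     # merged label map (ids not consecutive)
```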

    Uncertainty-Aware Organ Classification for Surgical Data Science Applications in Laparoscopy

    Objective: Surgical data science is evolving into a research field that aims to observe everything occurring within and around the treatment process to provide situation-aware, data-driven assistance. In the context of endoscopic video analysis, the accurate classification of organs in the field of view of the camera poses a technical challenge. Herein, we propose a new approach to anatomical structure classification and image tagging that features an intrinsic measure of confidence to estimate its own performance with high reliability and which can be applied to both RGB and multispectral imaging (MI) data. Methods: Organ recognition is performed using a superpixel classification strategy based on textural and reflectance information. Classification confidence is estimated by analyzing the dispersion of class probabilities. Assessment of the proposed technology is performed through a comprehensive in vivo study with seven pigs. Results: When applied to image tagging, mean accuracy in our experiments increased from 65% (RGB) and 80% (MI) to 90% (RGB) and 96% (MI) with the confidence measure. Conclusion: Results showed that the confidence measure had a significant influence on classification accuracy, and MI data are better suited for anatomical structure labeling than RGB data. Significance: This work significantly enhances the state of the art in automatic labeling of endoscopic videos by introducing the confidence metric and by being the first study to use MI data for in vivo laparoscopic tissue classification. The data of our experiments will be released as the first in vivo MI dataset upon publication of this paper.
    Comment: 7 pages, 6 images, 2 tables
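    The confidence idea can be sketched as follows: per-superpixel class probabilities are reduced to a scalar confidence (here one minus the normalized entropy of the probability vector), and only sufficiently confident superpixels contribute to the image tag. The exact dispersion measure and rejection threshold used in the paper may differ; these choices are illustrative.

```python
# Hedged sketch of probability-dispersion confidence and confidence-gated image tagging.
# The entropy-based measure and the 0.7 threshold are illustrative, not the paper's values.
import numpy as np

def confidence_from_probs(probs, eps=1e-12):
    """probs: (n_superpixels, n_classes), rows sum to 1. Returns confidence in [0, 1]."""
    p = np.clip(probs, eps, 1.0)
    entropy = -(p * np.log(p)).sum(axis=1)
    return 1.0 - entropy / np.log(p.shape[1])             # 1 = certain, 0 = uniform

def tag_image(probs, class_names, min_confidence=0.7):
    """Majority vote over confident superpixels; returns (tag, mean confidence)."""
    conf = confidence_from_probs(probs)
    votes = probs.argmax(axis=1)[conf >= min_confidence]
    if votes.size == 0:
        return None, conf.mean()                          # nothing confident enough to tag
    return class_names[np.bincount(votes).argmax()], conf.mean()
```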