A Statistical Modeling Approach to Computer-Aided Quantification of Dental Biofilm
Biofilm is a formation of microbial material on tooth substrata. Several
methods to quantify dental biofilm coverage have recently been reported in the
literature, but at best they provide a semi-automated approach to
quantification, with significant input from a human grader that carries the
grader's bias about what constitutes foreground, background, biofilm, and tooth.
Additionally, human assessment indices limit the resolution of the
quantification scale; most commercial scales use five levels of quantification
for biofilm coverage (0%, 25%, 50%, 75%, and 100%). On the other hand, current
state-of-the-art techniques in automatic plaque quantification fail to make
their way into practical applications owing to their inability to incorporate
human input to handle misclassifications. This paper proposes a new interactive
method for biofilm quantification in quantitative light-induced fluorescence
(QLF) images of canine teeth that is independent of the perceptual bias of the
grader. The method partitions a QLF image into segments of uniform texture and
intensity called superpixels; every superpixel is statistically modeled as a
realization of a single 2D Gaussian Markov random field (GMRF) whose parameters
are estimated; the superpixel is then assigned to one of three classes
(background, biofilm, tooth substratum) based on the training set of data. The
quantification results show a high degree of consistency and precision. At the
same time, the proposed method gives pathologists full control to post-process
the automatic quantification by flipping misclassified superpixels to a
different state (background, tooth, biofilm) with a single click, providing
greater usability than simply marking the boundaries of biofilm and tooth as
done by current state-of-the-art methods.Comment: 10 pages, 7 figures, Journal of Biomedical and Health Informatics
2014. keywords: {Biomedical imaging;Calibration;Dentistry;Estimation;Image
segmentation;Manuals;Teeth},
http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6758338&isnumber=636350
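The pipeline described in the abstract above (superpixels modeled as GMRF realizations, then classified against training data) can be sketched in miniature. The fragment below is a hedged simplification, not the paper's implementation: it fits a single isotropic first-order GMRF coefficient per patch by least squares (the paper estimates a full 2D GMRF per superpixel) and assigns the patch to the nearest class in feature space. All names (`gmrf_params`, `classify`) are illustrative.

```python
import numpy as np

def gmrf_params(patch):
    """Estimate a simplified first-order GMRF feature vector for a patch:
    one isotropic interaction coefficient fitted by least squares
    (x[i,j] ~ theta * sum of the 4 neighbours), the residual variance,
    and the patch mean intensity."""
    x = np.asarray(patch, dtype=float)
    inner = x[1:-1, 1:-1].ravel()
    nbr = (x[1:-1, :-2] + x[1:-1, 2:] + x[:-2, 1:-1] + x[2:, 1:-1]).ravel()
    theta, *_ = np.linalg.lstsq(nbr[:, None], inner, rcond=None)
    resid = inner - nbr * theta[0]
    return np.array([theta[0], resid.var(), x.mean()])

def classify(patch, class_means):
    """Assign the patch to the class whose training feature vector is
    nearest in Euclidean distance (a stand-in for the paper's classifier)."""
    f = gmrf_params(patch)
    return min(class_means, key=lambda k: np.linalg.norm(f - class_means[k]))
```

In this toy setup, flipping a misclassified superpixel as the paper describes would simply mean overriding the label returned by `classify` for that region.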
A survey of exemplar-based texture synthesis
Exemplar-based texture synthesis is the process of generating, from an input
sample, new texture images of arbitrary size and which are perceptually
equivalent to the sample. The two main approaches are statistics-based methods
and patch re-arrangement methods. In the first class, a texture is
characterized by a statistical signature; then, a random sampling conditioned
to this signature produces genuinely different texture images. The second class
boils down to a clever "copy-paste" procedure, which stitches together large
regions of the sample. Hybrid methods try to combine ideas from both approaches
to avoid their hurdles. Recent approaches using convolutional neural
networks fit into this classification, some being statistical and others
performing patch re-arrangement in the feature space. They produce impressive
synthesis results on various kinds of textures. Nevertheless, we found that most real
textures are organized at multiple scales, with global structures revealed at
coarse scales and highly varying details at finer ones. Thus, when confronted
with large natural images of textures, the results of state-of-the-art methods
degrade rapidly, and the problem of modeling them remains wide open.

Comment: v2: Added comments and typo fixes. New section added to describe FRAME. New method presented: CNNMR
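The "clever copy-paste" idea behind patch re-arrangement methods can be sketched as follows. This is a minimal, hedged variant in the spirit of patch-based synthesis (raster-order block placement with an SSD overlap criterion and no seam blending), not any specific published algorithm; `synthesize` and its parameters are illustrative.

```python
import numpy as np

def synthesize(sample, out_size, block=8, overlap=2, rng=None):
    """Minimal patch re-arrangement texture synthesis: fill the output in
    raster order, picking for each block the sample patch whose left/top
    overlap strips best match what is already synthesized (SSD score).
    No seam blending is done, so visible seams are expected."""
    if rng is None:
        rng = np.random.default_rng(0)
    sample = np.asarray(sample, dtype=float)
    H, W = sample.shape
    step = block - overlap
    out = np.zeros((out_size, out_size))
    # every block-sized patch of the sample is a candidate
    cand = [sample[i:i + block, j:j + block]
            for i in range(H - block + 1) for j in range(W - block + 1)]
    for y in range(0, out_size - block + 1, step):
        for x in range(0, out_size - block + 1, step):
            if y == 0 and x == 0:
                best = cand[rng.integers(len(cand))]  # seed block at random
            else:
                def ssd(p):
                    s = 0.0
                    if x > 0:  # left overlap strip
                        s += ((p[:, :overlap] - out[y:y + block, x:x + overlap]) ** 2).sum()
                    if y > 0:  # top overlap strip
                        s += ((p[:overlap, :] - out[y:y + overlap, x:x + block]) ** 2).sum()
                    return s
                best = min(cand, key=ssd)
            out[y:y + block, x:x + block] = best
    return out
```

This sketch assumes `out_size` is compatible with the block/overlap stride (otherwise a border strip stays unfilled), and it illustrates exactly the failure mode the survey notes: a greedy local criterion captures fine-scale statistics but nothing about coarse-scale global structure.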
Texture representation using wavelet filterbanks
Texture analysis is a fundamental issue in image analysis and computer vision. While considerable research has been carried out in the texture analysis domain, problems relating to texture representation have been addressed only partially, and active research is continuing. The vast majority of algorithms for texture analysis make either an explicit or implicit assumption that all images are captured under the same measurement conditions, such as orientation and illumination. These assumptions are often unrealistic in many practical applications.

This dissertation addresses the viewpoint-invariance problem in texture classification by introducing a rotated wavelet filterbank. The proposed filterbank, in conjunction with a standard wavelet filterbank, provides greater freedom of orientation tuning for texture analysis. This allows one to obtain texture features that are invariant with respect to texture rotation and linear grayscale transformation. In this study, energy estimates of channel outputs that are commonly used as texture features in texture classification are transformed into a set of viewpoint-invariant features. Texture properties that have a physical connection with human perception are taken into account in the transformation of the energy estimates.

Experiments using natural texture image sets that have been used for evaluating other successful approaches were conducted in order to facilitate comparison. We observe that the proposed feature set outperformed methods proposed by others in the past. A channel selection method is also proposed to minimize the computational complexity and improve performance in a texture segmentation algorithm. Results demonstrating the validity of the approach are presented using experimental ultrasound tendon images.
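The channel-energy features the dissertation builds on can be illustrated with a plain separable Haar decomposition. This sketch is not the proposed rotated filterbank; it only shows how subband energies respond to texture orientation, which is what motivates transforming them into rotation-invariant combinations (for example by sorting or summing paired orientation energies). All names are illustrative.

```python
import numpy as np

def haar2(img):
    """One level of a separable 2-D Haar transform (unnormalised averages
    and differences); returns the approximation band and three detail bands."""
    a, b = img[:, 0::2], img[:, 1::2]
    lo, hi = (a + b) / 2, (a - b) / 2              # filter along columns
    split = lambda m: ((m[0::2] + m[1::2]) / 2, (m[0::2] - m[1::2]) / 2)
    LL, LH = split(lo)
    HL, HH = split(hi)
    return LL, LH, HL, HH

def channel_energies(img, levels=2):
    """Texture features: mean absolute energy of each detail band at each
    level, decomposing the approximation band recursively."""
    img = np.asarray(img, dtype=float)
    feats = []
    for _ in range(levels):
        img, LH, HL, HH = haar2(img)
        feats += [np.abs(LH).mean(), np.abs(HL).mean(), np.abs(HH).mean()]
    return np.array(feats)
```

Rotating a texture by 90 degrees permutes the horizontal and vertical detail energies while leaving their multiset unchanged, which is the kind of structure a rotation-invariant transformation of the energy estimates can exploit.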
Data-driven image color theme enhancement
Proceedings of the 3rd ACM SIGGRAPH Asia 2010, Seoul, South Korea, 15-18 December 2010.

It is often important for designers and photographers to convey or enhance desired color themes in their work. A color theme is typically defined as a template of colors and an associated verbal description. This paper presents a data-driven method for enhancing a desired color theme in an image. We formulate our goal as a unified optimization that simultaneously considers a desired color theme, texture-color relationships, as well as automatic or user-specified color constraints. Quantifying the difference between an image and a color theme is made possible by color mood spaces and a generalization of an additivity relationship for two-color combinations. We incorporate prior knowledge, such as texture-color relationships, extracted from a database of photographs to maintain a natural look of the edited images. Experiments and a user study have confirmed the effectiveness of our method. © 2010 ACM.
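The unified optimization described above balances fidelity to the original colors against agreement with the theme. As a toy illustration (not the paper's actual formulation, which also incorporates texture-color relationships and color mood spaces), a per-pixel quadratic trade-off has a closed-form solution; `shift_toward_theme` and `lam` are hypothetical names.

```python
import numpy as np

def shift_toward_theme(pixels, theme, lam=0.5):
    """Toy per-pixel version of the theme/fidelity trade-off: minimise
    ||c - c0||^2 + lam * ||c - t||^2, where c0 is the original colour and
    t is the nearest theme colour; the minimiser is (c0 + lam*t)/(1+lam)."""
    px = np.asarray(pixels, dtype=float)       # (n, 3) original colours
    th = np.asarray(theme, dtype=float)        # (k, 3) theme template
    d = ((px[:, None, :] - th[None, :, :]) ** 2).sum(axis=-1)
    t = th[d.argmin(axis=1)]                   # nearest theme colour per pixel
    return (px + lam * t) / (1.0 + lam)
```

Larger `lam` pulls the result closer to the theme; `lam = 0` returns the image unchanged, which is the fidelity-only extreme of the trade-off.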