
    GLCM-based chi-square histogram distance for automatic detection of defects on patterned textures

    Chi-square histogram distance is one of the distance measures that can be used to quantify the dissimilarity between two histograms. Motivated by the fact that texture discrimination in the human visual system is based on second-order statistics, we make use of the histogram of the gray-level co-occurrence matrix (GLCM), which is based on second-order statistics, and propose a new machine vision algorithm for automatic defect detection on patterned textures. Input defective images are split into several periodic blocks, and GLCMs are computed after quantizing the gray levels from 0-255 to 0-63 to keep the size of the GLCM compact and to reduce computation time. A dissimilarity matrix derived from the chi-square distances between the GLCMs is subjected to hierarchical clustering to automatically identify defective and defect-free blocks. The effectiveness of the proposed method is demonstrated through experiments on defective real fabric images from two major wallpaper groups (pmm and p4m).
    Comment: IJCVR, Vol. 2, No. 4, 2011, pp. 302-31
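    The core building blocks of this pipeline (a per-block GLCM on quantized gray levels, compared by chi-square distance) can be sketched as follows. This is a minimal illustration under assumed conventions (a single pixel offset, symmetric normalization to a histogram); it is not the paper's exact implementation, and all names are hypothetical:

    ```python
    import numpy as np

    def glcm(block, levels=64, offset=(0, 1)):
        """Gray-level co-occurrence matrix for one image block.

        Gray values are assumed to be already quantized to 0..levels-1,
        mirroring the paper's 0-255 -> 0-63 reduction. A single offset
        (here: the pixel to the right) is an illustrative choice.
        """
        dr, dc = offset
        rows, cols = block.shape
        m = np.zeros((levels, levels))
        for r in range(rows - dr):
            for c in range(cols - dc):
                m[block[r, c], block[r + dr, c + dc]] += 1.0
        return m / m.sum()  # normalize so the GLCM is a histogram

    def chi_square_distance(h1, h2, eps=1e-12):
        """Chi-square distance between two normalized histograms."""
        return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))
    ```

    The resulting pairwise chi-square distances between block GLCMs form the dissimilarity matrix that the abstract feeds into hierarchical clustering (e.g. `scipy.cluster.hierarchy.linkage` accepts such a matrix in condensed form).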

    Model selection in High-Dimensions: A Quadratic-risk based approach

    In this article we propose a general class of risk measures which can be used for data-based evaluation of parametric models. The loss function is defined as a generalized quadratic distance between the true density and the proposed model. These distances are characterized by a simple quadratic-form structure that is adaptable through the choice of a nonnegative definite kernel and a bandwidth parameter. Using asymptotic results for the quadratic distances, we build a quick-to-compute approximation for the risk function. Its derivation is analogous to the Akaike Information Criterion (AIC), but unlike AIC, the quadratic risk is a global comparison tool. The method does not require resampling, a great advantage when point estimators are expensive to compute. The method is illustrated using the problem of selecting the number of components in a mixture model, where it is shown that, by using an appropriate kernel, the method is computationally straightforward in arbitrarily high data dimensions. In this same context it is shown that the method has some clear advantages over AIC and BIC.
    Comment: Updated with reviewer suggestion
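    A kernel quadratic distance between two distributions can be estimated from samples with a simple plug-in V-statistic. The sketch below assumes a Gaussian product kernel with bandwidth h; it illustrates only the quadratic-form structure the abstract describes, not the paper's risk approximation (which adds penalty terms not reproduced here), and all names are hypothetical:

    ```python
    import numpy as np

    def gaussian_kernel(x, y, h=1.0):
        """Gaussian product kernel K_h evaluated between all pairs;
        x is (n, d), y is (m, d); returns an (n, m) matrix."""
        d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-0.5 * d2 / h ** 2)

    def quadratic_distance(x, y, h=1.0):
        """Plug-in (V-statistic) estimate of the kernel quadratic
        distance between the empirical laws of samples x and y.
        Nonnegative for a positive-definite kernel such as the
        Gaussian; zero when the two samples coincide."""
        kxx = gaussian_kernel(x, x, h).mean()
        kyy = gaussian_kernel(y, y, h).mean()
        kxy = gaussian_kernel(x, y, h).mean()
        return kxx + kyy - 2.0 * kxy
    ```

    In a model-selection setting, `y` would be replaced by draws from (or an exact integral against) each candidate fitted model, and the bandwidth h plays the smoothing role described in the abstract.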

    Application of the Iterated Weighted Least-Squares Fit to counting experiments

    Least-squares fits are an important tool in many data analysis applications. In this paper, we review theoretical results which are relevant for their application to data from counting experiments. Using a simple example, we illustrate the well-known fact that commonly used variants of the least-squares fit applied to Poisson-distributed data produce biased estimates. The bias can be overcome with an iterated weighted least-squares method, which produces results identical to the maximum-likelihood method. For linear models, the iterated weighted least-squares method converges faster than the equivalent maximum-likelihood method, and does not require problem-specific starting values, which may be a practical advantage. The equivalence of both methods also holds for binomially distributed data. We further show that the unbinned maximum-likelihood method can be derived as a limiting case of the iterated least-squares fit when the bin width goes to zero, which demonstrates a deep connection between the two methods.
    Comment: Accepted by NIM
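    The iteration described above can be sketched for a linear Poisson model mu = A @ theta: each pass solves a weighted least-squares problem with weights 1/mu taken from the previous iterate (the Poisson variance equals the mean), and the fixed point satisfies the maximum-likelihood score equation. This is a minimal sketch, assuming an ordinary least-squares starting point and a simple convergence criterion; it is not the paper's code:

    ```python
    import numpy as np

    def iwls_poisson(A, n, tol=1e-10, max_iter=200):
        """Iterated weighted least-squares fit of counts n to the
        linear model mu = A @ theta.

        Each iteration solves (A^T W A) theta = A^T W n with
        W = diag(1 / mu) evaluated at the previous iterate. At the
        fixed point, A^T (n / mu - 1) = 0, which is exactly the
        Poisson maximum-likelihood score equation for this model.
        """
        theta = np.linalg.lstsq(A, n, rcond=None)[0]  # OLS start
        for _ in range(max_iter):
            mu = np.clip(A @ theta, 1e-9, None)  # keep weights finite
            Aw = A / mu[:, None]                 # W A with W = diag(1/mu)
            new = np.linalg.solve(A.T @ Aw, Aw.T @ n)
            if np.max(np.abs(new - theta)) < tol:
                return new
            theta = new
        return theta
    ```

    When the data happen to equal the model prediction exactly, the very first weighted solve already returns the true parameters, consistent with the abstract's remark that the linear case converges quickly.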