
    Object Discovery via Cohesion Measurement

    Color and intensity are two important components of an image. Groups of image pixels that are similar in color or intensity usually form an informative representation of an object, which makes them particularly suitable for computer vision tasks such as saliency detection and object proposal generation. However, image pixels that share a similar real-world color may appear quite different, since colors are often distorted by intensity. In this paper, we reinvestigate the affinity matrices originally used in image segmentation methods based on spectral clustering. A new affinity matrix, which is robust to color distortion, is formulated for object discovery, and a Cohesion Measurement (CM) for object regions is derived from it. Based on this Cohesion Measurement, a novel object discovery method is proposed that discovers objects latent in an image by utilizing the eigenvectors of the affinity matrix. We then apply the proposed method to both saliency detection and object proposal generation. Experimental results on several evaluation benchmarks demonstrate that the proposed CM-based method achieves promising performance on both tasks. (14 pages, 14 figures)
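    To make the eigenvector-based pipeline described above concrete, here is a minimal sketch of region discovery over per-superpixel color features. It uses a standard Gaussian affinity and normalized Laplacian as stand-ins for the paper's distortion-robust affinity and Cohesion Measurement, which are not reproduced here; the function name `discover_regions`, the parameters `sigma` and `k`, and the superpixel-feature input are all illustrative assumptions.

```python
import numpy as np

def discover_regions(features, sigma=0.2, k=3):
    """features: (N, D) array of per-superpixel color features (illustrative)."""
    # Pairwise squared distances between feature vectors.
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    # Gaussian affinity; the paper derives a color-distortion-robust variant instead.
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    # Symmetrically normalized graph Laplacian, as in spectral clustering.
    deg = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    L = np.eye(len(W)) - D_inv_sqrt @ W @ D_inv_sqrt
    # Eigenvectors with the smallest eigenvalues group cohesive regions.
    vals, vecs = np.linalg.eigh(L)
    return vecs[:, :k]  # per-superpixel embedding; cluster or threshold it
```

    In practice the returned embedding would be clustered (e.g. k-means) or thresholded to produce candidate object regions or saliency scores.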

    Defect Detection for Patterned Fabric Images Based on GHOG and Low-Rank Decomposition

    In contrast to defect-free fabric images, which have macro-homogeneous textures and regular patterns, fabric images with defects are characterized by defect regions that are salient and sparse against the redundant background. As an effective tool for separating an image into a redundant part (the background) and a sparse part (the defect), the low-rank decomposition model therefore provides an ideal solution for patterned fabric defect detection. In this paper, a novel method for patterned fabric defect detection is proposed based on a new texture descriptor and the low-rank decomposition model. First, an efficient second-order orientation-aware descriptor, denoted GHOG, is designed by combining Gabor features and the histogram of oriented gradients (HOG). In addition, a spatial pooling strategy based on the human visual mechanism is utilized to further improve the discrimination ability of the descriptor. The proposed texture descriptor makes the defect-free image blocks lie in a low-rank subspace, while the defective image blocks deviate from this subspace. Then, a low-rank decomposition model divides the feature matrix generated from all image blocks into a low-rank part, which represents the defect-free background, and a sparse part, which represents the sparse defects. A non-convex log-det function is used as a smooth surrogate for the rank to improve the efficiency of the low-rank model. Finally, the defects are localized by segmenting the saliency map generated from the sparse matrix. Qualitative results and quantitative evaluations demonstrate that the proposed method improves detection accuracy and self-adaptivity compared with state-of-the-art methods.
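    The core separation step described above can be illustrated with a generic low-rank plus sparse decomposition. The sketch below uses standard convex Robust PCA solved by an inexact augmented Lagrangian loop (singular value thresholding plus elementwise soft thresholding) rather than the paper's non-convex log-det surrogate; the function names, parameter defaults, and iteration count are assumptions for illustration only.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding (proximal operator of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(M, tau):
    """Elementwise soft thresholding (proximal operator of the l1 norm)."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def rpca(F, n_iter=100):
    """Split feature matrix F into low-rank L (background) + sparse S (defects)."""
    m, n = F.shape
    lam = 1.0 / np.sqrt(max(m, n))                 # standard RPCA sparsity weight
    mu = 0.25 * m * n / (np.abs(F).sum() + 1e-12)  # common step-size heuristic
    L = np.zeros_like(F, dtype=float)
    S = np.zeros_like(F, dtype=float)
    Y = np.zeros_like(F, dtype=float)              # Lagrange multipliers
    for _ in range(n_iter):
        L = svt(F - S + Y / mu, 1.0 / mu)
        S = soft(F - L + Y / mu, lam / mu)
        Y = Y + mu * (F - L - S)
    return L, S
```

    Here F would hold one GHOG feature vector per image block; the sparse part S is then reshaped into a saliency map and segmented to localize defects.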

    Multi-focus image fusion using maximum symmetric surround saliency detection

    In digital photography, two or more objects in a scene cannot be focused at the same time: if we focus on one object, we may lose information about the others, and vice versa. Multi-focus image fusion is the process of generating an all-in-focus image from several out-of-focus images. In this paper, we propose a new multi-focus image fusion method based on two-scale image decomposition and saliency detection using the maximum symmetric surround. This approach is beneficial because the saliency map can highlight the salient information in the source images with well-defined boundaries. A weight map construction method based on the saliency information is developed, and the resulting weight map identifies the focused and defocused regions in each image well. We then implement a fusion algorithm based on the weight map that integrates only focused-region information into the fused image. Unlike multi-scale image fusion methods, a two-scale image decomposition is sufficient here, so the method is computationally efficient. The proposed method is tested on several multi-focus image datasets and compared with traditional and recently proposed fusion methods using various fusion metrics. The results show that the proposed method gives stable and promising performance compared with existing methods.
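    As a rough illustration of the selection idea, the sketch below computes a grayscale maximum-symmetric-surround saliency map with an integral image and fuses two source images by picking, per pixel, the one whose saliency is higher. The paper's two-scale base/detail decomposition and weight-map refinement are omitted for brevity, and all names and the hard per-pixel selection rule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def mss_saliency(gray):
    """Maximum-symmetric-surround saliency (grayscale sketch of the idea)."""
    gray = gray.astype(float)
    h, w = gray.shape
    ii = np.pad(gray, ((1, 0), (1, 0))).cumsum(0).cumsum(1)  # integral image
    sal = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            # Largest symmetric window around (y, x) that fits inside the image.
            oy, ox = min(y, h - 1 - y), min(x, w - 1 - x)
            y0, y1, x0, x1 = y - oy, y + oy + 1, x - ox, x + ox + 1
            area = (y1 - y0) * (x1 - x0)
            mean = (ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]) / area
            sal[y, x] = (gray[y, x] - mean) ** 2  # contrast to symmetric surround
    return sal

def fuse(img_a, img_b):
    """Per pixel, keep the source whose saliency (focus measure) is larger."""
    keep_a = mss_saliency(img_a) >= mss_saliency(img_b)
    return np.where(keep_a, img_a, img_b)
```

    A smoother result would weight the base and detail layers of each source by a blurred version of this binary map, along the lines of the two-scale scheme the abstract describes.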

    Salient Frame Detection for Molecular Dynamics Simulations

    Recent advances in sophisticated computational techniques have facilitated the simulation of incredibly detailed time-varying trajectories and, in the process, have generated vast quantities of simulation data. The current tools to analyze and comprehend large-scale time-varying data, however, lag far behind our ability to produce such data. Saliency-based analysis can be applied to time-varying 3D datasets for the purposes of summarization, abstraction, and motion analysis. As the sizes of time-varying datasets continue to grow, it becomes more and more difficult to comprehend vast amounts of data and information in a short period of time. In this paper, we use eigenanalysis to generate orthogonal basis functions over sliding windows to characterize regions of unusual deviations and significant trends. Our results show that motion subspaces provide an effective technique for summarizing large molecular dynamics trajectories.
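    A plain sliding-window eigenanalysis along the lines sketched above might look like the following: each frame is scored by its reconstruction error against the PCA motion subspace of the preceding window, so frames that deviate from recent motion score as salient. This is a generic sketch, not the paper's exact formulation; the window length, subspace dimension, and flattened-coordinate input format are assumptions.

```python
import numpy as np

def frame_saliency(traj, window=20, k=5):
    """traj: (T, 3N) array of flattened atomic coordinates, one row per frame.
    Returns one saliency score per frame; the first `window` frames score 0."""
    T = traj.shape[0]
    scores = np.zeros(T)
    for t in range(window, T):
        X = traj[t - window:t]
        mean = X.mean(axis=0)
        # Orthogonal basis of the window's dominant motion subspace (via SVD).
        U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
        basis = Vt[:k]                              # (k, 3N) principal directions
        resid = traj[t] - mean
        proj = basis.T @ (basis @ resid)            # projection onto the subspace
        scores[t] = np.linalg.norm(resid - proj)    # deviation from recent motion
    return scores
```

    Frames with the highest scores would then be selected as the salient frames that summarize the trajectory.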