
    A new mesh visual quality metric using saliency weighting-based pooling strategy

    © 2018 Elsevier Inc. Several metrics have been proposed over the last decade to assess the visual quality of 3D triangular meshes. In this paper, we propose a mesh visual quality metric that integrates mesh saliency into mesh visual quality assessment. We use the Tensor-based Perceptual Distance Measure metric to estimate the local distortions of the mesh, and pool the local distortions into a quality score using a saliency weighting-based pooling strategy. Three well-known mesh saliency detection methods are used to demonstrate the superiority and effectiveness of our metric. Experimental results show that our metric with any of the three saliency maps performs better than state-of-the-art metrics on the LIRIS/EPFL general-purpose database. We also generate a synthetic saliency map by assembling salient regions from the individual saliency maps. Experimental results reveal that the synthetic saliency map achieves better performance than the individual saliency maps, and that the performance gain is closely correlated with the similarity between the individual saliency maps.
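    The pooling step described above can be illustrated with a minimal sketch: per-vertex distortion values are weighted by a normalized saliency map and summed into a single score. The function name and the toy arrays below are assumptions for illustration, not the authors' implementation.

        import numpy as np

        def saliency_weighted_pooling(local_distortions, saliency):
            # Normalize saliency so the weights sum to one, then take the
            # weighted average of the per-vertex distortion values.
            weights = saliency / (saliency.sum() + 1e-12)
            return float(np.dot(weights, local_distortions))

        # Hypothetical per-vertex distortions and saliency for a tiny mesh
        distortions = np.array([0.10, 0.40, 0.05, 0.30])
        saliency = np.array([0.9, 0.2, 0.1, 0.8])
        score = saliency_weighted_pooling(distortions, saliency)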

    Recent Advances in Signal Processing

    Signal processing is a critical component of most new technological developments and poses challenges in a wide variety of applications across science and engineering. Classical signal processing techniques have largely worked with mathematical models that are linear, local, stationary, and Gaussian, and have always favored closed-form tractability over real-world accuracy; these constraints were imposed by the lack of powerful computing tools. During the last few decades, signal processing theories, developments, and applications have matured rapidly and now include tools from many areas of mathematics, computer science, physics, and engineering. This book is targeted primarily toward students and researchers who want to be exposed to a wide variety of signal processing techniques and algorithms. It includes 27 chapters that can be categorized into five different areas depending on the application at hand, ordered to address image processing, speech processing, communication systems, time-series analysis, and educational packages respectively. The book has the advantage of providing a collection of applications that are completely independent and self-contained; the interested reader can therefore choose any chapter and skip to another without losing continuity.

    Perceptual Quality Evaluation of 3D Triangle Mesh: A Technical Review

    © 2018 IEEE. During mesh processing operations (e.g. simplification, compression, and watermarking), a 3D triangle mesh is subject to various visible distortions of its surface, which creates a need to estimate visual quality. The necessity of perceptual quality evaluation is well established since, in most cases, human beings are the end users of 3D meshes. Metrics that combine geometric measures with models of the human visual system (HVS) to measure such distortions are called perceptual quality metrics. In this paper, we conduct an extensive study of 3D mesh quality evaluation, focusing mainly on recently proposed perceptual metrics. We limit our study to the evaluation of greyscale static meshes and attempt to identify the most workable method for real-time evaluation through a quantitative comparison. This paper also discusses in detail how to evaluate an objective metric's performance against existing subjective databases. We likewise investigate the use of the psychometric function to remove the non-linearity between subjective and objective values. Finally, we compare a selection of quality metrics and show that curvature tensor based quality metrics predict the most consistent results in terms of correlation.
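    The psychometric mapping mentioned above is commonly realized as a five-parameter logistic function fitted between objective scores and mean opinion scores before computing correlations. The sketch below shows this generic procedure with hypothetical data; it is one standard approach, not necessarily the survey's exact protocol.

        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.stats import pearsonr

        def logistic_map(x, b1, b2, b3, b4, b5):
            # Five-parameter logistic often used to remove the non-linearity
            # between objective scores and subjective ratings.
            return b1 * (0.5 - 1.0 / (1.0 + np.exp(b2 * (x - b3)))) + b4 * x + b5

        # Hypothetical objective scores and mean opinion scores (MOS)
        obj = np.array([0.05, 0.15, 0.30, 0.45, 0.55, 0.70, 0.85, 0.95])
        mos = np.array([1.1, 1.8, 2.4, 3.0, 3.3, 3.9, 4.4, 4.7])

        p0 = [np.ptp(mos), 1.0, float(np.mean(obj)), 1.0, float(np.mean(mos))]
        params, _ = curve_fit(logistic_map, obj, mos, p0=p0, maxfev=10000)
        plcc, _ = pearsonr(logistic_map(obj, *params), mos)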

    Adaptive CSLBP compressed image hashing

    Hashing is a popular technique for image authentication: it identifies malicious attacks while allowing appearance changes to an image in a controlled way. Image hashing is a quality summarization of images, which implies extracting and representing powerful low-level features in a compact form. The proposed adaptive CSLBP compressed hashing method uses a modified CSLBP (Center Symmetric Local Binary Pattern) as the basic texture extraction method, together with a color weight factor derived from the L*a*b* color space. The image hash is generated from image texture. Color weight factors are applied adaptively, in average and difference forms, to enhance the discrimination capability of the hash: for smooth regions, color averaging is used, while for non-smooth regions, color differencing is used. The adaptive CSLBP histogram is a compressed form of CSLBP whose quality is improved by the adaptive color weight factor. Experimental results are demonstrated with two benchmarks, normalized Hamming distance and ROC characteristics. The proposed method successfully differentiates between content-changing and content-preserving modifications for color images.
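    For reference, a plain (unmodified) CSLBP descriptor can be sketched as follows: each interior pixel compares its four center-symmetric neighbor pairs, producing a 4-bit code whose 16-bin histogram serves as a compact texture summary. The threshold T and the helper names are illustrative; the paper's modified CSLBP and adaptive color weight factors are not reproduced here.

        import numpy as np

        def cslbp(img, T=0.01):
            # Compare the four center-symmetric neighbor pairs of every
            # interior pixel; each comparison contributes one bit (0..15).
            img = img.astype(np.float64)
            n, s = img[:-2, 1:-1], img[2:, 1:-1]
            e, w = img[1:-1, 2:], img[1:-1, :-2]
            ne, sw = img[:-2, 2:], img[2:, :-2]
            se, nw = img[2:, 2:], img[:-2, :-2]
            return ((n - s > T).astype(np.uint8)
                    | ((ne - sw > T).astype(np.uint8) << 1)
                    | ((e - w > T).astype(np.uint8) << 2)
                    | ((se - nw > T).astype(np.uint8) << 3))

        def cslbp_histogram(img):
            # 16-bin normalized histogram used as a compact texture descriptor
            h = np.bincount(cslbp(img).ravel(), minlength=16).astype(np.float64)
            return h / h.sum()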

    Multisensor Concealed Weapon Detection Using the Image Fusion Approach

    Detection of concealed weapons is an increasingly important problem for both the military and police, since global terrorism and crime have grown as threats over the years. This work presents two image fusion algorithms, one at pixel level and one at feature level, for efficient concealed weapon detection. Both algorithms are based on the double-density dual-tree complex wavelet transform (DDDTCWT). In the pixel-level fusion scheme, the fusion of low-frequency band coefficients is determined by local contrast, while the high-frequency band fusion rule is developed with consideration of both the texture features of the human visual system (HVS) and a local energy basis. In the feature-level fusion algorithm, features are extracted using a Gaussian mixture model (GMM) based multiscale segmentation approach, and the fusion rules are developed based on region activity measurement. Experimental results demonstrate the robustness and efficiency of the proposed algorithms.
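    A minimal version of the local-energy rule for high-frequency subbands can be sketched as below: at each position, keep the coefficient from whichever input has the larger local energy. This is a generic max-energy rule on two arbitrary subband arrays, not the paper's DDDTCWT pipeline or its HVS-weighted variant.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def fuse_highpass(c1, c2, window=3):
            # Local energy = windowed mean of squared coefficients; select
            # per-position the coefficient from the more energetic source.
            e1 = uniform_filter(c1 ** 2, size=window)
            e2 = uniform_filter(c2 ** 2, size=window)
            return np.where(e1 >= e2, c1, c2)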

    Signal processing algorithms for enhanced image fusion performance and assessment

    This dissertation presents several signal processing algorithms for image fusion in noisy multimodal conditions. It introduces a novel image fusion method which performs well for image sets heavily corrupted by noise. As opposed to current image fusion schemes, the method requires no a priori knowledge of the noise component. The image is decomposed with Chebyshev polynomials (CP) used as basis functions to perform fusion at the feature level. The properties of CP, namely fast convergence and smooth approximation, render it ideal for heuristic and indiscriminate denoising fusion tasks. Quantitative evaluation using objective fusion assessment methods shows favourable performance of the proposed scheme compared to previous efforts on image fusion, notably on heavily corrupted images. The approach is further improved by combining the advantages of CP with a state-of-the-art fusion technique, independent component analysis (ICA), for joint fusion processing based on region saliency. Whilst CP fusion is robust under severe noise conditions, it is prone to eliminating high-frequency information from the images involved, thereby limiting image sharpness. Fusion using ICA, on the other hand, performs well in transferring edges and other salient features of the input images into the composite output. The combination of both methods, coupled with several mathematical morphological operations in an algorithm fusion framework, is considered a viable solution; according to the quantitative metrics, the results of our proposed approach are very encouraging as far as joint fusion and denoising are concerned. Another focus of this dissertation is a novel metric for image fusion evaluation that is based on texture. The conservation of background textural detail is considered important in many fusion applications, as it helps define the image depth and structure, which may prove crucial in many surveillance and remote sensing applications. Our work aims to evaluate the performance of image fusion algorithms based on their ability to retain textural details through the fusion process. This is done by utilising the gray-level co-occurrence matrix (GLCM) model to extract second-order statistical features for the derivation of an image textural measure, which is then used to replace the edge-based calculations in an objective fusion metric. Performance evaluation on established fusion methods verifies that the proposed metric is viable, especially for multimodal scenarios.
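    The GLCM-based texture measure can be illustrated with a short sketch that extracts second-order statistics from a gray-level co-occurrence matrix. The quantization level, distances, and the choice of contrast and homogeneity below are assumptions; the dissertation's full edge-replacement fusion metric is not reproduced.

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        def glcm_texture_score(img, levels=32):
            # Quantize a [0, 1] image, build a normalized symmetric GLCM,
            # and average two common second-order texture statistics.
            q = (img * (levels - 1)).astype(np.uint8)
            glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                                levels=levels, symmetric=True, normed=True)
            return {"contrast": float(graycoprops(glcm, "contrast").mean()),
                    "homogeneity": float(graycoprops(glcm, "homogeneity").mean())}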

    CONTENT BASED IMAGE RETRIEVAL (CBIR) SYSTEM

    Advancement in hardware and telecommunication technology has boosted the creation and distribution of digital visual content. However, this rapid growth of visual content creation has not been matched by the simultaneous emergence of technologies to support efficient image analysis and retrieval. Although there have been attempts to solve this problem using meta-data text annotation, this approach is not practical for large data collections. This system uses 7 different feature vectors covering 3 main low-level feature groups (color, shape, and texture). The system takes an image supplied by the user and searches the database for images with similar features, subject to a threshold value. One of the most important aspects of CBIR is determining the correct threshold value: setting it too low will result in fewer images being retrieved, which might exclude relevant data, while setting it too high might cause irrelevant data to be retrieved and increase the search time. Results show that this project is able to achieve an average retrieval accuracy of 70% by combining the 7 different feature vectors at the correct threshold value.
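    The threshold-based retrieval step can be sketched in a few lines, assuming Euclidean distance between feature vectors; the combination of the system's 7 feature vectors is not reproduced here.

        import numpy as np

        def retrieve(query_feat, db_feats, threshold):
            # Return indices of database images whose feature distance to
            # the query falls below the threshold (smaller = more similar).
            dists = np.linalg.norm(db_feats - query_feat, axis=1)
            return np.flatnonzero(dists < threshold)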

    The application of visual saliency models in objective image quality assessment: a statistical evaluation

    Advances in image quality assessment have shown the potential added value of including visual attention aspects in objective assessment. Numerous models of visual saliency have been implemented and integrated into different image quality metrics (IQMs), but the gain in reliability of the resulting IQMs varies to a large extent. Understanding the causes and trends of this variation would be highly beneficial for the further improvement of IQMs, but they are not yet fully understood. In this paper, an exhaustive statistical evaluation is conducted to justify the added value of computational saliency in objective image quality assessment, using 20 state-of-the-art saliency models and 12 best-known IQMs. Quantitative results show that the difference in predicting human fixations between saliency models is sufficient to yield a significant difference in performance gain when adding these saliency models to IQMs. Surprisingly, however, the extent to which an IQM can profit from adding a saliency model does not appear to be directly related to how well that saliency model predicts human fixations. Our statistical analysis provides useful guidance for applying saliency models in IQMs, in terms of the effects of saliency model dependence, IQM dependence, and image distortion dependence. The testbed and software are made publicly available to the research community.
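    A typical way of adding a saliency model to an IQM, as studied above, is to weight the metric's local quality map by the saliency map before pooling. The sketch below does this for SSIM with a user-supplied saliency map; it is one plausible integration scheme, not the specific combinations evaluated in the paper.

        import numpy as np
        from skimage.metrics import structural_similarity

        def saliency_weighted_ssim(ref, dist, saliency):
            # Compute the local SSIM map, then pool it with weights
            # proportional to the saliency map.
            _, ssim_map = structural_similarity(ref, dist, full=True,
                                                data_range=1.0)
            w = saliency / (saliency.sum() + 1e-12)
            return float(np.sum(w * ssim_map))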

    Purkinje images: Conveying different content for different luminance adaptations in a single image

    Providing multiple meanings in a single piece of art has always intrigued both artists and observers. We present Purkinje images, which have different interpretations depending on the luminance adaptation of the observer. Finding such images is an optimization that minimizes the sum of the distance to one reference image in photopic conditions and the distance to another reference image in scotopic conditions. To model the shift of image perception between day and night vision, we decompose the input images into a Laplacian pyramid. Distances under different observation conditions in this representation are independent between pyramid levels and pixel positions, and become matrix multiplications. The optimal pixel colour can be found by inverting a small, per-pixel linear system in real time on a GPU. Finally, two user studies analyze our results in terms of recognition performance and fidelity with respect to the reference images. © 2014 The Eurographics Association and John Wiley & Sons Ltd.
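    Because the distances decouple per pixel, the optimal colour has a closed-form least-squares solution. The sketch below assumes hypothetical 3x3 matrices A_p and A_s mapping a pixel colour to its photopic and scotopic responses, with targets t_p and t_s taken from the two reference images; it illustrates the normal equations, not the paper's GPU implementation.

        import numpy as np

        def optimal_pixel(A_p, A_s, t_p, t_s):
            # Minimize |A_p x - t_p|^2 + |A_s x - t_s|^2 by solving the
            # normal equations of the stacked least-squares problem.
            lhs = A_p.T @ A_p + A_s.T @ A_s
            rhs = A_p.T @ t_p + A_s.T @ t_s
            return np.linalg.solve(lhs, rhs)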