
    A Perceptually Based Comparison of Image Similarity Metrics

    The assessment of how well one image matches another forms a critical component both of models of human visual processing and of many image-analysis systems. Two of the most commonly used norms for quantifying image similarity are L1 and L2, which are specific instances of the Minkowski metric. However, there is often no principled reason for selecting one norm over the other. One way to address this problem is to examine whether one metric captures the perceptual notion of image similarity better than the other. The answer can be used both to draw inferences about the similarity criteria the human visual system uses and to evaluate and design metrics for image-analysis applications. With this goal, we examined perceptual preferences for images retrieved on the basis of the L1 versus the L2 norm. These images were either small fragments without recognizable content, or larger patterns with recognizable content created by vector quantization. In both conditions the participants showed a small but consistent preference for images matched with the L1 metric. These results suggest that, in the domain of natural images of the kind we have used, the L1 metric may better capture human notions of image similarity.
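    For concreteness, the two norms being compared are instances of the Minkowski metric d_p(A, B) = (Σ_i |a_i − b_i|^p)^(1/p), with p = 1 (L1) and p = 2 (L2). The sketch below illustrates the retrieval setup the abstract describes; the helper name and the random toy patches are ours, not the paper's.

```python
import numpy as np

def minkowski_distance(img_a: np.ndarray, img_b: np.ndarray, p: int) -> float:
    """Minkowski distance between two equal-sized grayscale patches.

    p=1 gives the L1 norm (sum of absolute differences);
    p=2 gives the L2 norm (root of the sum of squared differences).
    """
    diff = img_a.astype(np.float64) - img_b.astype(np.float64)
    return float(np.sum(np.abs(diff) ** p) ** (1.0 / p))

# Toy retrieval: find the closest candidate to a query patch under each norm.
rng = np.random.default_rng(0)
query = rng.integers(0, 256, size=(8, 8))
candidates = [rng.integers(0, 256, size=(8, 8)) for _ in range(100)]

best_l1 = min(candidates, key=lambda c: minkowski_distance(query, c, p=1))
best_l2 = min(candidates, key=lambda c: minkowski_distance(query, c, p=2))
```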

    The Unreasonable Effectiveness of Deep Features as a Perceptual Metric

    While it is nearly effortless for humans to quickly assess the perceptual similarity between two images, the underlying processes are thought to be quite complex. Despite this, the most widely used perceptual metrics today, such as PSNR and SSIM, are simple, shallow functions that fail to account for many nuances of human perception. Recently, the deep learning community has found that features of the VGG network trained on ImageNet classification are remarkably useful as a training loss for image synthesis. But how perceptual are these so-called "perceptual losses"? What elements are critical for their success? To answer these questions, we introduce a new dataset of human perceptual similarity judgments. We systematically evaluate deep features across different architectures and tasks and compare them with classic metrics. We find that deep features outperform all previous metrics by large margins on our dataset. More surprisingly, this result is not restricted to ImageNet-trained VGG features, but holds across different deep architectures and levels of supervision (supervised, self-supervised, or even unsupervised). Our results suggest that perceptual similarity is an emergent property shared across deep visual representations. Comment: Accepted to CVPR 2018; code and data available at https://www.github.com/richzhang/PerceptualSimilarity
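    As a usage sketch: the linked repository distributes a pip-installable `lpips` package exposing the learned metric. The snippet below assumes that package (plus PyTorch), with random tensors standing in for real image pairs.

```python
import torch
import lpips  # pip install lpips -- packaged with the linked repository

# 'vgg' selects VGG features; 'alex' and 'squeeze' backbones are also provided.
loss_fn = lpips.LPIPS(net='vgg')

# The metric expects RGB tensors of shape (N, 3, H, W) scaled to [-1, 1];
# random tensors stand in for real images here.
img0 = torch.rand(1, 3, 64, 64) * 2 - 1
img1 = torch.rand(1, 3, 64, 64) * 2 - 1

with torch.no_grad():
    d = loss_fn(img0, img1)  # larger value = perceptually more different
print(d.item())
```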

    Copyright Protection of Color Imaging Using Robust-Encoded Watermarking

    In this paper we present a robust-encoded watermarking method for copyright protection of color images, which is robust against several geometric and signal-processing distortions. The trade-off between payload, robustness, and imperceptibility is an important consideration in the design of any watermarking algorithm. In our proposed scheme, before being embedded into the image, the watermark signal is encoded with a convolutional encoder, which provides forward error correction and thus better robustness. The embedding is then carried out in the discrete cosine transform (DCT) domain of the image, using an image-normalization technique to achieve robustness against geometric and signal-processing distortions. The embedded watermark bits are extracted and decoded with the Viterbi algorithm. To determine the presence or absence of the watermark in the image, we compute the bit error rate (BER) between the recovered and the original watermark data sequences. The quality of the watermarked image is measured with the well-known indices Peak Signal-to-Noise Ratio (PSNR), Visual Information Fidelity (VIF), and Structural Similarity Index (SSIM); the color difference between the watermarked and original images is measured with the Normalized Color Difference (NCD). The experimental results show that the proposed method performs well in terms of imperceptibility and robustness. A comparison between the proposed method and previously reported methods based on different techniques is also provided.
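    A minimal sketch of DCT-domain bit embedding and the BER check described above, using parity quantization of one mid-frequency coefficient per 8x8 block. The coefficient index and quantization step are illustrative assumptions, and the convolutional encoding, Viterbi decoding, and image-normalization stages of the actual scheme are omitted.

```python
import numpy as np
from scipy.fft import dctn, idctn

COEF = (4, 3)   # mid-frequency DCT coefficient (assumed choice)
DELTA = 24.0    # quantization step: robustness vs. visibility trade-off

def embed_bit(block: np.ndarray, bit: int) -> np.ndarray:
    """Embed one bit in an 8x8 block by parity-quantizing a DCT coefficient."""
    c = dctn(block.astype(np.float64), norm='ortho')
    q = int(np.round(c[COEF] / DELTA))
    if q % 2 != bit:        # force the coefficient's parity to encode the bit
        q += 1
    c[COEF] = q * DELTA
    return idctn(c, norm='ortho')

def extract_bit(block: np.ndarray) -> int:
    c = dctn(block.astype(np.float64), norm='ortho')
    return int(np.round(c[COEF] / DELTA)) % 2

def bit_error_rate(sent, received) -> float:
    sent, received = np.asarray(sent), np.asarray(received)
    return float(np.mean(sent != received))

# Round-trip check on a random block.
rng = np.random.default_rng(1)
block = rng.integers(0, 256, size=(8, 8)).astype(np.float64)
marked = embed_bit(block, 1)
assert extract_bit(marked) == 1
```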

    Contrast Enhancement of Brightness-Distorted Images by Improved Adaptive Gamma Correction

    As an efficient image contrast enhancement (CE) tool, adaptive gamma correction (AGC) was previously proposed, relating the gamma parameter to the cumulative distribution function (CDF) of the pixel gray levels within an image. AGC deals well with most dimmed images, but fails for globally bright images and for dimmed images with local bright regions. These two categories of brightness-distorted images are common in real scenarios, arising for example from improper exposure or white object regions. To attenuate these deficiencies, we propose an improved AGC algorithm. The novel strategy of negative images is used to realize CE of bright images, and gamma correction modulated by a truncated CDF is employed to enhance dimmed ones; as such, local over-enhancement and structure distortion can be alleviated. Both qualitative and quantitative experimental results show that the proposed method yields consistently good CE results.
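    A compact sketch of the underlying AGC rule, T(l) = 255 · (l/255)^(1 − CDF(l)), together with the negative-image routing for bright inputs described above. The brightness threshold and the omission of the truncated-CDF modulation are simplifications on our part.

```python
import numpy as np

def adaptive_gamma_correction(img: np.ndarray) -> np.ndarray:
    """Basic AGC: the gamma applied at gray level l is driven by 1 - CDF(l)."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    cdf = np.cumsum(hist) / img.size
    gamma = np.clip(1.0 - cdf, 1e-3, 1.0)   # avoid the degenerate 0**0 case
    levels = np.arange(256) / 255.0
    lut = np.round(255.0 * levels ** gamma).astype(np.uint8)  # T(l) per level
    return lut[img]  # img assumed uint8 grayscale

def enhance(img: np.ndarray, bright_threshold: float = 0.5) -> np.ndarray:
    """Improved-AGC idea: route globally bright images through their negative."""
    if img.mean() / 255.0 > bright_threshold:
        return 255 - adaptive_gamma_correction(255 - img)
    return adaptive_gamma_correction(img)
```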