
    CAS-CNN: A Deep Convolutional Neural Network for Image Compression Artifact Suppression

    Lossy image compression algorithms are pervasively used to reduce the size of images transmitted over the web and recorded on data storage media. However, their high compression rates come at the cost of visual artifacts that degrade the user experience. Deep convolutional neural networks have become a widespread and highly successful tool for high-level computer vision tasks. Recently, they have found their way into low-level computer vision and image processing, where they solve regression problems, mostly with relatively shallow networks. We present a novel 12-layer deep convolutional network for image compression artifact suppression with hierarchical skip connections and a multi-scale loss function. We achieve a boost of up to 1.79 dB in PSNR over ordinary JPEG and an improvement of up to 0.36 dB over the best previous ConvNet result. We show that a network trained for a specific quality factor (QF) is resilient to the QF used to compress the input image: a single network trained for QF 60 provides a PSNR gain of more than 1.5 dB over the wide QF range from 40 to 76. Comment: 8 pages
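
    The PSNR gains quoted above can be made concrete with a few lines of NumPy. This is a generic sketch of the metric itself, not the paper's network; the example images are arbitrary.

```python
import numpy as np

def psnr(reference, distorted, peak=255.0):
    """Peak signal-to-noise ratio in dB between two same-shaped images."""
    mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy example: a flat 8x8 patch with a single-pixel error of 10 levels.
ref = np.full((8, 8), 128, dtype=np.uint8)
noisy = ref.copy()
noisy[0, 0] = 138
print(round(psnr(ref, noisy), 2))
```

    A "gain of 1.79 dB" means the restored image scores that much higher on this metric than the plain JPEG decode of the same input.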

    Robust Image Watermarking Based on Psychovisual Threshold

    Because digital images are easy to access and share through the internet, they are often copied, edited, and reused. Digital image watermarking is an approach to protecting and managing digital images as intellectual property. Embedding a natural watermark based on the properties of the human eye can effectively hide a watermark image. This paper proposes a watermark embedding scheme based on the psychovisual threshold and edge entropy. The sensitivity of minor changes in DCT coefficients to JPEG quantization tables was investigated, and a watermark embedding scheme was designed that offers good resistance to JPEG image compression. The proposed scheme was tested under different types of attacks. The experimental results indicate that the proposed scheme achieves high imperceptibility and robustness against attacks, and that the watermark recovery process is also robust against attacks.
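
    As a rough illustration of DCT-domain embedding of the kind the abstract describes: the coefficient position (3, 2) and strength 12 below are illustrative assumptions, not the paper's psychovisual thresholds, and the DCT matrix is built by hand to keep the sketch self-contained.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
    k = np.arange(n)
    m = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0] *= 1 / np.sqrt(n)
    m[1:] *= np.sqrt(2 / n)
    return m

D = dct_matrix()

def embed_bit(block, bit, coeff=(3, 2), strength=12.0):
    """Force one mid-frequency coefficient positive/negative to carry one bit."""
    c = D @ block.astype(np.float64) @ D.T   # forward 2D DCT of an 8x8 block
    c[coeff] = strength if bit else -strength
    return D.T @ c @ D                       # inverse 2D DCT

def extract_bit(block, coeff=(3, 2)):
    return int((D @ block.astype(np.float64) @ D.T)[coeff] > 0)

block = np.full((8, 8), 120.0)
marked = embed_bit(block, 1)
```

    Mid-frequency coefficients are the usual compromise: low frequencies carry too much visible energy, while high frequencies are discarded first by JPEG quantization.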

    An Image Dithering via Tchebichef Moment Transform

    Many image display applications and printing devices allow only a limited number of colours, and have limited computational power and storage for producing high-quality output from high bit-depth colour images. A dithering technique is called for here in order to improve the perceptual visual quality of limited bit-depth images. A dithered image represents a natural colour image within the low bit-depth colour range for displaying and printing, obtaining a low-cost colour image when displaying and printing image pixels. This study proposes a dithering technique based on the Tchebichef Moment Transform (TMT) to produce high-quality images at low bit depth. Earlier, a 2×2 Discrete Wavelet Transform (DWT) had been proposed for better image quality in dithering. The 2×2 TMT is chosen here since it performs better than the 2×2 DWT; TMT provides compact support on 2×2 blocks. The results show that 2×2 TMT gives perceptually better quality in colour image dithering in a significantly more efficient fashion.
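
    The TMT-based method itself is not reproduced here, but the basic idea of threshold-pattern quantization on 2×2 blocks can be sketched with the classical 2×2 Bayer ordered dither (a generic baseline, not the paper's transform):

```python
import numpy as np

# Classical 2x2 Bayer threshold matrix, offsets normalized into (0, 1).
BAYER2 = (np.array([[0, 2], [3, 1]]) + 0.5) / 4.0

def ordered_dither(gray, levels=2):
    """Quantize a grayscale image in [0, 1] to `levels` values using
    a tiled 2x2 threshold pattern instead of plain rounding."""
    h, w = gray.shape
    t = np.tile(BAYER2, (h // 2 + 1, w // 2 + 1))[:h, :w]
    scaled = gray * (levels - 1) + (t - 0.5)  # bias each pixel by its threshold
    return np.clip(np.round(scaled), 0, levels - 1) / (levels - 1)
```

    A flat mid-gray input comes out as an alternating on/off pattern whose average matches the input level, which is exactly what dithering trades spatial resolution for.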

    A Design Method of Saturation Test Image Based on CIEDE2000

    In order to generate color test images consistent with human perception in terms of saturation, lightness, and hue, we propose a saturation test image design method based on the CIEDE2000 color difference formula. This method exploits the subjective saturation parameter C′ of CIEDE2000 to generate a series of test images with different saturation but the same lightness and hue. It is found experimentally that visual perception has a linear relationship with the saturation parameter C′. This kind of saturation test image has various applications, such as checking color masking effects in visual experiments and testing the visual effects of image similarity components.
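
    Holding lightness and hue fixed while sweeping chroma, as the abstract describes, amounts to a simple cylindrical-to-Cartesian conversion from CIE LCh to CIE Lab. This is a generic colorimetric sketch, not the paper's exact pipeline, and the numeric values are arbitrary examples.

```python
import math

def lch_to_lab(L, C, h_deg):
    """Convert CIE LCh (lightness, chroma, hue angle in degrees) to CIE Lab."""
    h = math.radians(h_deg)
    return (L, C * math.cos(h), C * math.sin(h))

# A saturation test series: fixed lightness 60 and hue 30 deg, chroma swept upward.
series = [lch_to_lab(60.0, C, 30.0) for C in (0.0, 10.0, 20.0, 30.0)]
```

    Every patch in `series` differs only in chroma, so any perceived difference between them isolates the saturation component the test images are designed to probe.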

    LFACon: Introducing Anglewise Attention to No-Reference Quality Assessment in Light Field Space

    Light field imaging can capture both the intensity information and the direction information of light rays. It naturally enables a six-degrees-of-freedom viewing experience and deep user engagement in virtual reality. Compared to 2D image assessment, light field image quality assessment (LFIQA) needs to consider not only the image quality in the spatial domain but also the quality consistency in the angular domain. However, there is a lack of metrics that effectively reflect the angular consistency, and thus the angular quality, of a light field image (LFI). Furthermore, the existing LFIQA metrics suffer from high computational costs due to the excessive data volume of LFIs. In this paper, we propose a novel concept of "anglewise attention" by introducing a multihead self-attention mechanism to the angular domain of an LFI. This mechanism better reflects the LFI quality. In particular, we propose three new attention kernels: anglewise self-attention, anglewise grid attention, and anglewise central attention. These attention kernels can realize angular self-attention, extract multiangled features globally or selectively, and reduce the computational cost of feature extraction. By effectively incorporating the proposed kernels, we further propose our light field attentional convolutional neural network (LFACon) as an LFIQA metric. Our experimental results show that the proposed LFACon metric significantly outperforms the state-of-the-art LFIQA metrics. For the majority of distortion types, LFACon attains the best performance with lower complexity and less computational time. Comment: Accepted for IEEE VR 2023 (TVCG Special Issues) (Early Access)
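
    A minimal sketch of self-attention applied across the angular axis, with identity projections in place of learned query/key/value weights. This illustrates the mechanism only and is not the LFACon architecture; the feature dimensions are made up for the example.

```python
import numpy as np

def anglewise_self_attention(views, num_heads=2):
    """Scaled dot-product self-attention across the angular axis.

    views: (A, D) array, one D-dim feature vector per angular view of the LFI.
    Projections are identity here for brevity; a real model learns them.
    """
    A, D = views.shape
    d = D // num_heads
    out = np.empty_like(views, dtype=np.float64)
    for h in range(num_heads):
        q = k = v = views[:, h * d:(h + 1) * d]
        scores = q @ k.T / np.sqrt(d)                      # (A, A) view-to-view affinities
        w = np.exp(scores - scores.max(axis=1, keepdims=True))
        w /= w.sum(axis=1, keepdims=True)                  # softmax over the angular axis
        out[:, h * d:(h + 1) * d] = w @ v                  # attention-weighted mix of views
    return out

# Example: a 3x3 angular grid flattened to 9 views with 16-dim features each.
feats = np.random.default_rng(0).normal(size=(9, 16))
mixed = anglewise_self_attention(feats, num_heads=4)
```

    The key point is that attention weights are computed between angular views rather than between spatial positions, so inconsistencies across the view grid directly shape the output features.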