14 research outputs found
JPEG steganography: A performance evaluation of quantization tables
The two most important aspects of any image-based steganographic system are the imperceptibility and the capacity of the stego image. This paper evaluates the performance and efficiency of using optimized quantization tables instead of the default JPEG tables within JPEG steganography. We found that using optimized tables significantly improves the quality of stego-images. Moreover, we used this optimization strategy to generate a 16x16 quantization table to be used instead of the default 8x8 table. The quality of stego-images was greatly improved when these optimized tables were used. This led us to suggest a new hybrid steganographic method in order to increase the embedding capacity. This new method combines the 2-LSB and Jpeg-Jsteg techniques. For each 16x16 quantized DCT block, the two least significant bits (2-LSBs) of each middle-frequency coefficient are modified to embed two secret bits; additionally, the Jpeg-Jsteg embedding technique is used for the low-frequency DCT coefficients, without modifying the DC coefficient. Our experimental results show that the proposed approach can provide a higher information-hiding capacity than the other methods tested. Furthermore, the quality of the produced stego-images is better than that of other methods which use the default tables.
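The 2-LSB embedding step can be sketched as follows. This is a minimal illustration, assuming a 16x16 block of already-quantized DCT coefficients; the choice of middle-frequency positions and the sign handling are assumptions for the sketch, not the paper's exact specification:

```python
import numpy as np

def embed_2lsb(block, bits, mid_freq_idx):
    """Write secret bits into the two least significant bits of the magnitude
    of each selected middle-frequency coefficient.

    `block` is a 16x16 array of quantized DCT coefficients; `mid_freq_idx` is
    an (illustrative) list of (row, col) positions treated as middle
    frequencies. Each position carries two secret bits."""
    out = block.copy()
    it = iter(bits)
    for r, c in mid_freq_idx:
        try:
            b1, b2 = next(it), next(it)
        except StopIteration:
            break  # ran out of secret bits
        coeff = int(out[r, c])
        sign = -1 if coeff < 0 else 1
        # Clear the 2 LSBs of the magnitude, then set them to the secret bits.
        mag = (abs(coeff) & ~0b11) | (b1 << 1) | b2
        out[r, c] = sign * mag
    return out

def extract_2lsb(block, mid_freq_idx, n_bits):
    """Recover the embedded bits from the same coefficient positions."""
    bits = []
    for r, c in mid_freq_idx:
        mag = abs(int(block[r, c]))
        bits += [(mag >> 1) & 1, mag & 1]
        if len(bits) >= n_bits:
            break
    return bits[:n_bits]
```

Embedding in the magnitude rather than the two's-complement representation keeps negative coefficients symmetric with positive ones, at the cost of a small extra distortion for coefficients near zero.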
A content-aware quantisation mechanism for transform domain distributed video coding
The discrete cosine transform (DCT) is widely applied in modern codecs to remove spatial redundancies, with the resulting DCT coefficients being quantised to achieve compression as well as bit-rate control. In distributed video coding (DVC) architectures like DISCOVER, DCT coefficient quantisation is traditionally performed using predetermined quantisation matrices (QMs), which means the compression is heavily dependent on the sequence being coded. This makes bit-rate control challenging, with the situation exacerbated in the coding of high-resolution sequences due to QM scarcity and the non-uniform bit-rate gaps between them. This paper introduces a novel content-aware quantisation (CAQ) mechanism to overcome the limitations of existing quantisation methods in transform-domain DVC. CAQ creates a frame-specific QM to reduce quantisation errors by analysing the distribution of DCT coefficients. In contrast to the predetermined QMs, which are applicable only to 4x4 block sizes, CAQ produces QMs for larger block sizes to enhance compression at higher resolutions. This provides superior bit-rate control and better output quality by seeking to fully exploit the available bandwidth, which is especially beneficial in bandwidth-constrained scenarios. In addition, CAQ generates superior perceptual results by innovatively applying different weightings to the DCT coefficients to reflect the human visual system. Experimental results corroborate that CAQ both quantitatively and qualitatively provides enhanced output quality in bandwidth-limited scenarios, consistently utilising over 90% of the available bandwidth.
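A minimal sketch of how a frame-specific QM might be derived from the coefficient distribution, assuming an orthonormal DCT and a simple energy-to-step mapping; the paper's actual CAQ rule and its HVS weightings are not reproduced here:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n)) * np.sqrt(2 / n)
    C[0] /= np.sqrt(2)
    return C

def content_aware_qm(frame, block=4, q_min=4, q_max=64):
    """Derive a frame-specific QM: average the squared DCT coefficients over
    all blocks of the frame, then map high-energy frequency positions to fine
    quantisation steps and low-energy positions to coarse steps.
    `q_min` and `q_max` are assumed step bounds for the illustration."""
    C = dct_matrix(block)
    h, w = frame.shape
    energy = np.zeros((block, block))
    count = 0
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            coeffs = C @ frame[i:i + block, j:j + block] @ C.T
            energy += coeffs ** 2
            count += 1
    energy /= count
    # Log-compress, normalise to [0, 1], and invert: more energy -> smaller step.
    e = np.log1p(energy)
    norm = (e - e.min()) / (e.max() - e.min() + 1e-12)
    return np.rint(q_max - norm * (q_max - q_min)).astype(int)
```

Because the matrix is computed per frame, the quantiser tracks the content instead of relying on a fixed table, which is the core idea behind CAQ's bit-rate control.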
Adaptive Quantization Matrices for HD and UHD Display Resolutions in Scalable HEVC
HEVC contains an option to enable custom quantization matrices (QMs), which are designed based on the Human Visual System (HVS) and a 2D Contrast Sensitivity Function (CSF). Visual Display Units (VDUs) capable of displaying video data at High Definition (HD) and Ultra HD (UHD) display resolutions are frequently utilized on a global scale. Video compression artifacts caused by high levels of quantization, which are typically inconspicuous in low display resolution environments, are clearly visible on HD and UHD video data and VDUs. The default QM technique in HEVC takes into account neither the video data resolution nor the associated display resolution of a VDU when determining the appropriate levels of quantization required to reduce unwanted video compression artifacts. Based on this fact, we propose a novel, adaptive quantization matrix technique for the HEVC standard, including Scalable HEVC (SHVC). Our technique, which is based on a refinement of the current HVS-CSF QM approach in HEVC, takes into consideration the display resolution of the target VDU for the purpose of minimizing video compression artifacts. In SHVC SHM 9.0, and compared with anchors, the proposed technique yields important quality and coding improvements for the Random Access configuration, with a maximum of 56.5% luma BD-Rate reductions in the enhancement layer. Furthermore, compared with the default QMs and the Sony QMs, our method yields encoding time reductions of 0.75% and 1.19%, respectively. Comment: Data Compression Conference 201
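The idea of adapting a QM to the target display resolution can be illustrated roughly as follows. The flat step value 16, the power-law scaling, and the `strength` parameter are assumptions for this sketch, not the paper's actual refinement:

```python
import numpy as np

def resolution_adaptive_qm(base_qm, display_px, ref_px=1920 * 1080, strength=0.5):
    """Shrink a base QM's deviation from the flat step 16 as the target
    display resolution grows, so high-frequency coefficients are quantised
    more finely on HD/UHD displays, where artifacts are more visible."""
    ratio = (ref_px / display_px) ** strength  # < 1 above 1080p, > 1 below
    qm = 16.0 + (np.asarray(base_qm, dtype=float) - 16.0) * ratio
    return np.clip(np.rint(qm), 1, 255).astype(int)
```

At the reference resolution the base matrix passes through unchanged; at 4K (four times the pixel count) the deviations from the flat step are halved, which quantises the perceptually critical high frequencies more finely.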
Analysis of JPEG Digital Image Compression Process
JPEG is the most widely used image compression standard and has been in use since 1992. It is a lossy compression method and is widely used in digital cameras and mobile phones. Depending on the parameters and user needs, it can achieve a compression ratio between 10 and 50. Memory for digital image storage is saved at the expense of decompressed image quality. The method is based on the Discrete Cosine Transform (DCT), which separates the image into its different frequency components. This paper shows how different parameters of the algorithm influence the performance of the compression. Finally, ideas are given on how to either increase the compression ratio while keeping the same decompressed image quality, or improve the quality without decreasing the compression ratio. The difference between the original and the decompressed images is measured using two objective criteria: the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index (SSIM).
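The quantisation stage that drives JPEG's quality/ratio trade-off can be sketched with the standard Annex K luminance table and the common libjpeg-style quality scaling; PSNR then measures the round-trip error (SSIM is omitted for brevity):

```python
import numpy as np

# Luminance quantization table from Annex K of the JPEG standard (quality 50).
JPEG_LUMA_Q50 = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99],
])

def scaled_table(quality):
    """libjpeg-style scaling of the base table for a 1..100 quality setting."""
    q = max(1, min(100, quality))
    s = 5000 // q if q < 50 else 200 - 2 * q
    return np.clip((JPEG_LUMA_Q50 * s + 50) // 100, 1, 255)

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n)) * np.sqrt(2 / n)
    C[0] /= np.sqrt(2)
    return C

def psnr(a, b, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between two images."""
    mse = np.mean((np.asarray(a, float) - np.asarray(b, float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def roundtrip(block, quality):
    """DCT -> quantize -> dequantize -> inverse DCT for one 8x8 block."""
    C, T = dct_matrix(), scaled_table(quality)
    coeffs = C @ (np.asarray(block, float) - 128) @ C.T
    rec = C.T @ (np.rint(coeffs / T) * T) @ C + 128
    return np.clip(np.rint(rec), 0, 255)
```

Comparing `psnr(block, roundtrip(block, 90))` against `psnr(block, roundtrip(block, 10))` on the same block makes the quality/ratio trade-off the paper studies directly visible: the coarser table at quality 10 discards far more coefficient precision.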
Image Compression and Watermarking scheme using Scalar Quantization
This paper presents a new compression technique and image watermarking algorithm based on the Contourlet Transform (CT). For image compression, an energy-based quantization is used; scalar quantization is explored for image watermarking. A double filter bank structure is used in the CT: the Laplacian Pyramid (LP) captures point discontinuities, which are then linked by a Directional Filter Bank (DFB). The coefficients of the down-sampled low-pass version of the LP-decomposed image are re-ordered in a pre-determined manner, and a prediction algorithm is used to reduce entropy (bits/pixel). In addition, the coefficients of the CT are quantized based on the energy in each particular band. The superiority of the proposed algorithm over JPEG is observed in terms of reduced blocking artifacts. The results are also compared with the wavelet transform (WT); CT proves superior to WT when the image contains more contours. The watermark image is embedded in the low-pass image of the contourlet decomposition, and the watermark can be extracted with minimal error. In terms of PSNR, the visual quality of the watermarked image is exceptional. The proposed algorithm is robust to many image attacks and suitable for copyright protection applications. Comment: 11 Pages, IJNGN Journal 201
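Energy-based scalar quantisation of subbands can be sketched as follows; tying the step size to each band's RMS energy is an illustrative rule, not the paper's exact allocation scheme:

```python
import numpy as np

def quantise_bands(bands, base_step=8.0):
    """Scalar-quantise each subband with a step scaled by the band's RMS:
    the highest-energy band gets the finest step (`base_step`), and weaker
    bands get proportionally coarser steps. Returns (indices, steps)."""
    rms = [float(np.sqrt(np.mean(np.asarray(b, float) ** 2))) + 1e-9 for b in bands]
    ref = max(rms)
    steps = [base_step * np.sqrt(ref / r) for r in rms]
    indices = [np.rint(np.asarray(b, float) / s).astype(int) for b, s in zip(bands, steps)]
    return indices, steps

def dequantise_bands(indices, steps):
    """Reconstruct each subband from its quantiser indices."""
    return [q * s for q, s in zip(indices, steps)]
```

Spending the finest steps on the bands that carry the most energy keeps the overall reconstruction error low for a given index budget, which is the intuition behind quantising CT coefficients "based on the energy in the particular band".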
Minimizing compression artifacts for high resolutions with adaptive quantization matrices for HEVC
Visual Display Units (VDUs), capable of displaying video data at High Definition (HD) and Ultra HD (UHD) resolutions, are frequently employed in a variety of technological domains. Quantization-induced video compression artifacts, which are usually unnoticeable in low-resolution environments, are typically conspicuous on high-resolution VDUs and video data. The default quantization matrices (QMs) in HEVC do not take into account specific display resolutions of VDUs or video data to determine the appropriate levels of quantization required to reduce unwanted compression artifacts. Therefore, we propose a novel, adaptive quantization matrix technique for the HEVC standard, including Scalable HEVC (SHVC). Our technique, which is based on a refinement of the current QM technique in HEVC, takes into consideration the specific display resolution of the target VDU in order to minimize compression artifacts. We undertake a thorough evaluation of the proposed technique by utilizing SHVC SHM 9.0 (two-layered bit-stream) and the BD-Rate and SSIM metrics. For the BD-Rate evaluation, the proposed method achieves maximum BD-Rate reductions of 56.5% in the enhancement layer. For the SSIM evaluation, our technique achieves a maximum structural improvement of 0.8660 vs. 0.8538.
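The BD-Rate figures quoted above come from the Bjøntegaard metric. A compact and commonly used formulation fits log-rate as a cubic polynomial in PSNR for each rate-distortion curve and integrates both fits over the overlapping quality range; this sketch follows that standard recipe:

```python
import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    """Bjontegaard delta-rate: fit log(rate) as a cubic in PSNR for each
    curve, integrate over the overlapping PSNR range, and report the average
    rate difference in percent. Negative values mean the test codec saves
    bit-rate at equal quality."""
    pa = np.polyfit(psnr_anchor, np.log(rate_anchor), 3)
    pt = np.polyfit(psnr_test, np.log(rate_test), 3)
    lo = max(min(psnr_anchor), min(psnr_test))
    hi = min(max(psnr_anchor), max(psnr_test))
    ia, it = np.polyint(pa), np.polyint(pt)
    avg_a = (np.polyval(ia, hi) - np.polyval(ia, lo)) / (hi - lo)
    avg_t = (np.polyval(it, hi) - np.polyval(it, lo)) / (hi - lo)
    return (np.exp(avg_t - avg_a) - 1) * 100
```

A test curve that delivers the same PSNR at half the bit-rate yields a BD-Rate of -50%, so a reported 56.5% BD-Rate reduction corresponds to well over half the anchor's bit-rate being saved at equal quality.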
Steganography-based secret and reliable communications: Improving steganographic capacity and imperceptibility
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Unlike encryption, steganography hides the very existence of secret information rather than merely hiding its meaning. Image-based steganography is the most common approach, since digital images are widely used over the Internet and the Web. However, the capacity is mostly limited and restricted by the size of the cover images. In addition, there is a trade-off between steganographic capacity and stego-image quality. Therefore, increasing steganographic capacity and enhancing stego-image quality remain challenges, and these are exactly the main aims of our research. Related to this, we also investigate hiding secret information in communication protocols, namely Simple Object Access Protocol (SOAP) messages, rather than in conventional digital files.
To achieve a high steganographic capacity, two novel steganography methods were proposed. The first method was based on using 16x16 non-overlapping blocks and a 16x16 quantisation table for Joint Photographic Experts Group (JPEG) compression instead of the standard 8x8. The quality of the JPEG stego images was then enhanced by using optimised quantisation tables instead of the default tables. The second, hybrid method was based on using optimised quantisation tables together with two hiding techniques: JSteg along with our first proposed method. To increase the steganographic capacity further, the impact of hiding data within the image chrominance was investigated and explained. Since the peak signal-to-noise ratio (PSNR) is extensively used as a quality measure for stego images, the reliability of PSNR for stego images was also evaluated in the work described in this thesis. Finally, to eliminate any detectable traces that traditional steganography may leave in stego files, a novel and undetectable steganography method based on SOAP messages was proposed.
All proposed methods have been empirically validated to demonstrate their utility and value. The results revealed that our methods and suggestions improved the main aspects of image steganography. Nevertheless, PSNR was found not to be a reliable quality evaluation measure for stego images. On the other hand, information hiding in SOAP messages represented a distinctive way of achieving undetectable and secret communication. This work was supported by the Ministry of Higher Education in Syria and the University of Aleppo.