Comparative performance analysis of image compression by JPEG 2000: a case study on medical images
JPEG 2000 is a new and improved image-coding standard developed for the compression of images. It is the state-of-the-art image-coding standard that resulted from the joint efforts of the International Standards Organization (ISO) and the International Telecommunication Union (ITU); JPEG stands for Joint Photographic Experts Group. The new standard outperforms the older JPEG standard by approximately 2 dB of Peak Signal-to-Noise Ratio (PSNR) for several images across all compression ratios. The reasons behind JPEG 2000's superior performance are the wavelet transform and Embedded Block Coding with Optimized Truncation (EBCOT). This study describes the performance comparison of JPEG 2000 against its predecessor JPEG by evaluating image compression for medical images. The present research further describes the most important parameters of this new standard in order to help resolve design tradeoffs.
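The ~2 dB gap reported above is measured in PSNR. As a minimal sketch of how such a comparison is scored (the function and array names here are illustrative, not from the paper), PSNR can be computed directly from the mean squared error between the original and the reconstructed image:

```python
import numpy as np

def psnr(original, reconstructed, max_value=255.0):
    """Peak Signal-to-Noise Ratio in dB between two images."""
    original = np.asarray(original, dtype=np.float64)
    reconstructed = np.asarray(reconstructed, dtype=np.float64)
    mse = np.mean((original - reconstructed) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((max_value ** 2) / mse)

# Hypothetical usage: score a decoded image against its source
img = np.random.randint(0, 256, (64, 64))
noisy = np.clip(img + np.random.normal(0, 5, img.shape), 0, 255)
print(round(psnr(img, noisy), 1))
```

A 2 dB PSNR improvement at the same bit rate corresponds to roughly a 37% reduction in mean squared error, which is why this margin is considered significant across compression ratios.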
JPEG 2000 Encoding with Perceptual Distortion Control
An alternative approach has been devised for encoding image data in compliance with JPEG 2000, the most recent still-image data-compression standard of the Joint Photographic Experts Group. Heretofore, JPEG 2000 encoding has been implemented by several related schemes classified as rate-based distortion-minimization encoding. In each of these schemes, the end user specifies a desired bit rate and the encoding algorithm strives to attain that rate while minimizing a mean squared error (MSE). While rate-based distortion minimization is appropriate for transmitting data over a limited-bandwidth channel, it is not the best approach for applications in which the perceptual quality of reconstructed images is a major consideration. A better approach for such applications is the present alternative one, denoted perceptual distortion control, in which the encoding algorithm strives to compress data to the lowest bit rate that yields at least a specified level of perceptual image quality. Some additional background information on JPEG 2000 is prerequisite to a meaningful summary of JPEG 2000 encoding with perceptual distortion control. The JPEG 2000 encoding process includes two subprocesses known as tier-1 and tier-2 coding. In order to minimize the MSE for the desired bit rate, a rate-distortion-optimization subprocess is introduced between the tier-1 and tier-2 subprocesses. In tier-1 coding, each coding block is independently bit-plane coded from the most-significant-bit (MSB) plane to the least-significant-bit (LSB) plane, using three coding passes (except for the MSB plane, which is coded using only one "clean up" coding pass). For M bit planes, this subprocess involves a total number of (3M - 2) coding passes. An embedded bit stream is then generated for each coding block. Information on the reduction in distortion and the increase in the bit rate associated with each coding pass is collected. This information is then used in a rate-control procedure to determine the contribution of each coding block to the output compressed bit stream.
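The tier-1 pass structure described above can be made concrete with a short sketch (the function name and tuple layout are illustrative assumptions): the MSB plane receives only the clean-up pass, and every subsequent plane receives all three passes, giving 3M - 2 passes for M bit planes.

```python
def tier1_pass_schedule(num_bitplanes):
    """Enumerate the tier-1 coding passes for one code block as
    (plane_index, pass_name) tuples, from MSB plane down to LSB plane.
    The MSB plane gets only the clean-up pass; every remaining plane
    gets significance-propagation, magnitude-refinement and clean-up
    passes, for a total of 3*M - 2 passes."""
    passes = [(num_bitplanes - 1, "cleanup")]
    for plane in range(num_bitplanes - 2, -1, -1):
        for name in ("significance", "refinement", "cleanup"):
            passes.append((plane, name))
    return passes

sched = tier1_pass_schedule(8)
print(len(sched))  # 3*8 - 2 = 22
```

The rate-control procedure then operates on this ordered list: each pass is a candidate truncation point with a measured distortion reduction and bit-rate cost.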
High Efficiency Concurrent Embedded Block Coding Architecture for JPEG 2000
Embedded block coding with optimized truncation (EBCOT) is the most important part of JPEG 2000. Due to the bit-level operation and the three-pass scanning technique, EBCOT may take more than 50% of the operation time in JPEG 2000. This paper presents a high-efficiency concurrent EBCOT (HECEBC) entropy-encoder hardware architecture. The proposed HECEBC can concurrently process the four samples in a stripe column. Furthermore, this architecture can be extended to process several stripe columns concurrently, enabling JPEG 2000 to serve high-resolution applications in real time. In addition, the HECEBC uses the technique of a concentrated context window to stabilize the Context-Decision (CX-D) output, relaxing the load between the arithmetic encoder (AE) and the parallel-in-serial-out (PISO) buffer and tripling the EBC performance.
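The "four samples in a stripe column" refer to EBCOT's scan pattern: a code block is traversed in horizontal stripes of four rows, each stripe column by column, each column top to bottom. A minimal sketch of that scan order (function name and return format are illustrative, not from the paper) shows exactly which four samples a concurrent encoder can process together:

```python
def stripe_scan_order(height, width, stripe_height=4):
    """Yield (row, col) sample coordinates in JPEG 2000's
    stripe-oriented scan: stripes of 4 rows, each stripe traversed
    column by column, and each column top to bottom. The four samples
    of one stripe column are the unit a concurrent architecture such
    as HECEBC processes in parallel."""
    order = []
    for stripe_top in range(0, height, stripe_height):
        for col in range(width):
            for row in range(stripe_top, min(stripe_top + stripe_height, height)):
                order.append((row, col))
    return order

print(stripe_scan_order(4, 2))
# [(0, 0), (1, 0), (2, 0), (3, 0), (0, 1), (1, 1), (2, 1), (3, 1)]
```

Because the four samples of a stripe column are adjacent in the scan, their context formation can be overlapped, which is what the concurrent architecture exploits.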
Sample-Parallel Execution of EBCOT in Fast Mode
JPEG 2000’s most computationally expensive building block is the Embedded Block Coder with Optimized Truncation (EBCOT). This paper evaluates how encoders targeting a parallel architecture such as a GPU can increase their throughput in use cases where very high data rates are used. The compression efficiency in the less significant bit-planes is then often poor, and it is beneficial to enable the Selective Arithmetic Coding Bypass style (fast mode) in order to trade a small loss in compression efficiency for a reduction in computational complexity. More importantly, this style exposes a more fine-grained parallelism that can be exploited to execute the raw coding passes, including bit-stuffing, in a sample-parallel fashion. For a latency- or memory-critical application that encodes one frame at a time, EBCOT’s tier-1 is sped up between 1.1x and 2.4x compared to an optimized GPU-based implementation. When low GPU occupancy has already been addressed by encoding multiple frames in parallel, the throughput can still be improved by 5% for high-entropy images and 27% for low-entropy images. Best results are obtained when enabling the fast mode after the fourth significant bit-plane. For most of the test images the compression rate is within 1% of the original.
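The fast-mode trade-off above can be sketched as a per-pass decision rule (a simplified model under the assumption stated in the abstract, with illustrative names): clean-up passes always use the MQ arithmetic coder, while significance and refinement passes of the less significant bit-planes are raw-coded once the bypass threshold is reached.

```python
def pass_coder(plane_index_from_msb, pass_name, bypass_after=4):
    """Decide whether a tier-1 coding pass is arithmetic-coded or
    raw-coded when the Selective Arithmetic Coding Bypass (fast mode)
    is enabled after the fourth significant bit-plane, as in the
    paper's best-performing configuration.
    plane_index_from_msb: 0 for the most significant bit-plane."""
    if pass_name == "cleanup":
        return "arithmetic"  # clean-up passes keep the MQ coder
    if plane_index_from_msb >= bypass_after:
        return "raw"  # bypassed passes are emitted as raw bits
    return "arithmetic"
```

Raw-coded passes are what enables the sample-parallel execution: without adaptive arithmetic-coder state, the emitted bits (and the bit-stuffing positions) can be computed independently per sample.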
Evaluation of GPU/CPU Co-Processing Models for JPEG 2000 Packetization
With the bottom-line goal of increasing the throughput of a GPU-accelerated JPEG 2000 encoder, this paper evaluates whether the post-compression rate-control and packetization routines should be carried out on the CPU or on the GPU. Three co-processing models that differ in how the workload is split between the CPU and GPU are introduced. Both routines are discussed and algorithms for executing them in parallel are presented. Experimental results for compressing a detail-rich UHD sequence to 4 bits/sample indicate speed-ups of 200x for the rate control and 100x for the packetization compared to the single-threaded implementation in the commercial Kakadu library. These two routines executed on the CPU take 4x as long as all remaining coding steps on the GPU and therefore present a bottleneck. Even if the CPU bottleneck could be avoided with multi-threading, it is still beneficial to execute all coding steps on the GPU, as this minimizes the required device-to-host transfer and thereby speeds up the critical path from 17.2 fps to 19.5 fps for 4 bits/sample and to 22.4 fps for 0.16 bits/sample.
High capacity steganographic method based upon JPEG
The two most important aspects of any image-based steganographic system are the quality of the stego-image and the capacity of the cover image. This paper proposes a novel and high-capacity steganographic approach based on the Discrete Cosine Transform (DCT) and JPEG compression. The JPEG technique divides the input image into non-overlapping blocks of 8x8 pixels and uses the DCT transformation. However, our proposed method divides the cover image into non-overlapping blocks of 16x16 pixels. For each quantized DCT block, the two least-significant bits (2-LSBs) of each middle-frequency coefficient are modified to embed two secret bits. Our aim is to investigate the data-hiding efficiency using larger blocks for JPEG compression. Our experimental results show that the proposed approach can provide a higher information-hiding capacity than the Jpeg-Jsteg and Chang et al. methods based on the conventional blocks of 8x8 pixels. Furthermore, the produced stego-images are almost identical to the original cover images.
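The 2-LSB substitution described above can be sketched on a single quantized coefficient (a minimal illustration with assumed function names; the paper's full method additionally selects only middle-frequency coefficients and works on 16x16 blocks):

```python
def embed_2lsb(coefficient, two_bits):
    """Replace the two least-significant bits of a quantized DCT
    coefficient's magnitude with two secret bits (0..3), preserving
    the sign. Simplified sketch of 2-LSB substitution."""
    sign = -1 if coefficient < 0 else 1
    magnitude = abs(int(coefficient))
    return sign * ((magnitude & ~0b11) | (two_bits & 0b11))

def extract_2lsb(coefficient):
    """Recover the two embedded bits from a coefficient."""
    return abs(int(coefficient)) & 0b11

c = embed_2lsb(-37, 0b10)
print(c, extract_2lsb(c))  # -38 2
```

Since only the two lowest bits of each selected coefficient change, the per-coefficient error is at most 3 quantization steps, which is why the stego-images remain close to the covers.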
A novel steganography approach for audio files
We present a novel, robust, and secure steganography technique to hide images in audio files, aiming at increasing the capacity of the carrier medium. The audio files are in the standard WAV format and the hiding is based on the LSB algorithm, while the images are compressed by the GMPR technique, which is based on the Discrete Cosine Transform (DCT) and a high-frequency minimization encoding algorithm. The method involves compression-encryption of an image file by the GMPR technique, followed by hiding it in the audio data by appropriate bit substitution. The maximum number of bits that can be used without a significant effect on the audio signal in LSB audio steganography is 6 LSBs. In the proposed method, the encrypted image bits are hidden in variable and multiple LSB layers. Experimental results from listening tests show that there is no significant difference between the stego audio reconstructed with the novel technique and the original signal. A performance evaluation has been carried out according to the quality-measurement criteria of Signal-to-Noise Ratio (SNR) and Peak Signal-to-Noise Ratio (PSNR).
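The multiple-LSB-layer substitution can be sketched on a single audio sample (an illustrative fragment with assumed names; the paper's method additionally compresses and encrypts the image payload with GMPR before embedding):

```python
def embed_bit(sample, bit, layer):
    """Set bit `layer` (0 = LSB, up to 5 per the 6-LSB limit noted in
    the abstract) of an integer audio sample to `bit`. Simplified
    sketch of variable/multiple-layer LSB substitution."""
    assert 0 <= layer <= 5
    mask = 1 << layer
    return (sample & ~mask) | ((bit & 1) << layer)

def extract_bit(sample, layer):
    """Read the embedded bit back out of the chosen layer."""
    return (sample >> layer) & 1

s = embed_bit(12345, 0, 3)
print(s, extract_bit(s, 3))  # 12337 0
```

Spreading payload bits across several of the six usable layers, rather than only the lowest one, is what raises capacity while keeping the per-sample distortion below the audible threshold reported by the listening tests.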