Multiplicative Multiresolution Decomposition for Lossless Volumetric Medical Images Compression
With the growth of medical imaging, the compression of volumetric medical images has become essential. For this purpose, we propose a novel Multiplicative Multiresolution Decomposition (MMD) wavelet coding scheme for lossless compression of volumetric medical images. The MMD is normally used as a speckle-reduction technique, but it offers properties that can be exploited for compression: like the wavelet transform, the MMD provides a hierarchical representation and makes lossless compression possible. We integrate into the proposed scheme an inter-slice filter, based on the wavelet transform and motion compensation, to reduce the data energy efficiently. The scheme incorporates the MMD into a lossless compression technique by applying an MMD/wavelet or MMD transform to each slice; the inter-slice filter is then employed, and the resulting sub-bands are coded by the 3D zero-tree algorithm SPIHT. We compare the lossless results of classical wavelet coders such as 3D SPIHT and JP3D with the proposed scheme. Lossless experimental results show that the proposed scheme with the MMD achieves lower bit rates than 3D SPIHT and JP3D.
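The abstract's key requirement is a hierarchical decomposition that is exactly invertible in integer arithmetic, so that compression can be lossless. As a minimal sketch of that idea, the snippet below implements the standard integer Haar (S-) transform; it is a stand-in for the paper's MMD, whose multiplicative details differ, and all names here are illustrative.

```python
# Sketch of a reversible hierarchical decomposition for lossless coding.
# NOTE: this is the standard integer Haar (S-) transform, used as a
# stand-in for the paper's MMD; the MMD's multiplicative form differs.

def forward(x):
    """One decomposition level: integer approximation and detail bands."""
    approx = [(a + b) >> 1 for a, b in zip(x[::2], x[1::2])]
    detail = [a - b for a, b in zip(x[::2], x[1::2])]
    return approx, detail

def inverse(approx, detail):
    """Exactly inverts forward(): lossless reconstruction."""
    x = []
    for s, d in zip(approx, detail):
        a = s + ((d + 1) >> 1)  # recover first sample of the pair
        x += [a, a - d]         # second sample follows from the detail
    return x

signal = [12, 10, 9, 9, 30, 2, 7, 5]
lo, hi = forward(signal)
assert inverse(lo, hi) == signal  # perfect (lossless) reconstruction
```

Applying `forward` recursively to the approximation band yields the multi-level hierarchy that zero-tree coders such as SPIHT exploit.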
RLFC: Random Access Light Field Compression using Key Views and Bounded Integer Encoding
We present a new hierarchical compression scheme for encoding light field
images (LFI) that is suitable for interactive rendering. Our method (RLFC)
exploits redundancies in the light field images by constructing a tree
structure. The top level (root) of the tree captures the common high-level
details across the LFI, and other levels (children) of the tree capture
specific low-level details of the LFI. Our decompressing algorithm corresponds
to tree traversal operations and gathers the values stored at different levels
of the tree. Furthermore, we use bounded integer sequence encoding which
provides random access and fast hardware decoding for compressing the blocks of
children of the tree. We have evaluated our method for 4D two-plane
parameterized light fields. The compression rates vary from 0.08 - 2.5 bits per
pixel (bpp), resulting in compression ratios of around 200:1 to 20:1 for a PSNR
quality of 40 to 50 dB. The decompression times for decoding the blocks of LFI
are 1 - 3 microseconds per channel on an NVIDIA GTX-960 and we can render new
views with a resolution of 512×512 at 200 fps. Our overall scheme is simple to implement and involves only bit manipulations and integer arithmetic operations. Comment: Accepted for publication at Symposium on Interactive 3D Graphics and Games (I3D '19).
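The core idea above, a root that stores detail common to all views and children that store small per-view residuals decoded by a path traversal, can be sketched as follows. The two-level layout, the names, and the toy data are illustrative assumptions, not the paper's actual bitstream format.

```python
# Sketch of the hierarchical idea behind RLFC: the root holds values common
# to all light-field views, children hold per-view residuals, and decoding a
# view sums the values along its root-to-leaf path. Illustrative only.

views = [[50, 52], [49, 51]]  # hypothetical 2-view light field, 2 pixels each

def compress(views):
    root = [min(v[i] for v in views) for i in range(len(views[0]))]  # shared base
    children = [[v[i] - root[i] for i in range(len(root))] for v in views]
    return root, children

def decode_view(root, children, k):
    """Random access: reconstruct view k without touching other children."""
    return [r + d for r, d in zip(root, children[k])]

root, kids = compress(views)
assert decode_view(root, kids, 1) == [49, 51]

# Residuals are small and non-negative, so they can be packed at a fixed,
# bounded bit width -- the property bounded integer encoding relies on.
width = max((d.bit_length() for c in kids for d in c), default=0)
assert width == 1
```

Because each child block is independent and fixed-width, a decoder can jump straight to any view's residuals, which is what makes the scheme random-access and hardware-friendly.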
Layer Selection in Progressive Transmission of Motion-Compensated JPEG2000 Video
MCJ2K (Motion-Compensated JPEG2000) is a video codec based on MCTF (Motion-Compensated Temporal Filtering) and J2K (JPEG2000). MCTF analyzes a sequence of images, generating a collection of temporal sub-bands, which are compressed with J2K. The R/D (Rate-Distortion) performance of MCJ2K is better than that of the MJ2K (Motion JPEG2000) extension, especially when there is a high level of temporal redundancy. MCJ2K codestreams can be served by standard JPIP (J2K Interactive Protocol) servers, thanks to the use of only J2K standard file formats. In bandwidth-constrained scenarios, an important issue in MCJ2K is determining the amount of data of each temporal sub-band that must be transmitted to maximize the quality of the reconstructions at the client side. To solve this problem, we have proposed two rate-allocation algorithms which provide reconstructions that are progressive in quality. The first, OSLA (Optimized Sub-band Layers Allocation), determines the best progression of quality layers, but is computationally expensive. The second, ESLA (Estimated-Slope sub-band Layers Allocation), is sub-optimal in most cases, but much faster and more convenient for real-time streaming scenarios. An experimental comparison shows that even when a straightforward motion compensation scheme is used, the R/D performance of MCJ2K is competitive not only with MJ2K, but also with other standard scalable video codecs.
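The rate-allocation problem described above can be sketched as a greedy, slope-driven selection in the spirit of ESLA: each sub-band offers quality layers with a size and an estimated distortion reduction, and layers are sent in order of decreasing R/D slope until the bit budget is exhausted. The layer sizes, gains, and function names below are illustrative assumptions, not the paper's algorithm verbatim.

```python
# Sketch of slope-based quality-layer allocation across temporal sub-bands.
# Each sub-band maps to a list of (layer size in bytes, estimated distortion
# reduction); values are illustrative, not taken from the paper.

subbands = {
    "t0": [(100, 400.0), (100, 120.0)],
    "t1": [(100, 300.0), (100, 200.0)],
}

def allocate(subbands, budget):
    """Greedily send the layer with the best distortion-per-byte slope."""
    sent, spent = [], 0
    next_layer = {s: 0 for s in subbands}  # layers must go in order per sub-band
    while True:
        best = None
        for s, i in next_layer.items():
            if i < len(subbands[s]):
                size, gain = subbands[s][i]
                slope = gain / size
                if best is None or slope > best[0]:
                    best = (slope, s, size)
        if best is None or spent + best[2] > budget:
            return sent  # budget exhausted or no layers left
        _, s, size = best
        sent.append((s, next_layer[s]))
        next_layer[s] += 1
        spent += size

assert allocate(subbands, 300) == [("t0", 0), ("t1", 0), ("t1", 1)]
```

Note that layers within a sub-band are consumed in order, which is what keeps the reconstruction progressive in quality as the budget grows.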
Overview of Image Processing and Various Compression Schemes
Image processing is a key research area. Compression of images is required whenever they must be transmitted or stored. The growing demand for multimedia contributes to insufficient network bandwidth and memory storage capacity. Advanced imaging requires the storage of extensive amounts of digitized information. Data compression is therefore needed to reduce data redundancy, saving hardware space and transmission bandwidth. Various techniques exist for image compression, some of which are discussed in this paper.
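The redundancy reduction that motivates the surveyed schemes can be illustrated with the simplest possible example, run-length encoding, which collapses repeated symbols into (symbol, count) pairs; this is only a minimal sketch of the principle, not one of the paper's techniques.

```python
# Minimal illustration of redundancy reduction: run-length encoding.

def rle_encode(data):
    """Collapse runs of identical symbols into [symbol, count] pairs."""
    out = []
    for sym in data:
        if out and out[-1][0] == sym:
            out[-1][1] += 1
        else:
            out.append([sym, 1])
    return out

def rle_decode(pairs):
    """Expand [symbol, count] pairs back to the original string."""
    return "".join(sym * n for sym, n in pairs)

row = "AAAAABBBCCCCCC"
packed = rle_encode(row)
assert packed == [["A", 5], ["B", 3], ["C", 6]]
assert rle_decode(packed) == row  # lossless round trip
```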
Efficient compression of motion compensated residuals
EThOS - Electronic Theses Online Service. United Kingdom.
Motion estimation and signaling techniques for 2D+t scalable video coding
We describe a fully scalable wavelet-based 2D+t (in-band) video coding architecture. We propose new coding tools specifically designed for this framework, aimed at two goals: reducing the computational complexity at the encoder without sacrificing compression, and improving the coding efficiency, especially at low bitrates. To this end, we focus our attention on motion estimation and motion vector encoding. We propose a fast motion estimation algorithm that works in the wavelet domain and exploits the geometrical properties of the wavelet subbands. We show that the computational complexity grows linearly with the size of the search window, while approaching the performance of a full-search strategy. We extend the proposed motion estimation algorithm to work with blocks of variable sizes, in order to better capture local motion characteristics, thus improving the rate-distortion behavior. Given this motion field representation, we propose a motion vector coding algorithm that adaptively scales the motion bit budget according to the target bitrate, improving the coding efficiency at low bitrates. Finally, we show how to optimally scale the motion field when the sequence is decoded at reduced spatial resolution. Experimental results illustrate the advantages of each individual coding tool presented in this paper. Based on these simulations, we define the best configuration of coding parameters and compare the proposed codec with MC-EZBC, a widely used reference codec implementing the t+2D framework.
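The operation this paper accelerates, block-matching motion estimation, can be sketched in its baseline pixel-domain, full-search form: for each block of the current frame, search a window in the reference frame for the displacement minimizing the sum of absolute differences (SAD). This is only the reference strategy the paper compares against; the proposed fast wavelet-domain variant is more elaborate, and the toy frames below are invented.

```python
# Baseline full-search block matching: find the motion vector (dx, dy)
# minimizing the SAD between a block of the current frame and a displaced
# block of the reference frame. Pixel domain, +/-W search window.

def sad(ref, cur, bx, by, dx, dy, B):
    """Sum of absolute differences for a BxB block at (bx, by), offset (dx, dy)."""
    return sum(abs(cur[by + i][bx + j] - ref[by + dy + i][bx + dx + j])
               for i in range(B) for j in range(B))

def full_search(ref, cur, bx, by, B=2, W=1):
    """Exhaustively test every candidate displacement in the window."""
    H, Wd = len(ref), len(ref[0])
    best = None
    for dy in range(-W, W + 1):
        for dx in range(-W, W + 1):
            if 0 <= bx + dx and bx + dx + B <= Wd and 0 <= by + dy and by + dy + B <= H:
                cost = sad(ref, cur, bx, by, dx, dy, B)
                if best is None or cost < best[0]:
                    best = (cost, dx, dy)
    return best[1], best[2]

ref = [[0, 0, 0, 0],
       [0, 9, 8, 0],
       [0, 7, 6, 0],
       [0, 0, 0, 0]]
cur = [[9, 8, 0, 0],   # the 2x2 block has moved up-left by one pixel
       [7, 6, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 0, 0]]
assert full_search(ref, cur, 0, 0) == (1, 1)
```

The cost of this search grows with the square of the window size, which is exactly the dependence the paper's wavelet-domain algorithm reduces to linear.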