Subband Image Coding with Jointly Optimized Quantizers
An iterative algorithm for the joint design of complexity- and entropy-constrained subband quantizers and associated entropy coders is proposed. Unlike conventional subband design algorithms, the proposed algorithm does not require the use of various bit allocation algorithms. Multistage residual quantizers are employed here because they provide greater control of the complexity-performance tradeoffs, and also because they allow efficient and effective high-order statistical modeling. The resulting subband coder exploits statistical dependencies within subbands, across subbands, and across stages, mainly through complexity-constrained high-order entropy coding. Experimental results demonstrate that the complexity-rate-distortion performance of the new subband coder is exceptional.
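The multistage residual structure mentioned in the abstract is easy to illustrate. The Python sketch below encodes a vector stage by stage, quantizing at each stage the residual left by the previous one; the small random codebooks are illustrative placeholders, not the jointly optimized, entropy-constrained designs of the paper.

import numpy as np

def multistage_rq_encode(x, stage_codebooks):
    """Encode x as one codeword index per stage.

    Each stage quantizes the residual left by the previous stages, so
    several low-complexity stage codebooks add up to a fine reproduction.
    """
    indices = []
    residual = np.asarray(x, dtype=float)
    for codebook in stage_codebooks:          # codebook: (K, d) array
        dists = np.sum((codebook - residual) ** 2, axis=1)
        idx = int(np.argmin(dists))           # nearest codeword to the residual
        indices.append(idx)
        residual = residual - codebook[idx]   # pass what is left to the next stage
    return indices

def multistage_rq_decode(indices, stage_codebooks):
    """Reconstruct by summing the selected codeword from every stage."""
    return sum(cb[i] for cb, i in zip(stage_codebooks, indices))

# Toy example: two 4-word stage codebooks in 2-D (illustrative values only).
rng = np.random.default_rng(0)
stages = [rng.normal(size=(4, 2)), 0.25 * rng.normal(size=(4, 2))]
x = np.array([0.7, -1.2])
idx = multistage_rq_encode(x, stages)
print(idx, multistage_rq_decode(idx, stages))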
Multiresolution vector quantization
Multiresolution source codes are data compression algorithms yielding embedded source descriptions. The decoder of a multiresolution code can build a source reproduction by decoding the embedded bit stream in part or in whole. All decoding procedures start at the beginning of the binary source description and decode some fraction of that string. Decoding a small portion of the binary string gives a low-resolution reproduction; decoding more yields a higher resolution reproduction; and so on. Multiresolution vector quantizers are block multiresolution source codes. This paper introduces algorithms for designing fixed- and variable-rate multiresolution vector quantizers. Experiments on synthetic data demonstrate performance close to the theoretical performance limit. Experiments on natural images demonstrate performance improvements of up to 8 dB over tree-structured vector quantizers. Some of the lessons learned through multiresolution vector quantizer design lend insight into the design of more sophisticated multiresolution codes.
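The embedded-description idea can be sketched in a few lines: the decoder reads however much of the bit string it has, the first bits select a coarse reproduction, and further bits refine it. The stage codebooks and bit allocation below are illustrative assumptions, not the fixed- or variable-rate designs introduced in the paper.

import numpy as np

def decode_prefix(bits, stage_codebooks, bits_per_stage):
    """Decode however many complete stages fit in the received prefix.

    Decoding a short prefix yields a coarse reproduction; decoding more
    of the embedded bit string refines it stage by stage.
    """
    x_hat = np.zeros(stage_codebooks[0].shape[1])
    pos = 0
    for codebook, b in zip(stage_codebooks, bits_per_stage):
        if pos + b > len(bits):
            break                                        # prefix ends here; stop refining
        index = int("".join(map(str, bits[pos:pos + b])), 2)
        x_hat = x_hat + codebook[index]
        pos += b
    return x_hat

# Toy embedded description: 3 bits for the base stage, 3 for one refinement stage.
rng = np.random.default_rng(2)
codebooks = [rng.normal(size=(8, 2)), 0.3 * rng.normal(size=(8, 2))]
bitstream = [1, 0, 1, 0, 1, 1]
print(decode_prefix(bitstream[:3], codebooks, (3, 3)))   # low-resolution reproduction
print(decode_prefix(bitstream, codebooks, (3, 3)))       # higher-resolution reproduction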
Vector quantization
During the past ten years, Vector Quantization (VQ) has developed from a theoretical possibility promised by Shannon's source coding theorems into a powerful and competitive technique for speech and image coding and compression at medium to low bit rates. In this survey, the basic ideas behind the design of vector quantizers are sketched, and some comments are made on the state of the art and current research efforts.
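A minimal sketch of the codebook design idea behind most vector quantizers is the generalized Lloyd (LBG) algorithm, which alternates the nearest-neighbor and centroid optimality conditions on a training set. The training data and parameters below are illustrative only.

import numpy as np

def design_codebook(training, codebook_size, iterations=20, seed=0):
    """Minimal generalized Lloyd (LBG/k-means style) codebook design.

    Alternates the two optimality conditions: nearest-neighbor partition
    of the training set, then centroid update of each codeword.
    """
    rng = np.random.default_rng(seed)
    codebook = training[rng.choice(len(training), codebook_size, replace=False)]
    for _ in range(iterations):
        # Nearest-neighbor condition: assign each training vector to its closest codeword.
        d = np.linalg.norm(training[:, None, :] - codebook[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        # Centroid condition: move each codeword to the mean of its cell.
        for k in range(codebook_size):
            cell = training[assign == k]
            if len(cell):
                codebook[k] = cell.mean(axis=0)
    return codebook

# Toy usage on synthetic 2-D training data.
data = np.random.default_rng(3).normal(size=(500, 2))
cb = design_codebook(data, codebook_size=16)
print(cb.shape)   # (16, 2)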
Transcoding of MPEG Bitstreams
This paper discusses the problem of transcoding as it may occur in, for instance, the following situation. Suppose a satellite transmits an MPEG-compressed video signal at, say, 9 Mbit/s. This signal must be relayed at a cable head end. However, since the cable capacity is limited, the cable head end will want to relay this incoming signal at a lower bit-rate of, say, 5 Mbit/s. The problem is how to convert a compressed video signal of a given bit-rate into a compressed video signal of a lower bit-rate. The specific transcoding problem discussed in this paper is referred to as bit-rate conversion. Basically, a transcoder used for such a purpose will consist of a cascaded decoder and encoder. It is shown in the paper that the complexity of this combination can be significantly reduced. The paper also investigates the loss of picture quality that may be expected when a transcoder is in the transmission chain. The loss of quality, as compared to that resulting from transmission without a transcoder, is studied by means of computations using simplified models of the transmission chains and by means of computer simulations of the complete transmission chain. It is shown that the presence of two quantizers, i.e., cascaded quantization, in the transmission chain is the main cause of the extra losses, and that the losses in terms of SNR will be some 0.5 to 1.0 dB greater than in the case of a transmission chain without a transcoder.
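The cascaded-quantization effect identified in the abstract can be demonstrated numerically: requantizing already-quantized coefficients with a coarser step is generally worse than quantizing the original signal once at that coarser step. The uniform quantizers, Laplacian stand-in for DCT coefficients, and step sizes below are illustrative assumptions, not the MPEG quantizers analysed in the paper.

import numpy as np

def quantize(x, step):
    """Uniform mid-tread quantizer with the given step size."""
    return step * np.round(x / step)

rng = np.random.default_rng(4)
coeffs = rng.laplace(scale=4.0, size=100_000)    # stand-in for transform coefficients

q1, q2 = 2.0, 3.0                                # fine step (first encoder), coarse step (transcoder)
direct = quantize(coeffs, q2)                    # encode once at the lower rate
cascaded = quantize(quantize(coeffs, q1), q2)    # decode/re-encode chain

mse_direct = np.mean((coeffs - direct) ** 2)
mse_cascaded = np.mean((coeffs - cascaded) ** 2)
print(10 * np.log10(mse_cascaded / mse_direct), "dB extra distortion from cascading")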
Distributed Functional Scalar Quantization Simplified
Distributed functional scalar quantization (DFSQ) theory provides optimality conditions and predicts the performance of data acquisition systems in which a computation on the acquired data is desired. We address two limitations of previous works: prohibitively expensive decoder design and a restriction to sources with bounded distributions. We rigorously show that a much simpler decoder has asymptotic performance equivalent to that of the conditional-expectation estimator explored previously, thus reducing decoder design complexity. The simpler decoder has the feature of decoupled communication and computation blocks. Moreover, we extend the DFSQ framework with the simpler decoder to acquire sources with infinite-support distributions such as Gaussian or exponential distributions. Finally, through simulation results we demonstrate that performance at moderate coding rates is well predicted by the asymptotic analysis, and we give new insight into the rate of convergence.
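The decoupling of communication and computation can be sketched as follows: each encoder scalar-quantizes its observation independently, and a simple decoder applies the target computation directly to the reproduced values rather than forming a conditional-expectation estimate. The uniform quantizers and the example function g below are illustrative assumptions, not the optimized point densities or decoders from the paper.

import numpy as np

def uniform_sq(x, rate, lo, hi):
    """Rate-bit uniform scalar quantizer on [lo, hi] (values outside are clipped)."""
    levels = 2 ** rate
    step = (hi - lo) / levels
    idx = np.clip(np.floor((x - lo) / step), 0, levels - 1)
    return lo + (idx + 0.5) * step              # midpoint reproduction

def g(x1, x2):
    """Example computation the acquisition system is built for."""
    return np.maximum(x1, x2)

rng = np.random.default_rng(5)
x1 = rng.exponential(size=100_000)              # infinite-support source
x2 = rng.exponential(size=100_000)
x1_hat = uniform_sq(x1, rate=4, lo=0.0, hi=6.0)
x2_hat = uniform_sq(x2, rate=4, lo=0.0, hi=6.0)

# "Simpler decoder": communication (quantization) and computation are decoupled;
# the decoder just evaluates g at the reproductions.
estimate = g(x1_hat, x2_hat)
print(np.mean((g(x1, x2) - estimate) ** 2))     # functional distortion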
On Low-Resolution ADCs in Practical 5G Millimeter-Wave Massive MIMO Systems
Nowadays, millimeter-wave (mmWave) massive multiple-input multiple-output (MIMO) systems are a favorable candidate for fifth-generation (5G) cellular systems. However, a key challenge is the high power consumption imposed by their numerous radio frequency (RF) chains, which may be mitigated by opting for low-resolution analog-to-digital converters (ADCs) whilst tolerating a moderate performance loss. In this article, we discuss several important issues based on the most recent research on mmWave massive MIMO systems relying on low-resolution ADCs. We discuss the key transceiver design challenges, including channel estimation, signal detection, channel information feedback and transmit precoding. Furthermore, we introduce a mixed-ADC architecture as an alternative technique for improving the overall system performance. Finally, the associated challenges and potential implementations of a practical 5G mmWave massive MIMO system with ADC quantizers are discussed. (To appear in IEEE Communications Magazine.)
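A common way to model the low-resolution ADCs discussed above is to apply a few-bit uniform quantizer separately to the real and imaginary parts of each received sample. The sketch below uses that model with illustrative dimensions and a 2-bit ADC; it is not tied to any particular detector, channel estimator, or architecture from the article.

import numpy as np

def adc(x, bits, clip=2.0):
    """Few-bit uniform ADC applied per real dimension (clipped, midpoint levels)."""
    levels = 2 ** bits
    step = 2 * clip / levels
    idx = np.clip(np.floor((x + clip) / step), 0, levels - 1)
    return -clip + (idx + 0.5) * step

def quantized_receive(H, x, noise_std, bits):
    """Observe y = Hx + n through low-resolution ADCs on the I and Q branches."""
    rng = np.random.default_rng(6)
    n = noise_std * (rng.normal(size=H.shape[0]) + 1j * rng.normal(size=H.shape[0]))
    y = H @ x + n
    return adc(y.real, bits) + 1j * adc(y.imag, bits)

# Toy setup: 64 receive antennas, 4 single-antenna users, 2-bit ADCs.
rng = np.random.default_rng(7)
H = (rng.normal(size=(64, 4)) + 1j * rng.normal(size=(64, 4))) / np.sqrt(2)
x = np.exp(1j * rng.uniform(0, 2 * np.pi, size=4))    # unit-modulus symbols
y_q = quantized_receive(H, x, noise_std=0.1, bits=2)
print(y_q.shape)   # (64,)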
A mean-removed variation of weighted universal vector quantization for image coding
Weighted universal vector quantization uses traditional codeword design techniques to design locally optimal multi-codebook systems. Application of this technique to a sequence of medical images produces a 10.3 dB improvement over standard full-search vector quantization followed by entropy coding, at the cost of increased complexity. In the proposed variation, each codebook in the system is given a mean or 'prediction' value which is subtracted from all supervectors that map to that codebook. The chosen codebook's codewords are then used to encode the resulting residuals. Application of the mean-removed system to the medical data set achieves up to 0.5 dB improvement at no rate expense.
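The mean-removed step can be sketched directly: pick a codebook, subtract its stored mean ('prediction') value, and encode the residual with that codebook's codewords. For brevity the sketch treats the supervector as a single vector and selects the codebook by raw distortion; the codebooks, means, and selection rule are illustrative placeholders rather than the designed multi-codebook system.

import numpy as np

def encode_mean_removed(x, codebook_means, codebooks):
    """Pick the best (mean, codebook) pair, subtract the mean, encode the residual."""
    best = None
    for cb_id, (mean, codebook) in enumerate(zip(codebook_means, codebooks)):
        residual = x - mean                              # remove the 'prediction' value
        d = np.sum((codebook - residual) ** 2, axis=1)
        j = int(d.argmin())
        if best is None or d[j] < best[0]:
            best = (d[j], cb_id, j)
    _, cb_id, j = best
    return cb_id, j                                      # decoder forms mean + codeword

# Toy example: two codebooks whose means are constant vectors (illustrative values).
rng = np.random.default_rng(8)
means = [np.full(4, 0.2), np.full(4, 1.5)]
books = [rng.normal(size=(8, 4)), rng.normal(size=(8, 4))]
x = rng.normal(size=4) + 1.5
print(encode_mean_removed(x, means, books))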
- …