O (log log n) Worst-Case Local Decoding and Update Efficiency for Data Compression
This paper addresses the problem of data compression with local decoding and local update. A compression scheme has worst-case local decodability d_wc if any bit of the raw file can be recovered by probing at most d_wc bits of the compressed sequence, and has update efficiency u_wc if a single bit of the raw file can be updated by modifying at most u_wc bits of the compressed sequence. This article provides an entropy-achieving compression scheme for memoryless sources that simultaneously achieves O(log log n) local decoding and update efficiency. Key to this achievability result is a novel succinct data structure for sparse sequences which allows efficient local decoding and local update. Under general assumptions on the local decoder and update algorithms, a converse result shows that the maximum of d_wc and u_wc must grow as Ω(log log n). © 2020 IEEE
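The abstract's locality notions can be illustrated with a deliberately naive sketch. The following is NOT the paper's O(log log n) construction (which relies on a succinct structure for sparse sequences); it is a fixed-size block layout, assumed here only to show what "probing few stored bits per query" means:

```python
# Toy illustration of local decoding/update via fixed-size blocks.
# NOT the paper's construction: a naive layout where decoding one raw
# bit touches only the b stored bits of its block (O(b), not O(log log n)).

BLOCK = 8  # block size b (an arbitrary choice for this sketch)

def compress(bits):
    """Pack raw bits into b-bit blocks (no actual entropy coding here)."""
    return [bits[i:i + BLOCK] for i in range(0, len(bits), BLOCK)]

def local_decode(blocks, i):
    """Recover raw bit i by probing only the block that contains it."""
    return blocks[i // BLOCK][i % BLOCK]

def local_update(blocks, i, bit):
    """Change raw bit i by rewriting only within a single block."""
    blocks[i // BLOCK][i % BLOCK] = bit

raw = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
blocks = compress(raw)
assert local_decode(blocks, 9) == 1
local_update(blocks, 9, 0)
assert local_decode(blocks, 9) == 0
```

The paper's point is that one can keep this locality while also achieving the entropy rate, which the fixed-length layout above does not.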
Statistical framework for video decoding complexity modeling and prediction
Video decoding complexity modeling and prediction is an increasingly important issue for efficient resource utilization in a variety of applications, including task scheduling, receiver-driven complexity shaping, and adaptive dynamic voltage scaling. In this paper, we present a novel view of this problem from a statistical framework perspective. We explore the statistical structure (clustering) of the execution time required by each video decoder module (entropy decoding, motion compensation, etc.) in conjunction with complexity features that are easily extractable at encoding time (representing the properties of each module's input source data). For this purpose, we employ Gaussian mixture models (GMMs) and an expectation-maximization algorithm to estimate the joint execution-time and feature probability density function (PDF). A training set of typical video sequences is used in an offline estimation process. The obtained GMM representation is used in conjunction with the complexity features of new video sequences to predict the execution time required for their decoding. Several prediction approaches are discussed and compared. The potential mismatch between the training set and new video content is addressed by adaptive online joint-PDF re-estimation. An experimental comparison is performed to evaluate the different approaches and to compare the proposed prediction scheme with related resource prediction schemes from the literature. The usefulness of the proposed complexity-prediction approaches is demonstrated in an application of rate-distortion-complexity optimized decoding.
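A minimal sketch of the prediction step, assuming the joint GMM over (feature, time) has already been fitted offline (e.g. by EM, which is omitted here): the execution-time prediction is the conditional mean E[time | feature] under the mixture. The cluster parameters below are hypothetical, not from the paper:

```python
import math

# Sketch (not the paper's exact estimator) of GMM-based execution-time
# prediction: weight each component's conditional mean by its
# responsibility for the observed feature value.

def gauss(x, mu, var):
    """Scalar Gaussian density."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def predict_time(feature, components):
    """components: list of (weight, mu_f, mu_t, var_f, cov_ft) tuples."""
    resp, cond_means = [], []
    for w, mu_f, mu_t, var_f, cov_ft in components:
        resp.append(w * gauss(feature, mu_f, var_f))            # responsibility
        cond_means.append(mu_t + cov_ft / var_f * (feature - mu_f))
    total = sum(resp)
    return sum(r / total * m for r, m in zip(resp, cond_means))

# Two hypothetical clusters: a "cheap" and an "expensive" decoding regime.
gmm = [(0.5, 1.0, 10.0, 0.1, 0.2), (0.5, 5.0, 50.0, 0.1, 0.2)]
cheap = predict_time(1.0, gmm)   # feature near the cheap cluster, ~10
costly = predict_time(5.0, gmm)  # feature near the expensive cluster, ~50
```

The online re-estimation discussed in the abstract would update the component parameters as new (feature, time) pairs are observed.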
An Iterative Receiver for OFDM With Sparsity-Based Parametric Channel Estimation
In this work we design a receiver that iteratively passes soft information
between the channel estimation and data decoding stages. The receiver
incorporates sparsity-based parametric channel estimation. State-of-the-art
sparsity-based iterative receivers simplify the channel estimation problem by
restricting the multipath delays to a grid. Our receiver does not impose such a
restriction. As a result it does not suffer from the leakage effect, which
destroys sparsity. Communication at near capacity rates in high SNR requires a
large modulation order. Due to the close proximity of modulation symbols in
such systems, the grid-based approximation is of insufficient accuracy. We show
numerically that a state-of-the-art iterative receiver with grid-based sparse
channel estimation exhibits a bit-error-rate floor in the high SNR regime. In
contrast, our receiver performs very close to the perfect channel state
information bound for all SNR values. We also demonstrate both theoretically
and numerically that parametric channel estimation works well in dense
channels, i.e., when the number of multipath components is large and each
individual component cannot be resolved. Comment: Major revision, accepted for IEEE Transactions on Signal Processing
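The leakage effect the abstract refers to can be shown numerically. This toy demo (not from the paper) builds the frequency response of a single-path channel and inverse-DFTs it: an on-grid delay yields exactly one tap, while an off-grid delay spreads energy over all taps, destroying sparsity. The grid size and delays are arbitrary choices:

```python
import cmath

# Illustration of the leakage effect: a single multipath component whose
# delay falls off the DFT grid spreads over many delay-domain taps.

N = 32  # number of subcarriers (arbitrary for this sketch)

def tap_magnitudes(delay):
    """Inverse-DFT the frequency response of a one-path channel."""
    H = [cmath.exp(-2j * cmath.pi * k * delay / N) for k in range(N)]
    return [abs(sum(H[k] * cmath.exp(2j * cmath.pi * k * n / N)
                    for k in range(N)) / N)
            for n in range(N)]

on_grid = tap_magnitudes(5.0)    # integer delay: a single nonzero tap
off_grid = tap_magnitudes(5.5)   # off-grid delay: energy leaks everywhere

def significant(taps, thresh=0.01):
    """Count taps carrying non-negligible energy."""
    return sum(m > thresh for m in taps)
```

A grid-restricted sparse estimator must represent the off-grid case with many taps, which is why the abstract's parametric (gridless) approach avoids the error floor.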
Malleable coding for updatable cloud caching
In software-as-a-service applications provisioned through cloud computing, locally cached data are often modified with updates from new versions. In some cases, one may want to preserve both the original and new versions with each edit. In this paper, we focus on cases in which only the latest version must be preserved. Furthermore, it is desirable for the data to not only be compressed but also to be easily modified during updates, since representing information and modifying the representation both incur cost. We examine whether it is possible to have both compression efficiency and ease of alteration, in order to promote codeword reuse. In other words, we study the feasibility of a malleable and efficient coding scheme. The tradeoff between compression efficiency and malleability cost (the difficulty of synchronizing compressed versions) is measured as the length of a reused prefix portion. The region of achievable rates and malleability is found. Drawing from prior work on common information problems, we show that efficient data compression may not be the best engineering design principle when storing software-as-a-service data. In the general case, the goals of efficiency and malleability are fundamentally in conflict. This work was supported in part by an NSF Graduate Research Fellowship (LRV), Grant CCR-0325774, and Grant CCF-0729069. This work was presented at the 2011 IEEE International Symposium on Information Theory [1] and the 2014 IEEE International Conference on Cloud Engineering [2]. The associate editor coordinating the review of this paper and approving it for publication was R. Thobaben. Accepted manuscript
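The reused-prefix measure can be made concrete with a small experiment. Here zlib stands in for a generic efficient compressor (an assumption; the paper is information-theoretic and not tied to any codec): after editing one byte in the middle of a file, the raw representation reuses everything before the edit, while the compressed representation diverges almost immediately.

```python
import zlib

# Toy demonstration of the efficiency/malleability tension: how long a
# prefix of the stored representation survives a one-byte edit?

def reused_prefix(a, b):
    """Length of the longest common prefix of two byte strings."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

old = b"A" * 500 + b"B" * 500
new = old[:100] + b"C" + old[101:]      # edit the byte at offset 100

raw_reuse = reused_prefix(old, new)                              # 100 bytes
zip_reuse = reused_prefix(zlib.compress(old), zlib.compress(new))  # far fewer
```

The gap between `raw_reuse` and `zip_reuse` is a crude version of the malleability cost the abstract formalizes: the more efficient the code, the less of it survives an edit.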
Deep BCD-Net Using Identical Encoding-Decoding CNN Structures for Iterative Image Recovery
In "extreme" computational imaging that collects extremely undersampled or
noisy measurements, obtaining an accurate image within a reasonable computing
time is challenging. Incorporating image mapping convolutional neural networks
(CNN) into iterative image recovery has great potential to resolve this issue.
This paper 1) incorporates image mapping CNN using identical convolutional
kernels in both encoders and decoders into a block coordinate descent (BCD)
signal recovery method and 2) applies alternating direction method of
multipliers to train the aforementioned image mapping CNN. We refer to the
proposed recurrent network as BCD-Net using identical encoding-decoding CNN
structures. Numerical experiments show that, for a) denoising low
signal-to-noise-ratio images and b) extremely undersampled magnetic resonance
imaging, the proposed BCD-Net achieves significantly more accurate image
recovery, compared to BCD-Net using distinct encoding-decoding structures
and/or the conventional image recovery model using both wavelets and total
variation. Comment: 5 pages, 3 figures
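The block coordinate descent loop the abstract describes can be sketched in miniature. This is a heavily simplified 1D stand-in, not BCD-Net itself: the learned encoding-decoding CNN is replaced by a fixed 3-tap moving average, and the data term is a plain least-squares fit, so each iteration alternates a "denoising block" with a "data-consistency block":

```python
# Toy 1D block coordinate descent recovery loop (a sketch, not BCD-Net:
# the learned CNN mapping is replaced by a fixed moving-average smoother).

def smooth(x):
    """Stand-in for the encoding-decoding mapping: 3-tap average."""
    n = len(x)
    return [(x[max(i - 1, 0)] + x[i] + x[min(i + 1, n - 1)]) / 3
            for i in range(n)]

def bcd_recover(y, lam=1.0, iters=10):
    """Alternate a denoising step and a quadratic data-consistency step."""
    x = list(y)
    for _ in range(iters):
        z = smooth(x)                          # denoising block
        x = [(yi + lam * zi) / (1 + lam)       # x = argmin |y-x|^2 + lam|x-z|^2
             for yi, zi in zip(y, z)]
    return x

def mse(a, b):
    return sum((p - q) ** 2 for p, q in zip(a, b)) / len(a)

truth = [1.0] * 20                              # flat toy signal
noise = [0.3, -0.2, 0.1, 0.25, -0.3, 0.15, -0.1, 0.2, -0.25, 0.05,
         -0.15, 0.3, -0.05, 0.2, -0.2, 0.1, 0.25, -0.3, 0.15, -0.1]
noisy = [t + e for t, e in zip(truth, noise)]
recovered = bcd_recover(noisy)
```

In BCD-Net, the `smooth` step is a trained CNN (here, one whose encoder and decoder would share identical kernels), and the loop is unrolled into a recurrent network.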
Iterative Slepian-Wolf Decoding and FEC Decoding for Compress-and-Forward Systems
While many studies have concentrated on providing theoretical analysis for relay-assisted compress-and-forward (CF) systems, little effort has yet been made toward the construction and evaluation of a practical system. In this paper, a practical CF system incorporating an error-resilient multilevel Slepian-Wolf decoder is introduced, and a novel iterative processing structure which allows information exchange between the Slepian-Wolf decoder and the forward error correction (FEC) decoder of the main source message is proposed. In addition, a new quantization scheme is incorporated to avoid the complexity of reconstructing the relay signal at the final decoder of the destination. The results demonstrate that the iterative structure not only reduces the decoding loss of the Slepian-Wolf decoder, but also improves the decoding performance of the main message from the source.
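The iterative soft-information exchange described above follows the familiar turbo principle. As a hedged, self-contained toy (the actual system uses multilevel Slepian-Wolf and FEC decoders, not the min-sum single-parity-check stages assumed here), two stages each observe independent channel LLRs for the same parity-constrained bits and pass extrinsic LLRs back and forth:

```python
# Toy turbo-style exchange between two soft decoding stages (a sketch,
# not the paper's Slepian-Wolf/FEC pair): each stage is a min-sum
# single-parity-check decoder over its own channel LLRs plus the other
# stage's extrinsic information. Positive LLR means bit 0.

def spc_extrinsic(llr):
    """Min-sum extrinsic LLRs for a single even-parity check."""
    ext = []
    for i in range(len(llr)):
        others = [l for j, l in enumerate(llr) if j != i]
        sign = 1
        for l in others:
            if l < 0:
                sign = -sign
        ext.append(sign * min(abs(l) for l in others))
    return ext

def iterate(llr_a, llr_b, iters=3):
    """Exchange extrinsic LLRs between the stages, then decide jointly."""
    ext_b = [0.0] * len(llr_a)
    for _ in range(iters):
        ext_a = spc_extrinsic([x + e for x, e in zip(llr_a, ext_b)])
        ext_b = spc_extrinsic([x + e for x, e in zip(llr_b, ext_a)])
    total = [a + b + ea + eb
             for a, b, ea, eb in zip(llr_a, llr_b, ext_a, ext_b)]
    return [0 if t > 0 else 1 for t in total]

# Bits [0, 0, 1, 1] (even parity). Both observations are weak/wrong on
# bit 2, so a non-iterative combination decides it incorrectly.
llr_a = [2.0, 1.5, 0.3, -1.8]
llr_b = [1.0, 0.8, -0.2, -0.9]
decoded = iterate(llr_a, llr_b)
naive = [0 if a + b > 0 else 1 for a, b in zip(llr_a, llr_b)]
```

Here the parity constraint lets each stage correct the other's unreliable bit, mirroring the abstract's observation that the exchange helps both the Slepian-Wolf side and the main-message FEC side.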