6,235 research outputs found
Nested turbo codes for the costa problem
Driven by applications in data hiding, MIMO broadcast channel coding, precoding for interference cancellation, and transmitter cooperation in wireless networks, Costa coding has lately become a very active research area. In this paper, we first offer code design guidelines in terms of source-channel coding for algebraic binning. We then address practical code design based on nested lattice codes and propose nested turbo codes using turbo-like trellis-coded quantization (TCQ) for source coding and turbo trellis-coded modulation (TTCM) for channel coding. Compared to TCQ, turbo-like TCQ offers structural similarity between the source and channel coding components, leading to more efficient nesting with TTCM and better source coding performance. Due to the difference in effective dimensionality between turbo-like TCQ and TTCM, there is a performance tradeoff between these two components when they are nested together: the performance of turbo-like TCQ worsens as the TTCM code becomes stronger, and vice versa. Optimizing this performance tradeoff leads to our code design, which outperforms existing TCQ/TCM and TCQ/TTCM constructions and exhibits a gap of 0.94, 1.42, and 2.65 dB to the Costa capacity at 2.0, 1.0, and 0.5 bits/sample, respectively.
Near-capacity dirty-paper code design: a source-channel coding approach
This paper examines near-capacity dirty-paper code designs based on source-channel coding. We first point out that the performance loss in signal-to-noise ratio (SNR) in our code designs can be broken into the sum of the packing loss from channel coding and a modulo loss, which is a function of the granular loss from source coding and the target dirty-paper coding rate (or SNR). We then examine practical designs by combining trellis-coded quantization (TCQ) with both systematic and nonsystematic irregular repeat-accumulate (IRA) codes. Like previous approaches, we exploit the extrinsic information transfer (EXIT) chart technique for capacity-approaching IRA code design; but unlike previous approaches, we emphasize the role of strong source coding to achieve as much granular gain as possible using TCQ. Instead of systematic doping, we employ two relatively shifted TCQ codebooks, where the shift is optimized (via tuning the EXIT charts) to facilitate the IRA code design. Our designs synergistically combine TCQ with IRA codes so that they work together as well as they do individually. By bringing together TCQ (the best quantizer from the source coding community) and EXIT chart-based IRA code designs (the best from the channel coding community), we are able to approach the theoretical limit of dirty-paper coding. For example, at 0.25 bit per symbol (b/s), our best code design (with 2048-state TCQ) performs only 0.630 dB away from the Shannon capacity.
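Both dirty-paper abstracts above build on the same modulo ("dirty-paper") trick: the transmitter pre-subtracts interference it knows, and a modulo operation keeps the signal inside a bounded cell. A minimal one-dimensional sketch follows; the cell width `DELTA` and the noise-free channel are illustrative assumptions, not either paper's actual lattice construction.

```python
# One-dimensional sketch of the modulo ("dirty-paper") trick: the
# transmitter pre-subtracts the interference s known only to it, folds the
# result back into the base cell with a modulo so power stays bounded, and
# the receiver applies the same modulo to strip s (noise-free case).

DELTA = 4.0  # width of the modulo cell (a one-dimensional "lattice" cell)

def mod_delta(x):
    """Fold x into the centered interval [-DELTA/2, DELTA/2)."""
    return (x + DELTA / 2) % DELTA - DELTA / 2

def transmit(m, s):
    """Pre-subtract the known interference s from the message point m."""
    return mod_delta(m - s)

def receive(y):
    """Fold the channel output; without noise this recovers m exactly."""
    return mod_delta(y)

if __name__ == "__main__":
    m = 0.7            # message point inside the base cell
    s = 13.2           # arbitrary interference known at the transmitter
    x = transmit(m, s) # bounded transmitted signal
    y = x + s          # channel adds the interference
    print(receive(y))  # recovers 0.7 up to floating-point error
```

With channel noise added to y, folding can wrap noise across the cell boundary; that wrap is the "modulo loss" the second abstract separates from the channel-coding packing loss.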
Graded quantization for multiple description coding of compressive measurements
Compressed sensing (CS) is an emerging paradigm for acquisition of compressed
representations of a sparse signal. Its low complexity is appealing for
resource-constrained scenarios like sensor networks. However, such scenarios
are often coupled with unreliable communication channels and providing robust
transmission of the acquired data to a receiver is an issue. Multiple
description coding (MDC) effectively combats channel losses for systems without
feedback, thus raising the interest in developing MDC methods explicitly
designed for the CS framework, and exploiting its properties. We propose a
method called Graded Quantization (CS-GQ) that leverages the democratic
property of compressive measurements to effectively implement MDC, and we
provide methods to optimize its performance. A novel decoding algorithm based
on the alternating directions method of multipliers is derived to reconstruct
signals from a limited number of received descriptions. Simulations are
performed to assess the performance of CS-GQ against other methods in the
presence of packet losses. The proposed method is successful at providing
robust coding of CS measurements and outperforms other schemes for the
considered test metrics.
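The graded-quantization idea can be sketched in a few lines. This is an illustrative two-description toy in the spirit of CS-GQ, not the paper's exact construction: each description quantizes one half of the measurement vector finely and the other half coarsely, with the halves swapped between descriptions, so one description alone yields a usable graded reconstruction and both together yield the fine one. The step sizes are arbitrary.

```python
# Toy two-description graded quantizer (CS-GQ-style sketch, hypothetical
# parameters): swap fine/coarse halves across the two descriptions.

def quantize(x, step):
    """Uniform scalar quantizer with the given step size."""
    return step * round(x / step)

def encode(measurements, fine=0.1, coarse=0.8):
    half = len(measurements) // 2
    d1 = [quantize(x, fine) for x in measurements[:half]] + \
         [quantize(x, coarse) for x in measurements[half:]]
    d2 = [quantize(x, coarse) for x in measurements[:half]] + \
         [quantize(x, fine) for x in measurements[half:]]
    return d1, d2

def decode(d1=None, d2=None):
    """Central decoder keeps the finely quantized half of each description."""
    if d1 is not None and d2 is not None:
        half = len(d1) // 2
        return d1[:half] + d2[half:]       # fine halves from both
    return d1 if d1 is not None else d2    # side decoder: graded quality

if __name__ == "__main__":
    y = [0.33, -1.27, 0.58, 2.04]          # stand-in CS measurements
    d1, d2 = encode(y)
    print(decode(d1, d2))                  # fine central reconstruction
    print(decode(d1))                      # graded side reconstruction
```

In the actual scheme the receiver would additionally run a sparse-recovery decoder (the paper's ADMM-based algorithm) on the dequantized measurements; here the quantized values stand in for that step.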
State Amplification
We consider the problem of transmitting data at rate R over a state dependent
channel p(y|x,s) with the state information available at the sender and at the
same time conveying the information about the channel state itself to the
receiver. The amount of state information that can be learned at the receiver
is captured by the mutual information I(S^n; Y^n) between the state sequence
S^n and the channel output Y^n. The optimal tradeoff is characterized between
the information transmission rate R and the state uncertainty reduction rate
\Delta, when the state information is either causally or noncausally available
at the sender. This result is closely related and in a sense dual to a recent
study by Merhav and Shamai, which solves the problem of masking the state
information from the receiver rather than conveying it.
Comment: 9 pages, 4 figures, submitted to IEEE Trans. Inform. Theory, revised
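In the abstract's own notation, the state uncertainty reduction rate is the normalized mutual information between state and output; a brief restatement for reference (standard form, not a quote from the paper):

```latex
% S^n is the state sequence, Y^n the channel output; the receiver's
% state uncertainty reduction rate is the normalized mutual information
\[
  \Delta \;=\; \lim_{n \to \infty} \frac{1}{n}\, I(S^n; Y^n),
\]
% and the paper characterizes the achievable pairs $(R, \Delta)$ for the
% state-dependent channel $p(y \mid x, s)$ when the state is known,
% causally or noncausally, at the sender.
```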
Implementation issues in source coding
An edge-preserving image coding scheme that can operate in both a lossy and a lossless manner was developed. The technique is an extension of the lossless encoding algorithm developed for the Mars Observer spectral data. It can also be viewed as a modification of the DPCM algorithm. A packet video simulator was also developed from an existing modified packet network simulator. The coding scheme for this system is a modification of the mixture block coding (MBC) scheme described in the last report. Coding algorithms for packet video were also investigated.
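For context, the DPCM baseline that the report modifies works by predicting each sample from its predecessor and coding only the residual. A minimal sketch (generic previous-sample predictor, not the report's edge-preserving variant) shows both operating modes: exact residuals give the lossless mode, quantized residuals the lossy one.

```python
# Minimal DPCM sketch: predict each sample by its predecessor and code the
# prediction residual. q = 0 keeps residuals exact (lossless); q > 0
# quantizes them in the prediction loop (lossy), so reconstruction error
# stays bounded by q/2 per sample.

def dpcm_encode(samples, q=0):
    residuals, prediction = [], 0
    for s in samples:
        r = s - prediction
        if q:                      # lossy mode: quantize the residual
            r = q * round(r / q)
        residuals.append(r)
        prediction += r            # track the decoder's reconstruction
    return residuals

def dpcm_decode(residuals):
    out, prediction = [], 0
    for r in residuals:
        prediction += r
        out.append(prediction)
    return out

if __name__ == "__main__":
    x = [10, 12, 15, 15, 9]
    print(dpcm_decode(dpcm_encode(x)))       # lossless: [10, 12, 15, 15, 9]
    print(dpcm_decode(dpcm_encode(x, q=4)))  # coarser lossy reconstruction
```

Keeping the quantizer inside the prediction loop (rather than quantizing after prediction) is what prevents error accumulation in the lossy mode.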
DMT Optimality of LR-Aided Linear Decoders for a General Class of Channels, Lattice Designs, and System Models
The work identifies the first general, explicit, and non-random MIMO
encoder-decoder structures that guarantee optimality with respect to the
diversity-multiplexing tradeoff (DMT), without employing a computationally
expensive maximum-likelihood (ML) receiver. Specifically, the work establishes
the DMT optimality of a class of regularized lattice decoders, and more
importantly the DMT optimality of their lattice-reduction (LR)-aided linear
counterparts. The results hold for all channel statistics, for all channel
dimensions, and most interestingly, irrespective of the particular lattice code
applied. As a special case, it is established that the LLL-based LR-aided
linear implementation of the MMSE-GDFE lattice decoder facilitates DMT optimal
decoding of any lattice code at a worst-case complexity that grows at most
linearly in the data rate. This represents a fundamental reduction in the
decoding complexity when compared to ML decoding whose complexity is generally
exponential in rate.
The generality of these results makes them applicable to a range of pertinent
communication scenarios such as quasi-static MIMO, MIMO-OFDM, ISI,
cooperative-relaying, and MIMO-ARQ channels, in all of which the DMT optimality
of the LR-aided linear decoder is guaranteed. The adopted approach yields
insight into, and motivates further study of, joint transceiver designs with an
improved SNR gap to ML decoding.
Comment: 16 pages, 1 figure (3 subfigures), submitted to the IEEE Transactions on Information Theory
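Lattice reduction is the computational core of the LR-aided decoders above. As background only (a textbook algorithm, not the paper's receiver), a compact LLL reduction with the usual parameter delta = 3/4 can be sketched as follows; practical MIMO implementations use heavily optimized variants.

```python
# Textbook LLL lattice-basis reduction (rows of `basis` are the vectors).
# Background sketch for the LR-aided decoding discussed above; it trades
# a given basis for a shorter, nearly orthogonal one of the same lattice.

def lll_reduce(basis, delta=0.75):
    B = [list(map(float, row)) for row in basis]
    n = len(B)

    def gso():
        # Gram-Schmidt orthogonalization with projection coefficients mu.
        gs, mu = [], [[0.0] * n for _ in range(n)]
        for i in range(n):
            v = B[i][:]
            for j in range(i):
                mu[i][j] = (sum(a * b for a, b in zip(B[i], gs[j]))
                            / sum(a * a for a in gs[j]))
                v = [vk - mu[i][j] * gk for vk, gk in zip(v, gs[j])]
            gs.append(v)
        return gs, mu

    def sq(v):
        return sum(x * x for x in v)

    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):       # size-reduce B[k] against B[j]
            _, mu = gso()
            q = round(mu[k][j])
            if q:
                B[k] = [bk - q * bj for bk, bj in zip(B[k], B[j])]
        gs, mu = gso()
        # Lovasz condition: advance if B[k] is long enough, else swap back.
        if sq(gs[k]) >= (delta - mu[k][k - 1] ** 2) * sq(gs[k - 1]):
            k += 1
        else:
            B[k - 1], B[k] = B[k], B[k - 1]
            k = max(k - 1, 1)
    return [[int(round(x)) for x in row] for row in B]

if __name__ == "__main__":
    # Classic worked example: the reduced basis has short vectors.
    print(lll_reduce([[1, 1, 1], [-1, 0, 2], [3, 5, 6]]))
    # → [[0, 1, 0], [1, 0, 1], [-1, 0, 2]]
```

The reduction is a one-time, rate-independent preprocessing of the channel matrix, which is why the LR-aided linear receiver can stay within the per-rate complexity budget the abstract describes.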
Communication and Interference Coordination
We study the problem of controlling the interference that a communication
process creates at an external observer. We model the interference in terms of
its type (empirical distribution), and we analyze the consequences of placing
constraints on the admissible type. Considering a single interfering link, we
characterize the communication-interference capacity region. Then, we look at a
scenario where the interference is jointly created by two users allowed to
coordinate their actions prior to transmission. In this case, the trade-off
involves communication and interference as well as coordination. We establish
an achievable communication-interference region and show that efficiency is
significantly improved by coordination.
A robust coding scheme for packet video
A layered packet video coding algorithm based on a progressive transmission scheme is presented. The algorithm provides good compression and can handle significant packet loss with graceful degradation in the reconstructed sequence. Simulation results for various conditions are presented.
Source-Channel Diversity for Parallel Channels
We consider transmitting a source across a pair of independent, non-ergodic
channels with random states (e.g., slow fading channels) so as to minimize the
average distortion. The general problem is unsolved. Hence, we focus on
comparing two commonly used source and channel encoding systems which
correspond to exploiting diversity either at the physical layer through
parallel channel coding or at the application layer through multiple
description source coding.
For on-off channel models, source coding diversity offers better performance.
For channels with a continuous range of reception quality, we show the reverse
is true. Specifically, we introduce a new figure of merit called the distortion
exponent which measures how fast the average distortion decays with SNR. For
continuous-state models such as additive white Gaussian noise channels with
multiplicative Rayleigh fading, optimal channel coding diversity at the
physical layer is more efficient than source coding diversity at the
application layer in that the former achieves a better distortion exponent.
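The distortion exponent introduced above has a standard high-SNR definition; restated here for reference, with D(SNR) denoting average end-to-end distortion (common usage, not necessarily the paper's exact notation):

```latex
\[
  \Delta \;=\; -\lim_{\mathrm{SNR} \to \infty}
  \frac{\log \mathbb{E}\!\left[D(\mathrm{SNR})\right]}{\log \mathrm{SNR}},
\]
% a larger exponent means the average distortion decays faster with SNR,
% so comparing architectures reduces to comparing their exponents.
```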
Finally, we consider a third decoding architecture: multiple description
encoding with a joint source-channel decoding. We show that this architecture
achieves the same distortion exponent as systems with optimal channel coding
diversity for continuous-state channels, and maintains the advantages of
multiple description systems for on-off channels. Thus, the multiple
description system with joint decoding achieves the best performance, from
among the three architectures considered, on both continuous-state and on-off
channels.
Comment: 48 pages, 14 figures
Study of on-board compression of earth resources data
The current literature on image bandwidth compression was surveyed and those methods relevant to compression of multispectral imagery were selected. Typical satellite multispectral data was then analyzed statistically and the results used to select a smaller set of candidate bandwidth compression techniques particularly relevant to earth resources data. These were compared using both theoretical analysis and simulation, under various criteria of optimality such as mean square error (MSE), signal-to-noise ratio, classification accuracy, and computational complexity. By concatenating some of the most promising techniques, three multispectral data compression systems were synthesized which appear well suited to current and future NASA earth resources applications. The performance of these three recommended systems was then examined in detail by all of the above criteria. Finally, merits and deficiencies were summarized, and a number of recommendations for future NASA activities in data compression were proposed.