227 research outputs found
A Low-Complexity and Asymptotically Optimal Coding Strategy for Gaussian Vector Sources
In this paper, we present a low-complexity coding strategy to encode (compress) finite-length data blocks of Gaussian vector sources. We show that for large enough data blocks of a Gaussian asymptotically wide sense stationary (AWSS) vector source, the rate of the coding strategy tends to the lowest possible rate. Besides being low-complexity, the strategy does not require knowledge of the correlation matrix of such data blocks. We also show that this coding strategy is appropriate to encode the most relevant Gaussian vector sources, namely, wide sense stationary (WSS), moving average (MA), autoregressive (AR), and ARMA vector sources.
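A plausible reading of how such a strategy can avoid the correlation matrix (hedged: the abstract itself does not name the transform, though the companion paper's keywords mention the DFT) is that for large blocks of a stationary source the DFT approximately decorrelates the samples whatever the exact correlation structure. A small NumPy check of this principle; the AR(1)-type covariance, block size, and correlation value are illustrative:

```python
import numpy as np

n, rho = 64, 0.5
idx = np.arange(n)

# Toeplitz covariance of a stationary AR(1)-type source: C[i, j] = rho^|i-j|
C = rho ** np.abs(idx[:, None] - idx[None, :])
# Circulant "wrap-around" version of the same covariance
dist = np.abs(idx[:, None] - idx[None, :])
Cc = rho ** np.minimum(dist, n - dist)

# Unitary DFT matrix
F = np.exp(-2j * np.pi * np.outer(idx, idx) / n) / np.sqrt(n)

Dc = F @ Cc @ F.conj().T   # exactly diagonal: the DFT diagonalizes circulants
D = F @ C @ F.conj().T     # approximately diagonal in the Toeplitz case
off = D - np.diag(np.diag(D))
# fraction of the covariance energy left off the diagonal after the DFT
frac = np.linalg.norm(off) ** 2 / np.linalg.norm(D) ** 2
```

The circulant case is diagonalized exactly, the Toeplitz case only asymptotically (the off-diagonal fraction `frac` shrinks as the block length grows), which is the sense in which per-coefficient scalar coding becomes near-optimal for large blocks.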
Rate-distortion function upper bounds for Gaussian vectors and their applications in coding AR sources
source coding; rate-distortion function (RDF); Gaussian vector; autoregressive (AR) source; discrete Fourier transform (DFT)
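For context, the exact rate-distortion function of a Gaussian vector under mean-squared distortion is given by classical reverse water-filling over the covariance eigenvalues (a textbook result, independent of the specific upper bounds derived in this paper). A minimal sketch; the eigenvalues and distortion target in the example are chosen purely for illustration:

```python
import numpy as np

def gaussian_vector_rdf(eigvals, D, iters=100):
    """Rate (in bits) of a Gaussian vector with covariance eigenvalues
    `eigvals` at mean-squared distortion D, by reverse water-filling:
    pick the water level theta with sum(min(theta, lam)) = D, then
    R = sum over lam > theta of 0.5 * log2(lam / theta)."""
    eigvals = np.asarray(eigvals, dtype=float)
    lo, hi = 0.0, eigvals.max()
    for _ in range(iters):              # bisect on the water level theta
        theta = 0.5 * (lo + hi)
        if np.minimum(theta, eigvals).sum() > D:
            hi = theta
        else:
            lo = theta
    theta = 0.5 * (lo + hi)
    active = eigvals > theta
    return 0.5 * np.log2(eigvals[active] / theta).sum()

# Example: eigenvalues (4, 1) at distortion D = 1 gives theta = 0.5 and
# R = 0.5*log2(4/0.5) + 0.5*log2(1/0.5) = 2 bits.
```

Components with variance below the water level receive no rate at all, which is why flat upper bounds on the RDF are attractive for sources, such as AR sources, whose eigenvalues are hard to characterize in closed form.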
On the asymptotic optimality of a low-complexity coding strategy for WSS, MA, and AR vector sources
In this paper, we study the asymptotic optimality of a low-complexity coding strategy for Gaussian vector sources. Specifically, we study the convergence speed of the rate of such a coding strategy when it is used to encode the most relevant vector sources, namely wide sense stationary (WSS), moving average (MA), and autoregressive (AR) vector sources. We also study how the coding strategy considered performs when it is used to encode perturbed versions of those relevant sources. More precisely, we give a sufficient condition on such perturbed versions under which the convergence speed of the rate remains unaltered.
Harnessing machine learning for fiber-induced nonlinearity mitigation in long-haul coherent optical OFDM
© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

Coherent optical orthogonal frequency division multiplexing (CO-OFDM) has attracted a lot of interest in optical fiber communications due to its simplified digital signal processing (DSP) units, high spectral efficiency, flexibility, and tolerance to linear impairments. However, CO-OFDM's high peak-to-average power ratio makes it highly vulnerable to fiber-induced non-linearities. DSP-based machine learning has been considered a promising approach to fiber non-linearity compensation without prohibitive computational complexity. In this paper, we review the existing machine learning approaches for CO-OFDM in a common framework and survey the progress in this area, with a focus on practical aspects and comparison with benchmark DSP solutions.

Peer reviewed
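One of the simpler machine-learning approaches in this literature applies unsupervised clustering to the received constellation, so that decision regions track the nonlinearity-rotated symbol clusters instead of the ideal transmit positions. A toy sketch of the idea with 2-means clustering; the BPSK constellation, the power-dependent phase rotation standing in for fiber nonlinearity, and all parameters are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
# BPSK symbols +/-1, rotated by a toy power-dependent phase (stand-in for
# fiber nonlinearity) plus additive noise
tx = np.where(rng.random(500) < 0.5, 1.0 + 0j, -1.0 + 0j)
rx = tx * np.exp(1j * 0.4 * np.abs(tx) ** 2) \
     + 0.05 * (rng.standard_normal(500) + 1j * rng.standard_normal(500))

pts = np.c_[rx.real, rx.imag]
# 2-means: initialize one centroid per half-plane, then iterate
centers = pts[[np.argmax(pts[:, 0]), np.argmin(pts[:, 0])]]
for _ in range(20):
    d = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2)
    lab = d.argmin(axis=1)                      # nearest-centroid labels
    centers = np.array([pts[lab == k].mean(axis=0) for k in range(2)])
```

The learned centroids sit on the rotated clusters (at about 0.4 rad here) rather than at the ideal +/-1, so nearest-centroid decisions absorb the deterministic part of the nonlinearity without an explicit channel model.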
Systematic hybrid analog/digital signal coding
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2000. Includes bibliographical references (p. 201-206).

This thesis develops low-latency, low-complexity signal processing solutions for systematic source coding, or source coding with side information at the decoder. We consider an analog source signal transmitted through a hybrid channel that is the composition of two channels: a noisy analog channel through which the source is sent unprocessed and a secondary rate-constrained digital channel; the source is processed prior to transmission through the digital channel. The challenge is to design a digital encoder and decoder that provide a minimum-distortion reconstruction of the source at the decoder, which has observations of analog and digital channel outputs. The methods described in this thesis have importance to a wide array of applications. For example, in the case of in-band on-channel (IBOC) digital audio broadcast (DAB), an existing noisy analog communications infrastructure may be augmented by a low-bandwidth digital side channel for improved fidelity, while compatibility with existing analog receivers is preserved. Another application is a source coding scheme which devotes a fraction of available bandwidth to the analog source and the rest of the bandwidth to a digital representation. This scheme is applicable in a wireless communications environment (or any environment with unknown SNR), where analog transmission has the advantage of a gentle roll-off of fidelity with SNR. A very general paradigm for low-latency, low-complexity source coding is composed of three basic cascaded elements: 1) a space rotation, or transformation, 2) quantization, and 3) lossless bitstream coding. The paradigm has been applied with great success to conventional source coding, and it applies equally well to systematic source coding.
Focusing on the case involving a Gaussian source, Gaussian channel and mean-squared distortion, we determine optimal or near-optimal components for each of the three elements, each of which has analogous components in conventional source coding. The space rotation can take many forms, such as linear block transforms, lapped transforms, or subband decomposition, for all of which we derive conditions of optimality. For a very general case we develop algorithms for the design of locally optimal quantizers. For the Gaussian case, we describe a low-complexity scalar quantizer, the nested lattice scalar quantizer, that has performance very near that of the optimal systematic scalar quantizer. Analogous to entropy coding for conventional source coding, Slepian-Wolf coding is shown to be an effective lossless bitstream coding stage for systematic source coding.

By Richard J. Barron, Ph.D.
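The nested lattice scalar quantizer mentioned above can be illustrated in one dimension: the digital encoder sends only the fine quantization index modulo M (log2 M bits per sample, rather than the full index), and the decoder uses its noisy analog observation to resolve which member of the signalled coset was meant. A minimal sketch; the step size, modulus, and noise level are illustrative, and the scheme only succeeds when the analog noise is small relative to M times the step:

```python
import numpy as np

def encode(x, step=0.5, M=4):
    """Fine scalar quantization, but transmit only the index mod M,
    i.e. a coset of the coarse lattice (costs log2(M) bits)."""
    return int(np.round(x / step)) % M

def decode(q, y, step=0.5, M=4, K=50):
    """Within the signalled coset {step * (q + M*k)}, pick the
    reconstruction point closest to the analog observation y."""
    cands = step * (q + M * np.arange(-K, K + 1))
    return cands[np.argmin(np.abs(cands - y))]

x = 3.7                    # source sample
y = x + 0.1                # noisy analog channel output seen by the decoder
xhat = decode(encode(x), y)
# here encode(x) = 3, the coset {..., 1.5, 3.5, 5.5, ...}, and y selects 3.5
```

The digital link thus buys fine-quantizer resolution at coarse-quantizer rate, with the analog side information doing the disambiguation; this is the scalar analogue of nested lattice (Wyner-Ziv style) coding.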
On distributed coding, quantization of channel measurements and faster-than-Nyquist signaling
This dissertation considers three different aspects of modern digital communication systems and is therefore divided into three parts.

The first part is distributed coding. This part deals with source and source-channel code design issues for digital communication systems with many transmitters and one receiver, or with one transmitter and one receiver but with side information at the receiver that is not available at the transmitter. Such problems have attracted attention lately, as they constitute a way of extending classical point-to-point communication theory to networks. In this first part of the dissertation, novel source and source-channel codes are designed by converting each of the considered distributed coding problems into an equivalent classical channel coding or classical source-channel coding problem. The proposed schemes come very close to the theoretical limits and are thus able to exhibit some of the gains predicted by network information theory.
In the other two parts of this dissertation, classical point-to-point digital communication systems are considered. The second part is quantization of coded channel measurements at the receiver. Quantization is a way to limit the accuracy of continuous-valued measurements so that they can be processed in the digital domain. Depending on the desired type of processing of the quantized data, different quantizer design criteria should be used. In this second part of the dissertation, the quantized received values from the channel are processed by the receiver, which tries to recover the transmitted information. An exhaustive comparison of several quantization criteria for this case is presented, providing illuminating insight into this quantizer design problem.
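Among the design criteria one could compare here, the classical mean-squared-error criterion leads to Lloyd's algorithm (the dissertation weighs this against other, decoding-oriented criteria; only the MSE one is sketched). A minimal sample-based version, with Gaussian samples standing in for soft channel outputs and all parameters illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
samples = rng.standard_normal(20000)   # stand-in for soft channel measurements

def lloyd(samples, levels, iters=50):
    """Lloyd's algorithm: alternate nearest-level assignment (via midpoint
    thresholds) and conditional-mean updates of the reproduction levels;
    each step can only decrease the mean-squared quantization error."""
    # initialize levels at evenly spaced sample quantiles (no empty cells)
    reps = np.quantile(samples, (np.arange(levels) + 0.5) / levels)
    for _ in range(iters):
        edges = 0.5 * (reps[1:] + reps[:-1])          # decision thresholds
        cells = np.digitize(samples, edges)
        reps = np.array([samples[cells == k].mean() for k in range(levels)])
    return reps

reps = lloyd(samples, 4)
edges = 0.5 * (reps[1:] + reps[:-1])
mse = np.mean((samples - reps[np.digitize(samples, edges)]) ** 2)
```

For a unit-variance Gaussian and 4 levels this converges near the known Lloyd-Max quantizer (levels close to ±0.45 and ±1.51, distortion about 0.118). Criteria tailored to the decoder, e.g. preserving mutual information rather than minimizing MSE, generally place thresholds differently, which is precisely the comparison at issue in this part.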
The third part of this dissertation is faster-than-Nyquist signaling. The Nyquist rate in classical point-to-point bandwidth-limited digital communication systems is considered the maximum transmission (signaling) rate and is equal to twice the bandwidth of the channel. In this last part of the dissertation, we question this Nyquist rate limitation by transmitting at higher signaling rates through the same bandwidth. By mitigating the interference incurred by the faster-than-Nyquist rates, gains over Nyquist rate systems are obtained.
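The orthogonality behind the Nyquist limit, and what breaks when it is exceeded, can be read off directly from sinc pulse samples: at Nyquist spacing every neighbouring pulse vanishes at the symbol instants, while at a faster-than-Nyquist spacing it does not, producing the inter-symbol interference that must then be mitigated. A small NumPy check (the 25% rate increase is illustrative):

```python
import numpy as np

# np.sinc is the normalized sinc, sin(pi x)/(pi x): the ideal band-limited
# pulse.  Sampled at integer symbol offsets (Nyquist spacing) it is zero,
# so neighbouring symbols do not interfere at the decision instants.
k = np.arange(1, 6)
isi_nyquist = np.sinc(k)        # neighbour contributions, Nyquist spacing
isi_ftn = np.sinc(0.8 * k)      # same with tau = 0.8: 25% faster signaling
```

The nonzero `isi_ftn` values are the interference a faster-than-Nyquist receiver must equalize; known results (e.g. Mazo's 1975 analysis of binary sinc signaling) indicate that acceleration to about tau = 0.8 costs nothing in minimum Euclidean distance, which is why such gains are attainable.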