A General Framework for Transmission with Transceiver Distortion and Some Applications
A general theoretical framework is presented for analyzing information
transmission over Gaussian channels with memoryless transceiver distortion,
which encompasses various nonlinear distortion models including transmit-side
clipping, receive-side analog-to-digital conversion, and others. The framework
is based on the so-called generalized mutual information (GMI), and the
analysis in particular benefits from the setup of Gaussian codebook ensemble
and nearest-neighbor decoding, for which it is established that the GMI takes a
general form analogous to the channel capacity of undistorted Gaussian
channels, with a reduced "effective" signal-to-noise ratio (SNR) that depends
on the nominal SNR and the distortion model. When applied to specific
distortion models, an array of results of engineering relevance is obtained.
For channels with transmit-side distortion only, it is shown that a
conventional approach, which treats the distorted signal as the sum of the
original signal part and an uncorrelated distortion part, achieves the GMI. For
channels with output quantization, closed-form expressions are obtained for the
effective SNR and the GMI, and related optimization problems are formulated and
solved for quantizer design. Finally, super-Nyquist sampling is analyzed within
the general framework, and it is shown that sampling beyond the Nyquist rate
increases the GMI for all SNR. For example, with a binary symmetric output
quantization, information rates exceeding one bit per channel use are
achievable by sampling the output at four times the Nyquist rate.
Comment: 32 pages (including 4 figures, 5 tables, and auxiliary materials); submitted to IEEE Transactions on Communications
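The "effective SNR" form of the GMI can be illustrated numerically. The sketch below is a minimal Monte Carlo illustration assuming a hard-clipping transmit nonlinearity and the uncorrelated-distortion (Bussgang-style) decomposition the abstract mentions; the function name and parameter values are illustrative assumptions, not the paper's actual derivation, which is analytical.

```python
import numpy as np

rng = np.random.default_rng(0)

def effective_snr_clipping(snr_db, clip_level, n=200_000):
    """Estimate the effective SNR when a unit-power Gaussian input is
    hard-clipped at +/- clip_level and the distortion is treated as
    uncorrelated additive noise (Bussgang-style decomposition).
    Hypothetical helper for illustration only."""
    snr = 10 ** (snr_db / 10)
    x = rng.normal(size=n)                   # Gaussian codebook symbol, power 1
    y = np.clip(x, -clip_level, clip_level)  # transmit-side clipping
    alpha = np.mean(x * y) / np.mean(x**2)   # Bussgang gain: y = alpha*x + d, d uncorrelated with x
    d_power = np.mean((y - alpha * x) ** 2)  # distortion power
    # useful signal power alpha^2 competes with channel noise (1/snr) plus distortion
    return alpha**2 / (1 / snr + d_power)

snr_eff = effective_snr_clipping(snr_db=10.0, clip_level=1.5)
rate = 0.5 * np.log2(1 + snr_eff)  # rate of a real Gaussian channel, bits per real dimension
print(f"effective SNR ~ {snr_eff:.2f}, achievable rate ~ {rate:.2f} bit/dim")
```

At 10 dB nominal SNR and a 1.5-sigma clip, the effective SNR drops well below the nominal value, which is exactly the "reduced effective SNR" behaviour the abstract describes.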
On Low-Resolution ADCs in Practical 5G Millimeter-Wave Massive MIMO Systems
Millimeter-wave (mmWave) massive multiple-input multiple-output
(MIMO) systems are a favorable candidate for fifth-generation (5G) cellular
systems. However, a key challenge is the high power consumption imposed by their
numerous radio frequency (RF) chains, which may be mitigated by opting for
low-resolution analog-to-digital converters (ADCs), whilst tolerating a
moderate performance loss. In this article, we discuss several important issues
based on the most recent research on mmWave massive MIMO systems relying on
low-resolution ADCs. We discuss the key transceiver design challenges, including
channel estimation, signal detection, channel information feedback, and transmit
precoding. Furthermore, we introduce a mixed-ADC architecture as an alternative
technique of improving the overall system performance. Finally, the associated
challenges and potential implementations of the practical 5G mmWave massive
MIMO system with ADC quantizers are discussed.
Comment: to appear in IEEE Communications Magazine
Design of a digital compression technique for shuttle television
The performance and hardware complexity of data compression algorithms applicable to color television signals were studied to assess the feasibility of digital compression techniques for shuttle communications applications. For return-link communications, it is shown that a nonadaptive two-dimensional DPCM technique compresses the bandwidth of field-sequential color TV to about 13 Mbps and requires less than 60 watts of secondary power. For forward-link communications, a facsimile coding technique is recommended which provides high-resolution slow-scan television on a 144 kbps channel. The onboard decoder requires about 19 watts of secondary power.
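Classical facsimile coders exploit long runs of identical pixels along each binary scan line. The following is a minimal run-length sketch of that core idea (illustrative only; the report's actual recommended coder is not specified here):

```python
def rle_encode(line):
    """Run-length encode a binary scan line as (value, run) pairs,
    the idea underlying classical facsimile coding (simplified
    illustration, not the report's actual scheme)."""
    runs = []
    prev, count = line[0], 0
    for bit in line:
        if bit == prev:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = bit, 1
    runs.append((prev, count))
    return runs

def rle_decode(runs):
    """Expand (value, run) pairs back into the original scan line."""
    return [v for v, n in runs for _ in range(n)]

line = [0] * 12 + [1] * 3 + [0] * 9
runs = rle_encode(line)
assert rle_decode(runs) == line
print(runs)  # sparse document scans yield few, long runs
```

Document scans are mostly white, so a line compresses to a handful of (value, run) pairs, which is what makes slow-scan transmission over a narrow channel feasible.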
A vector quantization approach to universal noiseless coding and quantization
A two-stage code is a block code in which each block of data is coded in two stages: the first stage codes the identity of a block code among a collection of codes, and the second stage codes the data using the identified code. The collection of codes may be noiseless codes, fixed-rate quantizers, or variable-rate quantizers. We take a vector quantization approach to two-stage coding, in which the first stage code can be regarded as a vector quantizer that "quantizes" the input data of length n to one of a fixed collection of block codes. We apply the generalized Lloyd algorithm to the first-stage quantizer, using induced measures of rate and distortion, to design locally optimal two-stage codes. On a source of medical images, two-stage variable-rate vector quantizers designed in this way outperform standard (one-stage) fixed-rate vector quantizers by over 9 dB. The tail of the operational distortion-rate function of the first-stage quantizer determines the optimal rate of convergence of the redundancy of a universal sequence of two-stage codes. We show that there exist two-stage universal noiseless codes, fixed-rate quantizers, and variable-rate quantizers whose per-letter rate and distortion redundancies converge to zero as (k/2)n^{-1} log n, when the universe of sources has finite dimension k. This extends the achievability part of Rissanen's theorem from universal noiseless codes to universal quantizers. Further, we show that the redundancies converge as O(n^{-1}) when the universe of sources is countable, and as O(n^{-1+ε}) when the universe of sources is infinite-dimensional, under appropriate conditions.
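The generalized Lloyd iteration referred to above alternates a nearest-codeword partition with a centroid update. Below is a minimal sketch under plain MSE for a 2-bit scalar quantizer of a Gaussian source; the paper instead applies the same iteration with induced two-stage measures of rate and distortion, so this is an assumption-laden simplification.

```python
import numpy as np

def lloyd(data, k, iters=50, seed=0):
    """Generalized Lloyd algorithm for a fixed-rate quantizer:
    alternate nearest-codeword partitioning and centroid updates.
    Illustrative MSE version; the paper's design uses induced
    two-stage rate/distortion measures instead of plain MSE."""
    rng = np.random.default_rng(seed)
    codebook = rng.choice(data, size=k, replace=False)
    for _ in range(iters):
        # partition: assign each sample to its nearest codeword
        idx = np.argmin(np.abs(data[:, None] - codebook[None, :]), axis=1)
        # centroid: each codeword becomes the mean of its cell
        for j in range(k):
            cell = data[idx == j]
            if cell.size:
                codebook[j] = cell.mean()
    # recompute the partition for the final codebook before measuring distortion
    idx = np.argmin(np.abs(data[:, None] - codebook[None, :]), axis=1)
    distortion = np.mean((data - codebook[idx]) ** 2)
    return np.sort(codebook), distortion

rng = np.random.default_rng(1)
samples = rng.normal(size=50_000)
levels, mse = lloyd(samples, k=4)
print(levels, mse)  # converges toward the 2-bit Lloyd-Max quantizer for a Gaussian source
```

Each iteration can only decrease the average distortion, so the design converges to a locally optimal codebook, which is exactly the "locally optimal two-stage codes" guarantee the abstract states.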
Differential encoding techniques applied to speech signals
The increasing use of digital communication systems has
produced a continuous search for efficient methods of speech
encoding.
This thesis describes investigations of novel differential
encoding systems. Initially Linear First Order DPCM systems
employing a simple delayed encoding algorithm are examined.
The systems detect an overload condition in the encoder, and
through a simple algorithm reduce the overload noise at the
expense of some increase in the quantization (granular) noise.
The signal-to-noise ratio (snr) performance of such a codec has
a 1 to 2 dB advantage compared to the First Order Linear DPCM
system.
In order to obtain a large improvement in snr the high
correlation between successive pitch periods as well as the
correlation between successive samples in the voiced speech
waveform is exploited. A system called "Pitch Synchronous
First Order DPCM" (PSFOD) has been developed. Here the difference
sequence formed between the samples of the input sequence in the
current pitch period and the samples of the stored decoded
sequence from the previous pitch period is encoded. This
difference sequence has a smaller dynamic range than the original
input speech sequence enabling a quantizer with better resolution
to be used for the same transmission bit rate. The snr is increased
by 6 dB compared with the peak snr of a First Order DPCM codec.
A development of the PSFOD system called a Pitch Synchronous
Differential Predictive Encoding system (PSDPE) is next investigated.
The principle of its operation is to predict the next sample in
the voiced-speech waveform, and form the prediction error which
is then subtracted from the corresponding decoded prediction
error in the previous pitch period. The difference is then
encoded and transmitted. The improvement in snr is approximately
8 dB compared to an ADPCM codec, when the PSDPE system uses an
adaptive PCM encoder. The snr of the system increases further
when the efficiency of the predictors used improves. However,
the performance of a predictor in any differential system is
closely related to the quantizer used. The better the quantization
the more information is available to the predictor and the better
the prediction of the incoming speech samples. This leads
automatically to the investigation of techniques for efficient
quantization. A novel adaptive quantization technique called
Dynamic Ratio quantizer (DRQ) is then considered and its theory
presented. The quantizer uses an adaptive non-linear element
which transforms the input samples of any amplitude to samples
within a defined amplitude range. A fixed uniform quantizer
quantizes the transformed signal. The snr for this quantizer
is almost constant over a range of input power limited in practice
by the dynamic range of the adaptive non-linear element, and it
is 2 to 3 dB better than the snr of a One Word Memory adaptive
quantizer.
Digital computer simulation techniques have been used widely
in the above investigations and provide the necessary experimental
flexibility. Their use is described in the text.
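The first-order DPCM loop that the thesis builds on can be sketched as follows. The predictor coefficient, quantizer step size, and AR(1) test signal below are illustrative assumptions, not the thesis's actual speech material or codec parameters.

```python
import numpy as np

def dpcm(signal, step, a=0.9):
    """First-order DPCM: quantize the prediction error
    e[n] = x[n] - a * xhat[n-1] with a uniform quantizer of the
    given step size, and track the decoder's reconstruction inside
    the encoder (illustrative sketch, hypothetical parameters)."""
    recon = np.zeros_like(signal)
    pred = 0.0
    for n, x in enumerate(signal):
        e = x - pred
        eq = step * np.round(e / step)  # uniform quantizer (granular noise only)
        recon[n] = pred + eq            # local decoder output
        pred = a * recon[n]             # first-order linear predictor
    return recon

rng = np.random.default_rng(0)
# AR(1) test signal standing in for a correlated, speech-like waveform
x = np.zeros(20_000)
for n in range(1, x.size):
    x[n] = 0.95 * x[n - 1] + rng.normal(scale=0.3)
y = dpcm(x, step=0.25)
snr_db = 10 * np.log10(np.mean(x**2) / np.mean((x - y) ** 2))
print(f"DPCM SNR ~ {snr_db:.1f} dB")
```

Because the encoder predicts from the reconstructed signal, the reconstruction error at each sample is just the quantization error of the (small-dynamic-range) prediction error, which is why differencing against a good prediction buys snr at a fixed bit rate.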
Distributed Beamforming in Wireless Multiuser Relay-Interference Networks with Quantized Feedback
We study quantized beamforming in wireless amplify-and-forward
relay-interference networks with any number of transmitters, relays, and
receivers. We design the quantizer of the channel state information to minimize
the probability that at least one receiver incorrectly decodes its desired
symbol(s). Correspondingly, we introduce a generalized diversity measure that
encapsulates the conventional one as the first-order diversity. Additionally,
it incorporates the second-order diversity, which is concerned with the
transmitter-power-dependent logarithmic terms that appear in the error rate
expression. First, we show that, regardless of the quantizer and the amount of
feedback that is used, the relay-interference network suffers a second-order
diversity loss compared to interference-free networks. Then, two different
quantization schemes are studied: First, using a global quantizer, we show that
a simple relay selection scheme can achieve maximal diversity. Then, using the
localization method, we construct both fixed-length and variable-length local
(distributed) quantizers (fLQs and vLQs). Our fLQs achieve maximal first-order
diversity, whereas our vLQs achieve maximal diversity. Moreover, we show that
all the promised diversity and array gains can be obtained with arbitrarily low
feedback rates when the transmitter powers are sufficiently large. Finally, we
confirm our analytical findings through simulations.
Comment: 41 pages, 14 figures, submitted to IEEE Transactions on Information Theory, July 2010. This work was presented in part at IEEE Global Communications Conference (GLOBECOM), Nov. 200
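As a toy illustration of the global-quantizer relay-selection idea (not the paper's actual scheme or analysis), one can feed back only the index of the relay whose bottleneck hop gain is largest, at a cost of ceil(log2 N) feedback bits for N relays:

```python
import math

def select_relay(g_first_hop, g_second_hop):
    """Pick the relay maximizing the bottleneck (minimum of its two
    hop gains); a hypothetical stand-in for the paper's global
    quantizer plus relay-selection scheme."""
    scores = [min(a, b) for a, b in zip(g_first_hop, g_second_hop)]
    return max(range(len(scores)), key=scores.__getitem__)

# hypothetical channel power gains for 4 candidate relays
g1 = [0.3, 1.2, 0.8, 2.0]  # transmitter -> relay
g2 = [1.5, 0.4, 0.9, 0.1]  # relay -> receiver
best = select_relay(g1, g2)
bits = math.ceil(math.log2(len(g1)))  # feedback cost: just the relay index
print(best, bits)  # relay 2 has the largest bottleneck gain; 2 feedback bits
```

The feedback here is a few bits regardless of channel quality, which echoes the abstract's point that the promised diversity and array gains can be approached with arbitrarily low feedback rates at high transmit power.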
Finite state machine representation of digital signal processing systems
A new method for implementing digital filters is discussed. The method maximises the output signal-to-noise ratio of a filter by assigning to each of the filter variables an optimal quantization law. A filter optimised for a Gaussian process is considered in detail. An error model is developed and applied to first and second order canonic form filter sections. Comparisons are drawn between the Gaussian optimised filter and the equivalent fixed point arithmetic filter. The performance of Gaussian optimised filters under sinusoidal input signal conditions is considered; it is found that the Gaussian optimised filter exhibits a lower approximation error than the equivalent fixed point arithmetic filter. It is shown that when high order filters are implemented as a cascade of second order sections - with, if necessary, one first order section - the section ordering has a very small effect on the overall signal-to-noise ratio performance. A similar result for the pairing of poles and zeroes is found. Bounds on the maximum limit cycle amplitude for first and second order all-pole sections are presented. It is shown that for a first order all-pole section the maximum limit cycle amplitude is lower than would be expected in the equivalent fixed point arithmetic filter, whereas, for the second order all-pole section, the bound is twice as large. Examples of a low-pass, band-pass and wideband differentiating filter, designed using free quantization law techniques, are presented. This new design method leads to a filter whose arithmetic operations cannot be performed using fixed point arithmetic hardware. Instead, the filter must be represented as a finite state machine and then implemented using sequential logic circuit synthesis techniques. The logic complexity is found to depend - amongst other considerations - on the so-called state (code) assignment.
Some preliminary results on this problem are presented for the case of a next-state function computed using the AND/EXCLUSIVE-OR (ring-sum) logic expansion. A review of the state assignment techniques in the literature is included. A part of the state assignment problem - for the case of AND/EXCLUSIVE-OR logic - requires the numerous, and consequently rapid, computation of the Reed-Muller transformation. A hardware processor - designed as an add-on to a minicomputer - is described; speed comparisons are drawn with the equivalent software algorithm.
Digitisation of this thesis was sponsored by Arcadia Fund, a charitable fund of Lisbet Rausing and Peter Baldwin.
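The Reed-Muller transformation mentioned above maps a Boolean function's truth table to the coefficients of its AND/EXCLUSIVE-OR (ring-sum) expansion, and in software it can be computed with an in-place XOR butterfly. The sketch below illustrates the transform itself, not the thesis's hardware processor.

```python
def reed_muller_transform(truth_table):
    """In-place XOR butterfly mapping a Boolean function's truth table
    (length 2^m) to its Reed-Muller / AND-XOR expansion coefficients.
    The transform is an involution: applying it twice restores the input."""
    t = list(truth_table)
    n = len(t)
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                t[j + h] ^= t[j]  # XOR butterfly stage, GF(2) analogue of an FFT stage
        h *= 2
    return t

# Example: f(x2, x1, x0) = x0 XOR (x1 AND x2), truth table indexed by (x2 x1 x0)
tt = [x0 ^ (x1 & x2) for x2 in (0, 1) for x1 in (0, 1) for x0 in (0, 1)]
coeffs = reed_muller_transform(tt)
assert reed_muller_transform(coeffs) == tt  # involution check
print(coeffs)  # nonzero coefficients at indices 1 (x0) and 6 (x1*x2)
```

Each coefficient index selects the monomial whose variables correspond to its set bits, so the output above reads off the expansion f = x0 XOR x1·x2 directly; the butterfly's n·log n XOR count is what made a dedicated hardware processor attractive for the repeated evaluations the state-assignment search requires.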