Low ML-Decoding Complexity, Large Coding Gain, Full-Rate, Full-Diversity STBCs for 2 X 2 and 4 X 2 MIMO Systems
This paper (Part of the content of this manuscript has been accepted for
presentation in IEEE Globecom 2008, to be held in New Orleans) deals with low
maximum likelihood (ML) decoding complexity, full-rate and full-diversity
space-time block codes (STBCs), which also offer large coding gain, for the 2
transmit antenna, 2 receive antenna (2 × 2) and the 4 transmit antenna, 2
receive antenna (4 × 2) MIMO systems. Presently, the best known STBC for
the 2 × 2 system is the Golden code and that for the 4 × 2 system is
the DjABBA code. Following the approach of Biglieri, Hong and Viterbo, a new
STBC is presented in this paper for the 2 × 2 system. This code matches
the Golden code in performance and ML-decoding complexity for square QAM
constellations while it has lower ML-decoding complexity with the same
performance for non-rectangular QAM constellations. This code is also shown to
be \emph{information-lossless} and \emph{diversity-multiplexing gain} (DMG)
tradeoff optimal. This design procedure is then extended to the 4 × 2
system, and a code that outperforms the DjABBA code for QAM constellations
with lower ML-decoding complexity is presented. So far, the Golden code has
been reported to have an ML-decoding complexity of the order of M^4 for
square QAM of size M. In this paper, a scheme that reduces its ML-decoding
complexity to M^2√M is presented.
Comment: 28 pages, 5 figures, 3 tables; submitted to the IEEE Journal of Selected Topics in Signal Processing.
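The complexity scaling at issue can be made concrete with a toy brute-force ML decoder. This is an illustrative sketch only, not code from the paper: it uses the rate-1 Alamouti code rather than the Golden code, and a fixed noiseless channel. The point is that exhaustive ML search visits every candidate symbol tuple, so its cost grows as a power of the constellation size M (a rate-2 2 × 2 code carrying 4 symbols would cost M^4 metric evaluations when searched naively).

```python
import itertools

def qam(M):
    # square M-QAM: sqrt(M) amplitude levels per dimension (M a perfect square)
    m = int(M ** 0.5)
    levels = [2 * i - (m - 1) for i in range(m)]  # e.g. [-1, 1] for 4-QAM
    return [complex(a, b) for a in levels for b in levels]

def alamouti(s1, s2):
    # 2x2 Alamouti codeword: rows are time slots, columns are antennas
    return [[s1, s2], [-s2.conjugate(), s1.conjugate()]]

def ml_decode(Y, H, M):
    """Brute-force ML: try every pair (s1, s2) in the M-QAM alphabet.
    The search visits M**2 candidates; a code carrying 4 symbols would
    need M**4 evaluations if searched the same naive way."""
    best, best_pair, visited = float("inf"), None, 0
    for s1, s2 in itertools.product(qam(M), repeat=2):
        X = alamouti(s1, s2)
        # Frobenius distance || Y - X H ||^2
        d = 0.0
        for t in range(2):
            for r in range(2):
                y_hat = sum(X[t][k] * H[k][r] for k in range(2))
                d += abs(Y[t][r] - y_hat) ** 2
        visited += 1
        if d < best:
            best, best_pair = d, (s1, s2)
    return best_pair, visited

# Noiseless toy run over a fixed (arbitrarily chosen) channel
H = [[0.8 + 0.1j, 0.3 - 0.2j], [0.1 + 0.5j, 0.9 + 0.0j]]
tx = (1 + 1j, -1 + 1j)
X = alamouti(*tx)
Y = [[sum(X[t][k] * H[k][r] for k in range(2)) for r in range(2)] for t in range(2)]
pair, visited = ml_decode(Y, H, 4)
print(pair == tx, visited)  # -> True 16  (4-QAM: 4**2 candidates)
```

Schemes with "low ML-decoding complexity" in the sense of these abstracts shrink this search space, e.g. by letting some symbols be decided independently or conditionally.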
Generalized Silver Codes
For an n_t transmit, n_r receive antenna system (n_t × n_r
system), a {\it{full-rate}} space-time block code (STBC) transmits min(n_t, n_r) complex symbols per channel use. The well-known Golden code is an
example of a full-rate, full-diversity STBC for 2 transmit antennas. Its
ML-decoding complexity is of the order of M^2.5 for square M-QAM. The
Silver code for 2 transmit antennas has all the desirable properties of the
Golden code except its coding gain, but offers lower ML-decoding complexity of
the order of M^2. Importantly, the slight loss in coding gain is negligible
compared to the advantage it offers in terms of lowering the ML-decoding
complexity. For a larger number of transmit antennas, the best known codes are
the Perfect codes, which are full-rate, full-diversity, information-lossless
codes (for n_r ≥ n_t) but have a high ML-decoding complexity (for n_r < n_t, the punctured Perfect codes are
considered). In this paper, a scheme to obtain full-rate STBCs for 2^a
transmit antennas and any n_r with reduced ML-decoding complexity is presented. The codes constructed are
also information lossless for n_r ≥ n_t, like the Perfect codes, and allow
higher mutual information than the comparable punctured Perfect codes for n_r < n_t. These codes are referred to as the {\it generalized Silver codes},
since they enjoy the same desirable properties as the comparable Perfect codes
(except possibly the coding gain) with lower ML-decoding complexity, analogous
to the Silver-Golden codes for 2 transmit antennas. Simulation results of the
symbol error rates for 4 and 8 transmit antennas show that the generalized
Silver codes match the punctured Perfect codes in error performance while
offering lower ML-decoding complexity.
Comment: Accepted for publication in the IEEE Transactions on Information Theory. This revised version has 30 pages, 7 figures, and Section III has been completely revised.
A Low-Complexity, Full-Rate, Full-Diversity 2 X 2 STBC with Golden Code's Coding Gain
This paper presents a low-ML-decoding-complexity, full-rate, full-diversity
space-time block code (STBC) for a 2 transmit antenna, 2 receive antenna
multiple-input multiple-output (MIMO) system, with coding gain equal to that of
the best and well known Golden code for any QAM constellation. Recently, two
codes have been proposed (by Paredes, Gershman and Alkhansari and by Sezginer
and Sari), which enjoy a lower decoding complexity relative to the Golden code,
but have a smaller coding gain. The STBC presented in this paper has
lower decoding complexity for non-square QAM constellations, compared with
that of the Golden code, while having the same decoding complexity for square
QAM constellations. Compared with the Paredes-Gershman-Alkhansari and
Sezginer-Sari codes, the proposed code has the same decoding complexity for
non-rectangular QAM constellations. Simulation results, which compare the
codeword error rate (CER) performance, are presented.
Comment: Submitted to IEEE Globecom 2008. 6 pages, 3 figures, 1 table.
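For intuition on what low-complexity decoding buys, the classical Alamouti COD shows the extreme case: joint ML decoding decouples into independent per-symbol decisions, so the search costs 2M candidate checks instead of M^2. A toy sketch (textbook Alamouti combining with one receive antenna, noiseless for clarity; not the code proposed in this paper):

```python
# Single-symbol decodability illustrated on the 2-antenna Alamouti code.
def qam(M):
    m = int(M ** 0.5)
    levels = [2 * i - (m - 1) for i in range(m)]
    return [complex(a, b) for a in levels for b in levels]

def alamouti_decode(y1, y2, h1, h2, M):
    """Linear combining turns joint ML into two independent symbol
    decisions: 2*M candidate checks instead of M**2."""
    g = abs(h1) ** 2 + abs(h2) ** 2
    s1_tilde = (h1.conjugate() * y1 + h2 * y2.conjugate()) / g
    s2_tilde = (h2.conjugate() * y1 - h1 * y2.conjugate()) / g
    pick = lambda z: min(qam(M), key=lambda s: abs(z - s))
    return pick(s1_tilde), pick(s2_tilde)

h1, h2 = 0.7 + 0.4j, -0.2 + 0.9j                 # arbitrary fixed channel
s1, s2 = 1 + 1j, -1 - 1j                         # 4-QAM symbols
y1 = h1 * s1 + h2 * s2                           # slot 1: antennas send (s1, s2)
y2 = -h1 * s2.conjugate() + h2 * s1.conjugate()  # slot 2: (-s2*, s1*)
print(alamouti_decode(y1, y2, h1, h2, 4))        # recovers (s1, s2)
```

The Golden code and the code of this paper trade away this full decoupling for full rate; their lower-complexity decoding instead conditions on a subset of symbols rather than searching all of them jointly.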
Maximum Rate of Unitary-Weight, Single-Symbol Decodable STBCs
It is well known that space-time block codes (STBCs) from complex
orthogonal designs (CODs) are single-symbol decodable/symbol-by-symbol
decodable (SSD). The weight matrices of the square CODs are all unitary and
obtainable from the unitary matrix representations of Clifford Algebras when
the number of transmit antennas is a power of 2. The rate of the square
CODs for 2^a antennas has been shown to be (a+1)/2^a complex symbols per
channel use. However, SSD codes having unitary-weight matrices need not be
CODs, an example being the Minimum-Decoding-Complexity STBCs from
Quasi-Orthogonal Designs. In this paper, an achievable upper bound on the rate
of any unitary-weight SSD code is derived to be a/2^(a-1) complex
symbols per channel use for 2^a antennas, and this upper bound is larger than
that of the CODs. By way of code construction, the interrelationship between
the weight matrices of unitary-weight SSD codes is studied. Also, the coding
gain of all unitary-weight SSD codes is proved to be the same for QAM
constellations and conditions that are necessary for unitary-weight SSD codes
to achieve full transmit diversity and optimum coding gain are presented.
Comment: Accepted for publication in the IEEE Transactions on Information Theory. 9 pages, 1 figure, 1 table.
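The defining property of unitary-weight SSD codes can be checked numerically on the simplest example. The sketch below (illustrative, not code from the paper) verifies that the four weight matrices of the 2-antenna Alamouti COD are unitary and that every pair satisfies the SSD condition A_i^H A_j + A_j^H A_i = 0 for i ≠ j.

```python
# Weight matrices of the 2-antenna Alamouti COD, viewed as a linear
# dispersion code X = sum_i x_i * A_i over four real symbols x_i.
# 2x2 matrices kept as nested lists to stay stdlib-only.
J = 1j
A = [
    [[1, 0], [0, 1]],    # multiplies x1 = Re(s1)
    [[J, 0], [0, -J]],   # multiplies x2 = Im(s1)
    [[0, 1], [-1, 0]],   # multiplies x3 = Re(s2)
    [[0, J], [J, 0]],    # multiplies x4 = Im(s2)
]

def herm(M):  # conjugate transpose
    return [[complex(M[c][r]).conjugate() for c in range(2)] for r in range(2)]

def mul(P, Q):
    return [[sum(P[r][k] * Q[k][c] for k in range(2)) for c in range(2)]
            for r in range(2)]

def add(P, Q):
    return [[P[r][c] + Q[r][c] for c in range(2)] for r in range(2)]

def close(P, Q, tol=1e-12):
    return all(abs(P[r][c] - Q[r][c]) < tol for r in range(2) for c in range(2))

I2 = [[1, 0], [0, 1]]
Z2 = [[0, 0], [0, 0]]

# Unitarity: A_i^H A_i = I for every weight matrix
unitary = all(close(mul(herm(Ai), Ai), I2) for Ai in A)
# SSD condition: A_i^H A_j + A_j^H A_i = 0 for all i != j
ssd = all(close(add(mul(herm(A[i]), A[j]), mul(herm(A[j]), A[i])), Z2)
          for i in range(4) for j in range(i + 1, 4))
print(unitary, ssd)  # -> True True
```

The paper's contribution is to characterize how much rate such mutually "anti-commuting" unitary weight matrices can support beyond the COD case.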
Empirical Bayes Estimators for High-Dimensional Sparse Vectors
The problem of estimating a high-dimensional sparse vector θ ∈ ℝ^n from an observation in i.i.d. Gaussian noise is considered. The performance is measured using squared-error loss. An empirical Bayes shrinkage estimator, derived using a Bernoulli-Gaussian prior, is analyzed and compared with the well-known soft-thresholding estimator. We obtain concentration inequalities for Stein's unbiased risk estimate and the loss function of both estimators. The results show that for large n, both the risk estimate and the loss function concentrate on deterministic values close to the true risk.
Depending on the underlying θ, either the proposed empirical Bayes (eBayes) estimator or soft-thresholding may have smaller loss. We consider a hybrid estimator that attempts to pick the better of the soft-thresholding estimator and the eBayes estimator by comparing their risk estimates. It is shown that: i) the loss of the hybrid estimator concentrates on the minimum of the losses of the two competing estimators, and ii) the risk of the hybrid estimator is within order 1/√n of the minimum of the two risks. Simulation results are provided to support the theoretical results. Finally, we use the eBayes and hybrid estimators as denoisers in the approximate message passing (AMP) algorithm for compressed sensing, and show that their performance is superior to the soft-thresholding denoiser in a wide range of settings.
This work was supported in part by a Marie Curie Career Integration Grant (Grant Agreement Number 631489), an Isaac Newton Trust Research Grant, and EPSRC Grant EP/N013999/1.
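Both estimators in this abstract, and the role of the risk estimate, are easy to sketch. The following is illustrative only: the prior parameters (alpha, tau2) are fixed by hand rather than fitted empirically as in the paper, and the threshold is a common heuristic, not the paper's choice. It shows the soft-thresholding estimator, its Stein's unbiased risk estimate (SURE, the observable quantity the hybrid estimator compares), and a Bernoulli-Gaussian posterior-mean shrinker.

```python
import math, random

def soft(y, lam):
    # componentwise soft-thresholding: shrink toward 0 by lam, clip at 0
    return [math.copysign(max(abs(t) - lam, 0.0), t) for t in y]

def sure_soft(y, lam, sigma2=1.0):
    """Stein's unbiased risk estimate for soft-thresholding: an observable
    proxy for the loss ||theta_hat - theta||^2 (no access to theta needed)."""
    n = len(y)
    return (sum(min(t * t, lam * lam) for t in y)
            + 2.0 * sigma2 * sum(abs(t) > lam for t in y) - n * sigma2)

def ebayes(y, alpha, tau2, sigma2=1.0):
    """Posterior-mean shrinker under the Bernoulli-Gaussian prior
    theta_i ~ (1 - alpha) * delta_0 + alpha * N(0, tau2); here (alpha, tau2)
    are hand-picked, whereas the paper estimates them from the data."""
    out = []
    for t in y:
        v_spike, v_null = sigma2 + tau2, sigma2
        num = alpha * math.exp(-t * t / (2 * v_spike)) / math.sqrt(v_spike)
        den = num + (1 - alpha) * math.exp(-t * t / (2 * v_null)) / math.sqrt(v_null)
        out.append((num / den) * (tau2 / (sigma2 + tau2)) * t)
    return out

random.seed(0)
n, k = 100_000, 1_000                 # sparse: k nonzeros out of n
theta = [5.0] * k + [0.0] * (n - k)
y = [t + random.gauss(0, 1) for t in theta]

lam = math.sqrt(2 * math.log(n / k))  # a common threshold heuristic
est = soft(y, lam)
loss = sum((e - t) ** 2 for e, t in zip(est, theta))
sure = sure_soft(y, lam)
print(abs(sure - loss) / n)           # small: SURE concentrates on the loss

eb = ebayes(y, alpha=k / n, tau2=25.0)
eb_loss = sum((e - t) ** 2 for e, t in zip(eb, theta))
print(eb_loss < loss)                 # -> True here: the prior matches theta
```

When the assumed prior matches the truth, the eBayes shrinker wins; for other θ, soft-thresholding can win, which is exactly why the hybrid estimator compares the two risk estimates.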
Cluster-Seeking James-Stein Estimators
This paper considers the problem of estimating a high-dimensional vector of
parameters θ ∈ ℝ^n from a noisy observation. The
noise vector is i.i.d. Gaussian with known variance. For a squared-error loss
function, the James-Stein (JS) estimator is known to dominate the simple
maximum-likelihood (ML) estimator when the dimension exceeds two. The
JS-estimator shrinks the observed vector towards the origin, and the risk
reduction over the ML-estimator is greatest for θ that lie
close to the origin. JS-estimators can be generalized to shrink the data
towards any target subspace. Such estimators also dominate the ML-estimator,
but the risk reduction is significant only when θ lies
close to the subspace. This leads to the question: in the absence of prior
information about θ, how do we design estimators that give
significant risk reduction over the ML-estimator for a wide range of
θ?
In this paper, we propose shrinkage estimators that attempt to infer the
structure of θ from the observed data in order to construct
a good attracting subspace. In particular, the components of the observed
vector are separated into clusters, and the elements in each cluster shrunk
towards a common attractor. The number of clusters and the attractor for each
cluster are determined from the observed vector. We provide concentration
results for the squared-error loss and convergence results for the risk of the
proposed estimators. The results show that the estimators give significant risk
reduction over the ML-estimator for a wide range of θ,
particularly for large n. Simulation results are provided to support the
theoretical claims.
Funding: Marie Curie Career Integration Grant; Early Career Grant from the Isaac Newton Trust.
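The subspace-attracting behavior described above can be sketched with the textbook positive-part James-Stein estimator that shrinks toward the subspace of constant vectors (the grand-mean target). This is a single-attractor illustration, not the paper's cluster-seeking estimator, which picks several attractors from the data; when θ lies on the target subspace, the risk reduction over the ML-estimator is dramatic.

```python
import random

def js_toward_mean(y, sigma2=1.0):
    """Positive-part James-Stein estimator shrinking toward the subspace of
    constant vectors; textbook form with shrinkage factor (n - 3) * sigma2
    over the residual sum of squares around the grand mean."""
    n = len(y)
    ybar = sum(y) / n
    ss = sum((t - ybar) ** 2 for t in y)
    shrink = max(0.0, 1.0 - (n - 3) * sigma2 / ss)
    return [ybar + shrink * (t - ybar) for t in y]

random.seed(1)
n = 10_000
theta = [7.0] * n                       # theta lies exactly on the subspace
y = [t + random.gauss(0, 1) for t in theta]

ml_loss = sum((yi - t) ** 2 for yi, t in zip(y, theta))   # about n
js = js_toward_mean(y)
js_loss = sum((e - t) ** 2 for e, t in zip(js, theta))
print(js_loss < ml_loss)  # -> True: near-total risk reduction on-subspace
```

Moving θ away from the constant subspace makes `shrink` approach 1 and the gain vanish, which is the motivation for choosing the attracting subspace, or several cluster attractors, from the data itself.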
Empirical Bayes Estimators for Sparse Sequences.
The problem of estimating a high-dimensional sparse vector θ ∈ ℝ^n from an observation in i.i.d. Gaussian noise is considered. An empirical Bayes shrinkage estimator, derived using a Bernoulli-Gaussian prior, is analyzed and compared with the well-known soft-thresholding estimator using squared-error loss as a measure of performance. We obtain concentration inequalities for Stein's unbiased risk estimate and the loss function of both estimators. Depending on the underlying θ, either the proposed empirical Bayes (eBayes) estimator or soft-thresholding may have smaller loss. We consider a hybrid estimator that attempts to pick the better of the soft-thresholding estimator and the eBayes estimator by comparing their risk estimates. It is shown that: i) the loss of the hybrid estimator concentrates on the minimum of the losses of the two competing estimators, and ii) the risk of the hybrid estimator is within order 1/√n of the minimum of the two risks. Simulation results are provided to support the theoretical results.