Compressive sensing based Bayesian sparse channel estimation for OFDM communication systems: high performance and low complexity
In orthogonal frequency division multiplexing (OFDM) communication systems,
channel state information (CSI) is required at the receiver because the
frequency-selective fading channel causes severe inter-symbol interference
(ISI) during data transmission. A broadband channel is often well described by
only a few dominant channel taps, which can be probed by compressive sensing
based sparse channel estimation (SCE) methods, e.g., the orthogonal matching
pursuit (OMP) algorithm, which exploit the sparse structure of the channel as
prior information. However, these methods are vulnerable both to noise
interference and to column coherence in the training signal matrix. In other
words, the primary objective of these conventional methods is to capture the
dominant channel taps without reporting the posterior channel uncertainty. To
improve estimation performance, we propose a compressive sensing based Bayesian
sparse channel estimation (BSCE) method which not only exploits the channel
sparsity but also mitigates the unexpected channel uncertainty without
sacrificing computational complexity. The proposed method can reveal potential
ambiguity among multiple channel estimators that arises from observation noise
or correlation among the columns of the training matrix. Computer simulations
show that the proposed method improves estimation performance compared with
conventional SCE methods.
Comment: 24 pages, 16 figures, submitted for a journal
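As a toy illustration of the greedy baseline this abstract contrasts against, here is a minimal orthogonal matching pursuit sketch (real-valued for brevity; an actual channel estimator would use complex taps and A.conj().T; the function name and dimensions are illustrative, not the authors' code):

```python
import numpy as np

def omp(A, y, k):
    """Recover a k-sparse vector x from y ~= A @ x by greedy support selection."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        # pick the column most correlated with the current residual
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        # least-squares refit on the selected support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x
```

The greedy selection is exactly why such methods degrade when the training-matrix columns are coherent: a column correlated with a true tap's column can be picked instead, and no measure of that ambiguity is reported.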
Calculation of the Performance of Communication Systems from Measured Oscillator Phase Noise
Oscillator phase noise (PN) is one of the major problems that affect the
performance of communication systems. In this paper, a direct connection
between oscillator measurements, in terms of measured single-side band PN
spectrum, and the optimal communication system performance, in terms of the
resulting error vector magnitude (EVM) due to PN, is mathematically derived and
analyzed. First, a statistical model of the PN, considering the effect of white
and colored noise sources, is derived. Then, we utilize this model to derive
the modified Bayesian Cramer-Rao bound on PN estimation, and use it to find an
EVM bound for the system performance. Based on our analysis, it is found that
the influence from different noise regions strongly depends on the
communication bandwidth, i.e., the symbol rate. For high-symbol-rate
communication systems, cumulative PN appearing near the carrier is of
relatively low importance compared to the white PN far from the carrier. Our
results also show that 1/f^3 noise is more predictable than 1/f^2 noise and,
in a fair comparison, affects the performance less.
Comment: Accepted in IEEE Transactions on Circuits and Systems I: Regular Papers
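The paper's EVM bound is analytical; as a rough numerical companion, here is a Monte-Carlo sketch of raw EVM for QPSK under Wiener (cumulative) phase noise with no PN tracking applied. All parameter values are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def evm_under_phase_noise(n_sym=10_000, pn_var=1e-3, snr_db=30.0):
    """Root-mean-square EVM (percent) of unit-energy QPSK over AWGN plus
    Wiener phase noise; pn_var is the per-symbol phase-increment variance
    (rad^2). Without tracking, the cumulative PN dominates quickly."""
    syms = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, n_sym)))
    phase = np.cumsum(rng.normal(0.0, np.sqrt(pn_var), n_sym))  # Wiener PN
    noise = np.sqrt(10 ** (-snr_db / 10) / 2) * (
        rng.normal(size=n_sym) + 1j * rng.normal(size=n_sym))
    rx = syms * np.exp(1j * phase) + noise
    return 100 * np.sqrt(np.mean(np.abs(rx - syms) ** 2))
```

With pn_var = 0 the EVM reduces to the AWGN-only value of roughly 100 * 10^(-snr_db/20) percent; any nonzero Wiener PN raises it, which is the untracked worst case the paper's estimation-theoretic bound improves on.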
Modulation Classification for MIMO-OFDM Signals via Approximate Bayesian Inference
The problem of modulation classification for a multiple-antenna (MIMO) system
employing orthogonal frequency division multiplexing (OFDM) is investigated
under the assumption of unknown frequency-selective fading channels and
signal-to-noise ratio (SNR). The classification problem is formulated as a
Bayesian inference task, and solutions are proposed based on Gibbs sampling and
mean field variational inference. The proposed methods rely on a selection of
the prior distributions that adopts a latent Dirichlet model for the modulation
type and on the Bayesian network formalism. The Gibbs sampling method converges
to the optimal Bayesian solution and, using numerical results, its accuracy is
seen to improve for small sample sizes when switching to the mean field
variational inference technique after a number of iterations. The speed of
convergence is shown to improve via annealing and random restarts. While most
of the literature on modulation classification assumes that the channels are
flat fading, that the number of receive antennas is no less than the number of
transmit antennas, and that a large number of observed data symbols is
available, the proposed methods perform well under more general conditions.
Finally, the proposed Bayesian methods are demonstrated to improve over
existing non-Bayesian approaches based on independent component analysis and
over prior Bayesian methods based on the 'superconstellation' method.
Comment: To appear in IEEE Trans. Veh. Technology
Classical and Bayesian Linear Data Estimators for Unique Word OFDM
Unique word - orthogonal frequency division multiplexing (UW-OFDM) is a novel
OFDM signaling concept, where the guard interval is built of a deterministic
sequence - the so-called unique word - instead of the conventional random
cyclic prefix. In contrast to previous attempts with deterministic sequences in
the guard interval, the addressed UW-OFDM signaling approach introduces
correlations between the subcarrier symbols, which can be exploited by the
receiver in order to improve the bit error ratio performance. In this paper we
develop several linear data estimators specifically designed for UW-OFDM, some
based on classical and some based on Bayesian estimation theory. Furthermore,
we derive complexity optimized versions of these estimators, and we study their
individual complex multiplication count in detail. Finally, we evaluate the
estimators' performance for the additive white Gaussian noise channel as well
as for selected indoor multipath channel scenarios.
Comment: Preprint, 13 pages
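The estimator matrices in the paper are specific to the UW-OFDM system model, but the classical-versus-Bayesian distinction the abstract draws can be sketched generically: zero-forcing treats the data vector as deterministic, while LMMSE exploits its prior covariance. The dimensions and names below are illustrative:

```python
import numpy as np

def zf_estimate(H, y):
    """Classical least-squares / zero-forcing: no prior on the data vector."""
    return np.linalg.lstsq(H, y, rcond=None)[0]

def lmmse_estimate(H, y, sigma2, Cdd):
    """Bayesian LMMSE: exploits the prior covariance Cdd of the data vector."""
    m = H.shape[0]
    G = Cdd @ H.conj().T @ np.linalg.inv(H @ Cdd @ H.conj().T + sigma2 * np.eye(m))
    return G @ y
```

When the prior matches the true data statistics, the LMMSE estimator has lower average mean-square error than zero-forcing, at the cost of needing Cdd and the noise variance; in UW-OFDM it is precisely the correlations between subcarrier symbols that make this prior informative.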
An Overview on Application of Machine Learning Techniques in Optical Networks
Today's telecommunication networks have become sources of enormous amounts of
widely heterogeneous data. This information can be retrieved from network
traffic traces, network alarms, signal quality indicators, users' behavioral
data, etc. Advanced mathematical tools are required to extract meaningful
information from these data and to make decisions about the proper
functioning of the networks. Among these
mathematical tools, Machine Learning (ML) is regarded as one of the most
promising methodological approaches to perform network-data analysis and enable
automated network self-configuration and fault management. The adoption of ML
techniques in the field of optical communication networks is motivated by the
unprecedented growth of network complexity faced by optical networks in the
last few years. Such complexity increase is due to the introduction of a huge
number of adjustable and interdependent system parameters (e.g., routing
configurations, modulation format, symbol rate, coding schemes, etc.) that are
enabled by the usage of coherent transmission/reception technologies, advanced
digital signal processing and compensation of nonlinear effects in optical
fiber propagation. In this paper we provide an overview of the application of
ML to optical communications and networking. We classify and survey relevant
literature dealing with the topic, and we also provide an introductory tutorial
on ML for researchers and practitioners interested in this field. Although a
good number of research papers have recently appeared, the application of ML to
optical networks is still in its infancy: to stimulate further work in this
area, we conclude the paper by proposing possible new research directions.