Beamforming in MISO Systems: Empirical Results and EVM-based Analysis
We present an analytical, simulation-based, and experimental study of
beamforming Multiple Input Single Output (MISO) systems. We analyze the
performance of beamforming MISO systems, taking into account implementation
complexity and the effects of imperfect channel estimates, delayed feedback,
real Radio Frequency (RF) hardware, and imperfect timing synchronization. Our
results show that codebook-based beamforming MISO systems can be implemented
efficiently, with good performance, in the presence of channel- and
implementation-induced imperfections. As part of our study, we develop a
framework for Average Error Vector Magnitude Squared (AEVMS)-based analysis of
beamforming MISO systems, which facilitates comparison of analytical,
simulation, and experimental results on the same scale. In addition, AEVMS
allows fair comparison of experimental results obtained from different wireless
testbeds. We derive novel expressions for the AEVMS of beamforming MISO systems
and show how the AEVMS relates to important system characteristics like the
diversity gain, coding gain, and error floor.
Comment: Submitted to IEEE Transactions on Wireless Communications, November 200
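To make the AEVMS metric concrete, here is a minimal sketch of an EVM-squared
computation for a toy beamformed MISO link. It assumes the common definition of
EVM (error power normalized by average constellation power) and a
maximum-ratio-transmission (MRT) beamformer; the paper's exact AEVMS
expressions and codebook are not reproduced here, and all parameter values are
illustrative.

```python
import numpy as np

def aevms(tx_symbols, rx_symbols):
    """Average Error Vector Magnitude Squared: mean squared symbol error
    normalized by the average transmit-constellation power (assumed definition)."""
    err = rx_symbols - tx_symbols
    return np.mean(np.abs(err) ** 2) / np.mean(np.abs(tx_symbols) ** 2)

# Toy 2x1 MISO link with MRT beamforming over a fixed Rayleigh channel draw.
rng = np.random.default_rng(0)
n_sym, n_tx, snr_db = 10_000, 2, 15
bits = rng.integers(0, 2, (n_sym, 2))
s = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)  # unit-power QPSK
h = (rng.standard_normal(n_tx) + 1j * rng.standard_normal(n_tx)) / np.sqrt(2)
w = h.conj() / np.linalg.norm(h)      # MRT beamforming weights
g = h @ w                             # effective scalar channel gain (= ||h||)
noise = np.sqrt(10 ** (-snr_db / 10) / 2) * (
    rng.standard_normal(n_sym) + 1j * rng.standard_normal(n_sym))
s_hat = (g * s + noise) / g           # received symbols after equalization
print(f"AEVMS: {aevms(s, s_hat):.4f}")
```

At high SNR the printed value approaches the noise power divided by the
beamforming gain, which is the kind of relation between AEVMS and system
characteristics the abstract alludes to.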
MIMO Transmission with Residual Transmit-RF Impairments
Physical transceiver implementations for multiple-input multiple-output
(MIMO) wireless communication systems suffer from transmit-RF (Tx-RF)
impairments. In this paper, we study how residual Tx-RF impairments that defy
proper compensation affect channel capacity and error-rate performance. In
particular, we demonstrate that such residual distortions
severely degrade the performance of (near-)optimum MIMO detection algorithms.
To mitigate this performance loss, we propose an efficient algorithm, which is
based on an i.i.d. Gaussian model for the distortion caused by these
impairments. In order to validate this model, we provide measurement results
based on a 4-stream Tx-RF chain implementation for MIMO orthogonal
frequency-division multiplexing (OFDM).
Comment: to be presented at the International ITG Workshop on Smart Antennas - WSA 201
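The abstract does not spell out the proposed algorithm; the sketch below
merely illustrates the underlying i.i.d. Gaussian distortion model,
y = H(x + d) + n with d ~ CN(0, nu^2 I), and a distortion-aware LMMSE detector
built on it, compared against one that models thermal noise only. The
distortion level and all other parameters are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_tx, n_rx, n_trials = 4, 4, 2000
snr_db, dist_db = 25, -10
sigma2 = 10 ** (-snr_db / 10)   # thermal-noise variance at the receiver
nu2 = 10 ** (dist_db / 10)      # i.i.d. Gaussian Tx-distortion variance (assumed level)

def cgauss(*shape):
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

mse_naive = mse_aware = 0.0
for _ in range(n_trials):
    H = cgauss(n_rx, n_tx)
    x = (2 * rng.integers(0, 2, n_tx) - 1 + 1j * (2 * rng.integers(0, 2, n_tx) - 1)) / np.sqrt(2)
    # Transmit distortion enters before the channel: y = H(x + d) + n.
    y = H @ (x + np.sqrt(nu2) * cgauss(n_tx)) + np.sqrt(sigma2) * cgauss(n_rx)
    G = H @ H.conj().T
    # Naive LMMSE: models thermal noise only.
    x_naive = H.conj().T @ np.linalg.solve(G + sigma2 * np.eye(n_rx), y)
    # Distortion-aware LMMSE: effective noise covariance nu2*H*H^H + sigma2*I.
    x_aware = H.conj().T @ np.linalg.solve((1 + nu2) * G + sigma2 * np.eye(n_rx), y)
    mse_naive += np.mean(np.abs(x_naive - x) ** 2) / n_trials
    mse_aware += np.mean(np.abs(x_aware - x) ** 2) / n_trials

print(f"MSE, distortion-unaware: {mse_naive:.4f}")
print(f"MSE, distortion-aware:   {mse_aware:.4f}")
```

The gap between the two detectors widens as the distortion level rises
relative to the thermal noise, which is consistent with the abstract's claim
that residual distortions dominate at high SNR if left unmodeled.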
An Overview on Application of Machine Learning Techniques in Optical Networks
Today's telecommunication networks have become sources of enormous amounts of
widely heterogeneous data. This information can be retrieved from network
traffic traces, network alarms, signal quality indicators, users' behavioral
data, etc. Advanced mathematical tools are required to extract meaningful
information from these data and to make decisions pertaining to the proper
functioning of the networks. Among these
mathematical tools, Machine Learning (ML) is regarded as one of the most
promising methodological approaches to perform network-data analysis and enable
automated network self-configuration and fault management. The adoption of ML
techniques in the field of optical communication networks is motivated by the
unprecedented growth of network complexity faced by optical networks in the
last few years. This increase in complexity is due to the introduction of a huge
number of adjustable and interdependent system parameters (e.g., routing
configurations, modulation formats, symbol rates, and coding schemes) that are
enabled by the usage of coherent transmission/reception technologies, advanced
digital signal processing and compensation of nonlinear effects in optical
fiber propagation. In this paper we provide an overview of the application of
ML to optical communications and networking. We classify and survey relevant
literature dealing with the topic, and we also provide an introductory tutorial
on ML for researchers and practitioners interested in this field. Although a
good number of research papers have recently appeared, the application of ML to
optical networks is still in its infancy: to stimulate further work in this
area, we conclude the paper by proposing possible new research directions.
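The survey prescribes no single model; as one concrete and entirely
hypothetical instance of the network-data analysis it covers, the sketch below
trains a classifier to predict whether a lightpath meets a quality threshold
from adjustable system parameters. The features, the toy margin model used to
generate the synthetic labels, and all numeric constants are assumptions, not
taken from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for network telemetry: each lightpath is described by a
# few adjustable parameters; the label says whether its quality margin held.
rng = np.random.default_rng(2)
n = 5000
length_km = rng.uniform(50, 2000, n)        # path length
n_spans = np.ceil(length_km / 80)           # amplifier spans (80 km each, assumed)
mod_order = rng.choice([4, 16, 64], n)      # constellation size (QPSK/16-QAM/64-QAM)
symbol_rate = rng.choice([32, 64], n)       # GBd
# Crude, assumed margin model: longer paths and denser constellations hurt.
margin = (30 - 10 * np.log10(n_spans) - 2 * np.log2(mod_order)
          - 0.05 * symbol_rate + rng.normal(0, 1.5, n))
ok = (margin > 12).astype(int)              # 1 = quality threshold met

X = np.column_stack([length_km, n_spans, mod_order, symbol_rate])
X_tr, X_te, y_tr, y_te = train_test_split(X, ok, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"Held-out accuracy: {clf.score(X_te, y_te):.3f}")
```

On real network-generated data, the same supervised-learning pattern would be
fed measured quality indicators rather than a synthetic margin model.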