Large-Scale MIMO Detection for 3GPP LTE: Algorithms and FPGA Implementations
Large-scale (or massive) multiple-input multiple-output (MIMO) is expected to
be one of the key technologies in next-generation multi-user cellular systems,
based on the upcoming 3GPP LTE Release 12 standard, for example. In this work,
we propose what is, to the best of our knowledge, the first VLSI design enabling
high-throughput data detection in single-carrier frequency-division multiple
access (SC-FDMA)-based large-scale MIMO systems. We propose a new approximate
matrix inversion algorithm relying on a Neumann series expansion, which
substantially reduces the complexity of linear data detection. We analyze the
associated error, and we compare its performance and complexity to those of an
exact linear detector. We present corresponding VLSI architectures, which
perform exact and approximate soft-output detection for large-scale MIMO
systems with various antenna/user configurations. Reference implementation
results for a Xilinx Virtex-7 XC7VX980T FPGA show that our designs are able to
achieve more than 600 Mb/s for a 128-antenna, 8-user 3GPP LTE-based large-scale
MIMO system. We finally provide a performance/complexity trade-off comparison
using the presented FPGA designs, which reveals that the detector circuit of
choice is determined by the ratio between BS antennas and users, as well as the
desired error-rate performance.

Comment: To appear in the IEEE Journal of Selected Topics in Signal Processing.
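The Neumann-series approximation described in this abstract can be sketched as follows. For a matrix A = D + E, with D the diagonal of A, the inverse expands as A^-1 ~ sum_k (-D^-1 E)^k D^-1, which converges when D dominates E, as it does for the Gram matrices of systems with many more BS antennas than users. This is a minimal illustrative sketch, not the paper's VLSI design:

```python
import numpy as np

def neumann_inverse(A, K=3):
    """Approximate A^-1 with a K-term Neumann series around the diagonal
    D of A: A^-1 ~ sum_{k=0..K} (-D^-1 E)^k D^-1, where E = A - D.
    Converges when the spectral radius of D^-1 E is below 1, which holds
    for the diagonally dominant Gram matrices of large-scale MIMO."""
    D_inv = np.diag(1.0 / np.diag(A))
    E = A - np.diag(np.diag(A))
    M = -D_inv @ E
    term = D_inv.copy()       # k = 0 term
    approx = D_inv.copy()
    for _ in range(K):        # accumulate k = 1 .. K terms
        term = M @ term
        approx = approx + term
    return approx

# Toy Gram matrix H^H H for 128 BS antennas serving 8 users
rng = np.random.default_rng(0)
H = (rng.standard_normal((128, 8)) + 1j * rng.standard_normal((128, 8))) / np.sqrt(2)
A = H.conj().T @ H
A_inv = np.linalg.inv(A)
err = np.linalg.norm(neumann_inverse(A, K=3) - A_inv) / np.linalg.norm(A_inv)
print(f"relative error of 3-term Neumann inverse: {err:.2e}")
```

Avoiding the exact inverse trades a cubic-cost inversion for a few matrix-matrix products, which is what reduces the detector's complexity when the antenna/user ratio is large.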
A hybrid neuro--wavelet predictor for QoS control and stability
For distributed systems to properly react to peaks of requests, their
adaptation activities would benefit from the estimation of the amount of
requests. This paper proposes a solution to produce a short-term forecast based
on data characterising user behaviour of online services. We use wavelet
analysis, which provides compression and denoising of the observed time series
of past user requests, and a recurrent neural network trained on observed data
and designed to provide well-timed estimations of future requests. This
ensemble predicts the amount of future user requests with a root mean squared
error below 0.06%. Thanks to
prediction, advance resource provision can be performed for the duration of a
request peak and for just the right amount of resources, hence avoiding
over-provisioning and associated costs. Moreover, reliable provision lets users
enjoy a level of availability of services unaffected by load variations.
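A minimal sketch of the denoising stage of such a pipeline, assuming a single-level Haar transform with soft thresholding (the recurrent predictor and the specific wavelet/threshold choices of the paper are omitted):

```python
import numpy as np

def haar_denoise(x, thresh):
    """One-level Haar wavelet denoising: split the series into
    approximation (scaled local sums) and detail (scaled local
    differences) coefficients, soft-threshold the details to suppress
    high-frequency noise, then invert the transform."""
    x = np.asarray(x, dtype=float)
    n = len(x) - len(x) % 2                    # truncate to even length
    a = (x[0:n:2] + x[1:n:2]) / np.sqrt(2)     # approximation coefficients
    d = (x[0:n:2] - x[1:n:2]) / np.sqrt(2)     # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)  # soft threshold
    y = np.empty(n)
    y[0::2] = (a + d) / np.sqrt(2)             # inverse Haar transform
    y[1::2] = (a - d) / np.sqrt(2)
    return y

# Hypothetical request-rate series: slow trend plus measurement jitter
t = np.linspace(0, 4 * np.pi, 256)
noisy = np.sin(t) + 0.2 * np.random.default_rng(1).standard_normal(256)
smooth = haar_denoise(noisy, thresh=0.2)
```

Feeding the smoothed series, rather than the raw one, to a predictor keeps the forecaster from chasing jitter while preserving the shape of a genuine request peak.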
Random Beamforming with Heterogeneous Users and Selective Feedback: Individual Sum Rate and Individual Scaling Laws
This paper investigates three open problems in random beamforming based
communication systems: the scheduling policy with heterogeneous users, the
closed form sum rate, and the randomness of multiuser diversity with selective
feedback. By employing the cumulative distribution function based scheduling
policy, we guarantee fairness among users as well as obtain multiuser diversity
gain in the heterogeneous scenario. Under this scheduling framework, the
individual sum rate, namely the average rate for a given user multiplied by the
number of users, is of interest and analyzed under different feedback schemes.
Firstly, under the full feedback scheme, we derive the closed form individual
sum rate by employing a decomposition of the probability density function of
the selected user's signal-to-interference-plus-noise ratio. This technique is
employed to further obtain a closed form rate approximation with selective
feedback in the spatial dimension. The analysis is also extended to random
beamforming in a wideband OFDMA system with additional selective feedback in
the spectral dimension wherein only the best beams for the best-L resource
blocks are fed back. We utilize extreme value theory to examine the randomness
of multiuser diversity incurred by selective feedback. Finally, by leveraging
the tail equivalence method, the multiplicative effect of selective feedback
and random observations is observed to establish the individual rate scaling.

Comment: Submitted in March 2012. To appear in IEEE Transactions on Wireless Communications. Part of this paper builds upon the following letter: Y. Huang and B. D. Rao, "Closed form sum rate of random beamforming," IEEE Commun. Lett., vol. 16, no. 5, pp. 630-633, May 2012.
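The fairness mechanism of cumulative distribution function based scheduling can be illustrated with a small simulation. Each user's current rate is mapped through that user's own rate CDF; since F_k(r_k) is Uniform(0,1) for every user, picking the largest CDF value gives each user an equal share of slots even when rate distributions are heterogeneous. This is an illustrative sketch with an empirical CDF and made-up rate distributions, not the paper's analysis:

```python
import numpy as np

def cdf_scheduler(rates, histories):
    """CDF-based scheduling: select the user whose current rate is most
    unusually good relative to its OWN history. Because F_k(r_k) is
    Uniform(0,1) for every user, each of the K users wins a slot with
    probability 1/K regardless of heterogeneous rate statistics, while
    each user is still served near its own channel peaks."""
    scores = [np.mean(np.asarray(hist) <= r)      # empirical CDF at r
              for r, hist in zip(rates, histories)]
    return int(np.argmax(scores))

# Two heterogeneous users: user 0 near the BS (high mean rate),
# user 1 at the cell edge (low mean rate).
rng = np.random.default_rng(2)
hist0 = rng.exponential(4.0, 5000)    # user 0 past rate samples
hist1 = rng.exponential(1.0, 5000)    # user 1 past rate samples
wins = [0, 0]
for _ in range(4000):
    r0, r1 = rng.exponential(4.0), rng.exponential(1.0)
    wins[cdf_scheduler([r0, r1], [hist0, hist1])] += 1
print("share of slots:", wins[0] / 4000, wins[1] / 4000)
```

A greedy max-rate scheduler would give nearly every slot to user 0 here; the CDF rule splits slots roughly evenly, which is the fairness-plus-diversity property the abstract exploits.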
Efficient DSP and Circuit Architectures for Massive MIMO: State-of-the-Art and Future Directions
Massive MIMO is a compelling wireless access concept that relies on the use
of an excess number of base-station antennas, relative to the number of active
terminals. This technology is a main component of 5G New Radio (NR) and
addresses all important requirements of future wireless standards: a great
capacity increase, the support of many simultaneous users, and improvement in
energy efficiency. Massive MIMO requires the simultaneous processing of signals
from many antenna chains, and computational operations on large matrices. The
complexity of the digital processing has been viewed as a fundamental obstacle
to the feasibility of Massive MIMO in the past. Recent advances on
system-algorithm-hardware co-design have led to extremely energy-efficient
implementations. These exploit opportunities in deeply-scaled silicon
technologies and perform partly distributed processing to cope with the
bottlenecks encountered in the interconnection of many signals. For example,
prototype ASIC implementations have demonstrated zero-forcing precoding in real
time at a 55 mW power consumption (20 MHz bandwidth, 128 antennas, multiplexing
of 8 terminals). Coarse and even error-prone digital processing in the antenna
paths permits a reduction in power consumption by a factor of 2 to 5. This article
summarizes the fundamental technical contributions to efficient digital signal
processing for Massive MIMO. The opportunities and constraints of operating
with low-complexity RF and analog hardware chains are clarified, and it is
illustrated how terminals can benefit from improved energy efficiency. The
status of technology and real-life prototypes is discussed. Open challenges and
directions for future research are suggested.

Comment: Submitted to IEEE Transactions on Signal Processing.
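The zero-forcing precoding demonstrated by the prototype ASICs can be written in a few lines. With downlink channel H (K users by M antennas), the precoder W = H^H (H H^H)^(-1) inverts the channel so that each terminal receives only its own symbol. A minimal noiseless sketch (power normalization and the ASIC's fixed-point, distributed implementation are omitted):

```python
import numpy as np

# Zero-forcing precoding: with K single-antenna terminals and M >> K
# BS antennas, W = H^H (H H^H)^-1 makes the effective channel H W the
# identity, so inter-user interference is cancelled at the transmitter.
rng = np.random.default_rng(3)
M, K = 128, 8                         # BS antennas, terminals (as in the abstract)
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

W = H.conj().T @ np.linalg.inv(H @ H.conj().T)   # M x K ZF precoder
s = rng.standard_normal(K) + 1j * rng.standard_normal(K)  # user symbols
x = W @ s                             # precoded transmit vector
y = H @ x                             # what the terminals receive (no noise)
print("max residual interference:", np.max(np.abs(y - s)))
```

The K x K inversion, not the M-dimensional processing, is the numerically delicate part; the co-designed implementations cited above spend their effort on exactly this kind of matrix arithmetic at low power.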
Outlier detection techniques for wireless sensor networks: A survey
In the field of wireless sensor networks, measurements that significantly deviate from the normal pattern of sensed data are considered outliers. The potential sources of outliers include noise and errors, events, and malicious attacks on the network. Traditional outlier detection techniques are not directly applicable to wireless sensor networks due to the nature of sensor data and the specific requirements and limitations of such networks. This survey provides a comprehensive overview of existing outlier detection techniques specifically developed for wireless sensor networks. Additionally, it presents a technique-based taxonomy and a comparative table to be used as a guideline for selecting a technique suitable for the application at hand, based on characteristics such as data type, outlier type, outlier identity, and outlier degree.
Water Pipeline Leakage Detection Based on Machine Learning and Wireless Sensor Networks
The detection of water pipeline leakage is important to ensure that water supply networks can operate safely and conserve water resources. To address the lack of intelligence and the low efficiency of conventional leakage detection methods, this paper designs a leakage detection method based on machine learning and wireless sensor networks (WSNs). The system employs wireless sensors installed on pipelines to collect data and utilizes the 4G network to perform remote data transmission. A leakage-triggered networking method is proposed to reduce the wireless sensor network's energy consumption and prolong the system life cycle effectively. To enhance the precision and intelligence of leakage detection, we propose a leakage identification method that employs the intrinsic mode function, approximate entropy, and principal component analysis to construct a signal feature set, and that uses a support vector machine (SVM) as a classifier to perform leakage detection. Simulation analysis and experimental results indicate that the proposed leakage identification method can effectively identify water pipeline leakage and has lower energy consumption than the networking methods used in conventional wireless sensor networks.
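One of the features named in this abstract, approximate entropy, can be sketched directly; it quantifies the irregularity a leak introduces into the sensed signal. This is a generic textbook-style implementation with made-up example signals, not the paper's feature pipeline (the intrinsic mode decomposition, PCA stage, and SVM classifier are omitted):

```python
import numpy as np

def approx_entropy(x, m=2, r=None):
    """Approximate entropy ApEn(m, r): a regularity measure. Low values
    indicate a regular, predictable signal; higher values indicate
    irregularity, e.g. broadband turbulence induced by a leak."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * np.std(x)                      # common tolerance choice
    def phi(m):
        n = len(x) - m + 1
        # All length-m templates, compared pairwise (Chebyshev distance)
        templ = np.array([x[i:i + m] for i in range(n)])
        dist = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)
        C = np.mean(dist <= r, axis=1)           # match frequency per template
        return np.mean(np.log(C))
    return phi(m) - phi(m + 1)

rng = np.random.default_rng(4)
t = np.arange(1024)
regular = np.sin(0.1 * t)                 # intact pipe: near-periodic signal
irregular = rng.standard_normal(1024)     # leak-like broadband noise
print(approx_entropy(regular), approx_entropy(irregular))
```

A feature like this, computed per signal segment, is the kind of scalar that would then be stacked with other features and handed to a classifier such as an SVM.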