    Deep Reinforcement Learning for Real-Time Optimization in NB-IoT Networks

    NarrowBand-Internet of Things (NB-IoT) is an emerging cellular-based technology that offers a range of flexible configurations for massive IoT radio access from groups of devices with heterogeneous requirements. A configuration specifies the amount of radio resource allocated to each group of devices for random access and for data transmission. Assuming no knowledge of the traffic statistics, an important challenge is how to determine, in an online fashion, the configuration that maximizes the long-term average number of served IoT devices at each Transmission Time Interval (TTI). Given the complexity of searching for the optimal configuration, we first develop real-time configuration selection based on tabular Q-learning (tabular-Q), Linear Approximation based Q-learning (LA-Q), and Deep Neural Network based Q-learning (DQN) in the single-parameter single-group scenario. Our results show that the proposed reinforcement learning based approaches considerably outperform the conventional heuristic approach based on load estimation (LE-URC) in terms of the number of served IoT devices. The results also indicate that LA-Q and DQN are good alternatives to tabular-Q, achieving almost the same performance with much less training time. We further advance LA-Q and DQN via Actions Aggregation (AA-LA-Q and AA-DQN) and via Cooperative Multi-Agent learning (CMA-DQN) for the multi-parameter multi-group scenario, thereby solving the problem that Q-learning agents do not converge in high-dimensional configurations. In this scenario, the superiority of the proposed Q-learning approaches over the conventional LE-URC approach grows significantly with the number of configuration dimensions, and CMA-DQN outperforms the other approaches in both throughput and training efficiency.
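    The tabular-Q baseline described above reduces to a standard Q-learning update applied once per TTI, with the configuration as the action and the number of served devices as the reward. The sketch below illustrates that loop under simplified, assumed state and action encodings (the N_STATES/N_CONFIGS sizes and the observe/env_step placeholders are hypothetical, not the paper's exact formulation).

```python
import numpy as np

# Minimal sketch of tabular Q-learning for online configuration selection,
# assuming a small discrete set of configurations (actions) and a coarsely
# quantized observation of the previous TTI's outcome (states). The state
# encoding, reward (served devices), and environment interface are
# illustrative placeholders rather than the paper's exact formulation.

N_STATES = 32          # hypothetical number of quantized observation levels
N_CONFIGS = 8          # hypothetical number of candidate configurations
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

Q = np.zeros((N_STATES, N_CONFIGS))

def select_config(state: int) -> int:
    """Epsilon-greedy choice of the radio-resource configuration."""
    if np.random.rand() < EPS:
        return np.random.randint(N_CONFIGS)
    return int(np.argmax(Q[state]))

def update(state: int, action: int, reward: float, next_state: int) -> None:
    """Standard Q-learning update toward the bootstrapped target."""
    target = reward + GAMMA * np.max(Q[next_state])
    Q[state, action] += ALPHA * (target - Q[state, action])

# Per-TTI loop (environment calls are placeholders):
# state = observe()                      # e.g. quantized backlog estimate
# action = select_config(state)
# reward, next_state = env_step(action)  # reward = IoT devices served this TTI
# update(state, action, reward, next_state)
```

    LA-Q and DQN replace the table Q with a linear or neural function approximator but keep the same per-TTI interaction pattern, which is what allows them to train much faster in larger configuration spaces.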

    Achieving "Massive MIMO" Spectral Efficiency with a Not-so-Large Number of Antennas

    The main focus and contribution of this paper is a novel network-MIMO TDD architecture that achieves spectral efficiencies comparable with "Massive MIMO", with one order of magnitude fewer antennas per active user per cell. The proposed architecture is based on a family of network-MIMO schemes defined by small clusters of cooperating base stations, zero-forcing multiuser MIMO precoding with suitable inter-cluster interference constraints, uplink pilot signal reuse across cells, and frequency reuse. The key idea consists of partitioning the user population into geographically determined "bins", such that all users in the same bin are statistically equivalent, and using the optimal network-MIMO architecture in the family for each bin. A scheduler takes care of serving the different bins on the time-frequency slots, in order to maximize a desired network utility function that captures some desired notion of fairness. This results in a mixed-mode network-MIMO architecture, where different schemes, each of which is optimized for the served user bin, are multiplexed in time-frequency. In order to carry out the performance analysis and the optimization of the proposed architecture in a clean and computationally efficient way, we consider the large-system regime where the number of users, the number of antennas, and the channel coherence block length go to infinity with fixed ratios. The performance predicted by the large-system asymptotic analysis matches the finite-dimensional simulations very well. Overall, the system spectral efficiency obtained by the proposed architecture is similar to that achieved by "Massive MIMO", with a 10-fold reduction in the number of antennas at the base stations (roughly, from 500 to 50 antennas). Comment: Full version with appendices (proofs of theorems). A shortened version without appendices was submitted to IEEE Trans. on Wireless Commun. Appendix B was revised after submission.
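    Each scheme in the family relies on zero-forcing multiuser precoding within a cluster of cooperating base stations. The snippet below is a minimal, generic ZF sketch; the cluster size, user count, and power normalization are illustrative assumptions, and the paper's inter-cluster interference constraints are not modeled.

```python
import numpy as np

# Generic zero-forcing (ZF) multiuser precoding for one cluster of
# cooperating base stations, as a stand-in for the precoders used by the
# schemes in the family. Dimensions and power budget are illustrative.

rng = np.random.default_rng(0)
M, K = 50, 10    # assumed antennas per cluster, scheduled users in one bin

# Composite downlink channel of the K scheduled users (i.i.d. Rayleigh here).
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

# ZF precoder: right pseudo-inverse of H, scaled to a total transmit power P.
P = 1.0
W = H.conj().T @ np.linalg.inv(H @ H.conj().T)
W *= np.sqrt(P / np.trace(W @ W.conj().T).real)

# The effective channel H @ W is diagonal up to numerical precision: each
# user sees only its own stream, i.e. intra-cluster multiuser interference
# is nulled.
print(np.round(np.abs(H @ W), 3))
```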

    Towards a Realistic Assessment of Multiple Antenna HCNs: Residual Additive Transceiver Hardware Impairments and Channel Aging

    Given the critical dependence of broadcast channels on the accuracy of channel state information at the transmitter (CSIT), we develop a general downlink model with zero-forcing (ZF) precoding, applied in realistic heterogeneous cellular systems with multiple antenna base stations (BSs). Specifically, we take into consideration imperfect CSIT due to pilot contamination, channel aging due to the users' relative movement, and unavoidable residual additive transceiver hardware impairments (RATHIs). Assuming that the BSs are Poisson distributed, the main contributions focus on the derivation of an upper bound on the coverage probability and the achievable user rate for this general model. We show that both the coverage probability and the user rate depend on the imperfect CSIT and the RATHIs. More concretely, we quantify the resultant performance loss of the network due to these effects. We find that the uplink RATHIs have an equal impact, but the downlink transmit BS distortion has a greater impact than the receive hardware impairment of the user. Thus, the transmit BS hardware should be of better quality than the user's receive hardware. Furthermore, we characterise both the coverage probability and the user rate in terms of the time variation of the channel. It is shown that both decrease with increasing user mobility, but after a specific value of the normalised Doppler shift, they increase again. In fact, the time variation, following the Jakes autocorrelation function, mirrors this effect on the coverage probability and user rate. Finally, we consider space division multiple access (SDMA), single-user beamforming (SU-BF), and baseline single-input single-output (SISO) transmission. A comparison among these schemes reveals that SU-BF outperforms SDMA in terms of coverage probability. Comment: accepted in IEEE TV
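    The channel-aging behaviour referenced above is governed by the Jakes autocorrelation of the time-varying channel. The short sketch below evaluates that autocorrelation against the normalised Doppler shift; the one-slot estimation delay and the qualitative link to coverage are assumptions for illustration, and the paper's closed-form coverage expressions are not reproduced.

```python
import numpy as np
from scipy.special import j0

# Jakes autocorrelation between the estimated and the current channel,
# rho = J_0(2 * pi * f_D * T_s), assuming a one-slot delay between channel
# estimation and transmission. The aged channel can then be modeled as
# h_current = rho * h_estimated + sqrt(1 - rho^2) * innovation.

normalized_doppler = np.linspace(0.0, 0.6, 13)   # f_D * T_s values (assumed range)
rho = j0(2 * np.pi * normalized_doppler)

for fdts, r in zip(normalized_doppler, rho):
    print(f"f_D*T_s = {fdts:.2f}  ->  rho = {r:+.3f}")

# |rho| first decreases with user mobility and then grows again past the
# first null of J_0, which mirrors the non-monotonic behaviour of the
# coverage probability and user rate described in the abstract.
```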

    Electromagnetic Lens-focusing Antenna Enabled Massive MIMO: Performance Improvement and Cost Reduction

    Massive multiple-input multiple-output (MIMO) techniques have been recently advanced to tremendously improve the performance of wireless communication networks. However, the use of very large antenna arrays at the base stations (BSs) brings new issues, such as the significantly increased hardware and signal processing costs. In order to reap the enormous gain of massive MIMO and yet reduce its cost to an affordable level, this paper proposes a novel system design by integrating an electromagnetic (EM) lens with the large antenna array, termed the EM-lens enabled MIMO. The EM lens has the capability of focusing the power of an incident wave to a small area of the antenna array, while the location of the focal area varies with the angle of arrival (AoA) of the wave. Therefore, in practical scenarios where the arriving signals from geographically separated users have different AoAs, the EM-lens enabled system provides two new benefits, namely energy focusing and spatial interference rejection. By taking into account the effects of imperfect channel estimation via pilot-assisted training, in this paper we analytically show that the average received signal-to-noise ratio (SNR) in both the single-user and multiuser uplink transmissions can be strictly improved by the EM-lens enabled system. Furthermore, we demonstrate that the proposed design makes it possible to considerably reduce the hardware and signal processing costs with only slight degradations in performance. To this end, two complexity/cost reduction schemes are proposed, which are small-MIMO processing with parallel receiver filtering applied over subgroups of antennas to reduce the computational complexity, and channel covariance based antenna selection to reduce the required number of radio frequency (RF) chains. Numerical results are provided to corroborate our analysis. Comment: 30 pages, 9 figures
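    The channel covariance based antenna selection mentioned above exploits the fact that the EM lens concentrates each user's received energy on a small focal region of the array. The sketch below illustrates one plausible selection rule, keeping the antennas with the largest average received energy; the Gaussian power profile, the snapshot-based covariance estimate, and the specific metric are illustrative assumptions rather than the paper's exact scheme.

```python
import numpy as np

# Toy channel covariance based antenna selection for an EM-lens array:
# estimate the per-antenna received energy (diagonal of the sample spatial
# covariance) and keep only the strongest antennas, matching the number of
# available RF chains. All parameters below are illustrative assumptions.

rng = np.random.default_rng(1)
M, N_RF, SNAPSHOTS = 64, 8, 500   # array size, RF chains, training snapshots

# Assumed "focused" channel: per-antenna power concentrated around the focal
# point set by the user's angle of arrival (Gaussian profile as a stand-in).
focal_idx = 40
power_profile = np.exp(-0.5 * ((np.arange(M) - focal_idx) / 3.0) ** 2)
Y = np.sqrt(power_profile)[:, None] * (
    rng.standard_normal((M, SNAPSHOTS)) + 1j * rng.standard_normal((M, SNAPSHOTS))
) / np.sqrt(2)

R = (Y @ Y.conj().T) / SNAPSHOTS                  # sample spatial covariance
selected = np.argsort(np.diag(R).real)[-N_RF:]    # keep the N_RF strongest antennas

captured = np.diag(R).real[selected].sum() / np.trace(R).real
print(f"antennas kept: {sorted(selected.tolist())}")
print(f"energy captured with {N_RF}/{M} RF chains: {captured:.2%}")
```

    Because the lens focuses most of the incident power onto a few elements, a small subset of antennas captures nearly all of the received energy, which is what allows the RF-chain count to be reduced with only slight performance loss.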