695 research outputs found

    Linear Encoder-Decoder-Controller Design over Channels with Packet Loss and Quantization Noise

    We consider a decentralized multisensor estimation problem in which L sensor nodes observe noisy versions of a possibly correlated random source. The sensors amplify and forward their observations over a fading coherent multiple access channel (MAC) to a fusion center (FC). The FC is equipped with a large array of N antennas and adopts a minimum mean square error (MMSE) approach for estimating the source. We optimize the amplification factor (or, equivalently, the transmission power) at each sensor node in two scenarios: 1) minimizing the total power subject to a constraint on the mean square error (MSE) of the source estimate, and 2) minimizing the MSE subject to a total power constraint. For this purpose, we apply an asymptotic approximation based on the massive multiple-input multiple-output (MIMO) favorable propagation condition (when L ≪ N). We use convex optimization techniques to solve for the optimal sensor power allocation in both scenarios. In 1), we show that the total power consumption at the sensors decays as 1/N, replicating the power savings reported in the massive MIMO mobile communications literature. Through numerical studies, we also illustrate the superiority of the proposed optimal power allocation methods over uniform power allocation.
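The 1/N power scaling claimed above can be illustrated with a deliberately simplified single-sensor sketch (all model names and values here are illustrative assumptions, not taken from the paper): a sensor amplifies its noisy observation by sqrt(p) and forwards it to an FC whose channel gain grows as ||h||^2 ≈ N under favorable propagation; solving the LMMSE error expression for the power that meets a target MSE then gives p ∝ 1/N.

```python
import numpy as np

# Illustrative single-sensor model (an assumption, not the paper's setup):
# sensor observes x = theta + v, transmits sqrt(p)*x; the FC with N antennas
# sees y = sqrt(p)*h*x + w. The LMMSE of theta satisfies
#   mse(p) = 1 / (1/var_theta + p*g / (var_w + p*var_v*g)),  g = ||h||^2,
# and favorable propagation gives g ~ N. Solving mse(p) = target for p:
var_theta, var_v, var_w = 1.0, 0.1, 1.0
target_mse = 0.2

def power_for_target(N):
    g = N                                     # ||h||^2 under favorable propagation
    A = 1.0 / target_mse - 1.0 / var_theta    # required effective SNR term
    return A * var_w / (g * (1.0 - A * var_v))

for N in (16, 64, 256):
    p = power_for_target(N)
    print(f"N={N:4d}  p={p:.5f}  N*p={N*p:.3f}")   # N*p stays constant: p ~ 1/N
```

Quadrupling the antenna count cuts the required sensor power by a factor of four, which is the massive-MIMO power saving the abstract refers to.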

    Efficient DSP and Circuit Architectures for Massive MIMO: State-of-the-Art and Future Directions

    Massive MIMO is a compelling wireless access concept that relies on an excess number of base-station antennas relative to the number of active terminals. This technology is a main component of 5G New Radio (NR) and addresses the key requirements of future wireless standards: a large capacity increase, support for many simultaneous users, and improved energy efficiency. Massive MIMO requires the simultaneous processing of signals from many antenna chains and computational operations on large matrices. The complexity of this digital processing was long viewed as a fundamental obstacle to the feasibility of Massive MIMO. Recent advances in system-algorithm-hardware co-design have led to extremely energy-efficient implementations. These exploit opportunities in deeply scaled silicon technologies and perform partly distributed processing to cope with the bottlenecks encountered in interconnecting many signals. For example, prototype ASIC implementations have demonstrated real-time zero-forcing precoding at 55 mW power consumption (20 MHz bandwidth, 128 antennas, multiplexing of 8 terminals). Coarse and even error-prone digital processing in the antenna paths permits a reduction in power consumption by a factor of 2 to 5. This article summarizes the fundamental technical contributions to efficient digital signal processing for Massive MIMO, clarifies the opportunities and constraints of operating with low-complexity RF and analog hardware chains, and illustrates how terminals can benefit from improved energy efficiency. The status of the technology and real-life prototypes is discussed, and open challenges and directions for future research are suggested. Comment: submitted to IEEE Transactions on Signal Processing
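The zero-forcing precoding mentioned in the prototype figures above is the standard pseudo-inverse construction; a minimal NumPy sketch (the Rayleigh channel model and dimensions are illustrative, matching the 128-antenna, 8-terminal prototype) shows why it removes inter-user interference:

```python
import numpy as np

rng = np.random.default_rng(0)
N_ant, K = 128, 8   # base-station antennas, terminals (as in the prototype above)

# Downlink channel H: K x N_ant with i.i.d. Rayleigh entries (illustrative model).
H = (rng.standard_normal((K, N_ant)) + 1j * rng.standard_normal((K, N_ant))) / np.sqrt(2)

# Zero-forcing precoder: W = H^H (H H^H)^{-1}, so that H @ W = I_K and each
# terminal receives only its own stream (inter-user interference is nulled).
W = H.conj().T @ np.linalg.inv(H @ H.conj().T)

s = rng.standard_normal(K) + 1j * rng.standard_normal(K)  # data symbols
x = W @ s                      # transmitted vector across the antenna array
y = H @ x                      # noiseless received symbols at the terminals

print(np.allclose(y, s))       # True: each terminal sees its own symbol
```

The K x K inversion is cheap, but forming H H^H and applying W each symbol period is the large-matrix workload whose hardware cost the article analyzes.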

    Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks

    Future wireless networks have substantial potential to support a broad range of complex, compelling applications in both military and civilian fields, where users can enjoy high-rate, low-latency, low-cost, and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making, because of the complex, heterogeneous nature of the network structures and wireless services. Machine learning (ML) algorithms have had great success in supporting big data analytics, efficient parameter estimation, and interactive decision making. Hence, in this article, we review the thirty-year history of ML, elaborating on supervised learning, unsupervised learning, reinforcement learning, and deep learning. Furthermore, we investigate their employment in compelling applications of wireless networks, including heterogeneous networks (HetNets), cognitive radio (CR), the Internet of Things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to help readers clarify the motivation and methodology of the various ML algorithms, so as to invoke them for hitherto unexplored services and scenarios in future wireless networks. Comment: 46 pages, 22 figures

    Robust Location-Aided Beam Alignment in Millimeter Wave Massive MIMO

    Location-aided beam alignment has recently been proposed as a potential approach for fast link establishment in millimeter wave (mmWave) massive MIMO (mMIMO) communications. However, due to mobility and other imperfections in the estimation process, the spatial information obtained at the base station (BS) and the user equipment (UE) is likely to be noisy, degrading beam alignment performance. In this paper, we introduce a robust beam alignment framework that is resilient to this problem. We first recast beam alignment as a decentralized coordination problem in which the BS and UE seek coordination on the basis of correlated yet individual position information. We formulate optimal beam alignment as the solution of a Bayesian team decision problem. We then propose a suite of algorithms that approach optimality with reduced complexity. The effectiveness of the robust beam alignment procedure, compared with classical designs, is verified in simulation settings with varying location information accuracies. Comment: 24 pages, 7 figures. The short version of this paper has been accepted to IEEE Globecom 201
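The contrast between naive and robust location-aided alignment can be sketched in a toy one-sided setting (this is an illustration of the general idea, not the paper's team-decision algorithm; the ULA model, noise level, and sampling scheme are all assumptions): the naive rule points a beam at the noisy angle estimate, while a Bayesian rule picks the beam maximizing the beamforming gain averaged over the posterior of the true angle.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 32                               # BS antennas, half-wavelength-spaced ULA
angles = np.arcsin(np.linspace(-1, 1, N, endpoint=False))  # DFT-like beam grid

def steer(theta):
    # Unit-norm array response of the ULA toward angle theta (radians).
    return np.exp(1j * np.pi * np.arange(N) * np.sin(theta)) / np.sqrt(N)

def gain(beam_theta, true_theta):
    # Beamforming gain in [0, 1] when pointing at beam_theta, truth at true_theta.
    return abs(np.vdot(steer(beam_theta), steer(true_theta))) ** 2

true_theta = 0.3
sigma = 0.05                         # assumed std of the angle/position noise
obs = true_theta + sigma * rng.standard_normal()

# Naive alignment: steer straight at the noisy estimate.
naive = angles[np.argmin(abs(angles - obs))]

# Robust (Bayesian) alignment sketch: choose the beam maximizing the gain
# averaged over samples from the posterior of the true angle given obs.
samples = obs + sigma * rng.standard_normal(500)
robust = max(angles, key=lambda a: np.mean([gain(a, t) for t in samples]))

print(gain(naive, true_theta), gain(robust, true_theta))
```

The paper's setting is harder because BS and UE each hold their own noisy position estimate and must coordinate without exchanging them, which is what the Bayesian team-decision formulation captures.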

    Relaying Systems with Reciprocity Mismatch: Impact Analysis and Calibration

    Cooperative beamforming can provide significant performance improvement for relaying systems with the help of channel state information (CSI). In time-division duplexing (TDD) mode, the estimated CSI deteriorates due to reciprocity mismatch. In this work, we examine the impact and the calibration of reciprocity mismatch in relaying systems. To evaluate the impact of the mismatch across all devices, we first derive a closed-form expression for the achievable rate. We then analyze the performance loss caused by reciprocity mismatch at the sources, relays, and destinations, respectively, and show that the mismatch at the relays dominates. To compensate for the performance loss, a two-stage calibration scheme is proposed for the relays: each relay first performs intra-calibration based on its own circuits independently; then inter-calibration based on a discrete Fourier transform (DFT) codebook is performed via cooperative transmission to further improve calibration performance, which has not been considered in previous work. Finally, we derive the achievable rate after the relays apply the proposed reciprocity calibration scheme and investigate the impact of estimation errors on system performance. Simulation results verify the analytical results and show the performance of the proposed calibration approach.
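The intra-calibration idea can be illustrated with a common simplified TDD front-end model (an assumption for illustration, not the paper's exact model): per-antenna transmit and receive gains t_i and r_i break reciprocity as h_dl = (t/r) ⊙ h_ul, and if each antenna can measure its own t_i and r_i (e.g. via an internal loopback circuit), applying the ratio recovers downlink CSI from the uplink estimate.

```python
import numpy as np

rng = np.random.default_rng(2)
M = 8                                     # relay antennas

# Illustrative reciprocity-mismatch model: per-antenna front-end gains
# t_i (transmit) and r_i (receive) give h_dl = (t / r) * h_ul elementwise.
t = 1 + 0.1 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
r = 1 + 0.1 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
h_ul = rng.standard_normal(M) + 1j * rng.standard_normal(M)
h_dl = (t / r) * h_ul                     # true effective downlink channel

# Intra-calibration sketch: each antenna measures its own t_i, r_i locally
# and compensates the uplink estimate with the ratio t_i / r_i.
h_dl_hat = (t / r) * h_ul                 # calibrated downlink CSI

print(np.allclose(h_dl_hat, h_dl))        # True under this idealized model
```

In practice t and r are only known up to measurement error, and the per-antenna loopback leaves a residual common ambiguity across relays; that residual is what the cooperative DFT-codebook inter-calibration stage in the abstract is meant to resolve.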