
    Efficient DSP and Circuit Architectures for Massive MIMO: State-of-the-Art and Future Directions

    Massive MIMO is a compelling wireless access concept that relies on the use of an excess number of base-station antennas, relative to the number of active terminals. This technology is a main component of 5G New Radio (NR) and addresses all important requirements of future wireless standards: a great capacity increase, the support of many simultaneous users, and improvement in energy efficiency. Massive MIMO requires the simultaneous processing of signals from many antenna chains, and computational operations on large matrices. The complexity of the digital processing has in the past been viewed as a fundamental obstacle to the feasibility of Massive MIMO. Recent advances in system-algorithm-hardware co-design have led to extremely energy-efficient implementations. These exploit opportunities in deeply-scaled silicon technologies and perform partly distributed processing to cope with the bottlenecks encountered in the interconnection of many signals. For example, prototype ASIC implementations have demonstrated zero-forcing precoding in real time at a power consumption of 55 mW (20 MHz bandwidth, 128 antennas, multiplexing of 8 terminals). Coarse and even error-prone digital processing in the antenna paths permits a reduction in consumption by a factor of 2 to 5. This article summarizes the fundamental technical contributions to efficient digital signal processing for Massive MIMO. The opportunities and constraints of operating on low-complexity RF and analog hardware chains are clarified. It illustrates how terminals can benefit from improved energy efficiency. The status of the technology and real-life prototypes is discussed. Open challenges and directions for future research are suggested. Comment: submitted to IEEE Transactions on Signal Processing.
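    As a point of reference for the zero-forcing (ZF) precoding mentioned above, a minimal NumPy sketch is given below. The 128-antenna, 8-terminal dimensions match the prototype quoted in the abstract, but the i.i.d. channel model, normalization and code itself are illustrative assumptions, not the paper's ASIC implementation.

```python
import numpy as np

# Minimal sketch of zero-forcing (ZF) downlink precoding for a flat-fading
# channel H of size K x M (K = 8 terminals, M = 128 antennas). Dimensions,
# channel model and normalization are illustrative assumptions.
rng = np.random.default_rng(0)
K, M = 8, 128
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

# ZF precoder: W = H^H (H H^H)^{-1}, scaled to unit total transmit power.
W = H.conj().T @ np.linalg.inv(H @ H.conj().T)
W /= np.linalg.norm(W, 'fro')

s = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)  # user symbols
x = W @ s                  # transmitted vector across the 128 antennas
y = H @ x                  # noiseless received signals at the terminals
# Up to a common scaling, y is proportional to s: inter-user interference is removed.
print(np.allclose(y / y[0] * s[0], s))
```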

    Temporal Analysis of Measured LOS Massive MIMO Channels with Mobility

    The first measured results for massive multiple-input, multiple-output (MIMO) performance in a line-of-sight (LOS) scenario with moderate mobility are presented, with 8 users served by a 100-antenna base station (BS) at 3.7 GHz. When such a large number of channels dynamically change, the inherent propagation and processing delay has a critical relationship with the rate of change, as the use of outdated channel information can result in severe detection and precoding inaccuracies. For the downlink (DL) in particular, a time division duplex (TDD) configuration synonymous with massive MIMO deployments could mean only the uplink (UL) is usable in extreme cases. Therefore, it is of great interest to investigate the impact of mobility on massive MIMO performance and consider ways to combat the potential limitations. In a mobile scenario with moving cars and pedestrians, the correlation of the MIMO channel vector over time is inspected for vehicles moving at up to 29 km/h. For a 100-antenna system, it is found that the channel state information (CSI) update rate requirement may increase by 7 times when compared to an 8-antenna system, whilst the power control update rate could be decreased by at least 5 times relative to a single-antenna system. Comment: Accepted for presentation at the 85th IEEE Vehicular Technology Conference in Sydney. 5 pages. arXiv admin note: substantial text overlap with arXiv:1701.0881
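    A rough illustration of the temporal-correlation metric referred to above can be sketched with a Gauss-Markov fading model. The Clarke/Jakes correlation, snapshot spacing and antenna count below are assumptions chosen to mimic the 3.7 GHz, 29 km/h, 100-antenna setting; they are not the measured channel data.

```python
import numpy as np
from scipy.special import j0  # zeroth-order Bessel function (Clarke's model)

# Hedged sketch: normalized inner product between a user's channel vector at
# time 0 and at later snapshots, under a Gauss-Markov evolution whose one-step
# correlation follows Clarke's model. All parameters are illustrative.
M = 100                      # base-station antennas
fc, v = 3.7e9, 29 / 3.6      # carrier 3.7 GHz, speed 29 km/h
fd = v * fc / 3e8            # maximum Doppler shift (~99 Hz)
dt = 1e-3                    # snapshot spacing (1 ms)

rng = np.random.default_rng(1)
h0 = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)

for n in range(5):
    rho = j0(2 * np.pi * fd * n * dt)          # theoretical correlation after n*dt
    w = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
    hn = rho * h0 + np.sqrt(1 - rho**2) * w    # channel after n snapshots
    corr = abs(h0.conj() @ hn) / (np.linalg.norm(h0) * np.linalg.norm(hn))
    print(f"t = {n*dt*1e3:.0f} ms  correlation ~ {corr:.3f}")
```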

    Decentralized Massive MIMO Processing Exploring Daisy-chain Architecture and Recursive Algorithms

    Algorithms for Massive MIMO uplink detection and downlink precoding typically rely on a centralized approach, by which baseband data from all antenna modules are routed to a central node in order to be processed. In the case of Massive MIMO, where hundreds or thousands of antennas are expected in the base station, this routing becomes a bottleneck since interconnection throughput is limited. This paper presents a fully decentralized architecture and an algorithm for Massive MIMO uplink detection and downlink precoding based on the Stochastic Gradient Descent (SGD) method, which does not require a central node for these tasks. Through a recursive approach and very low-complexity operations, the proposed algorithm provides a good trade-off between performance, interconnection throughput and latency. Further, our proposed solution achieves a significantly lower interconnection data rate than other architectures, enabling future scalability. Comment: Manuscript accepted for publication in IEEE Transactions on Signal Processing.
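    The recursion below is an illustrative stand-in for the kind of decentralized processing described above: a symbol estimate is passed along a daisy chain of antenna modules, and each module applies one stochastic gradient step using only its local channel row and received sample. The step size, sweep count and dimensions are assumptions; this is a sketch of the general idea, not the paper's exact algorithm.

```python
import numpy as np

# Daisy-chain uplink detection sketch: module m only sees its channel row H[m]
# and received sample y[m], and updates the running estimate s_hat with one
# SGD step before passing it to the next module. Parameters are assumptions.
rng = np.random.default_rng(2)
M, K, mu, sweeps = 128, 8, 0.05, 30

H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
s_true = np.sign(rng.standard_normal(K)) + 1j * np.sign(rng.standard_normal(K))  # QPSK symbols
y = H @ s_true + 0.05 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))

s_hat = np.zeros(K, dtype=complex)
for _ in range(sweeps):                       # several passes over the chain
    for m in range(M):                        # local data only: H[m], y[m]
        err = y[m] - H[m] @ s_hat
        s_hat = s_hat + mu * H[m].conj() * err
print(np.round(s_hat, 2))                     # converges towards s_true
```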

    On the Achievable Rates of Decentralized Equalization in Massive MU-MIMO Systems

    Massive multi-user (MU) multiple-input multiple-output (MIMO) promises significant gains in spectral efficiency compared to traditional, small-scale MIMO technology. Linear equalization algorithms, such as zero-forcing (ZF) or minimum mean-square error (MMSE)-based methods, typically rely on centralized processing at the base station (BS), which results in (i) excessively high interconnect and chip input/output data rates, and (ii) high computational complexity. In this paper, we investigate the achievable rates of decentralized equalization that mitigates both of these issues. We consider two distinct BS architectures that partition the antenna array into clusters, each associated with independent radio-frequency chains and signal processing hardware, and the results of each cluster are fused in a feedforward network. For both architectures, we consider ZF, MMSE, and a novel, non-linear equalization algorithm that builds upon approximate message passing (AMP), and we theoretically analyze the achievable rates of these methods. Our results demonstrate that decentralized equalization with our AMP-based methods incurs no or only a negligible loss in terms of achievable rates compared to that of centralized solutions. Comment: Will be presented at the 2017 IEEE International Symposium on Information Theory.
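    For the linear baselines, the feedforward fusion idea can be sketched as follows: each antenna cluster computes only local statistics (a partial Gram matrix and a matched-filter output), and a fusion node combines them into an MMSE estimate. Cluster count, sizes and noise level are illustrative assumptions, and the paper's AMP-based scheme is not reproduced here.

```python
import numpy as np

# Feedforward decentralized MMSE equalization sketch: the array is split into
# C clusters, each computing local statistics that a fusion node sums up.
# All dimensions and the noise level are assumptions.
rng = np.random.default_rng(3)
M, K, C, N0 = 256, 16, 4, 0.1
H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
s = np.sign(rng.standard_normal(K)) + 1j * np.sign(rng.standard_normal(K))
y = H @ s + np.sqrt(N0 / 2) * (rng.standard_normal(M) + 1j * rng.standard_normal(M))

G = np.zeros((K, K), dtype=complex)
y_mf = np.zeros(K, dtype=complex)
for Hc, yc in zip(np.split(H, C), np.split(y, C)):   # per-cluster local processing
    G += Hc.conj().T @ Hc                            # partial Gram matrix
    y_mf += Hc.conj().T @ yc                         # partial matched-filter output

s_hat = np.linalg.solve(G + N0 * np.eye(K), y_mf)    # fusion: centralized-equivalent MMSE
print(np.round(s_hat, 1))
```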

    Performance characterization of a real-time massive MIMO system with LOS mobile channels

    The first measured results for massive MIMO performance in a line-of-sight (LOS) scenario with moderate mobility are presented, with 8 users served in real time using a 100-antenna base station (BS) at 3.7 GHz. When such a large number of channels dynamically change, the inherent propagation and processing delay has a critical relationship with the rate of change, as the use of outdated channel information can result in severe detection and precoding inaccuracies. For the downlink (DL) in particular, a time division duplex (TDD) configuration synonymous with massive multiple-input, multiple-output (MIMO) deployments could mean only the uplink (UL) is usable in extreme cases. Therefore, it is of great interest to investigate the impact of mobility on massive MIMO performance and consider ways to combat the potential limitations. In a mobile scenario with moving cars and pedestrians, the massive MIMO channel is sampled across many points in space to build a picture of the overall user orthogonality, and the impact of both azimuth and elevation array configurations is considered. Temporal analysis is also conducted for vehicles moving at up to 29 km/h, and real-time bit error rates (BERs) for both the UL and DL without power control are presented. For a 100-antenna system, it is found that the channel state information (CSI) update rate requirement may increase by 7 times when compared to an 8-antenna system, whilst the power control update rate could be decreased by at least 5 times relative to a single-antenna system. Comment: Submitted to the 2017 IEEE JSAC Special Issue on Deployment Issues and Performance Challenges for 5G, IEEE Journal on Selected Areas in Communications, 2017, vol.PP, no.99, pp.1-
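    The user-orthogonality picture mentioned above can be illustrated with a toy experiment that compares the mean inter-user channel correlation for 8 and 100 base-station antennas. The i.i.d. Rayleigh channel is an assumption for illustration only; the paper works with measured LOS channel data.

```python
import numpy as np

# Inter-user orthogonality sketch: magnitude of the normalized cross-correlation
# between pairs of user channel vectors, for two array sizes. With i.i.d.
# Rayleigh entries (an assumption), off-diagonal terms shrink as M grows.
rng = np.random.default_rng(4)
K = 8
for M in (8, 100):
    H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
    Hn = H / np.linalg.norm(H, axis=0)            # normalize each user's channel vector
    R = np.abs(Hn.conj().T @ Hn)                  # K x K correlation-magnitude matrix
    off = R[~np.eye(K, dtype=bool)]
    print(f"M = {M:3d}: mean inter-user correlation ~ {off.mean():.2f}")
```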

    System capacity enhancement for 5G network and beyond

    A thesis submitted to the University of Bedfordshire, in fulfilment of the requirements for the degree of Doctor of Philosophy. The demand for wireless digital data is increasing dramatically year on year. Wireless devices such as laptops, smartphones, tablets, smart watches and virtual reality headsets are becoming an important part of people's daily lives. The number of mobile devices is growing very quickly, as are the requirements placed on them, such as super-high-resolution image/video, fast download speeds, very low latency and high reliability, which pose challenges to existing wireless communication networks. Unlike the previous four generations of communication networks, the fifth-generation (5G) wireless network encompasses many technologies, such as millimetre-wave communication, massive multiple-input multiple-output (MIMO), visible light communication (VLC) and heterogeneous networks (HetNets). Although 5G has not yet been standardised, these technologies have been studied in both academia and industry, and the goal of this research is to enhance the system capacity of 5G networks and beyond by studying key problems in these technologies and providing effective solutions from a system-implementation and hardware-impairment perspective. The key problems studied in this thesis include interference cancellation in HetNets, impairment calibration for massive MIMO, channel state estimation for VLC, and a low-latency parallel Turbo decoding technique. Firstly, inter-cell interference in HetNets is studied and a cell-specific reference signal (CRS) interference cancellation method is proposed to mitigate the performance degradation in enhanced inter-cell interference coordination (eICIC). This method takes the carrier frequency offset (CFO) and timing offset (TO) of the user's received signal into account. By reconstructing the interfering signal and cancelling it afterwards, the capacity of the HetNet is enhanced. Secondly, for massive MIMO systems, the radio frequency (RF) impairments of the hardware degrade the beamforming performance. When operated in time division duplex (TDD) mode, a massive MIMO system relies on the reciprocity of the channel, which can be broken by the transmitter and receiver RF impairments. Impairment calibration has been studied and a closed-loop reciprocity calibration method is proposed in this thesis. A test device (TD) is introduced in this calibration method; it estimates the transmitters' impairments over the air and feeds the results back to the base station via the Internet. The uplink pilots sent by the TD assist the BS receivers' impairment estimation. With both the uplink and downlink impairment estimates, the reciprocity calibration coefficients can be obtained. The performance of the proposed method is evaluated by computer simulation and lab experiment. Channel coding is an essential part of a wireless communication system that helps combat noise and ensures correct information delivery. Turbo codes are among the most reliable codes and have been adopted in many standards such as WiMAX and LTE. However, the decoding process of Turbo codes is time-consuming, and the decoding latency must be reduced to meet the requirements of future networks. A reverse interleave address generator is proposed that reduces the decoding time, and a low-latency parallel Turbo decoder has been implemented on an FPGA platform. The simulation and experiment results prove the effectiveness of the address generator and show that there is a trade-off between latency and throughput with limited hardware resources. Apart from the above contributions, this thesis also investigates multi-user precoding for MIMO VLC systems. As a green and secure technology, VLC is attracting more and more attention and could become part of the 5G network, especially for indoor communication. In indoor scenarios, the MIMO VLC channel can easily become ill-conditioned; hence, it is important to study the impact of the channel state on the precoding performance. A channel state estimation method is proposed based on the signal-to-interference-plus-noise ratio (SINR) of the users' received signals. Simulation results show that it can enhance the capacity of the indoor MIMO VLC system.
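    As an illustration of the reciprocity-calibration idea discussed in this abstract, the sketch below uses a simple multiplicative impairment model: each BS antenna has an unknown transmit gain and receive gain, and the per-antenna calibration coefficient is the ratio of a downlink observation to an uplink observation. The gains, the impairment model and the estimator are synthetic assumptions, not the thesis's closed-loop method.

```python
import numpy as np

# TDD reciprocity calibration sketch: the downlink channel seen by a reference
# device and the uplink channel seen by the BS differ per antenna by t[m]/r[m].
# Estimating that ratio once remains valid for later channel realizations,
# since hardware gains change slowly. All values are synthetic assumptions.
rng = np.random.default_rng(5)
M = 64
t = 1 + 0.2 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))  # BS TX impairments
r = 1 + 0.2 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))  # BS RX impairments

h = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)  # pilot-phase channel
h_dl = t * h                      # downlink observation at the test device
h_ul = r * h                      # uplink observation at the BS
c = h_dl / h_ul                   # per-antenna calibration coefficients, c = t / r

h_new = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)  # later coherence block
print(np.allclose(c * (r * h_new), t * h_new))  # calibrated uplink predicts the downlink
```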

    A new processing approach for reducing computational complexity in cloud-RAN mobile networks

    Cloud computing is considered one of the key drivers for the next generation of mobile networks (e.g. 5G). This is combined with the dramatic expansion in mobile networks, involving millions (or even billions) of subscribers and a greater number of current and future mobile applications (e.g. IoT). The Cloud Radio Access Network (C-RAN) architecture has been proposed as a novel concept to gain the benefits of cloud computing as an efficient computing resource and to meet the requirements of future cellular networks. However, the computational complexity of obtaining the channel state information in the fully centralized C-RAN increases as the size of the network is scaled up, as a result of the enlargement of the channel information matrices. To tackle this problem of complexity and latency, the MapReduce framework and fast matrix algorithms are proposed. This paper presents two levels of complexity reduction in the process of estimating the channel information in cellular networks. The results illustrate that the complexity can be reduced from O(N^3) to O((N/k)^3), where N is the total number of RRHs and k is the number of RRHs per group, by dividing the processing of RRHs into parallel groups and harnessing the MapReduce parallel algorithm to process them. The second approach reduces the computational complexity from O((N/k)^3) to O((N/k)^2.807) using fast matrix inversion algorithms. The reduction in complexity and latency leads to a significant improvement in both the estimation time and the scalability of C-RAN networks.
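    The grouped-processing idea can be sketched as below: rather than inverting one large channel-related matrix, the RRHs are partitioned into groups and each group's sub-matrix is inverted independently, which is the step a MapReduce map phase can distribute across workers. The stand-in matrix, group dimension and flop counts are illustrative assumptions.

```python
import numpy as np

# Toy sketch of grouped processing: inverting one N x N matrix costs O(N^3),
# whereas inverting per-group sub-matrices independently keeps the cubic cost
# at the group dimension and parallelizes naturally ("map" phase). Strassen-
# style inversion would further lower the per-group exponent from 3 to ~2.807.
rng = np.random.default_rng(6)
N, group_dim = 512, 64
groups = N // group_dim

A = rng.standard_normal((N, N)) + N * np.eye(N)       # well-conditioned stand-in matrix
blocks = [A[g*group_dim:(g+1)*group_dim, g*group_dim:(g+1)*group_dim]
          for g in range(groups)]

# "map": invert each group's sub-matrix independently (parallelizable across workers)
inv_blocks = [np.linalg.inv(B) for B in blocks]
print(np.allclose(blocks[0] @ inv_blocks[0], np.eye(group_dim)))   # sanity check

print(f"full inversion: ~{N**3:.1e} flops; grouped: ~{groups * group_dim**3:.1e} flops in total")
```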

    Joint Radar and Communication Design: Applications, State-of-the-Art, and the Road Ahead

    Sharing of frequency bands between radar and communication systems has attracted substantial attention, as it can avoid the under-utilization of otherwise permanently allocated spectral resources, thus improving efficiency. Further, there is increasing demand for radar and communication systems that share the hardware platform as well as the frequency band, as this not only decongests the spectrum, but also benefits both sensing and signaling operations via the full cooperation between the two functionalities. Nevertheless, the success of spectrum and hardware sharing between radar and communication systems critically depends on high-quality joint radar and communication designs. In the first part of this paper, we overview the research progress in the areas of radar-communication coexistence and dual-functional radar-communication (DFRC) systems, with particular emphasis on application scenarios and technical approaches. In the second part, we propose a novel transceiver architecture and frame structure for a DFRC base station (BS) operating in the millimeter wave (mmWave) band, using the hybrid analog-digital (HAD) beamforming technique. We assume that the BS is serving a multi-antenna user equipment (UE) over a mmWave channel, while at the same time actively detecting targets. The targets also play the role of scatterers for the communication signal. In that framework, we propose a novel scheme for joint target search and communication channel estimation, which relies on omni-directional pilot signals generated by the HAD structure. Given a fully-digital communication precoder and a desired radar transmit beampattern, we propose to design the analog and digital precoders under non-convex constant-modulus (CM) and power constraints, such that the BS can form narrow beams towards all the targets, while pre-equalizing the impact of the communication channel. Furthermore, we design a HAD receiver that can simultaneously process signals from the UE and echo waves from the targets. By tracking the angular variation of the targets, we show that it is possible to recover the target echoes and mitigate the resulting interference to the UE signals, even when the radar and communication signals share the same signal-to-noise ratio (SNR). The feasibility and efficiency of the proposed approaches in realizing DFRC are verified via numerical simulations. Finally, the paper concludes with an overview of the open problems in the research field of communication and radar spectrum sharing (CRSS).
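    As a small illustration of the quantity the precoder design above shapes, the sketch below evaluates the transmit beampattern of a uniform linear array for a precoder built from constant-modulus steering vectors towards hypothetical target angles. The array size, target angles and precoder construction are assumptions; the paper's joint analog-digital optimization is not reproduced.

```python
import numpy as np

# Transmit beampattern sketch: P(theta) = a(theta)^H R a(theta), with R = F F^H
# the covariance of the precoded signal. F simply stacks constant-modulus
# steering vectors towards assumed target angles (an illustrative precoder).
N = 32                                         # transmit antennas (assumption)
targets_deg = [-40, 0, 35]                     # hypothetical target angles

def steer(theta_deg, n=N):
    k = np.arange(n)
    return np.exp(1j * np.pi * k * np.sin(np.deg2rad(theta_deg))) / np.sqrt(n)

F = np.column_stack([steer(t) for t in targets_deg])   # constant-modulus columns
R = F @ F.conj().T

for theta in range(-90, 91, 5):                # crude text plot of the beampattern
    a = steer(theta)
    p = np.real(a.conj() @ R @ a)
    print(f"{theta:4d} deg  {'#' * int(40 * p / len(targets_deg))}")
```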

    Hardware-Conscious Wireless Communication System Design

    The work at hand is a selection of topics in efficient wireless communication system design, with the topics logically divided into two groups. One group can be described as hardware designs conscious of their possibilities and limitations. In other words, it is about hardware that chooses its configuration and properties depending on the performance that needs to be delivered and the influence of external factors, with the goal of keeping the energy consumption as low as possible. Design parameters that trade off power against complexity are identified for analog, mixed-signal and digital circuits, and the implications of these trade-offs are analyzed in detail. An analog front end and an LDPC channel decoder that adapt their parameters to the environment (e.g. fluctuating power levels due to fading) are proposed, and it is analyzed how much power/energy these environment-adaptive structures save compared to non-adaptive designs made for the worst-case scenario. Additionally, the impact of ADC bit resolution on the energy efficiency of a massive MIMO system is examined in detail, with the goal of finding bit resolutions that maximize the energy efficiency under various system setups. In the other group of themes, one can recognize systems where the system architect was conscious of fundamental limitations stemming from hardware. Put another way, in these designs there is no attempt to tweak or tune the hardware; on the contrary, the system design works around an existing and unchangeable hardware limitation. As a workaround for the problematic centralized topology, a massive MIMO base station based on the daisy-chain topology is proposed and a method for signal processing tailored to the daisy-chain setup is designed. In another example, a large group of cooperating relays is split into several smaller groups, each performing relaying cooperatively and independently of the others. As cooperation consumes resources (such as bandwidth), splitting the system into smaller, independent cooperative parts helps save resources and is again an example of a workaround for an inherent limitation. From the analyses performed in this thesis, promising observations about hardware consciousness can be made. Adapting the structure of a hardware block to the environment can bring massive savings in energy, and simple workarounds prove to perform almost as well as the inherently limited designs, with the limitation being successfully bypassed. As a general observation, it can be concluded that hardware consciousness pays off.
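    The ADC-resolution trade-off examined in the first group of topics can be illustrated with a back-of-the-envelope model: an additive quantization-noise approximation for the effective SINR, and an ADC power model that doubles per extra bit. All constants below (SNR, antenna count, static power, per-conversion energy) are assumptions chosen only to show that energy efficiency peaks at a moderate resolution; this is not the thesis's analysis.

```python
import numpy as np

# Energy-efficiency vs. ADC resolution sketch using the additive
# quantization-noise model and a Walden-style ADC power model.
# All constants are illustrative assumptions.
M, snr = 100, 10.0            # antennas, per-antenna SNR (linear)
fs = 40e6                     # sampling rate
P_static = 1.0                # W, power that does not scale with ADC resolution
E_conv = 1e-12                # J per conversion step (assumed figure of merit)

for b in range(1, 13):
    rho = 1 - np.pi * np.sqrt(3) / 2 * 2 ** (-2 * b)   # quantization gain factor
    sndr = rho * snr / (1 + (1 - rho) * snr)            # effective post-ADC SINR
    rate = np.log2(1 + M * sndr)                        # crude MRC-style array gain
    P_adc = 2 * M * E_conv * fs * 2 ** b                # two ADCs (I/Q) per antenna
    ee = rate / (P_static + P_adc)
    print(f"{b:2d} bits: rate ~ {rate:5.1f} bit/s/Hz, EE ~ {ee:5.1f} bit/s/Hz/W")
```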