
    Dynamic RF Chain Selection for Energy Efficient and Low Complexity Hybrid Beamforming in Millimeter Wave MIMO Systems

    This paper proposes a novel architecture with a framework that dynamically activates the optimal number of radio frequency (RF) chains used to implement hybrid beamforming in a millimeter wave (mmWave) multiple-input multiple-output (MIMO) system. We use fractional programming to solve an energy efficiency maximization problem and exploit a Dinkelbach method (DM)-based framework to optimize the number of active RF chains and data streams. This solution is updated dynamically based on the current channel conditions, where the analog/digital (A/D) hybrid precoder and combiner matrices at the transmitter and the receiver, respectively, are designed using a codebook-based fast approximation solution called gradient pursuit (GP). The GP algorithm requires less run time and has lower complexity than the state-of-the-art orthogonal matching pursuit (OMP) solution. The energy and spectral efficiency performance of the proposed framework is compared with existing state-of-the-art solutions such as brute force (BF), the digital beamformer, and the analog beamformer. Codebook-free approaches for designing the precoders and combiners, such as the alternating direction method of multipliers (ADMM) and a singular value decomposition (SVD)-based solution, are also shown to be readily incorporated into the proposed framework to achieve better energy efficiency performance.
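
As a rough illustration of the Dinkelbach iteration described in the abstract above, the sketch below maximizes energy efficiency (rate over power) across a discrete set of active RF chain counts. The rate and power models, tolerance, and sizes are placeholder assumptions, not the paper's models.

```python
# A minimal sketch of the Dinkelbach method for energy-efficiency maximization
# over the number of active RF chains. The rate and power models below are
# illustrative placeholders, not the models used in the paper.
import numpy as np

def rate(n_rf, snr_db=10.0, bandwidth_hz=100e6):
    # Placeholder spectral-efficiency model: n_rf parallel streams sharing the SNR.
    snr = 10 ** (snr_db / 10.0)
    return bandwidth_hz * n_rf * np.log2(1.0 + snr / n_rf)

def power(n_rf, p_common_w=1.0, p_per_chain_w=0.25):
    # Placeholder power model: fixed overhead plus a cost per active RF chain.
    return p_common_w + n_rf * p_per_chain_w

def dinkelbach_rf_selection(n_rf_max=8, tol=1e-6, max_iter=50):
    lam = 0.0                                  # current EE estimate (bit/s per watt)
    best_n = 1
    for _ in range(max_iter):
        # Inner problem: maximize rate(n) - lam * power(n) over the discrete set.
        values = [rate(n) - lam * power(n) for n in range(1, n_rf_max + 1)]
        best_n = int(np.argmax(values)) + 1
        if values[best_n - 1] < tol:           # Dinkelbach convergence test
            break
        lam = rate(best_n) / power(best_n)     # parametric update of the EE ratio
    return best_n, lam

n_opt, ee = dinkelbach_rf_selection()
print(f"active RF chains: {n_opt}, energy efficiency ~ {ee:.3e} bit/s/W")
```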

    Energy efficient and low complexity techniques for the next generation millimeter wave hybrid MIMO systems

    The fifth generation (and beyond) wireless communication systems require increased capacity, high data rates, improved coverage and reduced energy consumption. This can potentially be provided by unused available spectrum such as the Millimeter Wave (mmWave) frequency spectrum above 30 GHz. The high bandwidths of mmWave communication compared to sub-6 GHz microwave frequency bands must be traded off against increased path loss, which can be compensated for using large-scale antenna arrays such as Multiple-Input Multiple-Output (MIMO) systems. Analog/digital Hybrid Beamforming (HBF) architectures for mmWave MIMO systems reduce the hardware complexity and power consumption by using fewer Radio Frequency (RF) chains, and support multi-stream communication with high Spectral Efficiency (SE). Such systems can also be optimized to achieve high Energy Efficiency (EE) gains with low complexity, but this has not been widely studied in the literature. This PhD project focused on designing energy efficient and low complexity communication techniques for next generation mmWave hybrid MIMO systems.

Firstly, a novel architecture with a framework that dynamically activates the optimal number of RF chains was designed. Fractional programming was used to solve an EE maximization problem, and the Dinkelbach Method (DM) based framework was exploited to optimize the number of active RF chains and data streams. The DM is an iterative and parametric algorithm in which a sequence of easier problems converges to the global solution. The HBF matrices were designed using gradient pursuit, a cost-effective and fast codebook-based approximation algorithm. This work maximizes EE by exploiting the structure of RF chains with full resolution sampling, unlike existing baseline approaches that use fixed RF chains and aim only for high SE.

Secondly, an efficient sparse mmWave channel estimation algorithm was developed with low resolution Analog-to-Digital Converters (ADCs) at the receiver. The sparsity of the mmWave channel was exploited and the estimation problem was tackled using compressed sensing through a parametric denoiser based on Stein's unbiased risk estimate. Expectation-maximization density estimation was used to avoid the need to specify the channel statistics. Furthermore, an energy efficient mmWave hybrid MIMO system was developed with Digital-to-Analog Converters (DACs) at the transmitter, where the best subset of the active RF chains and the DAC resolution were selected. A novel technique based on the DM and subset selection optimization was implemented for EE maximization. This work exploits low resolution sampling at the converting units and provides more efficient solutions in terms of EE and channel estimation than existing baselines in the literature.

Thirdly, the DAC and ADC bit resolutions and the HBF matrices were jointly optimized for EE maximization. The flexibility of choosing the bit resolution for each DAC and ADC was considered, and they were optimized on a frame-by-frame basis, unlike existing approaches based on fixed resolution sampling. A novel decomposition of the HBF matrices into three parts was introduced to represent the analog beamformer matrix, the DAC/ADC bit resolution matrix and the baseband beamformer matrix. The alternating direction method of multipliers was used to solve this matrix factorization problem, as it has been successfully applied to other non-convex matrix factorization problems in the literature. This work considers EE maximization with low resolution sampling at both the DACs and the ADCs simultaneously, and jointly optimizes the HBF and DAC/ADC bit resolution matrices, unlike existing baselines that use fixed bit resolution or optimize either the DAC/ADC bit resolution or the HBF matrices, but not both.
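
For context on the codebook-based hybrid beamforming design mentioned above, the sketch below shows the classical orthogonal matching pursuit (OMP) construction that gradient pursuit approximates more cheaply. The DFT codebook, random channel, and dimensions are illustrative assumptions, not the thesis's setup.

```python
# A minimal sketch of codebook-based hybrid precoder design via orthogonal
# matching pursuit (OMP). Array sizes, the DFT codebook, and the random
# channel are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_tx, n_rf, n_s = 32, 4, 2                      # antennas, RF chains, data streams

# Illustrative DFT codebook of candidate analog beamforming vectors.
codebook = np.exp(2j * np.pi * np.outer(np.arange(n_tx), np.arange(n_tx)) / n_tx) / np.sqrt(n_tx)

# "Optimal" fully digital precoder: right singular vectors of a random channel.
H = (rng.standard_normal((16, n_tx)) + 1j * rng.standard_normal((16, n_tx))) / np.sqrt(2)
_, _, Vh = np.linalg.svd(H)
F_opt = Vh.conj().T[:, :n_s]

F_rf = np.zeros((n_tx, 0), dtype=complex)
F_res = F_opt.copy()
for _ in range(n_rf):
    corr = codebook.conj().T @ F_res            # correlate residual with codebook
    k = int(np.argmax(np.sum(np.abs(corr) ** 2, axis=1)))
    F_rf = np.hstack([F_rf, codebook[:, [k]]])  # add the best-matching column
    F_bb, *_ = np.linalg.lstsq(F_rf, F_opt, rcond=None)
    F_res = F_opt - F_rf @ F_bb                 # update and normalize the residual
    F_res /= np.linalg.norm(F_res, "fro")

F_bb *= np.sqrt(n_s) / np.linalg.norm(F_rf @ F_bb, "fro")   # power normalization
print("approximation error:", np.linalg.norm(F_opt - F_rf @ F_bb, "fro"))
```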

    Compressive Sensing of Multiband Spectrum towards Real-World Wideband Applications.

    PhD thesis. Spectrum scarcity is a major challenge in wireless communication systems as they rapidly evolve towards higher capacity and bandwidth. The fact that the real-world spectrum, as a finite resource, is sparsely utilized in certain bands spurs the proposal of spectrum sharing. In wideband scenarios, accurate real-time spectrum sensing, as an enabler of spectrum sharing, can become inefficient as it naturally requires the sampling rate of the analog-to-digital conversion to exceed the Nyquist rate, which is resource-costly and energy-consuming. Compressive sensing techniques have been applied in wideband spectrum sensing to achieve sub-Nyquist-rate sampling of frequency-sparse signals and alleviate such burdens. A major challenge of compressive spectrum sensing (CSS) is the complexity of the sparse recovery algorithm. Greedy algorithms achieve sparse recovery with low complexity but require prior knowledge of the signal sparsity. A practical spectrum sparsity estimation scheme is therefore proposed. Furthermore, a reduction of the dimension of the sparse recovery problem is proposed, which further reduces the complexity and achieves signal denoising that promotes recovery fidelity. The robust detection of incumbent radios is also a fundamental problem of CSS. To address the energy detection problem in CSS, the spectrum statistics of the recovered signals are investigated and a practical threshold adaptation scheme for energy detection is proposed. Moreover, it is of particular interest to identify the challenges and opportunities in implementing real-world CSS for systems with large bandwidth. Initial research on the practical issues towards the real-world realization of a wideband CSS system based on the multicoset sampler architecture is presented. In all, this thesis provides insights into two critical challenges, low-complexity sparse recovery and robust energy detection, in the general CSS context, while also looking into particular issues towards real-world CSS implementation based on the multicoset sampler.
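
A minimal sketch of per-subband energy detection on a recovered spectrum estimate is given below; the median-based noise-floor estimate and the threshold rule are generic illustrations and not the adaptive threshold scheme proposed in the thesis.

```python
# Toy energy detector over subbands of a recovered wideband spectrum estimate.
# The threshold rule is illustrative; the thesis adapts it to recovered statistics.
import numpy as np

def detect_occupied_bands(spectrum, n_bands, p_fa=0.01):
    """spectrum: complex frequency-domain estimate from the sparse recovery step."""
    bands = np.array_split(np.abs(spectrum) ** 2, n_bands)
    energies = np.array([b.mean() for b in bands])
    noise_floor = np.median(energies)            # robust noise-level estimate
    # Scale the threshold off the noise floor and a target false-alarm rate.
    threshold = noise_floor * (1.0 + 3.0 * np.sqrt(-np.log(p_fa) / len(bands[0])))
    return energies > threshold, energies, threshold

# Toy example: 3 of 16 bands occupied.
rng = np.random.default_rng(1)
n_fft, n_bands = 4096, 16
spec = (rng.standard_normal(n_fft) + 1j * rng.standard_normal(n_fft)) * 0.1
for b in (2, 7, 11):                             # inject active bands
    lo = b * n_fft // n_bands
    spec[lo:lo + n_fft // n_bands] += 1.0
occupied, _, _ = detect_occupied_bands(spec, n_bands)
print("occupied bands:", np.flatnonzero(occupied))
```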

    Channel Estimation and Feedback Techniques for Massive MIMO Systems

    PhD dissertation, Department of Electrical and Computer Engineering, Graduate School, Seoul National University, February 2017 (advisor: 이정우). To meet the demand for high throughput in next-generation wireless systems, various directions for physical layer evolution are being explored. Massive multiple-input multiple-output (MIMO) systems, characterized by a large number of antennas at the transmitter, are expected to become a key enabler for spectral efficiency improvement. In massive MIMO systems, thanks to the orthogonality between different users' channels, high spectral and energy efficiency can be achieved through simple signal processing techniques. However, to obtain such advantages, accurate channel state information (CSI) needs to be available, and acquiring CSI in massive MIMO systems is challenging due to the increased channel dimension. In frequency division duplexing (FDD) systems, where CSI at the transmitter is acquired through downlink training and uplink feedback, the overhead for training and feedback increases in proportion to the number of antennas, and the resource for data transmission becomes scarce in massive MIMO systems. In time division duplexing (TDD) systems, where channel reciprocity holds and the downlink CSI can be obtained through uplink training, pilot contamination due to correlated pilots becomes a performance bottleneck when the number of antennas increases. In this dissertation, I propose efficient CSI acquisition techniques for various massive MIMO systems. First, I develop a downlink training technique for FDD massive MIMO systems, which estimates the downlink channel with small overhead. To this end, compressed sensing tools are utilized, and the training overhead can be greatly reduced by exploiting previous channel information. Next, a limited feedback scheme is developed for FDD massive MIMO systems. The proposed scheme reduces the feedback overhead using a dimension reduction technique that exploits the spatial and temporal correlation of the channel. Lastly, I analyze the effect of pilot contamination, which has been regarded as a performance bottleneck in multi-cell massive MIMO systems, and propose two uplink training strategies.
An iterative pilot design scheme is developed for small networks, and a scalable training framework is also proposed for networks with many cells.

Contents: 1 Introduction; 2 Compressed Sensing-Aided Downlink Training; 3 Projection-Based Differential Feedback; 4 Mitigating Pilot Contamination via Pilot Design; 5 Conclusion.
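
As a toy illustration of the dimension-reduction idea behind the limited feedback scheme, the sketch below projects a spatially correlated channel onto the dominant eigen-subspace of its correlation matrix, quantizes the few coefficients, and reconstructs the channel. The correlation model, quantizer, and dimensions are assumptions for illustration only, not the dissertation's projection-based differential feedback design.

```python
# Correlation-based dimension reduction for limited feedback: project the
# high-dimensional channel onto the dominant eigen-subspace of its spatial
# correlation, quantize the few coefficients, reconstruct at the transmitter.
import numpy as np

rng = np.random.default_rng(2)
n_tx, rank = 64, 8                               # BS antennas, feedback dimension

# Spatially correlated channel: exponential correlation model (assumption).
rho = 0.9
R = rho ** np.abs(np.subtract.outer(np.arange(n_tx), np.arange(n_tx)))
L = np.linalg.cholesky(R)
h = L @ (rng.standard_normal(n_tx) + 1j * rng.standard_normal(n_tx)) / np.sqrt(2)

# Dominant eigen-subspace of R, known at both ends via long-term statistics.
eigvals, eigvecs = np.linalg.eigh(R)
U = eigvecs[:, ::-1][:, :rank]

c = U.conj().T @ h                               # low-dimensional projection
c_q = np.round(c * 8) / 8                        # crude uniform quantizer (illustrative)
h_hat = U @ c_q                                  # reconstruction at the transmitter

nmse = np.linalg.norm(h - h_hat) ** 2 / np.linalg.norm(h) ** 2
print(f"feedback coefficients: {rank}/{n_tx}, NMSE = {nmse:.3f}")
```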

    Spectrum Sharing, Latency, and Security in 5G Networks with Application to IoT and Smart Grid

    The surge of mobile devices such as smartphones and tablets demands additional capacity. On the other hand, the Internet of Things (IoT) and the smart grid, which connect numerous sensors, devices, and machines, require ubiquitous connectivity and data security. Additionally, some use cases, such as automated manufacturing, automated transportation, and the smart grid, require latency as low as 1 ms and reliability as high as 99.99%. To enhance throughput and support massive connectivity, sharing of the unlicensed spectrum (3.5 GHz, 5 GHz, and mmWave) is a potential solution. To address latency, on the other hand, drastic changes in the network architecture are required. The fifth generation (5G) cellular networks will embrace spectrum sharing and network architecture modifications to address throughput enhancement, massive connectivity, and low latency. To utilize the unlicensed spectrum, we propose a fixed duty cycle based coexistence of LTE and WiFi, in which the duty cycle of LTE transmission can be adjusted based on the amount of data. In the second approach, a multi-armed bandit learning based coexistence of LTE and WiFi has been developed, in which the duty cycle of transmission and the downlink power are adapted through exploration and exploitation. This approach improves the aggregated capacity by 33%, along with cell-edge and energy efficiency enhancements. We also investigate the performance of LTE and ZigBee coexistence using the smart grid as a scenario. For low latency, we summarize the existing works into three domains in the context of 5G networks: core, radio, and caching networks. Along with this, fundamental constraints for achieving low latency are identified, followed by a general overview of exemplary 5G networks. Besides that, a loop-free, low-latency and local-decision-based routing protocol is derived in the context of the smart grid. This approach ensures low-latency and reliable data communication for stationary devices. To address data security in wireless communication, we introduce geo-location based data encryption, along with node authentication by a k-nearest neighbor algorithm. In a second approach, node authentication by a support vector machine, along with public-private key management, is proposed. Both approaches ensure data security without increasing the packet overhead compared to existing approaches.
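
The following sketch illustrates bandit-style duty-cycle adaptation of the kind described above, using a UCB1 rule over a small set of candidate duty cycles; the reward model standing in for measured aggregate throughput is a placeholder assumption, not the paper's measurements.

```python
# Bandit-based duty-cycle adaptation for LTE/WiFi coexistence (illustrative).
import numpy as np

duty_cycles = np.array([0.2, 0.4, 0.6, 0.8])     # candidate LTE-on fractions (arms)
rng = np.random.default_rng(3)

def observed_reward(dc):
    # Placeholder aggregate-throughput model: LTE gains with duty cycle while
    # WiFi loses; noise mimics channel/traffic variation.
    lte = dc * 75.0
    wifi = (1.0 - dc) * 60.0
    return lte + wifi + rng.normal(0.0, 3.0)

counts = np.zeros(len(duty_cycles))
means = np.zeros(len(duty_cycles))
for t in range(1, 501):
    if 0 in counts:
        arm = int(np.argmin(counts))             # play each arm once first
    else:
        ucb = means + np.sqrt(2.0 * np.log(t) / counts)
        arm = int(np.argmax(ucb))                # balance exploration and exploitation
    r = observed_reward(duty_cycles[arm])
    counts[arm] += 1
    means[arm] += (r - means[arm]) / counts[arm] # incremental mean update
print("selected duty cycle:", duty_cycles[int(np.argmax(means))])
```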

    Holographic MIMO Communications: Theoretical Foundations, Enabling Technologies, and Future Directions

    Future wireless systems are envisioned to create an endogenously holography-capable, intelligent, and programmable radio propagation environment that will offer unprecedented capabilities for high spectral and energy efficiency, low latency, and massive connectivity. A potential and promising technology for supporting the expected extreme requirements of sixth-generation (6G) communication systems is the concept of holographic multiple-input multiple-output (HMIMO), which will actualize holographic radios with reasonable power consumption and fabrication cost. HMIMO is facilitated by ultra-thin, extremely large, and nearly continuous surfaces that incorporate reconfigurable and sub-wavelength-spaced antennas and/or metamaterials. Such surfaces, comprising densely packed electromagnetic (EM) excited elements, are capable of recording and manipulating impinging fields with utmost flexibility and precision, as well as with reduced cost and power consumption, thereby shaping arbitrary intended EM waves with high energy efficiency. The powerful EM processing capability of HMIMO opens up the possibility of wireless communications at the holographic imaging level, paving the way for signal processing techniques realized in the EM domain, possibly in conjunction with their digital-domain counterparts. However, despite this significant potential, studies on HMIMO communications are still at an initial stage, its fundamental limits remain to be unveiled, and a number of critical technical challenges need to be addressed. In this survey, we present a comprehensive overview of the latest advances in the HMIMO communications paradigm, with a special focus on their physical aspects, their theoretical foundations, and the enabling technologies for HMIMO systems. We also compare HMIMO with existing multi-antenna technologies, especially massive MIMO, present various...

    Intelligent-Reflecting-Surface-Assisted UAV Communications for 6G Networks

    In 6th-Generation (6G) mobile networks, Intelligent Reflecting Surfaces (IRSs) and Unmanned Aerial Vehicles (UAVs) have emerged as promising technologies to address the coverage difficulties and resource constraints faced by terrestrial networks. UAVs, with their mobility and low cost, offer diverse connectivity options for mobile users and a novel deployment paradigm for 6G networks. However, the limited battery capacity of UAVs, dynamic and unpredictable channel environments, and communication resource constraints result in poor performance of traditional UAV-based networks. IRSs can not only reconstruct the wireless environment in a unique way but also provide wireless relaying in a cost-effective manner; hence, they have received significant attention as a promising solution to these challenges. In this article, we conduct a comprehensive survey on IRS-assisted UAV communications for 6G networks. First, the primary issues, key technologies, and application scenarios of IRS-assisted UAV communications for 6G networks are introduced. Then, we put forward specific solutions to the issues of IRS-assisted UAV communications. Finally, we discuss open issues and future research directions to guide researchers in related fields.

    Addressing training data sparsity and interpretability challenges in AI based cellular networks

    To meet the diverse and stringent communication requirements of emerging network use cases, zero-touch artificial intelligence (AI) based deep automation in cellular networks is envisioned. However, the full potential of AI in cellular networks remains hindered by two key challenges: (i) training data is not as freely available in cellular networks as in other fields where AI has made a profound impact, and (ii) current AI models tend to behave as black boxes, making operators reluctant to entrust the operation of multi-billion mission-critical networks to a black-box AI engine that allows little insight into, or discovery of, the relationships between the configuration and optimization parameters and the key performance indicators. This dissertation systematically addresses and proposes solutions to these two key problems faced by emerging networks.

A framework for addressing the training data sparsity challenge in cellular networks is developed, which can assist network operators and researchers in choosing the optimal data enrichment technique for different network scenarios based on the available information. The framework spans classical interpolation techniques such as inverse distance weighting and kriging, more advanced ML-based methods such as transfer learning and generative adversarial networks, several new techniques such as matrix completion theory and leveraging different types of network geometries, as well as simulators and testbeds, among others. The proposed framework leads to more accurate ML models that rely on a sufficient amount of representative training data. Moreover, solutions are proposed to address the data sparsity challenge specifically in Minimization of Drive Tests (MDT) based automation approaches. MDT allows coverage to be estimated at the base station by exploiting measurement reports gathered by user equipment, without the need for drive tests. Thus, MDT is a key enabling feature for data- and AI-driven autonomous operation and optimization in current and emerging cellular networks. However, to date, the utility of the MDT feature remains thwarted by issues such as sparsity of user reports and user positioning inaccuracy. For the first time, this dissertation reveals the existence of an optimal bin width for coverage estimation in the presence of inaccurate user positioning, scarcity of user reports, and quantization error. The presented framework can enable network operators to configure, for a given positioning accuracy and user density, the bin size that results in the most accurate MDT-based coverage estimation.

The lack of interpretability in AI-enabled networks is addressed by proposing a first-of-its-kind neural network architecture that leverages analytical modeling, domain knowledge, big data, and machine learning to turn black-box machine learning models into more interpretable ones. The proposed approach combines analytical modeling and domain knowledge to custom-design machine learning models with the aim of moving towards interpretable machine learning models that not only require less training time but can also deal with issues such as sparsity of training data and determination of model hyperparameters. The approach is tested using both simulated and real data, and results show that it outperforms existing mathematical models while remaining interpretable compared with black-box ML models. Thus, the proposed approach can be used to derive better mathematical models of complex systems.

The findings from this dissertation can help solve the challenges in emerging AI-based cellular networks and thus aid in their design, operation and optimization.
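
As one concrete example of the classical data-enrichment options mentioned in the abstract, the sketch below uses inverse-distance-weighted interpolation to fill unobserved coverage bins from sparse, synthetic MDT-style reports; the grid, bin width, power exponent, and path-loss model are illustrative assumptions.

```python
# Inverse-distance-weighted (IDW) interpolation of sparse coverage reports
# onto a regular grid of bins (toy data, illustrative parameters).
import numpy as np

def idw(report_xy, report_rsrp_dbm, query_xy, power=2.0, eps=1e-9):
    # Weight each report by 1 / distance^power to the query point.
    d = np.linalg.norm(report_xy[None, :, :] - query_xy[:, None, :], axis=-1)
    w = 1.0 / (d ** power + eps)
    return (w * report_rsrp_dbm[None, :]).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(4)
reports = rng.uniform(0.0, 1000.0, size=(50, 2))           # sparse report locations (m)
rsrp = -70.0 - 30.0 * np.log10(1.0 + np.linalg.norm(reports - 500.0, axis=1))  # toy path loss

# Estimate coverage on a 20 x 20 grid of bins (bin width 50 m).
gx, gy = np.meshgrid(np.linspace(25.0, 975.0, 20), np.linspace(25.0, 975.0, 20))
grid = np.column_stack([gx.ravel(), gy.ravel()])
coverage_map = idw(reports, rsrp, grid).reshape(20, 20)
print("estimated RSRP at map centre bin:", coverage_map[10, 10])
```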