
    Achievable Rates and Training Overheads for a Measured LOS Massive MIMO Channel

    This paper presents achievable uplink (UL) sum-rate predictions for a measured line-of-sight (LOS) massive multiple-input multiple-output (massive MIMO, MMIMO) scenario and illustrates the trade-off between spatial multiplexing performance and channel de-coherence rate as the number of base station (BS) antennas increases. In addition, an orthogonal frequency division multiplexing (OFDM) case study based on the 90% coherence time is presented to evaluate the impact of MMIMO channel training overheads in high-speed LOS scenarios. It is shown that whilst 25% of the achievable zero-forcing (ZF) sum-rate is lost when the channel re-sounding interval is increased by a factor of 4, the OFDM training overheads for a 100-antenna MMIMO BS using an LTE-like physical layer could be as low as 2% for a terminal speed of 90 m/s. Comment: 4 pages, 5 figures.
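    As a rough illustration of the trade-off described above, the sketch below computes the pilot overhead implied by a given channel re-sounding interval under an LTE-like OFDM numerology. The symbol duration, number of pilot symbols, and re-sounding intervals are illustrative assumptions, not the measured values reported in the paper.

```python
# Toy calculation of the OFDM training overhead implied by a given channel
# re-sounding interval under an LTE-like numerology. All numbers below
# (symbol duration, number of pilot symbols, re-sounding intervals) are
# illustrative assumptions, not the measured values from the paper.

def training_overhead(num_pilot_symbols: int,
                      symbol_duration_s: float,
                      resounding_interval_s: float) -> float:
    """Fraction of airtime spent on uplink pilots when the channel is
    re-sounded once per re-sounding interval."""
    return num_pilot_symbols * symbol_duration_s / resounding_interval_s

if __name__ == "__main__":
    symbol_s = 71.4e-6          # LTE-like OFDM symbol incl. cyclic prefix
    # Hypothetical case: 8 uplink pilot symbols per sounding, with the
    # re-sounding interval increased by a factor of 4 (5 ms -> 20 ms).
    for interval_s in (5e-3, 20e-3):
        eta = training_overhead(8, symbol_s, interval_s)
        print(f"re-sounding every {interval_s * 1e3:.0f} ms -> overhead ~ {eta:.1%}")
```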

    D4.2 Intelligent D-Band wireless systems and networks initial designs

    This deliverable gives the results of the ARIADNE project's Task 4.2: Machine Learning based network intelligence. It presents the work conducted on various aspects of network management to deliver system-level, qualitative solutions that leverage diverse machine learning techniques. The different chapters present system-level, simulation and algorithmic models based on multi-agent reinforcement learning, deep reinforcement learning, learning automata for complex event forecasting, a system-level model for proactive handovers and resource allocation, model-driven deep learning-based channel estimation and feedback, as well as strategies for the deployment of machine learning based solutions. In short, D4.2 provides results on promising AI and ML based methods investigated in the ARIADNE project, along with their limitations and potential.

    A Tutorial on Environment-Aware Communications via Channel Knowledge Map for 6G

    Sixth-generation (6G) mobile communication networks are expected to have dense infrastructures, large-dimensional channels, cost-effective hardware, diversified positioning methods, and enhanced intelligence. Such trends bring both new challenges and opportunities for the practical design of 6G. On one hand, acquiring channel state information (CSI) in real time for all wireless links becomes quite challenging in 6G. On the other hand, there would be numerous data sources in 6G containing high-quality location-tagged channel data, making it possible to better learn the local wireless environment. By exploiting such new opportunities and for tackling the CSI acquisition challenge, there is a promising paradigm shift from the conventional environment-unaware communications to the new environment-aware communications based on the novel approach of channel knowledge map (CKM). This article aims to provide a comprehensive tutorial overview of environment-aware communications enabled by CKM to fully harness its benefits for 6G. First, the basic concept of CKM is presented, and a comparison of CKM with various existing channel inference techniques is discussed. Next, the main techniques for CKM construction are discussed, including both the model-free and model-assisted approaches. Furthermore, a general framework is presented for the utilization of CKM to achieve environment-aware communications, followed by some typical CKM-aided communication scenarios. Finally, important open problems in CKM research are highlighted and potential solutions are discussed to inspire future work.
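    As a toy illustration of the model-free construction idea, the sketch below interpolates the channel gain at an unmeasured location from the k nearest location-tagged measurements. The inverse-distance weighting, the synthetic data, and all parameter choices are assumptions for illustration, not the construction methods surveyed in the tutorial.

```python
import numpy as np

# Minimal sketch of a model-free channel knowledge map (CKM) query:
# the channel gain at an unmeasured location is interpolated from the
# k nearest location-tagged measurements. The inverse-distance weighting
# and all values are illustrative assumptions only.

def ckm_query(locations, gains_db, query_xy, k=5, eps=1e-6):
    """Estimate the channel gain (dB) at query_xy from location-tagged data.

    locations : (N, 2) array of measurement coordinates (metres)
    gains_db  : (N,)  array of measured channel gains (dB)
    query_xy  : (2,)  coordinates of the location to predict
    """
    d = np.linalg.norm(locations - query_xy, axis=1)
    nearest = np.argsort(d)[:k]
    w = 1.0 / (d[nearest] + eps)          # inverse-distance weights
    return float(np.sum(w * gains_db[nearest]) / np.sum(w))

# Example with synthetic data: gains decay with distance from a BS at the
# origin, plus log-normal shadowing.
rng = np.random.default_rng(0)
locs = rng.uniform(0, 100, size=(500, 2))
gains = -30 - 30 * np.log10(np.linalg.norm(locs, axis=1) + 1) \
        + rng.normal(0, 2, 500)
print(ckm_query(locs, gains, np.array([40.0, 60.0])))
```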

    5G New Radio Access and Core Network Slicing for Next-Generation Network Services and Management

    In recent years, fifth-generation New Radio (5G NR) has attracted much attention owing to its potential to enhance mobile access networks and enable better support for heterogeneous services and applications. Network slicing has garnered substantial focus as it promises a higher degree of isolation between subscribers with diverse quality-of-service requirements. Integrating 5G NR technologies, specifically the mmWave waveform and numerology schemes, with network slicing can unlock the performance needed to meet high-throughput demands and sub-millisecond latency constraints. While optimizing next-generation access network performance is extremely important, doing so for the core network is equally significant. This is largely due to the numerous core network functions that execute control tasks to establish end-to-end user sessions and route access network traffic. Consequently, the core network has a significant impact on the quality-of-experience of radio access network customers. Currently, the core network lacks true end-to-end slicing isolation and reliability, so there is a pressing need to examine more stringent configurations that offer the required levels of slicing isolation for the envisioned networking landscape. Considering the factors mentioned above, a sequential approach is adopted, starting with the radio access network and progressing to the core network. First, to maximize the downlink average spectral efficiency of an enhanced mobile broadband slice in a time division duplex radio access network while meeting the quality-of-service requirements, an optimization problem is formulated to determine the duplex ratio, numerology scheme, power, and bandwidth allocation. Subsequently, to minimize the uplink transmission power of an ultra-reliable low latency communications slice while satisfying the quality-of-service constraints, a second optimization problem is formulated to determine the same parameters and allocations. Because 5G NR supports dual-band transmissions, it also facilitates the use of different numerology schemes and duplex ratios across bands simultaneously. Both problems, being mixed-integer non-linear programming problems, are relaxed into their respective convex equivalents and solved. Next, shifting attention to aerial networks, a priority-based 5G NR unmanned aerial vehicle (UAV) network is considered in which the enhanced mobile broadband and ultra-reliable low latency communications services are treated as best-effort and high-priority slices, respectively. Following the application of a band access policy, an optimization problem is formulated. The goal is to minimize the downlink quality-of-service gap for the best-effort service, while still meeting the quality-of-service constraints of the high-priority service, through the allocation of transmission power and the assignment of resource blocks. Given that this is a mixed-integer nonlinear programming problem, a low-complexity algorithm, PREDICT (PRiority BasED Resource AllocatIon in Adaptive SliCed NeTwork), which considers the channel quality on each individual resource block over both bands, is designed to solve the problem while accounting more accurately for high-frequency channel conditions.
Transitioning to minimizing the operational latency of the core network, an integer linear programming problem is formulated to instantiate network function instances, assign them to core network servers, assign slices and users to network function instances, and allocate computational resources while maintaining virtual network function isolation and physical separation of the core network control and user planes. The actor-critic method is employed to solve this problem for three proposed core network operation configurations, each offering an added degree of reliability and isolation over the default configuration currently standardized by the 3GPP. Looking ahead to potential future research directions, optimizing carrier aggregation-based resource allocation across triple-band sliced access networks emerges as a promising avenue. Additionally, integrating coordinated multi-point techniques with carrier aggregation in multi-UAV NR aerial networks is especially challenging: the introduction of additional carrier frequencies and channel bandwidths, while enhancing flexibility and robustness, complicates band-slice assignments and user-UAV associations. Another intriguing yet complex research direction involves optimizing handovers in high-mobility UAV networks, where both users and UAVs are mobile. UAV trajectory planning, already NP-hard even in static-user scenarios, becomes even more intricate when users are also highly mobile.
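For readers unfamiliar with the kind of convex relaxation mentioned above, the toy sketch below solves a much-simplified power-allocation problem for an eMBB slice with cvxpy. The full formulations in the work also optimise the duplex ratio, numerology, and bandwidth; the channel gains, power budget, and rate floor here are illustrative assumptions only.

```python
import cvxpy as cp
import numpy as np

# Toy convex problem in the spirit of the slicing formulations above:
# allocate downlink power across K users of an eMBB slice to maximise the
# sum spectral efficiency subject to a power budget and per-user rate
# floors. All values are illustrative assumptions.

g = np.array([0.5, 1.0, 2.0, 4.0])    # normalised channel gains (per watt)
p_max = 10.0                          # total power budget (W)
r_min = 0.5                           # per-user rate floor (bit/s/Hz)

p = cp.Variable(len(g), nonneg=True)
rate = cp.log(1 + cp.multiply(g, p)) / np.log(2)   # concave in p

problem = cp.Problem(cp.Maximize(cp.sum(rate)),
                     [cp.sum(p) <= p_max, rate >= r_min])
problem.solve()
print("power allocation:", np.round(p.value, 3))
print("sum spectral efficiency:", round(problem.value, 3), "bit/s/Hz")
```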

    Spectrum Sharing, Latency, and Security in 5G Networks with Application to IoT and Smart Grid

    The surge of mobile devices, such as smartphones and tablets, demands additional capacity. At the same time, the Internet-of-Things (IoT) and the smart grid, which connect numerous sensors, devices, and machines, require ubiquitous connectivity and data security. Additionally, some use cases, such as automated manufacturing, automated transportation, and the smart grid, require latency as low as 1 ms and reliability as high as 99.99%. To enhance throughput and support massive connectivity, sharing of the unlicensed spectrum (3.5 GHz, 5 GHz, and mmWave) is a potential solution, while addressing latency requires drastic changes in the network architecture. Fifth-generation (5G) cellular networks will embrace spectrum sharing and network architecture modifications to address throughput enhancement, massive connectivity, and low latency. To utilize the unlicensed spectrum, we propose a fixed duty cycle based coexistence of LTE and WiFi, in which the duty cycle of LTE transmission can be adjusted based on the amount of data. In the second approach, a multi-armed bandit learning based coexistence of LTE and WiFi has been developed, in which the transmission duty cycle and downlink power are adapted through exploration and exploitation. This approach improves the aggregate capacity by 33%, along with cell-edge and energy-efficiency enhancements. We also investigate the performance of LTE and ZigBee coexistence using the smart grid as a scenario. Regarding low latency, we summarize the existing works into three domains in the context of 5G networks: core, radio, and caching networks. Along with this, fundamental constraints for achieving low latency are identified, followed by a general overview of exemplary 5G networks. Besides that, a loop-free, low-latency, local-decision-based routing protocol is derived in the context of the smart grid; this approach ensures low-latency and reliable data communication for stationary devices. To address data security in wireless communication, we introduce geo-location-based data encryption, along with node authentication using the k-nearest neighbor algorithm. In the second approach, node authentication using a support vector machine, along with public-private key management, is proposed. Both approaches ensure data security without increasing the packet overhead compared to existing approaches.
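    The snippet below sketches the multi-armed-bandit idea in its simplest epsilon-greedy form, learning an LTE duty cycle against a made-up reward that trades LTE throughput against WiFi degradation. The reward model, candidate duty cycles, and exploration schedule are assumptions for illustration, not the scheme evaluated in the work.

```python
import random

# Epsilon-greedy multi-armed bandit sketch for choosing an LTE duty cycle
# in LTE/WiFi coexistence. The reward function below is hypothetical and
# only serves to illustrate the exploration/exploitation loop.

DUTY_CYCLES = [0.2, 0.4, 0.6, 0.8]      # candidate LTE "on" fractions
EPSILON = 0.1                           # exploration probability

def observed_reward(duty_cycle):
    """Hypothetical per-frame reward: LTE gain minus WiFi penalty + noise."""
    lte_gain = duty_cycle
    wifi_penalty = max(0.0, duty_cycle - 0.5) ** 2 * 4.0
    return lte_gain - wifi_penalty + random.gauss(0, 0.05)

counts = [0] * len(DUTY_CYCLES)
values = [0.0] * len(DUTY_CYCLES)       # running mean reward per arm

for frame in range(2000):
    if random.random() < EPSILON:
        arm = random.randrange(len(DUTY_CYCLES))                    # explore
    else:
        arm = max(range(len(DUTY_CYCLES)), key=values.__getitem__)  # exploit
    r = observed_reward(DUTY_CYCLES[arm])
    counts[arm] += 1
    values[arm] += (r - values[arm]) / counts[arm]                  # incremental mean

best = max(range(len(DUTY_CYCLES)), key=values.__getitem__)
print("learned duty cycle:", DUTY_CYCLES[best])
```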

    End-to-End Simulation of 5G mmWave Networks

    Due to its potential for multi-gigabit and low latency wireless links, millimeter wave (mmWave) technology is expected to play a central role in 5th generation cellular systems. While there has been considerable progress in understanding the mmWave physical layer, innovations will be required at all layers of the protocol stack, in both the access and the core network. Discrete-event network simulation is essential for end-to-end, cross-layer research and development. This paper provides a tutorial on a recently developed full-stack mmWave module integrated into the widely used open-source ns-3 simulator. The module includes a number of detailed statistical channel models as well as the ability to incorporate real measurements or ray-tracing data. The Physical (PHY) and Medium Access Control (MAC) layers are modular and highly customizable, making it easy to integrate algorithms or compare Orthogonal Frequency Division Multiplexing (OFDM) numerologies, for example. The module is interfaced with the core network of the ns-3 Long Term Evolution (LTE) module for full-stack simulations of end-to-end connectivity, and advanced architectural features, such as dual connectivity, are also available. To facilitate the understanding of the module, and verify its correct functioning, we provide several examples that show the performance of the custom mmWave stack as well as custom congestion control algorithms designed specifically for efficient utilization of the mmWave channel. Comment: 25 pages, 16 figures, submitted to IEEE Communications Surveys and Tutorials (revised Jan. 2018).

    D3.2 First performance results for multi-node/multi-antenna transmission technologies

    This deliverable describes the current results of the multi-node/multi-antenna technologies investigated within METIS and analyses the interactions within and outside Work Package 3. Furthermore, it identifies the most promising technologies based on the current state of obtained results. This document provides a brief overview of the results in its first part. The second part, namely the Appendix, further details the results, describes the simulation alignment efforts conducted in the Work Package, and discusses the interaction of the Test Cases. The results described here show that the investigations conducted in Work Package 3 are maturing, resulting in valuable innovative solutions for future 5G systems. Fantini, R.; Santos, A.; De Carvalho, E.; Rajatheva, N.; Popovski, P.; Baracca, P.; Aziz, D.... (2014). D3.2 First performance results for multi-node/multi-antenna transmission technologies. http://hdl.handle.net/10251/7675

    System capacity enhancement for 5G network and beyond

    A thesis submitted to the University of Bedfordshire in fulfilment of the requirements for the degree of Doctor of Philosophy.
    The demand for wireless digital data is increasing dramatically year over year. Wireless communication devices such as laptops, smartphones, tablets, smart watches and virtual reality headsets are becoming an important part of people's daily lives. The number of mobile devices is growing rapidly, as are their requirements, such as super-high-resolution image and video, fast download speeds, very low latency and high reliability, all of which pose challenges to existing wireless communication networks. Unlike the previous four generations of communication networks, the fifth-generation (5G) wireless communication network encompasses many technologies, such as millimetre-wave communication, massive multiple-input multiple-output (MIMO), visible light communication (VLC) and heterogeneous networks (HetNets). Although 5G has not yet been standardised, these technologies have been studied in both academia and industry. The goal of this research is to enhance system capacity for 5G networks and beyond by studying key problems in these technologies and providing effective solutions from the perspective of system implementation and hardware impairments. The key problems studied in this thesis include interference cancellation in HetNets, impairment calibration for massive MIMO, channel state estimation for VLC, and low-latency parallel turbo decoding. Firstly, inter-cell interference in HetNets is studied and a cell-specific reference signal (CRS) interference cancellation method is proposed to mitigate the performance degradation in enhanced inter-cell interference coordination (eICIC). This method takes the carrier frequency offset (CFO) and timing offset (TO) of the user's received signal into account. By reconstructing the interfering signal and then cancelling it, the capacity of the HetNet is enhanced. Secondly, for massive MIMO systems, the radio frequency (RF) impairments of the hardware degrade beamforming performance. When operated in time division duplex (TDD) mode, a massive MIMO system relies on channel reciprocity, which can be broken by transmitter and receiver RF impairments. Impairment calibration is studied and a closed-loop reciprocity calibration method is proposed in this thesis. A test device (TD) is introduced that estimates the transmitters' impairments over the air and feeds the results back to the base station via the Internet, while uplink pilots sent by the TD assist the estimation of the BS receivers' impairments. With both the uplink and downlink impairment estimates, the reciprocity calibration coefficients can be obtained. The performance of the proposed method is evaluated by computer simulation and lab experiments. Channel coding is an essential part of a wireless communication system, helping to combat noise and deliver information correctly. Turbo codes are among the most reliable codes and have been used in many standards such as WiMAX and LTE. However, the decoding process of turbo codes is time-consuming, and the decoding latency must be reduced to meet the requirements of future networks. A reverse interleave address generator is proposed to reduce the decoding time, and a low-latency parallel turbo decoder has been implemented on an FPGA platform.
The simulation and experiment results prove the effectiveness of the address generator and show that there is a trade-off between latency and throughput under limited hardware resources. Apart from the above contributions, this thesis also investigates multi-user precoding for MIMO VLC systems. As a green and secure technology, VLC is attracting increasing attention and could become part of the 5G network, especially for indoor communication. In indoor scenarios, the MIMO VLC channel can easily be ill-conditioned, so it is important to study the impact of the channel state on precoding performance. A channel state estimation method is proposed based on the signal-to-interference-plus-noise ratio (SINR) of the users' received signals. Simulation results show that it can enhance the capacity of the indoor MIMO VLC system.
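    To make the reciprocity-calibration idea concrete, the toy simulation below models each transmit and receive chain as a single complex gain, derives per-antenna calibration coefficients from bidirectional measurements with a test device, and checks that the predicted downlink channel matches the true one up to a common scalar. The impairment model and all values are simplifying assumptions for illustration, not the measured setup of the thesis.

```python
import numpy as np

# Toy model of closed-loop TDD reciprocity calibration with a test device
# (TD). Each RF chain is modelled as a single complex gain; all values are
# illustrative assumptions.

rng = np.random.default_rng(2)
M = 8                                          # BS antennas

# Per-chain RF impairments (unknown complex gains) and the BS <-> TD channel.
t_bs = rng.normal(1, 0.1, M) * np.exp(1j * rng.uniform(-np.pi, np.pi, M))
r_bs = rng.normal(1, 0.1, M) * np.exp(1j * rng.uniform(-np.pi, np.pi, M))
h_td = rng.normal(0, 1, M) + 1j * rng.normal(0, 1, M)

# Step 1: the TD measures the downlink pilots from each BS antenna and feeds
# the estimates back; the BS measures the TD's uplink pilot on each antenna.
dl_meas = t_bs * h_td                          # reported by the TD
ul_meas = r_bs * h_td                          # observed at the BS

# Step 2: per-antenna calibration coefficients (known up to a common scalar).
c = dl_meas / ul_meas                          # = t_bs / r_bs

# Step 3: for a new user, predict its downlink channel from its uplink pilot.
h_user = rng.normal(0, 1, M) + 1j * rng.normal(0, 1, M)
t_ue, r_ue = 1.2 * np.exp(0.3j), 0.9 * np.exp(-0.2j)    # UE impairments
ul_user = r_bs * h_user * t_ue
dl_true = t_bs * h_user * r_ue
dl_est = c * ul_user                           # proportional to dl_true

# The estimate matches the true downlink channel up to one common complex
# scalar, which does not affect the beamforming direction.
scale = dl_true[0] / dl_est[0]
print(np.allclose(dl_est * scale, dl_true))    # -> True
```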