    EE Optimization for Downlink NOMA-based Multi-Tier CRANs

    Non-orthogonal multiple access (NOMA) is becoming increasingly attractive in cloud radio access networks (CRANs) as a means to further boost the overall spectral efficiency, connectivity, and capacity of such networks. This paper addresses optimizing the energy efficiency (EE) of the downlink of a NOMA-based two-tier CRAN. Stochastic geometry, represented by a Poisson point process (PPP) distribution, is used to determine the number and locations of the base stations (BSs) in each tier within the coverage area. A numerical optimal solution is obtained and compared against a proposed subgradient solution, as well as another proposed unoptimized solution based on the false position method. For comparison purposes, two other power allocation techniques are presented to allocate different powers to the various BS categories: one allocates power to each BS based on its relative distance to the cloud-based central station, and the other is a bisection-based scheme. Two simulation scenarios are presented to examine the performance of the two-tier NOMA-CRANs, with NOMA adopted as the multiple access scheme of each tier in both cases. The first scenario considers a heterogeneous CRAN (NOMA-HCRAN) by using two different BS categories in each tier, namely macro-BSs and RRHs. The second scenario considers a homogeneous CRAN (NOMA-CRAN) by using RRHs in both tiers, with each tier on a different frequency layer to prevent cross-tier interference. Simulation results show that promising performance gains can be achieved with the proposed techniques relative to existing approaches. More specifically, the proposed subgradient-based NOMA-CRAN offers better performance than the proposed false-position-based NOMA-CRAN, which is in turn better than the existing techniques, in particular the bisection-based and distance-based NOMA-CRANs.
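    The deployment step described above lends itself to a short illustration. The sketch below draws the number and positions of BSs for each tier from independent homogeneous PPPs over a circular coverage area; the tier densities (lambda_macro, lambda_rrh) and the coverage radius are hypothetical placeholders, not values from the paper.

```python
import numpy as np

def ppp_deployment(intensity, radius, rng):
    """Sample one homogeneous Poisson point process on a disc.

    The point count is Poisson with mean intensity * area; locations are
    then uniform on the disc (radius drawn as R * sqrt(U) for uniformity).
    """
    area = np.pi * radius ** 2
    n = rng.poisson(intensity * area)
    r = radius * np.sqrt(rng.uniform(size=n))       # sqrt for uniform area density
    theta = rng.uniform(0.0, 2.0 * np.pi, size=n)
    return np.column_stack((r * np.cos(theta), r * np.sin(theta)))

rng = np.random.default_rng(seed=1)
radius_km = 2.0       # hypothetical coverage radius
lambda_macro = 0.5    # hypothetical macro-BS density per km^2 (tier 1)
lambda_rrh = 4.0      # hypothetical RRH density per km^2 (tier 2)

macro_bs = ppp_deployment(lambda_macro, radius_km, rng)
rrhs = ppp_deployment(lambda_rrh, radius_km, rng)
print(f"{len(macro_bs)} macro-BSs and {len(rrhs)} RRHs deployed")
```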

    Transmission of 5G signals in multicore fibers impaired by inter-core crosstalk

    The data capacity demanded by the emergence of 5G has led to changes in the wireless network architecture, with proposals including multicore fibers (MCFs) in the fronthaul. However, the transmission of signals in MCFs is impaired by inter-core crosstalk (ICXT). In this work, the impact of ICXT on the transmission performance of Common Public Radio Interface (CPRI) signals in a 5G network fronthaul supported by homogeneous weakly-coupled MCFs with direct detection is studied by numerical simulation. Bit error rate (BER), eye-pattern analysis, power penalty, and outage probability are used as metrics to assess the impact of ICXT on system performance, considering two models for the signal polarization. The results are obtained by combining Monte Carlo simulation with a semi-analytical method to assess the BER numerically. For a 1 dB power penalty, with forward error correction (FEC) CPRI signals, an improvement of 1.4 dB in the tolerance of CPRI signals to ICXT is observed when the MCF walkoff increases from 1 ps/km to 50 ps/km. However, for crosstalk levels that lead to a 1 dB power penalty, the system is unavailable, with very high outage probability. To reach a reasonable outage probability of 10⁻⁵ for FEC signals, much lower crosstalk levels are required: below -27.8 dB and -24.8 dB for single- and dual-polarization signals, respectively. Hence, this work shows that it is essential to study the outage probability instead of the 1 dB power penalty to guarantee quality of service in direct-detection optical communication systems supported by weakly-coupled homogeneous MCFs and impaired by ICXT.
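    As a hedged illustration of the BER-evaluation approach the abstract mentions (Monte Carlo over the random ICXT combined with a semi-analytical conditional BER), the sketch below averages a Gaussian Q-function BER over random crosstalk realizations that shift the received eye levels of a simplified OOK direct-detection signal. The eye levels, noise standard deviation, and crosstalk level are made-up placeholders, not the paper's system parameters, and the crosstalk model is deliberately simplistic.

```python
import numpy as np
from math import erfc, sqrt

def q_func(x):
    """Gaussian Q-function: Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * erfc(x / sqrt(2.0))

rng = np.random.default_rng(seed=7)

# Hypothetical OOK eye levels and receiver noise (arbitrary units)
level_one, level_zero = 1.0, 0.0
sigma_noise = 0.12
threshold = 0.5 * (level_one + level_zero)

# Hypothetical ICXT level: -20 dB relative to the signal, modeled as a
# random-phase field amplitude per Monte Carlo realization
xt_amp = sqrt(10.0 ** (-20.0 / 10.0))

n_runs = 10_000
ber_samples = np.empty(n_runs)
for i in range(n_runs):
    phase = rng.uniform(0.0, 2.0 * np.pi)
    shift = xt_amp * np.cos(phase)      # eye-level shift for this realization
    # Conditional (semi-analytical) BER given this crosstalk realization
    ber_one = q_func((level_one + shift - threshold) / sigma_noise)
    ber_zero = q_func((threshold - (level_zero + shift)) / sigma_noise)
    ber_samples[i] = 0.5 * (ber_one + ber_zero)

print(f"Average BER over {n_runs} ICXT realizations: {ber_samples.mean():.2e}")
```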

    Cell fault management using machine learning techniques

    This paper surveys the literature relating to the application of machine learning to fault management in cellular networks from an operational perspective. We summarise the main issues that arise as 5G networks evolve and their implications for fault management. We describe the relevant machine learning techniques through to deep learning, and survey the progress that has been made in their application, based on the building blocks of a typical fault management system. We review recent work to develop the ability of deep learning systems to explain and justify their recommendations to network operators. We discuss forthcoming changes in network architecture which are likely to impact fault management, and offer a vision of how fault management systems can exploit deep learning in the future. We identify a series of research topics for further study in order to achieve this.
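    As one small, concrete instance of the fault-detection building block such a system contains, the sketch below flags anomalous cells from per-cell KPI vectors using scikit-learn's IsolationForest. The synthetic KPIs and the contamination rate are invented for illustration; they stand in for real network counters.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=3)

# Synthetic per-cell KPIs: [drop-call rate, handover failure rate, PRB load]
normal = rng.normal(loc=[0.01, 0.02, 0.55], scale=[0.005, 0.01, 0.10], size=(200, 3))
faulty = rng.normal(loc=[0.08, 0.15, 0.95], scale=[0.010, 0.02, 0.02], size=(5, 3))
kpis = np.vstack([normal, faulty])

# Unsupervised detector; contamination is a hypothetical expected fault fraction
detector = IsolationForest(contamination=0.03, random_state=0).fit(kpis)
labels = detector.predict(kpis)     # +1 = normal, -1 = anomalous

print("Cells flagged as anomalous:", np.where(labels == -1)[0])
```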

    A Comprehensive Survey on Resource Allocation for CRAN in 5G and Beyond Networks

    The diverse service requirements coming with the advent of sophisticated applications, as well as a large number of connected devices, demand revolutionary changes in the traditional distributed radio access network (RAN). To this end, Cloud-RAN (CRAN) is considered an important paradigm for enhancing the performance of the upcoming fifth generation (5G) and beyond wireless networks in terms of capacity, latency, and connectivity to a large number of devices. Out of several potential enablers, efficient resource allocation can mitigate various challenges related to user assignment, power allocation, and spectrum management in a CRAN, and is the focus of this paper. Herein, we provide a comprehensive review of resource allocation schemes in a CRAN along with a detailed optimization taxonomy covering various aspects of resource allocation. More importantly, we identify and discuss the key elements for efficient resource allocation and management in CRAN, namely: user assignment, remote radio head (RRH) selection, throughput maximization, spectrum management, network utility, and power allocation. Furthermore, we present emerging use cases, including heterogeneous CRAN, millimeter-wave CRAN, virtualized CRAN, Non-Orthogonal Multiple Access (NOMA)-based CRAN, and full-duplex-enabled CRAN, to illustrate how their performance can be enhanced by adopting CRAN technology. We then classify and discuss objectives and constraints involved in CRAN-based 5G and beyond networks. Moreover, a detailed taxonomy of optimization methods and solution approaches with different objectives is presented and discussed. Finally, we conclude the paper with several open research issues and future directions.
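    As one tiny, concrete instance of the power-allocation element discussed above, the sketch below implements classic water-filling over parallel channels, a common baseline in CRAN resource-allocation formulations. The channel gains and power budget are arbitrary placeholders rather than anything from the survey.

```python
import numpy as np

def water_filling(gains, total_power, noise=1.0):
    """Maximize sum log2(1 + g_k * p_k / noise) subject to sum(p_k) <= P.

    Bisect on the water level mu, with p_k = max(mu - noise / g_k, 0).
    """
    inv = noise / np.asarray(gains, dtype=float)
    lo, hi = inv.min(), inv.max() + total_power
    for _ in range(100):                    # bisection on the water level
        mu = 0.5 * (lo + hi)
        p = np.maximum(mu - inv, 0.0)
        if p.sum() > total_power:
            hi = mu
        else:
            lo = mu
    return np.maximum(lo - inv, 0.0)

gains = [2.0, 1.0, 0.25, 0.05]              # hypothetical channel power gains
p = water_filling(gains, total_power=10.0)
print("Allocated powers:", np.round(p, 3), "sum =", round(p.sum(), 3))
```

    Weak channels (small gains) receive little or no power, which is the behavior such baselines are typically used to demonstrate.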

    The Cloud-to-Thing Continuum

    The Internet of Things offers massive societal and economic opportunities while at the same time posing significant challenges, not least the delivery and management of the technical infrastructure underpinning it, the deluge of data generated by it, ensuring privacy and security, and capturing value from it. This Open Access Pivot explores these challenges, presenting not only the state of the art and future directions for research but also frameworks for making sense of this complex area. This book provides a variety of perspectives on how technology innovations such as fog, edge, and dew computing, 5G networks, and distributed intelligence are making us rethink conventional cloud computing to support the Internet of Things. Much of this book focuses on technical aspects of the Internet of Things; however, clear methodologies for mapping the business value of the Internet of Things are still missing. We provide a value mapping framework for the Internet of Things to address this gap. While there is much hype about the Internet of Things, we have yet to reach the tipping point. As such, this book provides a timely entrée for higher education educators, researchers and students, industry, and policy makers on the technologies that promise to reshape how society interacts and operates.

    Spectrum Sharing, Latency, and Security in 5G Networks with Application to IoT and Smart Grid

    The surge of mobile devices, such as smartphones and tablets, demands additional capacity. On the other hand, the Internet of Things (IoT) and the smart grid, which connect numerous sensors, devices, and machines, require ubiquitous connectivity and data security. Additionally, some use cases, such as automated manufacturing, automated transportation, and the smart grid, require latency as low as 1 ms and reliability as high as 99.99%. To enhance throughput and support massive connectivity, sharing of the unlicensed spectrum (3.5 GHz, 5 GHz, and mmWave) is a potential solution. To address latency, in contrast, drastic changes in the network architecture are required. The fifth generation (5G) cellular networks will embrace spectrum sharing and network architecture modifications to address throughput enhancement, massive connectivity, and low latency. To utilize the unlicensed spectrum, we propose a fixed duty-cycle-based coexistence of LTE and WiFi, in which the duty cycle of LTE transmission can be adjusted based on the amount of data. In the second approach, a multi-armed bandit learning based coexistence of LTE and WiFi is developed, in which the transmission duty cycle and downlink power are adapted through exploration and exploitation. This approach improves the aggregated capacity by 33%, along with cell-edge and energy-efficiency enhancements. We also investigate the performance of LTE and ZigBee coexistence using the smart grid as a scenario. Regarding low latency, we summarize the existing works into three domains in the context of 5G networks: core, radio, and caching networks. Along with this, fundamental constraints for achieving low latency are identified, followed by a general overview of exemplary 5G networks. Besides that, a loop-free, low-latency, local-decision-based routing protocol is derived in the context of the smart grid. This approach ensures low-latency and reliable data communication for stationary devices. To address data security in wireless communication, we introduce geo-location-based data encryption, along with node authentication by the k-nearest neighbor algorithm. In the second approach, node authentication by a support vector machine, along with public-private key management, is proposed. Both approaches ensure data security without increasing the packet overhead compared to the existing approaches.
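    As a hedged sketch of the bandit-based coexistence idea described above, the code below runs an epsilon-greedy multi-armed bandit whose arms are candidate LTE duty cycles, with a simulated reward that trades LTE throughput against WiFi degradation. The reward model and all constants are invented stand-ins for real air-interface measurements, not the thesis's actual learning setup.

```python
import numpy as np

rng = np.random.default_rng(seed=5)

duty_cycles = np.array([0.2, 0.4, 0.6, 0.8])    # candidate LTE duty cycles (arms)

def observe_reward(duty):
    """Hypothetical reward: LTE throughput minus a WiFi-degradation penalty."""
    lte_gain = duty                              # LTE capacity grows with airtime
    wifi_penalty = 1.5 * max(duty - 0.5, 0.0)    # WiFi suffers past 50% airtime
    return lte_gain - wifi_penalty + rng.normal(0.0, 0.05)

epsilon = 0.1
counts = np.zeros(len(duty_cycles))
values = np.zeros(len(duty_cycles))              # running mean reward per arm

for t in range(2000):
    if rng.uniform() < epsilon:                  # exploration
        arm = int(rng.integers(len(duty_cycles)))
    else:                                        # exploitation
        arm = int(values.argmax())
    reward = observe_reward(duty_cycles[arm])
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]   # incremental mean

print("Estimated value per duty cycle:", dict(zip(duty_cycles, values.round(3))))
print("Selected duty cycle:", duty_cycles[values.argmax()])
```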