14 research outputs found

    Optimization of multitenant radio admission control through a semi-Markov decision process

    © 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
    Network slicing in future 5G systems enables the provision of multitenant networks, in which a network infrastructure owned by an operator is shared among different tenants, such as mobile virtual operators, over-the-top providers or vertical market players. The support of network slicing within the radio access network requires the introduction of appropriate radio resource management functions to ensure that each tenant gets the required radio resources in accordance with the expected service level agreement (SLA). This paper addresses the radio admission control (RAC) functionality in multiservice and multitenant scenarios as a mechanism for regulating the acceptance of new guaranteed-bit-rate service requests from different tenants. The paper proposes an optimization framework that models the RAC as a semi-Markov decision process and, as a result, derives an optimal decision-making policy that maximizes an average long-term function representing the desired optimization target. A reward function is proposed to capture the degree of tenant satisfaction with the received service in relation to the expected SLA, accounting for both the provision of excess capacity beyond the SLA and the cost associated with sporadic SLA breaches. The proposed approach is evaluated by means of simulations, and its superiority to other reference schemes in terms of reward and other key performance indicators is analyzed.
    Peer reviewed. Postprint (author's final draft).
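The SMDP formulation in this abstract can be illustrated with a toy single-tenant sketch: admission control on a C-channel link, uniformized into a discrete-time MDP and solved by value iteration. Everything here (the rates, the unit reward per accepted request, the single service class) is an invented stand-in for the paper's much richer multi-tenant model and reward function:

```python
import numpy as np

def smdp_admission_policy(C=8, lam=3.0, mu=1.0, reward=1.0, gamma=0.99, iters=2000):
    """Value iteration on a uniformized admission-control MDP (toy sketch).

    States: number of busy channels n = 0..C.  On each arrival the
    controller either accepts (earn `reward`, move to n+1) or rejects.
    All rates and rewards are illustrative placeholders.
    """
    Lam = lam + C * mu                       # uniformization rate
    V = np.zeros(C + 1)
    for _ in range(iters):
        Vn = np.empty_like(V)
        for n in range(C + 1):
            # arrival event: accept vs. reject (forced reject when full)
            acc = reward + V[n + 1] if n < C else -np.inf
            arrival = max(acc, V[n])
            # departure of one of the n busy channels
            dep = V[n - 1] if n > 0 else V[n]
            Vn[n] = gamma * (lam * arrival + n * mu * dep
                             + (C - n) * mu * V[n]) / Lam
        V = Vn
    # greedy policy: accept in state n iff accepting beats rejecting
    policy = [bool(n < C and reward + V[n + 1] >= V[n]) for n in range(C + 1)]
    return V, policy
```

With a positive acceptance reward and no SLA-breach cost, the resulting policy is simply complete sharing (accept whenever a channel is free); the paper's reward function, which also prices SLA breaches, is what makes the optimal policy non-trivial.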

    Efficient radio resource management for the fifth generation slice networks

    It is predicted that IMT-2020 (the 5G network) will meet increasing user demands and is therefore expected to be as flexible as possible. The relevant standardisation bodies and academia have accepted the critical role of network slicing in the implementation of the 5G network. The network slicing paradigm allows the physical infrastructure and resources of the mobile network to be “sliced” into logical networks, which are operated by different entities and then engineered to address the specific requirements of different verticals, business models, and individual subscribers. Network slicing offers propitious solutions to the flexibility requirements of the 5G network. The attributes and characteristics of network slicing support the multi-tenancy paradigm, which is predicted to drastically reduce the operational expenditure (OPEX) and capital expenditure (CAPEX) of mobile network operators. Furthermore, network slices enable mobile virtual network operators to compete with one another using the same physical networks while customising their slices and network operation according to their market segments' characteristics and requirements. However, owing to scarce radio resources and the dynamic characteristics and limited capacity of the wireless links, implementing network slicing at the base stations and the access network becomes an uphill task. Moreover, an unplanned 5G slice network deployment results in technical challenges such as unfairness in radio resource allocation, poor quality of service (QoS) provisioning, difficulties in network profit maximisation, and increased energy consumption in a bid to meet QoS specifications. Therefore, there is a need to develop efficient radio resource management algorithms that address the above-mentioned technical challenges.
The core aim of this research is to develop and evaluate efficient radio resource management algorithms and schemes for 5G slice networks that guarantee the QoS of users in terms of throughput and latency while ensuring that 5G slice networks are energy efficient and economically profitable. This thesis mainly addresses key challenges relating to efficient radio resource management. First, a particle swarm-intelligent profit-aware resource allocation scheme for a 5G slice network is proposed to prioritise the profitability of the network while ensuring that the QoS requirements of slice users are not compromised. It is observed that the proposed new radio swarm-intelligent profit-aware resource allocation (NR-SiRARE) scheme outperforms its LTE-OFDMA counterpart (LO-SiRARE); however, the network profit of NR-SiRARE is greatly affected by the severe path loss associated with millimetre waves. Second, this thesis examines the resource allocation challenge in a multi-tenant multi-slice multi-tier heterogeneous network. To maximise the total utility of such a network, a latency-aware dynamic resource allocation problem is formulated as an optimisation problem. Via the hierarchical decomposition method for heterogeneous networks, the formulated optimisation problem is transformed to reduce the computational complexity of the proposed solutions. Furthermore, a genetic algorithm-based latency-aware resource allocation (GI-LARE) scheme is proposed to solve the maximum utility problem under the related constraints. It is observed that the GI-LARE scheme outperforms the static slicing (SS) and optimal resource allocation (ORA) schemes. Moreover, GI-LARE appears to be near optimal when compared with an exact solution based on spatial branch-and-bound.
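As a flavour of the swarm-intelligent component, here is a minimal particle swarm optimizer for a generic profit-style objective. The swarm coefficients, bounds, and toy fitness function are assumptions for illustration, not the NR-SiRARE design:

```python
import numpy as np

rng = np.random.default_rng(1)

def pso_maximize(f, dim, n=20, iters=100, lo=0.0, hi=1.0, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer (maximization).

    f       : objective mapping a length-`dim` vector to a scalar
    n       : swarm size; w, c1, c2: inertia and attraction coefficients
    Returns the best position found and its objective value.
    """
    x = rng.uniform(lo, hi, (n, dim))        # particle positions
    v = np.zeros((n, dim))                   # particle velocities
    pbest = x.copy()                         # per-particle best positions
    pval = np.array([f(p) for p in x])       # per-particle best values
    g = pbest[pval.argmax()].copy()          # global best position
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)           # keep particles in bounds
        val = np.array([f(p) for p in x])
        improved = val > pval
        pbest[improved], pval[improved] = x[improved], val[improved]
        g = pbest[pval.argmax()].copy()
    return g, pval.max()
```

For example, maximizing the toy concave profit surrogate `f(p) = -sum((p - 0.3)**2)` over a 3-dimensional allocation drives the swarm towards the point (0.3, 0.3, 0.3).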
Third, this thesis addresses a distributed resource allocation problem in a multi-slice multi-tier multi-domain network with different players. A three-level hierarchical business model comprising infrastructure providers (InPs), mobile virtual network operators (MVNOs), and service providers (SPs) is examined. The radio resource allocation problem is formulated as a maximum utility optimisation problem. A multi-tier multi-domain slice user matching game and a distributed backtracking multi-player multi-domain game scheme are proposed to solve it, the latter based on Fisher market and auction theory principles. The proposed multi-tier multi-domain scheme outperforms the GI-LARE and SS schemes; this is attributed to the availability of resources from other InPs and MVNOs, and to the flexibility associated with a multi-domain network. Lastly, an energy-efficient resource allocation problem for 5G slice networks in a highly dense heterogeneous environment is investigated. Energy-efficient resource allocation in 5G slice networks is formulated as a mixed-integer linear fractional optimisation problem (MILFP), and hierarchical decomposition techniques are adopted to reduce its complexity. Furthermore, slice user association, QoS for different slice use cases, an adapted water-filling algorithm, and stochastic geometry tools are employed to model the global energy efficiency (GEE) of the 5G slice network. To the best of the author's knowledge, neither stochastic geometry nor a three-level hierarchical business model has previously been employed to model the global energy efficiency of a 5G slice network, making this the first application of such methods in this setting. Through rigorous Monte Carlo simulations, the performance of the proposed algorithms and schemes is evaluated, demonstrating their adaptability, efficiency and robustness in a 5G slice network.
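The adapted water-filling algorithm mentioned above builds on the textbook version, which can be sketched as follows (the thesis's adaptation to slice constraints and GEE is not reproduced here; the gains and power budget below are placeholders):

```python
import numpy as np

def water_filling(gains, p_total):
    """Classic water-filling power allocation over parallel channels.

    `gains` are channel SNR gains g_i; the allocation is
    p_i = max(0, mu - 1/g_i), with the water level mu chosen so that
    the powers sum to `p_total`.
    """
    g = np.asarray(gains, dtype=float)
    inv = 1.0 / g                            # per-channel "floor" 1/g_i
    inv_sorted = np.sort(inv)
    # try filling the k best channels for k = n, n-1, ..., 1
    for k in range(len(g), 0, -1):
        mu = (p_total + inv_sorted[:k].sum()) / k
        if mu > inv_sorted[k - 1]:           # all k allocations stay positive
            break
    return np.maximum(0.0, mu - inv)
```

Better channels (higher gain, lower 1/g) receive more power, and channels whose floor lies above the water level receive none.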

    Data-Driven Network Management for Next-Generation Wireless Networks

    With the commercialization and maturity of the fifth-generation (5G) wireless networks, the next-generation wireless network (NGWN) is envisioned to provide seamless connectivity for mobile user terminals (MUTs) and to support a wide range of new applications with stringent quality of service (QoS) requirements. In the NGWN, the network architecture will be highly heterogeneous due to the integration of terrestrial networks, satellite networks, and aerial networks formed by unmanned aerial vehicles (UAVs), and the network environment becomes highly dynamic because of the mobility of MUTs and the spatiotemporal variation of service demands. In order to provide high-quality services in such dynamic and heterogeneous networks, flexible, fine-grained, and adaptive network management will be essential. Recent advancements in deep learning (DL) and digital twins (DTs) have made it possible to enable data-driven solutions that support network management in the NGWN. DL methods can solve network management problems by leveraging data instead of explicit mathematical models, and DTs can facilitate DL methods by providing extensive data based on the full digital representations created for individual MUTs. Data-driven solutions that integrate DL and DTs can address complicated network management problems and explore implicit network characteristics to adapt to dynamic network environments in the NGWN.
However, the design of data-driven network management solutions in the NGWN meets several technical challenges: 1) how the NGWN can be configured to support multiple services with different spatiotemporal service demands while simultaneously satisfying their different QoS requirements; 2) how multi-dimensional network resources can be proactively reserved to support MUTs with different mobility patterns in a resource-efficient manner; and 3) how the heterogeneous NGWN components, including base stations (BSs), satellites, and UAVs, can jointly coordinate their network resources to support dynamic service demands. In this thesis, we develop efficient data-driven network management strategies in two stages, i.e., long-term network planning and real-time network operation, to address the above challenges in the NGWN. Firstly, we investigate planning-stage network configuration to satisfy different service requirements for communication services. We consider a two-tier network with one macro BS and multiple small BSs, which supports communication services with different spatiotemporal data traffic distributions. The objective is to maximize the energy efficiency of the BSs by jointly configuring the downlink transmission power and communication coverage of each BS. To achieve this objective, we first design a network planning scheme with flexible binary slice zooming, dual time-scale planning, and grid-based network planning. The scheme allows flexibility to differentiate the communication coverage and downlink transmission power of the same BS for different services while improving the temporal and spatial granularity of network planning. We formulate a combinatorial optimization problem in which communication coverage management and power control are mutually dependent.
To solve the problem, we propose a data-driven method with two steps: 1) we propose an unsupervised-learning-assisted approach to determine the communication coverage of BSs; and 2) we derive a closed-form solution for power control. Secondly, we investigate planning-stage resource reservation for a compute-intensive service to support MUTs with different mobility patterns. The MUTs can offload their computing tasks to the computing servers deployed at the core networks, gateways, and BSs. Each computing server requires both computing and storage resources to execute computing tasks. The objective is to optimize long-term resource reservation by jointly minimizing the usage of computing, storage, and communication resources and the cost from re-configuring resource reservation. To this end, we develop a data-driven network planning scheme with two elements, i.e., multi-resource reservation and resource reservation re-configuration. First, DTs are designed for collecting MUT status data, based on which MUTs are grouped according to their mobility patterns. Then, an optimization algorithm is proposed to customize resource reservation for different groups to satisfy their different resource demands. Last, a meta-learning-based approach is proposed to re-configure resource reservation for balancing the network resource usage and the re-configuration cost. Thirdly, we investigate operation-stage computing resource allocation in a space-air-ground integrated network (SAGIN). A UAV is deployed to fly around MUTs and collect their computing tasks, while scheduling the collected computing tasks to be processed at the UAV locally or offloaded to the nearby BSs or the remote satellite. The energy budget of the UAV, intermittent connectivity between the UAV and BSs, and dynamic computing task arrival pose challenges in computing task scheduling. 
The objective is to design a real-time computing task scheduling policy that minimizes the delay of computing task offloading and processing in the SAGIN. To achieve this objective, we first formulate online computing task scheduling in the dynamic network environment as a constrained Markov decision process. Then, we develop a risk-sensitive reinforcement learning approach in which a risk value is used to represent energy consumption that exceeds the budget. By balancing the risk value and the reward from delay minimization, the UAV can learn a task scheduling policy that minimizes task offloading and processing delay while satisfying the UAV energy constraint. Extensive simulations have been conducted to demonstrate that the proposed data-driven network management approach for the NGWN can achieve flexible BS configuration for multiple communication services, fine-grained multi-dimensional resource reservation for a compute-intensive service, and adaptive computing resource allocation in the dynamic SAGIN. The schemes developed in this thesis are valuable for data-driven network planning and operation in the NGWN.
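The risk-sensitive idea, balancing a delay reward against a risk value for energy overruns, can be sketched with a toy tabular Q-learning agent. The two-action task model, the delay and energy costs, and the energy-bucket state space are all invented for illustration and are far simpler than the thesis's SAGIN setting:

```python
import numpy as np

rng = np.random.default_rng(0)

def risk_sensitive_q(episodes=500, alpha=0.1, gamma=0.9, risk_w=2.0, e_budget=5.0):
    """Toy risk-sensitive Q-learning for UAV task scheduling.

    States bucket the UAV's remaining energy; actions: 0 = process the
    task locally (low delay, high energy), 1 = offload it (high delay,
    low energy).  The reward is negative delay minus `risk_w` times the
    energy spent beyond the budget -- the "risk value" in the abstract.
    """
    n_states, n_actions = 6, 2
    delay = (1.0, 3.0)                        # local is faster...
    energy = (2.0, 0.5)                       # ...but costs more energy
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        e_left, s = e_budget, n_states - 1
        for _ in range(10):
            # epsilon-greedy action selection
            a = rng.integers(n_actions) if rng.random() < 0.2 else int(Q[s].argmax())
            e_left -= energy[a]
            risk = max(0.0, -e_left)          # energy overrun beyond budget
            r = -delay[a] - risk_w * risk
            s2 = int(np.clip(round(e_left), 0, n_states - 1))
            Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
            s = s2
    return Q
```

After training, the learned Q-values prefer offloading once the energy budget is exhausted, which is the qualitative behaviour the risk term is meant to induce.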

    Modelling, Dimensioning and Optimization of 5G Communication Networks, Resources and Services

    This reprint collects state-of-the-art research contributions that address challenges in the design, dimensioning and optimization of the emerging 5G networks. Designing, dimensioning and optimizing communication network resources and services have long been an inseparable part of telecom network development. Such networks must convey a large volume of traffic, providing service to traffic streams with highly differentiated requirements in terms of bit rate and service time, as well as required quality of service and quality of experience parameters. Such a communication infrastructure presents many important challenges, including the study of necessary multi-layer cooperation, new protocols, performance evaluation of different network parts, low-layer network design, network management and security issues, and new technologies in general, all of which are discussed in this book.

    Efficient Learning Machines

    Computer science.

    Performance Modeling and Optimization of Resource Allocation in Cloud Computing Systems

    Cloud computing offers on-demand network access to computing resources through virtualization. This paradigm shifts computing resources to the cloud, resulting in cost savings as users lease rather than own these resources. Clouds also give power-constrained mobile users access to computing resources. In this thesis, we develop performance models of these systems and optimize their resource allocation. In the performance modeling, we assume that jobs arrive to the system according to a Poisson process and may have quite general service time distributions. Each job may consist of multiple tasks, with each task requiring a virtual machine (VM) for its execution. The size of a job is determined by the number of its tasks, which may be a constant or a variable. In the constant case, we allow different classes of jobs, with each class characterized by its arrival rate, service rate, and number of tasks per job. In the variable case, a job randomly generates new tasks during its service time. The latter requires dynamic assignment of VMs to a job, which will be needed in providing service to mobile users. We model the systems with both constant and variable job sizes using birth-death processes. In the case of constant job size, we determine the joint probability distribution of the number of jobs from each class in the system, job blocking probabilities, and the distribution of resource utilization for systems with both homogeneous and heterogeneous types of VMs. We also analyze the tradeoffs of turning idle servers off for power saving. In the case of variable job sizes, we determine the distribution of the number of jobs in the system and the average service time of a job for systems with both infinite and finite amounts of resources. We present numerical results, and any approximations are verified by simulation.
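In the simplest constant-size case (one task per job, a single class, homogeneous VMs), the birth-death analysis above reduces to the classic M/M/c/c loss model, whose blocking probability has the closed-form Erlang-B expression. A minimal sketch, omitting the thesis's multi-class and heterogeneous-VM extensions:

```python
from math import factorial

def erlang_b(c, rho):
    """Blocking probability of an M/M/c/c loss system (birth-death chain).

    `rho` = lambda/mu is the offered load; the stationary distribution
    pi_n is proportional to rho**n / n!, and the blocking probability is
    the probability that all c servers (VMs here) are busy on arrival.
    """
    terms = [rho ** n / factorial(n) for n in range(c + 1)]
    return terms[-1] / sum(terms)
```

For example, `erlang_b(1, 1.0)` gives 0.5: with one VM and unit offered load, half of arriving jobs find the server busy.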
The performance results may be used in the dimensioning of cloud computing centers. Next, we develop an optimization model that determines the job schedule minimizing the total power consumption of a cloud computing center. It is assumed that power consumption in a computing center is due to communications and server activities. We assume a distributed model, in which a job may be assigned VMs on different servers, referred to as fragmented service. In this model, the communication among the VMs of a job on different servers is proportional to the product of the number of VMs assigned to the job on each pair of servers, which results in a network power consumption that is quadratic in the number of job fragments. We then apply integer quadratic programming and the column generation method to solve the optimization problem for large-scale systems, in conjunction with two different algorithms that reduce the complexity and the time needed to obtain the solution. In the second phase of this work, we formulate this optimization problem in discrete time. At each time slot, the job load of the system consists of the jobs arriving during the present slot and the unfinished jobs from previous slots. We develop a technique to solve this optimization problem with full, partial, and no migration of the old jobs in the system. Numerical results show that this optimization yields significant operating cost savings in cloud computing systems.

    AN EFFICIENT INTERFERENCE AVOIDANCE SCHEME FOR DEVICE-TO-DEVICE ENABLED FIFTH GENERATION NARROWBAND INTERNET OF THINGS NETWORKS

    Narrowband Internet of Things (NB-IoT) is a low-power wide-area (LPWA) technology built on long-term evolution (LTE) functionalities and standardized by the 3rd Generation Partnership Project (3GPP). Owing to its support for massive machine-type communication (mMTC) and different IoT use cases with rigorous standards in terms of connection, energy efficiency, reachability, reliability, and latency, NB-IoT has attracted the attention of the research community. However, as the capacity needs of the various IoT use cases expand, the numerous functionalities of the LTE evolved packet core (EPC) system may become overburdened and suboptimal, and several research efforts are currently in progress to address these challenges. This thesis therefore provides an overview of these efforts, with a specific focus on the optimized architecture of the LTE EPC functionalities, the 5G architectural design for NB-IoT integration, the enabling technologies necessary for 5G NB-IoT, the coexistence of 5G new radio (NR) with NB-IoT, and feasible architectural schemes for deploying NB-IoT with cellular networks. The thesis also presents cloud-assisted relaying with backscatter communication as part of a detailed study of the technical performance attributes and channel communication characteristics of the NB-IoT physical (PHY) and medium access control (MAC) layers, with a focus on 5G, and explores the numerous drawbacks that come with simulating these systems. The enabling market for NB-IoT, the benefits for a few use cases, and the potential critical challenges associated with their deployment are all highlighted. Fortunately, the cyclic-prefix orthogonal frequency division multiplexing (CP-OFDM) waveform adopted by 3GPP NR for enhanced mobile broadband (eMBB) services does not prohibit the use of other waveforms in other services, such as the NB-IoT service for mMTC.
Consequently, the coexistence of 5G NR and NB-IoT must be kept orthogonal (or quasi-orthogonal) to minimize the mutual interference that limits the freedom in the waveform's overall design. Coexistence of 5G with NB-IoT will thus introduce a new interference challenge, distinct from that of the legacy network, even though the NR's coexistence with NB-IoT is believed to improve network capacity, expand user data rate coverage, and strengthen communication robustness through frequency reuse. Interference may make channel estimation difficult for NB-IoT devices, limiting user performance and spectral efficiency. Various existing interference mitigation solutions either add to the network's overhead, computational complexity and delay, or are hampered by low data rates and coverage; such algorithms are unsuitable for an NB-IoT network, given its low-complexity design. A device-to-device (D2D) communication based interference-control technique therefore becomes an effective strategy for addressing this problem. This thesis uses D2D communication to relieve the network bottleneck in dense 5G NB-IoT networks prone to interference. For D2D-enabled 5G NB-IoT systems, the thesis presents an interference-avoidance resource allocation scheme that considers the less favourably placed cell-edge NB-IoT user equipments (NUEs). To reduce the algorithm's computational complexity and the interference power, the optimization problem is divided into three sub-problems. First, in an orthogonal deployment technique using channel state information (CSI), the channel gain factor is leveraged by selecting a probable reuse channel with higher QoS control. Second, a bisection search is used to find the power control that maximizes the network sum rate, and third, the Hungarian algorithm is used to build a maximum bipartite matching that chooses the optimal pairing between the set of NUEs and the D2D pairs.
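The second and third sub-problems can be sketched in miniature: a bisection search on the sum-rate derivative for power control, and a maximum-weight matching between NUE channels and D2D pairs (here a brute-force stand-in for the Hungarian algorithm, adequate only for toy sizes). The rate matrix and the derivative function below are placeholders, not the thesis's actual models:

```python
from itertools import permutations
import numpy as np

def best_pairing(rate):
    """Exhaustive maximum-weight bipartite matching.

    rate[i, j] = network sum rate if D2D pair j reuses NUE channel i
    (square matrix assumed).  Brute force over all permutations, so
    only suitable for small toy instances.
    """
    n = rate.shape[0]
    best, best_perm = -np.inf, None
    for perm in permutations(range(n)):
        val = sum(rate[i, perm[i]] for i in range(n))
        if val > best:
            best, best_perm = val, perm
    return best, best_perm

def bisect_power(d_rate, p_max, tol=1e-6):
    """Bisection on the sum-rate derivative `d_rate` over [0, p_max].

    Assumes the sum rate is concave in the transmit power, so its
    derivative is decreasing; the optimum is where it crosses zero,
    clipped to the maximum power constraint.
    """
    if d_rate(p_max) >= 0:
        return p_max               # constraint binds: transmit at p_max
    lo, hi = 0.0, p_max
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if d_rate(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

For instance, with the toy derivative `1 - p` the bisection returns a power near 1.0, and a 2x2 rate matrix is matched by whichever pairing has the larger diagonal-or-antidiagonal sum.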
According to the numerical results, the proposed approach improves the D2D sum rate and the overall network SINR of the 5G NB-IoT system. The maximum power constraint of the D2D pair, the D2D pair's location, the pico base station (PBS) cell radius, the number of potential reuse channels, and the cluster distance all impact the D2D pair's performance. The simulation results achieve SINR performance 28.35%, 31.33%, and 39% higher than the ARSAD, DCORA, and RRA algorithms when the number of NUEs is twice the number of D2D pairs, and 2.52%, 14.80%, and 39.89% higher than the ARSAD, RRA, and DCORA algorithms when the numbers of NUEs and D2D pairs are equal. Likewise, a D2D sum rate 9.23%, 11.26%, and 13.92% higher than the ARSAD, DCORA, and RRA algorithms is achieved when the number of NUEs is twice the number of D2D pairs, and a D2D sum rate 1.18%, 4.64%, and 15.93% higher than the ARSAD, RRA, and DCORA algorithms, respectively, with equal numbers of NUEs and D2D pairs. These results demonstrate the efficacy of the proposed scheme. The thesis also addresses the case in which the cell-edge NUE's QoS is threatened by challenges such as long-distance transmission, delay, low bandwidth utilization, and high system overhead, all of which affect 5G NB-IoT network performance. In this case, most cell-edge NUEs boost their transmit power to maximize network throughput. Integrating a cooperative D2D relaying technique into 5G NB-IoT heterogeneous network (HetNet) uplink spectrum sharing increases the system's spectral efficiency but also the interference power, further degrading the network. Using a max-max SINR (Max-SINR) approach, this thesis proposes an interference-aware D2D relaying strategy for improving the QoS of a cell-edge NUE so as to achieve optimum system performance. The Lagrangian dual technique is used to optimize the transmit power of the cell-edge NUE to the relay under an average interference power constraint, while the relay employs a fixed transmit power towards the NB-IoT base station (NBS).
To choose an optimal D2D relay node, the channel-to-interference-plus-noise ratio (CINR) of all available D2D relays is used to maximize the minimum cell-edge NUE data rate while ensuring that the cellular NUEs' QoS requirements are satisfied. Best harmonic mean, best-worst, and half-duplex relay selection, as well as a direct D2D communication scheme, were among the other relay selection strategies studied. The simulation results reveal that, owing to the high channel gain between the two communicating devices, the Max-SINR selection scheme outperforms all other selection schemes except the direct D2D communication scheme. The proposed algorithm achieves 21.27% SINR performance, which is nearly identical to the half-duplex scheme, and outperforms the best-worst and harmonic-mean selection techniques by 81.27% and 40.29%, respectively. As the number of D2D relays increases, the capacity increases by 14.10% and 47.19% over the harmonic-mean and half-duplex techniques, respectively. Finally, the thesis presents future research directions on interference control, together with open research questions on PHY and MAC properties and a SWOT (strengths, weaknesses, opportunities, and threats) analysis presented in Chapter 2, to encourage further study on 5G NB-IoT.
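Two of the compared relay selection rules are easy to state in code. The hop-CINR vectors below are hypothetical, and the thesis's Max-SINR rule with Lagrangian power optimization is more involved than either of these baselines:

```python
import numpy as np

def best_worst_relay(h1, h2):
    """Best-worst selection: pick the relay whose weaker hop CINR is
    largest, since a two-hop link is limited by its worse hop.
    h1[k], h2[k] are the source-to-relay and relay-to-destination CINRs
    of candidate relay k."""
    return int(np.minimum(h1, h2).argmax())

def harmonic_mean_relay(h1, h2):
    """Harmonic-mean selection: balance the two hop CINRs rather than
    looking only at the bottleneck hop."""
    return int((2.0 * h1 * h2 / (h1 + h2)).argmax())
```

The two rules can disagree: a relay with one excellent and one mediocre hop may win under the harmonic mean while losing under best-worst, which is exactly why such selection strategies are compared empirically.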

    1999-2000 Graduate Catalog


    2000-2001 Graduate Catalog


    2001-2002 Graduate Catalog
