
    Leveraging synergy of SDWN and multi-layer resource management for 5G networks

    Fifth-generation (5G) networks are envisioned to provide a service-oriented and flexible edge-to-core infrastructure that supports diverse applications. The convergence of software-defined networking (SDN), software-defined radio (SDR), and virtualization in the concept of software-defined wireless networking (SDWN) is a promising approach to support such dynamic networks. The principal technique behind the 5G-SDWN framework is the separation of the control and data planes, from deep core entities to edge wireless access points. This separation allows resources to be abstracted as transmission parameters of users. In such a user-centric and service-oriented environment, resource management plays a critical role in achieving efficiency and reliability. In this paper, we introduce a converged multi-layer resource management (CML-RM) framework for SDWN-enabled 5G networks that comprises a functional model and an optimization framework. The key questions in such a framework are whether 5G-SDWN can be leveraged to enable CML-RM over the portfolio of resources and, reciprocally, whether CML-RM can effectively provide performance enhancement and reliability for 5G-SDWN. We tackle these questions by proposing a flexible protocol structure for 5G-SDWN that can handle all the required functionalities in a cross-layer manner. Based on this, we demonstrate how the proposed general CML-RM framework can control end-user quality of experience. Moreover, for two 5G-SDWN scenarios, we investigate how joint user association and resource allocation via CML-RM improve performance in virtualized networks.
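A minimal sketch of the joint user-association and resource-allocation step described above; the utility model, the equal power split, and all names and dimensions are illustrative assumptions, not the paper's CML-RM formulation:

```python
# Toy joint user association and power allocation: each user attaches to the
# access point offering its best channel, and each AP splits its power budget
# equally among its associated users. All parameters below are assumptions.
import numpy as np

rng = np.random.default_rng(0)
N_USERS, N_APS, NOISE = 8, 3, 1e-9               # toy dimensions (assumed)
gain = rng.exponential(1e-7, (N_USERS, N_APS))   # channel gains user -> AP
p_max = np.full(N_APS, 1.0)                      # per-AP power budget (W)

def greedy_associate(gain, p_max, noise):
    """Assign each user to its best AP, then split each AP's power budget
    equally among its associated users and compute Shannon rates."""
    assoc = np.argmax(gain, axis=1)              # user-centric association
    rates = np.zeros(len(gain))
    for ap in range(gain.shape[1]):
        users = np.where(assoc == ap)[0]
        if users.size == 0:
            continue
        p = p_max[ap] / users.size               # equal power split (assumption)
        rates[users] = np.log2(1 + p * gain[users, ap] / noise)
    return assoc, rates

assoc, rates = greedy_associate(gain, p_max, NOISE)
print("association:", assoc)
print("sum rate (bit/s/Hz): %.2f" % rates.sum())
```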

    Traffic Scheduling in Software-defined Backhaul Network

    In the past few years, severe challenges have arisen for network operators, as explosive growth and service differentiation in data demands require increasing network capacity as well as dynamic traffic management. To adapt to network densification, wireless backhaul solutions are attracting more and more attention due to their flexible deployment. Meanwhile, software-defined networking (SDN) offers a promising architecture that can achieve dynamic control and management of various functionalities. By applying the SDN architecture to wireless backhaul networks, the traffic scheduling functionality can satisfy ever-increasing and differentiated traffic demands. To tackle these challenges, traffic scheduling for software-defined backhaul networks (SDBNs) is investigated from three aspects in this thesis. In the first aspect, various virtual networks based on service types are embedded onto the same wireless backhaul infrastructure. An algorithm named VNE-SDBN is proposed to solve the virtual network embedding (VNE) problem, improving infrastructure provider revenue and the virtual network request acceptance ratio by exploiting the unique characteristics of SDBNs. In the second aspect, incoming traffic is scheduled online by a joint routing and resource allocation approach in backhaul networks operating in low-frequency microwave (LFM) bands and in millimetre wave (mmW) bands. A low-complexity digraph-based greedy algorithm (DBGA) is proposed that exploits the relationship between the degrees of vertices in the constructed interference digraph and system throughput. In the third aspect, quality of service is provided in terms of delay and throughput by two proposed algorithms for backhaul networks with insufficient spectral resources. Finally, as exploratory research on E-band, a conceptual adaptive modulation system with channel estimation based on rain rate is proposed for E-band SDBNs to exploit the rain attenuation characteristics of E-band. The results of this research are mainly achieved through heuristic algorithms: a genetic algorithm, a meta-heuristic, is employed to obtain near-optimal solutions to the proposed NP-hard problems, and low-complexity greedy algorithms are developed from problem-specific analysis. The proposed systems and algorithms are evaluated through numerical simulations covering VNE, routing and resource allocation in backhaul networks.
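As a rough illustration of the degree-guided greedy idea behind DBGA, the sketch below schedules backhaul links from an interference digraph, preferring low-degree vertices; the toy topology, tie-breaking and compatibility rule are assumptions, not the thesis's algorithm:

```python
# Vertices are backhaul links; a directed edge u -> v means u interferes
# with v. Links with the fewest interference edges are scheduled first and
# activated only if compatible with every already-active link.

links = ["A-B", "B-C", "C-D", "A-D", "B-D"]
interferes = {                      # u -> links u interferes with (assumed)
    "A-B": {"B-C", "A-D"},
    "B-C": {"A-B", "B-D"},
    "C-D": {"B-D"},
    "A-D": {"A-B"},
    "B-D": {"B-C", "C-D"},
}

def dbga_schedule(links, interferes):
    """Greedily build a set of concurrently active links, preferring
    vertices with the smallest total degree in the interference digraph."""
    degree = {l: len(interferes[l]) + sum(l in s for s in interferes.values())
              for l in links}
    active = []
    for link in sorted(links, key=degree.get):
        # activate only if the link neither interferes with nor is
        # interfered by any already-active link
        if all(link not in interferes[a] and a not in interferes[link]
               for a in active):
            active.append(link)
    return active

print("links active in this slot:", dbga_schedule(links, interferes))
```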

    Separation Framework: An Enabler for Cooperative and D2D Communication for Future 5G Networks

    Soaring capacity and coverage demands dictate that future cellular networks will soon need to migrate towards ultra-dense networks. However, network densification comes with a host of challenges, including compromised energy efficiency, complex interference management, cumbersome mobility management, burdensome signaling overheads and higher backhaul costs. Interestingly, most of the problems that beleaguer network densification stem from one common feature of legacy networks: the tight coupling between the control and data planes, regardless of their degree of heterogeneity and cell density. Consequently, in the wake of 5G, the control and data planes separation architecture (SARC) has recently been conceived as a promising paradigm with the potential to address most of the aforementioned challenges. In this article, we review the various proposals presented in the literature so far to enable SARC. More specifically, we analyze how, and to what degree, various SARC proposals address the four main challenges in network densification, namely energy efficiency, system-level capacity maximization, interference management and mobility management. We then focus on two salient features of future cellular networks that have not yet been adopted at wide scale in legacy networks and thus remain a hallmark of 5G: coordinated multipoint (CoMP) and device-to-device (D2D) communications. After providing the necessary background on CoMP and D2D, we analyze how SARC can act as a major enabler for both in the context of 5G. This article thus serves as both a tutorial and an up-to-date survey on SARC, CoMP and D2D. Most importantly, it provides an extensive outlook on the challenges and opportunities that lie at the crossroads of these three mutually entangled emerging technologies. Comment: 28 pages, 11 figures, IEEE Communications Surveys & Tutorials 201
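To make the control/data plane split concrete, here is a minimal sketch in which an always-on macro cell carries signalling for all users while small data cells power on only when serving traffic, one of the energy-efficiency arguments for SARC; the class and method names are illustrative assumptions, not a standard API:

```python
# Toy SARC model: signalling lives in the macro layer, data bearers in
# on-demand small cells that sleep when idle.

class MacroCell:
    """Control plane: keeps signalling connectivity for every user."""
    def __init__(self):
        self.registered = set()

    def attach(self, user):
        self.registered.add(user)

class SmallCell:
    """Data plane: active only while it has users to serve."""
    def __init__(self, cell_id):
        self.cell_id, self.active_users = cell_id, set()

    def serve(self, user):
        self.active_users.add(user)

    @property
    def powered_on(self):
        return bool(self.active_users)

macro, small = MacroCell(), SmallCell("sc-1")
for user in ("ue1", "ue2"):
    macro.attach(user)        # signalling always via the macro layer
small.serve("ue1")            # data bearer only where traffic exists
print("small cell on:", small.powered_on,
      "| control-attached users:", len(macro.registered))
```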

    Machine Learning-Enabled Resource Allocation for Underlay Cognitive Radio Networks

    Due to the rapid growth of new wireless communication services and applications, much attention has been directed to frequency spectrum resources and the way they are regulated. Since the radio spectrum is a limited natural resource, supporting the ever-increasing demands for higher capacity and higher data rates across diverse sets of users, services and applications is a challenging task that requires innovative technologies capable of exploiting the available radio spectrum in new, efficient ways. Consequently, dynamic spectrum access (DSA) has been proposed as a replacement for static spectrum allocation policies. DSA is implemented in three modes: interweave, overlay and underlay [1]. The key enabling technology for DSA is cognitive radio (CR), one of the core technologies for the next generation of wireless communication systems. Unlike a conventional radio, which is restricted to operating in designated spectrum bands, a CR can operate in different spectrum bands owing to its ability to sense and understand its wireless environment, learn from past experience, and proactively change its transmission parameters as needed. These capabilities are provided by an intelligent software package called the cognitive engine (CE). In general, the CE manages radio resources to accomplish cognitive functionalities, allocating and adapting them to optimize the performance of the network. The cognitive functionality of the CE can be achieved by leveraging machine learning techniques. This thesis therefore explores the application of two machine learning techniques, neural network-based supervised learning and reinforcement learning, in enabling the cognition capability of the CE. Specifically, it develops resource allocation algorithms that leverage these techniques to solve the resource allocation problem for heterogeneous underlay cognitive radio networks (CRNs). The proposed algorithms are evaluated through extensive simulation runs. The first resource allocation algorithm uses a neural network-based learning paradigm to realize a fully autonomous and distributed underlay DSA scheme in which each CR operates based on predicting the effect of its transmissions on a primary network (PN). The scheme is based on a CE with an artificial neural network that predicts the adaptive modulation and coding configuration of the primary link nearest to a transmitting CR, without any exchange of information between the primary and secondary networks. By managing the effect of the secondary network (SN) on the primary network, the technique keeps the relative change in average primary-network throughput within a prescribed maximum value, while finding transmit settings for the CRs that yield throughput as large as the primary network's interference limit allows. The second resource allocation algorithm uses reinforcement learning and aims to distributively maximize the average quality of experience (QoE) across CR transmissions with different traffic types while satisfying a primary-network interference constraint. To best satisfy the QoE requirements of delay-sensitive traffic, a cross-layer resource allocation algorithm is derived and its performance is compared against a physical-layer algorithm in terms of meeting end-to-end traffic delay constraints.
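A minimal sketch of the power-control loop implied by the first algorithm: the CR probes candidate transmit powers against a (here stubbed) neural-network predictor of the nearest primary link's adaptive modulation and coding (AMC) index and keeps the largest power whose predicted degradation stays within a limit. The predictor stub, the degradation model, and all numbers are assumptions:

```python
# The CR never receives feedback from the primary network; it relies purely
# on a learned predictor of how its transmit power degrades the primary
# link's AMC index.
import numpy as np

AMC_BASELINE = 5          # AMC index observed before the CR transmits (assumed)

def predict_amc(cr_power_w):
    """Stand-in for the trained neural network: maps CR transmit power to
    the AMC index the primary link would fall back to (toy assumption)."""
    return AMC_BASELINE - int(cr_power_w // 0.4)

def max_safe_power(powers, max_amc_drop=1):
    """Largest probe power whose predicted AMC degradation stays within the
    allowed drop, mirroring the 'as large as allowed' objective."""
    safe = [p for p in powers if AMC_BASELINE - predict_amc(p) <= max_amc_drop]
    return max(safe) if safe else None

probe = np.linspace(0.1, 2.0, 20)
print("chosen CR power: %.2f W" % max_safe_power(probe))
```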
    Moreover, to accelerate the learning performance of the presented algorithms, the idea of transfer learning is integrated. The philosophy behind transfer learning is to allow well-established, expert cognitive agents (i.e., base stations or mobile stations in the context of wireless communications) to teach newly activated, naive agents; the exchange of learned information is used to improve the learning performance of a distributed CR network. This thesis further identifies best practices for transferring knowledge between CRs so as to reduce the communication overhead. The investigations in this thesis propose a novel technique that accurately predicts the modulation scheme and channel coding rate used on a primary link without any exchange of information between the two networks (e.g., access to feedback channels), while succeeding in the main goal of determining CR transmit powers such that the interference they create remains below the maximum threshold the primary network can sustain with minimal effect on its average throughput. They also provide physical-layer and cross-layer machine learning-based algorithms that address the challenge of resource allocation in underlay cognitive radio networks, resulting in better learning performance and reduced communication overhead.
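As a rough illustration of the transfer-learning idea, the sketch below warm-starts a newly activated CR's Q-table from an expert agent's table and fine-tunes it with far fewer episodes; the toy MDP (states as channel qualities, actions as power levels), the reward proxy, and the hyperparameters are all assumptions:

```python
# Tabular Q-learning with knowledge transfer: the novice copies the expert's
# Q-table instead of learning from scratch, then fine-tunes briefly.
import random

STATES, ACTIONS = range(3), range(2)     # toy state/action spaces (assumed)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1        # learning rate, discount, exploration

def reward(s, a):
    return 1.0 if a == s % 2 else -1.0   # toy QoE proxy (assumption)

def train(q, episodes=2000):
    """Epsilon-greedy Q-learning over a toy random-transition MDP."""
    for _ in range(episodes):
        s = random.choice(STATES)
        a = (random.choice(ACTIONS) if random.random() < EPS
             else max(ACTIONS, key=lambda x: q[s][x]))
        s2 = random.choice(STATES)       # toy transition model
        q[s][a] += ALPHA * (reward(s, a) + GAMMA * max(q[s2]) - q[s][a])
    return q

expert = train([[0.0] * len(ACTIONS) for _ in STATES])
novice = [row[:] for row in expert]      # knowledge transfer: copy the Q-table
novice = train(novice, episodes=100)     # fine-tune with far fewer episodes
print("transferred policy:",
      [max(ACTIONS, key=lambda a: novice[s][a]) for s in STATES])
```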