
    Communications with spectrum sharing in 5G networks via drone-mounted base stations

    The fifth generation wireless network is designed to accommodate enormous traffic demands for the next decade and to satisfy varying quality of service for different users. Drone-mounted base stations (DBSs), characterized by their intrinsic attributes of high mobility and low cost, can be deployed to enhance the network capacity. In-band full-duplex (IBFD) is a promising technology for future wireless communications that can potentially enhance the spectrum efficiency and the throughput capacity. Therefore, the following issues have been identified and investigated in this dissertation in order to achieve high spectrum efficiency and high user quality of service. First, the problem of deploying DBSs is studied. Deploying more DBSs may increase the total throughput of the network, but at the expense of a higher operation cost. The droNe-mounted bAse station PlacEment (NAPE) problem, which takes IBFD communications and the DBS backhaul into consideration, is then formulated. The objective is to minimize the number of deployed DBSs while maximizing the total throughput of the network by incorporating IBFD-enabled communications for both access links and backhaul links via DBSs as relay nodes. A heuristic algorithm is proposed to solve the NAPE problem, and its performance is evaluated via extensive simulations. Second, the 3-D DBS placement problem is investigated, as the communication efficiency is greatly affected by the positions of DBSs. The DBS placement with IBFD communications (DSP-IBFD) problem for downlink communications is then formulated, and two heuristic algorithms based on different DBS placement strategies are proposed to solve it. The performance of the proposed algorithms is demonstrated via extensive simulations. Third, the potential benefits of jointly optimizing the radio resource assignment and the 3-D DBS placement are explored, upon which the Drone-mounted Base Station Placement with IBFD communications (DBSP-IBFD) problem is formulated. Since the DBSP-IBFD problem is NP-hard, it is decomposed into two sub-problems: the joint bandwidth, power allocation and UE association problem, and the DBS placement problem. A 1/2(1 - 1/2^l)-approximation algorithm is proposed to solve the DBSP-IBFD problem based on the solutions to the two sub-problems, where l is the number of simulation runs. Simulation results demonstrate that the throughput of the proposed approximation algorithm is superior to that of benchmark algorithms. Fourth, uplink communications are studied, as mobile users need to transmit data to and receive data from base stations. The Backhaul-aware Uplink communications in a full-duplex DBS-aided HetNet (BUD) problem is investigated, with the objective of maximizing the total throughput of the network while minimizing the number of deployed DBSs. Since the BUD problem is NP-hard, it is decomposed into three sub-problems: the joint UE association, power and bandwidth assignment problem, the DBS placement problem, and the problem of determining the number of DBSs to be deployed. The AA-BUD algorithm is proposed to solve the BUD problem with guaranteed performance based on the solutions to the three sub-problems, and its performance is demonstrated via extensive simulations. The future work comprises two parts. First, a DBS can be used to provide both communications and computing services to users. Thus, how to minimize the average latency of all users in a DBS-aided mobile edge computing network requires further investigation.
Second, the short flying time of a drone limits the deployment and the performance of DBSs. Free space optics (FSO) can be utilized as both the backhaul link and the energizer, provisioning both communication and energy to a DBS. How to optimize the charging efficiency while maximizing the total throughput of the network requires further investigation.
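
    As a rough illustration of the kind of placement heuristic discussed above, the sketch below greedily adds DBSs until every user meets a rate target. It is not the dissertation's NAPE or AA-BUD algorithm: the candidate sites, the simple distance-squared rate model, and all numerical values are illustrative assumptions.

import math

# Illustrative greedy DBS placement sketch (hypothetical, not the NAPE algorithm).
# UEs are 2-D ground positions; candidate DBS sites are 3-D (x, y, altitude).

def rate(site, ue, bandwidth_hz=1e6, tx_power_w=1.0, noise_w=1e-12):
    """Shannon rate (bit/s) of a UE served from a DBS site under distance-squared pathloss."""
    d = math.dist(site, ue + (0.0,))           # 3-D link distance, UE on the ground
    snr = tx_power_w / (noise_w * max(d, 1.0) ** 2)
    return bandwidth_hz * math.log2(1.0 + snr)

def greedy_dbs_placement(ues, candidate_sites, rate_target):
    """Add DBSs one at a time; each step keeps the site that newly serves the most
    still-unserved UEs above the rate target, trading DBS count against throughput."""
    placed, unserved = [], set(range(len(ues)))
    while unserved:
        best_site, best_covered = None, set()
        for site in candidate_sites:
            covered = {i for i in unserved if rate(site, ues[i]) >= rate_target}
            if len(covered) > len(best_covered):
                best_site, best_covered = site, covered
        if not best_covered:
            break                              # remaining UEs cannot be served by any site
        placed.append(best_site)
        unserved -= best_covered
    return placed, unserved

ues = [(0.0, 0.0), (50.0, 10.0), (400.0, 400.0)]        # ground users (m), illustrative
sites = [(10.0, 5.0, 100.0), (400.0, 400.0, 100.0)]     # candidate DBS positions (m)
print(greedy_dbs_placement(ues, sites, rate_target=25e6))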

    Control-data separation architecture for cellular radio access networks: a survey and outlook

    Conventional cellular systems are designed to ensure ubiquitous coverage with an always-present wireless channel irrespective of the spatial and temporal demand of service. This approach raises several problems due to the tight coupling between network and data access points, as well as the paradigm shift towards data-oriented services, heterogeneous deployments and network densification. A logical separation between control and data planes is seen as a promising solution that could overcome these issues by providing data services under the umbrella of a coverage layer. This article presents a holistic survey of existing literature on the control-data separation architecture (CDSA) for cellular radio access networks. As a starting point, we discuss the fundamentals, concepts, and general structure of the CDSA. Then, we point out limitations of the conventional architecture in futuristic deployment scenarios. In addition, we present and critically discuss the work that has been done to investigate potential benefits of the CDSA, as well as its technical challenges and enabling technologies. Finally, an overview of standardisation proposals related to this research vision is provided.

    A survey on hybrid beamforming techniques in 5G: architecture and system model perspectives

    The increasing wireless data traffic demands have driven the need to explore suitable spectrum regions for meeting the projected requirements. In light of this, millimeter wave (mmWave) communication has received considerable attention from the research community. Typically, in fifth generation (5G) wireless networks, mmWave massive multiple-input multiple-output (MIMO) communications are realized by hybrid transceivers which combine high-dimensional analog phase shifters and power amplifiers with lower-dimensional digital signal processing units. This hybrid beamforming design reduces the cost and power consumption, which is aligned with the energy-efficient design vision of 5G. In this paper, we track the progress in hybrid beamforming for massive MIMO communications in the context of the system models of the hybrid transceivers' structures, the digital and analog beamforming matrices with the possible antenna configuration scenarios, and hybrid beamforming in heterogeneous wireless networks. We extend the scope of the discussion by including resource management issues in hybrid beamforming. We explore the suitability of hybrid beamforming methods, both existing and those proposed up to the first quarter of 2017, and identify the exciting future challenges in this domain.
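
    To make the system model concrete, the following sketch evaluates the standard narrowband hybrid precoding abstraction in which a phase-only analog precoder F_RF is cascaded with a low-dimensional digital precoder F_BB. The antenna counts, the random channel, and the randomly drawn precoders are illustrative assumptions, not any specific surveyed design.

import numpy as np

# Sketch of the standard narrowband hybrid precoding abstraction: Nt transmit
# antennas, Nrf RF chains (Nrf << Nt), Ns data streams, Nr receive antennas.
rng = np.random.default_rng(0)
Nt, Nr, Nrf, Ns = 64, 4, 4, 2
H = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)

# Analog precoder F_RF: phase-only (unit-modulus) entries realised by phase shifters.
F_rf = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, size=(Nt, Nrf))) / np.sqrt(Nt)

# Digital (baseband) precoder F_BB: low-dimensional; here a random semi-unitary matrix.
F_bb, _ = np.linalg.qr(rng.standard_normal((Nrf, Ns)) + 1j * rng.standard_normal((Nrf, Ns)))

# Normalise the cascaded precoder so the total transmit power is fixed.
F = F_rf @ F_bb
F *= np.sqrt(Ns) / np.linalg.norm(F, "fro")

# Spectral efficiency with equal power per stream and noise power sigma2.
sigma2 = 0.1
Heff = H @ F
se = np.log2(np.linalg.det(np.eye(Nr) + Heff @ Heff.conj().T / (Ns * sigma2))).real
print(f"spectral efficiency: {se:.2f} bit/s/Hz")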

    Drone-assisted emergency communications

    Drone-mounted base stations (DBSs) have been proposed to extend coverage and improve communications between mobile users (MUs) and their corresponding macro base stations (MBSs). Different from base stations on the ground, DBSs can flexibly fly over and close to MUs to establish a better vantage point for communications. Thus, the pathloss between a DBS and an MU can be much smaller than that between the MU and the MBS. In addition, by hovering in the air, the DBS can likely establish a line-of-sight link to the MBS. DBSs can be leveraged to recover communications in an area struck by a large natural disaster and to fully embody the advantage of drone-assisted communications. In order to retrieve signals from MUs in a large disaster-struck area, DBSs need to overcome the large pathloss incurred by the long distance between DBSs and MBSs. This can be addressed by the following two strategies. First, multiple drones can be placed in a disaster-struck area to mitigate the problem of large backhaul pathloss. In this method, data from MUs in the disaster-struck area may be forwarded by more than one drone, i.e., DBSs can enable drone-to-drone communications. Thus, the throughput from the disaster-struck area can potentially be enhanced by this multi-drone strategy. A cooperative DBS placement and channel allocation algorithm is proposed to maximize the aggregated data rate from MUs in a disaster-struck area. It is demonstrated by simulations that the aggregated data rate can be improved by more than 10%, as compared to the scenario without drone-to-drone communications. Second, free space optics (FSO) can be used as backhaul links to reduce the backhaul pathloss. FSO can provision high-speed point-to-point transmission and is thus suitable for backhaul transmission. A heuristic algorithm is proposed to maximize the number of MUs that can be served by the drones by optimizing user association, DBS placement and spectrum allocation iteratively. It is demonstrated by simulations that the proposed algorithm can cover over 15% more MUs at the expense of less than 5% of the aggregated throughput. Equipping DBSs and MBSs with FSO transceivers incurs extra payload for DBSs, hence shortening the hovering time of DBSs. To prolong the hovering time of a DBS, the FSO beam is deployed to facilitate simultaneous communications and charging. The viability of this concept has been studied by varying the distance between a DBS and an MBS, in which an optimal location of the DBS is found to maximize the data throughput, while the charging power directed to the DBS from the MBS diminishes with the increasing distance between them. Future work is planned to incorporate artificial intelligence to enhance drone-assisted networking for various applications. For example, a drone equipped with a camera can be used to detect victims; by analyzing the captured pictures, the locations of the victims can be estimated with machine-learning-based image processing techniques.
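
    The pathloss advantage described above is often captured with a probabilistic line-of-sight air-to-ground model of the Al-Hourani type, sketched below. The environment constants (a, b) and the excess losses are illustrative urban-style placeholders, not values taken from this dissertation.

import math

# Sketch of a widely used probabilistic LoS air-to-ground channel model
# (Al-Hourani style). All parameter values below are illustrative placeholders.

def a2g_pathloss_db(horizontal_dist_m, altitude_m, freq_hz=2e9,
                    a=9.61, b=0.16, eta_los_db=1.0, eta_nlos_db=20.0):
    """Average air-to-ground pathloss (dB) between a DBS and a ground user."""
    d = math.hypot(horizontal_dist_m, altitude_m)                    # 3-D link distance
    theta_deg = math.degrees(math.atan2(altitude_m, horizontal_dist_m))
    p_los = 1.0 / (1.0 + a * math.exp(-b * (theta_deg - a)))         # LoS probability
    fspl_db = 20 * math.log10(4 * math.pi * d * freq_hz / 3e8)       # free-space pathloss
    return fspl_db + p_los * eta_los_db + (1.0 - p_los) * eta_nlos_db

# Raising the altitude improves the LoS probability but lengthens the link,
# which is why an optimal DBS position exists.
for altitude in (50.0, 100.0, 300.0):
    print(altitude, round(a2g_pathloss_db(200.0, altitude), 1))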

    Workload allocation in mobile edge computing empowered internet of things

    In the past few years, a tremendous number of smart devices and objects, such as smart phones, wearable devices, and industrial and utility components, have been equipped with sensors to sense real-time physical information from the environment. Hence, the Internet of Things (IoT) is introduced, where various smart devices are connected with each other via the internet and empowered with data analytics. Owing to the high volume and fast velocity of the data streams generated by IoT devices, the cloud, which can provision flexible and efficient computing resources, is employed as a smart brain to process and store the big data generated by IoT devices. However, since the remote cloud is far from the IoT users which send application requests and await the results generated by the data processing in the remote cloud, the response time of the requests may be too long, and especially unbearable for delay-sensitive IoT applications. Therefore, edge computing resources (e.g., cloudlets and fog nodes), which are close to IoT devices and IoT users, can be employed to alleviate the traffic load in the core network and minimize the response time for IoT users. In edge computing, the communications latency critically affects the response time of IoT user requests. Owing to the dynamic distribution of IoT users (i.e., UEs), a drone base station (DBS), which can be flexibly deployed for hotspot areas, can potentially reduce the wireless latency of IoT users by mitigating the heavy traffic loads of macro BSs. Drone-based communications poses two major challenges: 1) the DBS should be deployed in suitable areas with heavy traffic demands to serve more UEs; 2) the traffic loads in the network should be allocated among macro BSs and DBSs to avoid traffic congestion. Therefore, a TrAffic Load baLancing (TALL) scheme in such a drone-assisted fog network is proposed to minimize the wireless latency of IoT users. In this scheme, the problem is decomposed into two sub-problems, and two algorithms are designed to optimize the DBS placement and the user association, respectively. Extensive simulations have been set up to validate the performance of the proposed scheme. Meanwhile, various IoT applications can be run in cloudlets to reduce the response time between IoT users (e.g., user equipments in mobile networks) and cloudlets. Considering the spatial and temporal dynamics of each application's workloads among cloudlets, the workload allocation among cloudlets for each IoT application affects the response time of the application's requests. To solve this problem, an Application awaRE workload Allocation (AREA) scheme for edge-computing-based IoT is designed to minimize the response time of IoT application requests by determining the destination cloudlets for each IoT user's different types of requests and the amount of computing resources allocated to each application in each cloudlet. In this scheme, both the network delay and the computing delay are taken into account, i.e., IoT users' requests are more likely to be assigned to closer and lightly loaded cloudlets. The performance of the proposed scheme has been validated by extensive simulations. In addition, the latency of the data flows of IoT devices consists of both the communications latency and the computing latency. While some BSs and fog nodes are lightly loaded, other, overloaded BSs and fog nodes may incur congestion.
Thus, a workload balancing scheme in a fog network is proposed to minimize the latency of IoT data in the communications and processing procedures by associating IoT devices with suitable BSs. Furthermore, the convergence and the optimality of the proposed workload balancing scheme have been proved. Through extensive simulations, the performance of the proposed load balancing scheme is validated.
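
    A minimal sketch of the "closer versus lightly loaded" trade-off discussed above is given below, assuming an M/M/1 queue at each cloudlet; the queueing model, the function names, and the numbers are illustrative assumptions, not the exact AREA or TALL formulation.

# Sketch of cloudlet selection trading network delay against computing delay,
# assuming an M/M/1 queue at each cloudlet (illustrative assumption).

def response_time_s(net_delay_s, service_rate_rps, arrival_rate_rps):
    """Network delay plus the expected M/M/1 sojourn time 1/(mu - lambda)."""
    if arrival_rate_rps >= service_rate_rps:
        return float("inf")  # an overloaded cloudlet cannot stabilise its queue
    return net_delay_s + 1.0 / (service_rate_rps - arrival_rate_rps)

def pick_cloudlet(cloudlets, extra_load_rps=1.0):
    """Choose the cloudlet with the lowest response time if one more unit of load
    (extra_load_rps requests/s) were routed to it."""
    return min(
        cloudlets,
        key=lambda c: response_time_s(c["net_delay_s"],
                                      c["service_rate_rps"],
                                      c["load_rps"] + extra_load_rps),
    )

cloudlets = [
    {"name": "near-but-busy", "net_delay_s": 0.005, "service_rate_rps": 100.0, "load_rps": 95.0},
    {"name": "far-but-idle",  "net_delay_s": 0.020, "service_rate_rps": 100.0, "load_rps": 10.0},
]
print(pick_cloudlet(cloudlets)["name"])  # the lightly loaded cloudlet wins here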

    Towards versatile access networks (Chapter 3)

    Compared to its previous generations, the 5th generation (5G) cellular network features an additional type of densification, i.e., a large number of active antennas per access point (AP) can be deployed. This technique is known as massive multiple-input multiple-output (mMIMO) [1]. Meanwhile, multiple-input multiple-output (MIMO) evolution, e.g., channel state information (CSI) enhancement and the study of a larger number of orthogonal demodulation reference signal (DMRS) ports for MU-MIMO, was one of the 3rd Generation Partnership Project Release 18 (3GPP Rel-18) work items. The approval of this release (3GPP Rel-18) package in the fourth quarter of 2021 marked the start of the 5G-Advanced evolution in 3GPP. The other items in 3GPP Rel-18 are to study and add functionality in the areas of network energy savings, coverage, mobility support, multicast and broadcast services, and positioning.

    Performance Evaluation of Ultra-Dense Networks with Applications in Internet-of-Things

    The new wireless era in the next decade and beyond will be very different from our experience nowadays. The fast pace of introducing new technologies, services, and applications requires researchers and practitioners in the field to be ready to make paradigm shifts. The stringent requirements on 5G networks, in terms of throughput, latency, and connectivity, challenge the traditional incremental improvement in network performance. This urges the development of unconventional solutions such as network densification, massive multiple-input multiple-output (massive MIMO), cloud-based radio access network (C-RAN), millimeter waves (mmWaves), non-orthogonal multiple access (NOMA), full-duplex communication, wireless network virtualization, and proactive content caching, to name a few. The Ultra-Dense Network (UDN) is one of the preeminent technologies in the racetrack towards fulfilling the requirements of next-generation mobile networks. Dense networks are characterized by the deployment of an abundance of small cells in hotspots where immense traffic is generated. In this context, the density of small cells surpasses the active users' density, providing a new wireless environment that has never been experienced in mobile communication networks. The high density of small cells brings the serving cells much closer to the end users, providing a two-fold gain where better link quality is achieved and more spatial reuse is accomplished. In this thesis, we identified the distinguishing features of dense networks, which include: close proximity of many cells to a given user, potential inactivity of most base stations (BSs) due to lack of users, drastic inter-cell interference in hotspots, capacity limitation by virtue of the backhaul bottleneck, and fundamentally different propagation environments. With these features in mind, we recognized several problems associated with the performance evaluation of UDNs which require a treatment different from traditional cellular networks. Using rigorous advanced mathematical techniques along with extensive Monte Carlo simulations, we modelled and analytically studied the problems in question. Consequently, we developed several mathematical frameworks providing closed-form and easily computable mathematical instruments which network designers and operators can use to tune the networks in order to achieve optimal performance. Moreover, the investigations performed in this thesis furnish a solid ground for addressing more problems to better understand and exploit the UDN technology for higher performance grades. In Chapter 3, we propose multiple association in a dense network environment where the BSs are equipped with idle mode capabilities. This provides the user with a “data-shower,” where the user’s traffic is split into multiple paths, which helps overcome the capacity limitations imposed by the backhaul links. We evaluate the performance of the proposed association scheme considering general fading channel distributions. To this end, we develop a tractable framework for the computation of the average downlink rate. In Chapter 4, we study the downlink performance of UDNs considering Stretched Exponential Path-Loss (SEPL) to capture the short distances of the communication links. Considering the idle mode probability of small cells, we draw conclusions which better reflect the performance of network densification under the SEPL model. Our findings reveal that the idle mode capabilities of the BSs provide a very useful interference mitigation technique.
Another interesting insight is that the system interference in idle-mode-capable UDNs is upper-bounded by the interference generated from the active BSs, which in turn is upper-bounded by the number of active users, where more active users translate to more interference in the system. This means that the interference becomes independent of the density of the small cells as this density increases. In Chapter 5, we provide the derivation of the average secrecy rate in UDNs considering their distinct traits, namely, idle mode BSs and LOS transmission. To this end, we exploit the standard moment generating function (MGF)-based approach to derive relatively simple and easily computable expressions for the average secrecy rate considering the idle mode probability and the Rician fading channel. The result of this investigation avoids system-level simulations, as the performance evaluation complexity can be greatly reduced with the aid of the derived analytical expressions. In Chapter 6, we model the uplink coverage of an mMTC deployment scenario considering a UDN environment. The presented analysis reveals the significant and unexpected impact of the high density of small cells in UDNs on the maximum transmit power of the MTC nodes. This finding relaxes the requirements on the maximum transmit power, which in turn allows for less complexity, brings more cost savings, and yields much longer battery life. This investigation provides accurate, simple, and insightful expressions which show the impact of every single system parameter on the network performance, allowing for guided tunability of the network. Moreover, the results signify the asymptotic limits of the impact of all system parameters on the network performance. This allows for the efficient operation of the network by designing the system parameters which maximize the network performance. In Chapter 7, we address the impact of the coexistence of MTC and HTC communications on the network performance in UDNs. In this investigation, we study the downlink network performance in terms of the coverage probability and the cell load, where we propose two association schemes for the MTC devices, namely, Connect-to-Closest (C2C) and Connect-to-Active (C2A). The network performance is then analyzed and compared for both association schemes. In Chapter 8, we model the uplink coverage of HTC users and MTC devices paired together in NOMA-based radio access. Closed-form and easily computable analytical results are derived for the considered performance metrics, namely the uplink coverage and the uplink network throughput. The analytical results, which are validated by extensive Monte Carlo simulations, reveal that increasing the density of small cells and the available bandwidth significantly improves the network performance. On the other hand, the power control parameters have to be tuned carefully to approach the optimal performance of both the uplink coverage and the uplink network throughput.
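
    The saturation effect noted above can be reproduced with a commonly used stochastic-geometry approximation for the fraction of active BSs, sketched below; the constant q = 3.5 and the densities are illustrative, and the expression is a standard literature approximation rather than a result quoted from this thesis.

# Commonly used approximation for the fraction of BSs that remain active when
# idle-mode capability is enabled:
#   p_active = 1 - (1 + lambda_u / (q * lambda_b))**(-q),  q ~= 3.5
# The density of interfering (active) BSs is then lambda_b * p_active, which
# saturates at roughly lambda_u as lambda_b grows, matching the insight above.

def active_bs_density(lambda_b, lambda_u, q=3.5):
    """Density of active BSs (per km^2) given BS density lambda_b and user density lambda_u."""
    p_active = 1.0 - (1.0 + lambda_u / (q * lambda_b)) ** (-q)
    return lambda_b * p_active

lambda_u = 100.0  # users per km^2 (illustrative)
for lambda_b in (50.0, 500.0, 5000.0):
    print(lambda_b, round(active_bs_density(lambda_b, lambda_u), 1))
# As lambda_b increases, the active-BS density approaches lambda_u (~100/km^2),
# so the aggregate interference stops growing with further densification.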

    Spectral and Energy Efficiency in Cellular Mobile Radio Access Networks

    Driven by the widespread use of smartphones and the release of a wide range of online packet data services, an unprecedented growth in mobile data usage has been observed over the last decade. Network operators have recently realised that the traditional approach of deploying more macrocells cannot cope with this continuous growth in mobile data traffic and that, if no action is taken, the energy demand to run networks able to support such traffic volumes risks becoming unmanageable. In this context, comprehensive investigations of different cellular network deployments and various algorithms have been evaluated and compared against each other in this thesis, to determine the best deployment options able to deliver the required capacity at a minimum level of energy consumption. A new scalable base station power consumption model is proposed, and a joint evaluation framework for the relative improvements in throughput, energy consumption, and energy efficiency is adopted to avoid the inherent ambiguity of using only the bit/J energy efficiency metric. This framework was applied to many cellular network case studies, including macro-only, small-cell-only and heterogeneous networks, to show that pure small cell deployments outperform the macro and heterogeneous networks in terms of energy consumption, even when the backhaul power consumption is included in the analysis. Interestingly, picocell-only deployments can attain up to a 3-fold increase in throughput and a 2.27-fold reduction in the energy consumed when compared with macro-only RANs at high target capacities, while offering 2 times more throughput and reducing the energy consumption by 12% when compared with macro/pico HetNet deployments. Further investigations have focused on improving the macrocell RAN by adding more sectors and more antennas. Importantly, the results have shown that adding small cells to the macrocell RAN is more energy efficient than adding more sectors, even if adaptive sectorisation techniques are employed, while dimensioning the network using MIMO base stations results in less consumed energy than using SISO base stations. The impact of traffic offloading to small cells, sleep mode, and inter-cell interference coordination techniques on the throughput and energy consumption in dense heterogeneous network deployments has been investigated. Significant improvements in the throughput and the bit/J energy efficiency were observed. However, a decrease in the energy consumption is obtained only in heterogeneous networks with small cells deployed to serve clusters of users. Finally, the same framework is used to evaluate the throughput and energy consumption of massive MIMO deployments to show the superiority of massive MIMO versus macrocell RANs, small cell deployments and heterogeneous networks in terms of achieving the target capacity with a minimum level of energy consumption. A 1.6-fold reduction in energy consumption is achieved by massive MIMO when compared with the picocell-only RAN at the same target capacity, when the backhaul power consumption is included in the analysis.
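
    For context, the sketch below evaluates a widely used linear (EARTH-style) base station power model together with the bit/J metric discussed above. It is not the new scalable model proposed in the thesis, and all parameter values are illustrative.

# Sketch of the widely used linear (EARTH-style) BS power model; shown only to
# illustrate the bit/J evaluation above, with illustrative parameter values.

def bs_power_w(load, n_trx=6, p0_w=130.0, delta_p=4.7, p_max_w=20.0, p_sleep_w=75.0):
    """Input power of a BS: N_TRX * (P0 + delta_p * load * Pmax) when active, else sleep power."""
    if load <= 0.0:
        return p_sleep_w
    return n_trx * (p0_w + delta_p * load * p_max_w)

def energy_efficiency_bit_per_joule(throughput_bps, load):
    """bit/J metric: delivered throughput divided by the input power at the given load."""
    return throughput_bps / bs_power_w(load)

# A macro site at 50% load versus sleep: the gap motivates small-cell and sleep-mode studies.
print(bs_power_w(0.5), bs_power_w(0.0))
print(round(energy_efficiency_bit_per_joule(100e6, 0.5), 1), "bit/J at 50% load")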