
    Proposed Technologies for Solving Future 5G Heterogeneous Networks Challenges

    The evolution towards 5G mobile communication networks will be characterized by an increasing number of wireless devices and growing service complexity, while access to mobile services will remain essential. This paper presents an overview of the evolution of wireless networks and focuses on the future mobile communication generation (5G) with its requirements, challenges, and services. In addition, we propose a HetNet-based architecture for 5G networks. The key ideas behind each of the technologies are stated, along with their potential impact on the 5G network architecture. The proposed HetNet architecture comprises key elements such as small cells, massive MIMO, mm-waves, D2D communication, full-duplex communication, energy harvesting, Cloud-RAN, and wireless network virtualization; together, these technologies serve users with their Quality of Service (QoS) requirements in a spectrum- and energy-efficient manner. Keywords: 5G networks, wireless cellular networks, 5G heterogeneous network architecture, small cells, D2D communications, massive MIMO, mm-wave, C-RAN, energy harvesting

    User Association in 5G Networks: A Survey and an Outlook

    26 pages; accepted to appear in IEEE Communications Surveys & Tutorials

    Wearable flexible lightweight modular RFID tag with integrated energy harvester

    A novel wearable radio frequency identification (RFID) tag with sensing, processing, and decision-making capability is presented for operation in the 2.45-GHz RFID super-high-frequency (SHF) band. The tag is powered by an integrated light harvester, with a flexible battery serving as an energy buffer. The proposed active tag features excellent wearability, very high read range, enhanced functionality, flexible interfacing with diverse low-power sensors, and extended system autonomy through an innovative holistic microwave system design paradigm that takes antenna design into consideration from the very early stages. Specifically, a dedicated textile shorted circular patch antenna with a monopolar radiation pattern is designed and optimized for highly efficient and stable operation within the frequency band of operation. In this process, the textile antenna's functionality is augmented by reusing its surface as an integration platform for light-energy-harvesting, sensing, processing, and transceiver hardware, without sacrificing antenna performance or the wearer's comfort. The RFID tag is validated by measuring its stand-alone and on-body characteristics in free-space conditions. Moreover, measurements in a real-world scenario demonstrate an indoor read range of up to 23 m in non-line-of-sight indoor propagation conditions, enabling interrogation by a reader situated in another room. In addition, the RFID platform consumes only 168.3 µW when sensing and processing are performed every 60 s.
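As a rough sense of what that last figure means for autonomy, the arithmetic below estimates battery life from the quoted 168.3 µW average draw; the 150 mAh / 3.7 V flexible-battery capacity is an assumed example for illustration, not the tag's actual battery.

```python
# Rough autonomy estimate from the quoted 168.3 uW average consumption.
# The 150 mAh / 3.7 V flexible battery is an assumed example, not the
# tag's actual battery.
P_AVG_W = 168.3e-6                 # reported average draw (W)
CAP_WH = 0.150 * 3.7               # assumed capacity: 150 mAh at 3.7 V
hours = CAP_WH / P_AVG_W
print(f"~{hours:.0f} h (~{hours / 24:.0f} days) on the battery alone")
```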

    Reinforcement Learning Based Resource Allocation for Energy-Harvesting-Aided D2D Communications in IoT Networks

    It is anticipated that mobile data traffic and the demand for higher data rates will increase dramatically as a result of the explosion of wireless devices, such as the Internet of Things (IoT) and machine-to-machine communication. There are numerous location-based peer-to-peer services available today that allow mobile users to communicate directly with one another, which can help offload traffic from congested cellular networks. In cellular networks, Device-to-Device (D2D) communication has been introduced to exploit direct links between devices instead of transmitting through the Base Station (BS). However, it is critical to note that D2D and IoT communications are heavily hindered by the high energy consumption of mobile and IoT devices, because their battery capacity is restricted. Energy-constrained wireless devices may extend their lifespan by drawing upon reusable external sources of energy such as solar, wind, vibration, thermoelectric, and radio frequency (RF) energy in order to overcome the limited-battery problem. Such approaches are commonly referred to as Energy Harvesting (EH). A promising EH approach is Simultaneous Wireless Information and Power Transfer (SWIPT). Because the number of wireless users is on the rise, it is imperative that resource allocation techniques be implemented in modern wireless networks. This facilitates cooperation among users for limited resources, such as time and frequency bands. As well as ensuring an adequate supply of energy for reliable and efficient communication, resource allocation provides a roadmap for each individual user to consume the right amount of energy. In D2D networks with time, frequency, and power constraints, significant computing power is generally required to achieve a joint resource management design. Thus, the purpose of this study is to develop a resource allocation scheme that is based on spectrum sharing and enables low-cost computations for EH-assisted D2D and IoT communication. Until now, there has been no study examining resource allocation design for EH-enabled IoT networks with SWIPT-enabled D2D schemes that utilizes learning techniques and convex optimization. Most existing works employ optimization and iterative approaches with a high level of computational complexity, which is not feasible in many IoT applications. In order to overcome these obstacles, a learning-based resource allocation mechanism based on the SWIPT scheme in IoT networks is proposed, where users are able to harvest energy from different sources. The system model consists of multiple IoT users, one BS, and multiple D2D pairs in EH-based IoT networks. As a means of developing an energy-efficient system, we consider the SWIPT scheme with D2D pairs employing the time-switching (TS) method to capture energy from the environment, whereas IoT users employ the power-splitting (PS) method to harvest energy from the BS. A mixed-integer nonlinear programming (MINLP) approach is presented for the solution of the Energy Efficiency (EE) problem by jointly optimizing subchannel allocation, the power-splitting factor, power, and time. As part of the optimization approach, the original EE optimization problem is decomposed into three subproblems, namely: (a) subchannel assignment and power-splitting factor, (b) power allocation, and (c) time allocation.
In order to solve the subchannel assignment problem, which involves discrete variables, the Q-learning approach is employed. Due to the large size of the overall problem and the continuous nature of certain variables, it is impractical to optimize all variables using the learning technique. Instead, for the continuous-variable problems, namely power and time allocation, the original non-convex problem is first transformed into a convex one, and then the Majorization-Minimization (MM) approach is applied along with the Dinkelbach method. The performance of the proposed joint Q-learning and optimization algorithm has been evaluated in detail. In particular, the solution was compared with a linear EH model, as well as two heuristic algorithms, namely the constrained allocation algorithm and the random allocation algorithm, in order to determine its performance. The results indicate that the technique is superior to conventional approaches. For example, for a distance of d = 10 m, our proposed algorithm improves EE compared to the pre-matching, constrained allocation, and random allocation methods by about 5.26%, 110.52%, and 143.90%, respectively. Considering the simulation results, the proposed algorithm is superior to other methods in the literature. Using spectrum sharing and harvesting energy from D2D and IoT devices achieves impressive EE gains. This superior performance can be seen both in terms of the average and sum EEs, as well as in comparison with other baseline schemes.
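To make the fractional EE objective concrete, here is a minimal sketch of the Dinkelbach iteration on a toy single-link problem, maximizing rate divided by consumed power. The channel gain, noise power, circuit power, and power budget are illustrative assumptions, not values from this work, and the sketch covers only the fractional-programming step (the discrete subchannel assignment is handled by Q-learning in the abstract above).

```python
# A minimal sketch of the Dinkelbach iteration for energy-efficiency (EE)
# maximization, i.e. max_p R(p) / P(p). All parameters (channel gain h,
# noise power n0, circuit power p_c, power budget p_max) are assumed toy
# values, not the thesis's system parameters.
import numpy as np
from scipy.optimize import minimize_scalar

h, n0, p_c, p_max = 0.8, 1e-3, 0.1, 1.0  # assumed toy parameters

def rate(p):            # achievable rate (bits/s/Hz)
    return np.log2(1.0 + p * h / n0)

def power(p):           # total consumed power: transmit + circuit
    return p + p_c

def dinkelbach(tol=1e-6, max_iter=50):
    lam = 0.0                                # initial EE guess
    for _ in range(max_iter):
        # Inner parametric problem: max_p  R(p) - lam * P(p)
        res = minimize_scalar(lambda p: -(rate(p) - lam * power(p)),
                              bounds=(0.0, p_max), method="bounded")
        p_star = res.x
        f = rate(p_star) - lam * power(p_star)
        if abs(f) < tol:                     # Dinkelbach optimality test
            break
        lam = rate(p_star) / power(p_star)   # update EE estimate
    return p_star, lam

p_opt, ee_opt = dinkelbach()
print(f"optimal power {p_opt:.4f}, EE {ee_opt:.4f} bits/Hz/J")
```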

    Energy efficient planning and operation models for wireless cellular networks

    Prospective demands of next-generation wireless networks are ambitious and will require cellular networks to support 1000 times higher data rates and 10 times lower round-trip latency. While this data deluge is a natural outcome of the increasing number of mobile devices with data-hungry applications and the Internet of Things (IoT), the low-latency demand is driven by future interactive applications such as the tactile internet, virtual and augmented reality, and online gaming. The motivation behind this thesis is to meet the increasing quality of service (QoS) demands in wireless communications while reducing the global carbon footprint. To achieve these goals, energy-efficient planning and operation models for wireless cellular networks are proposed and analyzed. Firstly, a solution based on overlay cognitive radio (CR) along with cooperative relaying is proposed to alleviate the scarcity of radio spectrum. In the overlay technique, the primary users (PUs) cooperate with cognitive users (CUs) for mutual benefit. The achievable cognitive rate of a two-way relaying (TWR) system assisted by multiple antennas is derived. Compared to traditional relaying, where exchanging two different messages between two sources takes four time slots, TWR reduces the required number of transmission slots to two. In the first slot, both sources transmit their signals simultaneously to the relay. Then, during the second slot, the relay broadcasts its signal to the sources. Using the overlay CR technique, the CUs are allowed to allocate part of the PUs' spectrum to perform their cognitive transmission. In return, acting as amplify-and-forward (AF) TWRs, the CUs support the PUs in reaching their target data rates over the remaining bandwidth. A meta-heuristic approach based on the particle swarm optimization algorithm is proposed to find a near-optimal resource allocation in addition to the relay amplification matrix gains. Then, we investigate a multiple relay selection scheme for an energy harvesting (EH)-based TWR system. All the relays are considered EH nodes that harvest energy from renewable and radio frequency sources and forward the information to the sources. The power-splitting (PS) protocol, in which the receiver splits the input radio frequency signal into two components, one for information transmission and the other for energy harvesting, is adopted at the relay side. An approximate optimization framework based on geometric programming is established in convex form to find near-optimal PS ratios, the relays' transmission powers, and the selected relays in order to maximize the total rate utility over multiple time slots. Different utility metrics are considered and analyzed depending on the level of fairness. Secondly, a downlink resource and energy management approach for heterogeneous networks (HetNets) is proposed, where all base stations (BSs) are equipped to harvest energy from renewable energy (RE) sources. A hybrid power supply of green (renewable) energy and the traditional micro-grid is adopted, such that the micro-grid is not exploited as long as the BSs can meet their power demands from harvested and stored green energy. Furthermore, dynamic BS ON/OFF switching is combined with the EH model, where some BSs are turned off during low-traffic periods, depending on their stored energy, in order to harvest more energy and help efficiently during high-traffic periods.
A binary linear programming (BLP) optimization problem is formulated and solved optimally to minimize the network-wide energy consumption subject to users' quality of service and BSs' power consumption constraints. Moreover, green communication algorithms are implemented to solve the problem with low time complexity. Lastly, an energy management framework for cellular HetNets supported by dynamic drone small cells is proposed. A three-tier HetNet composed of a macrocell BS, micro-cell BSs (MBSs), and solar-powered drone small-cell BSs is deployed to serve the network's subscribers. In addition to the RE, the drones can charge their batteries via a charging station located at the macrocell BS site. Pre-planned locations are identified by the mobile operator for possible drone placement. The objective of this framework is to jointly determine the optimal locations of the drones, in addition to the MBSs that can be safely turned off, in order to minimize the daily energy consumption of the network. The framework also takes into account the cells' capacities and the QoS level defined by the minimum required receive power. A BLP problem is formulated to optimally determine the network status during a time-slotted horizon.
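To illustrate the flavor of such a BLP, below is a minimal sketch that switches BSs on or off and assigns users to active BSs so that network power is minimized. The topology, power figures, coverage map, and capacities are made-up toy values, not the thesis's actual model; PuLP's bundled CBC solver is used for convenience.

```python
# A minimal sketch of a BS ON/OFF switching BLP: minimize total ON power
# subject to every user being served by one active, covering BS within
# capacity. All data below are illustrative assumptions.
import pulp

n_bs, n_users = 3, 6
p_on = [100.0, 60.0, 60.0]                 # assumed ON power per BS (W)
cap = [4, 3, 3]                            # assumed user capacity per BS
# coverage[b][u] = 1 if BS b can serve user u with the required QoS
coverage = [[1, 1, 1, 1, 1, 1],
            [1, 1, 1, 0, 0, 0],
            [0, 0, 0, 1, 1, 1]]

prob = pulp.LpProblem("bs_switching", pulp.LpMinimize)
x = [pulp.LpVariable(f"on_{b}", cat="Binary") for b in range(n_bs)]
a = [[pulp.LpVariable(f"assign_{b}_{u}", cat="Binary")
      for u in range(n_users)] for b in range(n_bs)]

prob += pulp.lpSum(p_on[b] * x[b] for b in range(n_bs))   # network power
for b in range(n_bs):
    for u in range(n_users):
        # serve only from active BSs that actually cover the user
        prob += a[b][u] <= coverage[b][u] * x[b]
    prob += pulp.lpSum(a[b][u] for u in range(n_users)) <= cap[b]
for u in range(n_users):                    # each user served exactly once
    prob += pulp.lpSum(a[b][u] for b in range(n_bs)) == 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([int(v.value()) for v in x])          # ON/OFF per BS, e.g. [0, 1, 1]
```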

    Integrated Data and Energy Communication Network: A Comprehensive Survey

    In order to satisfy the power thirst of communication devices in the imminent 5G era, wireless charging techniques have attracted much attention from both the academic and industrial communities. Although inductive coupling and magnetic resonance based charging techniques are indeed capable of supplying energy in a wireless manner, they tend to restrict the freedom of movement. By contrast, RF signals are capable of supplying energy over distances, gradually inclining closer to our ultimate goal: charging anytime and anywhere. Furthermore, transmitters capable of emitting RF signals have been widely deployed, such as TV towers, cellular base stations and Wi-Fi access points. This communication infrastructure may indeed also be employed for wireless energy transfer (WET). Therefore, no extra investment in dedicated WET infrastructure is required. However, allowing RF signal based WET may impair the wireless information transfer (WIT) operating in the same spectrum. Hence, it is crucial to coordinate and balance WET and WIT for simultaneous wireless information and power transfer (SWIPT), which evolves into Integrated Data and Energy communication Networks (IDENs). To this end, a ubiquitous IDEN architecture is introduced by summarising its natural heterogeneity and by synthesising a diverse range of integrated WET and WIT scenarios. Then the inherent relationship between WET and WIT is revealed from an information-theoretic perspective, followed by a critical appraisal of the hardware enabling techniques for extracting energy from RF signals. Furthermore, the transceiver design, resource allocation, user scheduling and networking aspects are elaborated on. In a nutshell, this treatise can be used as a handbook for researchers and engineers who are interested in enriching their knowledge base of IDENs and in putting this vision into practice.
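As a back-of-the-envelope view of why RF-based WET over distance is challenging, the sketch below combines the standard Friis free-space equation with an assumed rectenna RF-to-DC conversion efficiency. All parameter values (transmit power, antenna gains, frequency, distance, efficiency) are illustrative assumptions.

```python
# A minimal sketch of an RF wireless energy transfer (WET) link budget using
# the Friis free-space equation, P_r = P_t * G_t * G_r * (lambda/(4*pi*d))^2,
# scaled by a rectenna RF-to-DC efficiency. All values are assumed.
import math

def harvested_power(p_tx_w, g_tx, g_rx, freq_hz, dist_m, eta_rect):
    lam = 3e8 / freq_hz                        # wavelength (m)
    p_rx = p_tx_w * g_tx * g_rx * (lam / (4 * math.pi * dist_m)) ** 2
    return eta_rect * p_rx                     # DC power after rectification

# Example: 1 W transmitter at 2.4 GHz, 6 dBi TX / 2 dBi RX antenna gains,
# 50% efficient rectifier, 5 m range.
p_dc = harvested_power(1.0, 10**0.6, 10**0.2, 2.4e9, 5.0, 0.5)
print(f"harvested DC power: {p_dc * 1e6:.1f} uW")   # roughly tens of uW
```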

    Towards 6G-Enabled Internet of Things with IRS-Empowered Backscatter-Assisted WPCNs

    Wireless powered communication networks (WPCNs) are expected to play a key role in the forthcoming 6G systems. However, they have not yet found their way into large-scale practical implementations due to inherent shortcomings such as the low efficiency of energy transfer and information transmission. In this thesis, we study the integration of WPCNs with two other novel technologies, backscatter communication and intelligent reflecting surfaces (IRSs), to enhance the performance and improve the efficiency of these networks so as to prepare them to fit seamlessly into the 6G ecosystem. We first study the incorporation of backscatter communication into conventional WPCNs and investigate the performance of backscatter-assisted WPCNs (BS-WPCNs). We then study the inclusion of an IRS in the WPCN environment, where the IRS is used to improve the performance of energy transfer and information transmission. After that, the simultaneous integration of backscatter communication and IRS technologies into WPCNs is investigated, where the analyses show the significant performance gains that this integration can achieve.
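As one concrete intuition for the IRS gains mentioned above, the sketch below applies the textbook phase-alignment rule, under which each element compensates the phase of its cascaded channel so that all reflected paths add coherently with the direct path. The channels are randomly generated for illustration and are not drawn from the thesis's system model.

```python
# A minimal sketch of the textbook IRS phase-alignment rule: with direct
# channel h_d and per-element cascaded channels h_n * g_n, setting
# theta_n = arg(h_d) - arg(h_n * g_n) makes all reflected paths add
# coherently with the direct path. Channels are random for illustration.
import numpy as np

rng = np.random.default_rng(0)
N = 64                                               # IRS elements
h_d = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)              # direct
h = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)    # TX->IRS
g = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)    # IRS->RX

theta = np.angle(h_d) - np.angle(h * g)              # optimal phase shifts
aligned = h_d + np.sum(h * g * np.exp(1j * theta))
random_cfg = h_d + np.sum(h * g * np.exp(1j * rng.uniform(0, 2*np.pi, N)))

print(f"|channel| with aligned IRS: {abs(aligned):.2f}")
print(f"|channel| with random IRS : {abs(random_cfg):.2f}")
```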

    A survey of multi-access edge computing in 5G and beyond: fundamentals, technology integration, and state-of-the-art

    Driven by the emergence of new compute-intensive applications and the vision of the Internet of Things (IoT), it is foreseen that the emerging 5G network will face an unprecedented increase in traffic volume and computation demands. However, end-user devices mostly have limited storage capacity and finite processing capability, so how to run compute-intensive applications on resource-constrained devices has recently become a natural concern. Mobile edge computing (MEC), a key technology in the emerging fifth-generation (5G) network, can optimize mobile resources by hosting compute-intensive applications, processing large data before sending it to the cloud, providing cloud-computing capabilities within the radio access network (RAN) in close proximity to mobile users, and offering context-aware services with the help of RAN information. Therefore, MEC enables a wide variety of applications where real-time response is strictly required, e.g., driverless vehicles, augmented reality, robotics, and immersive media. Indeed, the paradigm shift from 4G to 5G could become a reality with the advent of new technological concepts. The successful realization of MEC in the 5G network is still in its infancy and demands constant effort from both the academic and industry communities. In this survey, we first provide a holistic overview of MEC technology and its potential use cases and applications. Then, we outline up-to-date research on the integration of MEC with the new technologies that will be deployed in 5G and beyond. We also summarize testbeds, experimental evaluations, and open-source activities for edge computing. We further summarize lessons learned from state-of-the-art research works and discuss challenges and potential future directions for MEC research.
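As a minimal illustration of the offloading trade-off that motivates MEC, the sketch below compares local execution latency against uplink-plus-edge latency under a standard first-order model. The task size, CPU speeds, and link rate are assumed values, not figures from the survey.

```python
# A minimal sketch of the classic binary offloading trade-off behind MEC:
# run a task locally or ship it to the edge. All numbers are assumptions.

def local_latency(cycles, f_local_hz):
    return cycles / f_local_hz                    # pure computation time

def offload_latency(bits, rate_bps, cycles, f_edge_hz):
    return bits / rate_bps + cycles / f_edge_hz   # uplink + edge compute

task_bits = 2e6                 # 2 Mb of input data
task_cycles = 5e9               # 5 Gcycles of work
f_local, f_edge = 1e9, 20e9     # 1 GHz device vs 20 GHz edge server
uplink = 50e6                   # 50 Mb/s radio link

t_loc = local_latency(task_cycles, f_local)
t_off = offload_latency(task_bits, uplink, task_cycles, f_edge)
print(f"local: {t_loc:.2f} s, offload: {t_off:.2f} s ->",
      "offload" if t_off < t_loc else "local")
```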

    AN EFFICIENT INTERFERENCE AVOIDANCE SCHEME FOR DEVICE-TO-DEVICE ENABLED FIFTH GENERATION NARROWBAND INTERNET OF THINGS NETWORKS

    Narrowband Internet of Things (NB-IoT) is a low-power wide-area (LPWA) technology built on long-term evolution (LTE) functionalities and standardized by the 3rd-Generation Partnership Project (3GPP). Due to its support for massive machine-type communication (mMTC) and different IoT use cases with rigorous standards in terms of connection, energy efficiency, reachability, reliability, and latency, NB-IoT has attracted the research community. However, as the capacity needs of various IoT use cases expand, the numerous functionalities of the LTE evolved packet core (EPC) system may become overburdened and suboptimal. Several research efforts are currently in progress to address these challenges. As a result, an overview of these efforts is presented, with a specific focus on the optimized architecture of the LTE EPC functionalities, the 5G architectural design for NB-IoT integration, the enabling technologies necessary for 5G NB-IoT, 5G new radio (NR) coexistence with NB-IoT, and feasible architectural deployment schemes of NB-IoT with cellular networks. This thesis also presents cloud-assisted relaying with backscatter communication as part of a detailed study of the technical performance attributes and channel communication characteristics of the physical (PHY) and medium access control (MAC) layers of NB-IoT, with a focus on 5G. The numerous drawbacks that come with simulating these systems are explored. The enabling market for NB-IoT, the benefits for a few use cases, and the potential critical challenges associated with their deployment are all highlighted. Fortunately, the cyclic prefix orthogonal frequency division multiplexing (CP-OFDM) based waveform adopted by 3GPP NR for enhanced mobile broadband (eMBB) services does not prohibit the use of other waveforms in other services, such as the NB-IoT service for mMTC. Consequently, the coexistence of 5G NR and NB-IoT must be kept manageably orthogonal (or quasi-orthogonal) to minimize the mutual interference that limits the degrees of freedom in the waveform's overall design. Hence, 5G coexistence with NB-IoT will introduce a new interference challenge, distinct from that of the legacy network, even though the NR's coexistence with NB-IoT is believed to improve network capacity, expand the coverage of the user data rate, and improve communication robustness through frequency reuse. Interference may make channel estimation difficult for NB-IoT devices, limiting user performance and spectral efficiency. Various existing interference mitigation solutions either add to the network's overhead, computational complexity, and delay, or are hampered by low data rates and coverage. These algorithms are unsuitable for an NB-IoT network owing to the low-complexity nature of its devices. As a result, a D2D communication based interference-control technique becomes an effective strategy for addressing this problem. This thesis uses D2D communication to reduce the network bottleneck in dense 5G NB-IoT networks prone to interference. For D2D-enabled 5G NB-IoT systems, the thesis presents an interference-avoidance resource allocation scheme that considers the less favourable cell-edge NUEs. To reduce the algorithm's computational complexity and the interference power, the optimization problem is divided into three sub-problems. First, in an orthogonal deployment technique using channel state information (CSI), the channel gain factor is leveraged by selecting a probable reuse channel with higher QoS control.
Second, a bisection search approach is used to find the best power control that maximizes the network sum rate, and third, the Hungarian algorithm is used to build a maximum bipartite matching strategy to choose the optimal pairing pattern between the sets of NUEs and the D2D pairs (see the sketch after this abstract). The proposed approach improves the D2D sum rate and the overall network SINR of the 5G NB-IoT system, according to the numerical results. The maximum power constraint of the D2D pair, the D2D pair's location, the pico base station (PBS) cell radius, the number of potential reuse channels, and the cluster distance all impact the D2D pair's performance. The simulation results achieve 28.35%, 31.33%, and 39% higher SINR performance than the ARSAD, DCORA, and RRA algorithms when the number of NUEs is twice the number of D2D pairs, and 2.52%, 14.80%, and 39.89% higher SINR performance than the ARSAD, RRA, and DCORA when the numbers of NUEs and D2D pairs are equal. Likewise, a D2D sum-rate increase of 9.23%, 11.26%, and 13.92% over the ARSAD, DCORA, and RRA is achieved when the number of NUEs is twice the number of D2D pairs, and a D2D sum-rate increase of 1.18%, 4.64%, and 15.93% over the ARSAD, RRA, and DCORA, respectively, with an equal number of NUEs and D2D pairs. The results demonstrate the efficacy of the proposed scheme. The thesis also addresses the problem where the cell-edge NUE's QoS is critical under challenges such as long-distance transmission, delays, low bandwidth utilization, and high system overhead that affect 5G NB-IoT network performance. In this case, most cell-edge NUEs boost their transmit power to maximize network throughput. Integrating a cooperative D2D relaying technique into 5G NB-IoT heterogeneous network (HetNet) uplink spectrum sharing increases the system's spectral efficiency but also the interference power, further degrading the network. Using a max-max SINR (Max-SINR) approach, this thesis proposes an interference-aware D2D relaying strategy for 5G NB-IoT QoS improvement for a cell-edge NUE to achieve optimum system performance. The Lagrangian-dual technique is used to optimize the transmit power of the cell-edge NUE to the relay based on the average interference power constraint, while the relay employs a fixed transmit power towards the NB-IoT base station (NBS). To choose an optimal D2D relay node, the channel-to-interference-plus-noise ratio (CINR) of all available D2D relays is used to maximize the minimum cell-edge NUE's data rate while ensuring that the cellular NUEs' QoS requirements are satisfied. Best harmonic mean, best-worst, half-duplex relay selection, and a D2D communication scheme were among the other relay selection strategies studied. The simulation results reveal that the Max-SINR selection scheme outperforms all other selection schemes, except the D2D communication scheme, due to the high channel gain between the two communicating devices. The proposed algorithm achieves 21.27% SINR performance, which is nearly identical to the half-duplex scheme, and outperforms the best-worst and harmonic selection techniques by 81.27% and 40.29%, respectively. As the number of D2D relays increases, the capacity increases by 14.10% and 47.19% over the harmonic and half-duplex techniques, respectively. Finally, the thesis presents future research directions on interference control, together with open research directions on PHY and MAC properties and a SWOT (Strengths, Weaknesses, Opportunities, and Threats) analysis presented in Chapter 2, to encourage further study on 5G NB-IoT.
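As an illustration of the matching step referenced above, the sketch below uses SciPy's Hungarian-method implementation to pair D2D pairs with NUE reuse channels by maximum-weight bipartite matching. The rate matrix is randomly generated for illustration rather than derived from the thesis's system model.

```python
# A minimal sketch of the third sub-problem: pairing D2D pairs with NUE
# reuse channels via maximum-weight bipartite matching. SciPy's
# linear_sum_assignment implements the Hungarian method; the rate matrix
# here is random, for illustration only.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)
n_d2d, n_nue = 4, 8
# rate[i, j]: achievable D2D rate if D2D pair i reuses NUE j's channel
rate = rng.uniform(0.5, 5.0, size=(n_d2d, n_nue))

rows, cols = linear_sum_assignment(rate, maximize=True)
for i, j in zip(rows, cols):
    print(f"D2D pair {i} -> NUE channel {j}: {rate[i, j]:.2f} bits/s/Hz")
print(f"matched sum rate: {rate[rows, cols].sum():.2f} bits/s/Hz")
```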