    Transceiver design and multi-hop D2D for UAV IoT coverage in disasters

    When natural disasters strike, coverage for the Internet of Things (IoT) may be severely degraded because the communications infrastructure is damaged. Unmanned aerial vehicles (UAVs) can be exploited as flying base stations to provide emergency IoT coverage, owing to their mobility and flexibility. In this paper, we propose a multi-antenna transceiver design and multi-hop device-to-device (D2D) communication to guarantee reliable transmission and extend UAV coverage for IoT in disasters. First, multi-hop D2D links are established to extend the coverage of the UAV emergency network, given the constrained transmit power of the UAV. In particular, a shortest-path routing algorithm is proposed to establish the D2D links rapidly with the minimum number of nodes. Closed-form solutions for the number of hops and the outage probability are derived for both the uplink and the downlink. Second, transceiver designs for the UAV uplink and downlink are studied to optimize UAV transmission performance. Because the resulting problems are non-convex, they are first transformed into convex ones, and low-complexity algorithms are then proposed to solve them efficiently. Simulation results show that the proposed schemes improve throughput and outage probability for UAV wireless coverage of IoT in disasters.
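    As an illustration of the minimum-node route establishment step, the sketch below uses breadth-first search over an assumed disk connectivity model to find a minimum-hop D2D path from a device to a UAV-covered gateway node. The positions, communication range, and graph model are hypothetical placeholders; the paper's exact link model and routing metric are not reproduced here.

```python
# Minimal sketch of minimum-hop D2D route discovery (hypothetical link
# model; the paper's exact routing metric is not given in the abstract).
from collections import deque

def build_links(nodes, comm_range):
    """Connect any two devices within communication range of each other."""
    links = {i: [] for i in nodes}
    for i, (xi, yi) in nodes.items():
        for j, (xj, yj) in nodes.items():
            if i != j and ((xi - xj) ** 2 + (yi - yj) ** 2) ** 0.5 <= comm_range:
                links[i].append(j)
    return links

def min_hop_path(links, source, gateway):
    """BFS returns a path with the fewest relay nodes, or None if unreachable."""
    parent = {source: None}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        if u == gateway:
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return path[::-1]
        for v in links[u]:
            if v not in parent:
                parent[v] = u
                queue.append(v)
    return None

# Toy positions in metres; node 3 plays the role of the UAV-covered gateway.
nodes = {0: (0, 0), 1: (40, 10), 2: (80, 0), 3: (120, 5)}
print(min_hop_path(build_links(nodes, comm_range=50), 0, 3))  # [0, 1, 2, 3]
```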

    Edge Intelligence: Empowering Intelligence to the Edge of Network

    Edge intelligence refers to a set of connected systems and devices that perform data collection, caching, processing, and analysis close to where the data are captured, based on artificial intelligence. Edge intelligence aims to enhance data processing while protecting the privacy and security of the data and users. Although the field emerged only recently, around 2011, it has shown explosive growth over the past five years. In this article, we present a thorough and comprehensive survey of the literature on edge intelligence. We first identify four fundamental components of edge intelligence, i.e., edge caching, edge training, edge inference, and edge offloading, based on theoretical and practical results pertaining to proposed and deployed systems. We then systematically classify the state of the solutions by examining research results and observations for each of the four components, and present a taxonomy that covers practical problems, adopted techniques, and application goals. For each category, we elaborate, compare, and analyze the literature in terms of adopted techniques, objectives, performance, and advantages and drawbacks. This article thus provides a comprehensive survey of edge intelligence and its application areas. In addition, we summarize the development of these emerging research fields and the current state of the art, and discuss important open issues and possible theoretical and technical directions.

    Design and Analysis of Beamforming in mmWave Networks

    To support increasingly data-intensive wireless applications, millimeter-wave (mmWave) communication has emerged as one of the most promising wireless technologies, offering high-data-rate connections by exploiting a large swath of spectrum. Beamforming (BF), which focuses the radio-frequency power in a narrow direction, is adopted in mmWave communication to overcome the hostile path loss. However, the high directionality introduced by BF poses new challenges: 1) Beam alignment (BA) latency, the processing delay incurred while the transmitter and the receiver align their beams to establish a reliable link; existing BA methods incur latency on the order of seconds when the number of beams is large. 2) Medium access control (MAC) degradation: to coordinate BF training for multiple users, the 802.11ad standard specifies a new MAC protocol in which all users contend for BF training resources in a distributed manner; due to the "deafness" problem caused by directional transmission, i.e., a user may not sense the transmissions of other users, severe collisions occur in high-user-density scenarios, which significantly degrades MAC performance. 3) Backhaul congestion: all base stations (BSs) in dense mmWave networks are connected to the backbone network via backhaul links in order to reach remote content servers; although BF can increase the data rate of the fronthaul links between users and the BS, the congested backhaul becomes the new bottleneck, since deploying unconstrained wired backhaul links in dense mmWave networks is infeasible due to high costs. In this dissertation, we address these challenges by 1) proposing an efficient BA algorithm; 2) evaluating and enhancing 802.11ad MAC performance; and 3) designing an effective backhaul alleviation scheme.

    First, we propose an efficient BA algorithm to reduce processing latency. Existing BA methods search the entire beam space to identify the optimal transmit-receive beam pair, which leads to significant latency, so an algorithm that avoids exhaustive search is desired. Accordingly, a learning-based BA algorithm, namely the hierarchical BA (HBA) algorithm, is proposed; it takes advantage of the correlation structure among beams, so that information from nearby beams is extracted to identify the optimal beam instead of searching the entire beam space. Furthermore, prior knowledge of the channel fluctuation is incorporated to further accelerate the BA process. Theoretical analysis indicates that the proposed algorithm effectively identifies the optimal beam pair with low latency.

    Second, we analyze and enhance the performance of the BF training MAC (BFT-MAC) in 802.11ad. Existing analytical models for traditional omni-directional systems are unsuitable for BFT-MAC because of the distinctly directional transmission in mmWave networks, so a thorough theoretical framework for BFT-MAC is necessary. To this end, we develop a simple yet accurate analytical model to evaluate BFT-MAC performance, from which we derive closed-form expressions for the average successful BF training probability, the normalized throughput, and the BF training latency. Asymptotic analysis indicates that the maximum normalized throughput of BFT-MAC is barely 1/e. We then propose an enhancement scheme that adaptively adjusts MAC parameters in tune with user density, which effectively improves MAC performance in high-user-density scenarios.

    Third, to alleviate the backhaul burden in dense mmWave networks, edge caching, which proactively caches popular contents at the edge of the network, is employed. Since an individual BS can cache only a limited amount of content, caching performance is significantly throttled. We propose a cooperative edge caching policy, namely device-to-device assisted cooperative edge caching (DCEC), which enlarges the set of cached contents by jointly utilizing the cache resources of nearby users and BSs. The proposed policy brings an extra advantage: the highly directional transmission in mmWave communications naturally tackles the interference issue in cooperative caching. We theoretically analyze the performance of the DCEC scheme, taking into consideration the network density, a practical directional antenna model, and stochastic information about the network topology. Theoretical results demonstrate that the proposed policy offloads more backhaul traffic and reduces content retrieval delay compared with the benchmark policy.

    The research outcomes of this dissertation shed light on the fundamental performance of mmWave networks from the perspectives of BA, MAC, and backhaul, and the schemes developed here offer practical and efficient solutions for building and optimizing mmWave networks.
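    To convey the flavour of learning-based beam alignment, the sketch below treats beam selection as a multi-armed bandit and uses a standard UCB rule to find a good beam without exhaustively sweeping the codebook. It is a toy under an assumed SNR model: the HBA algorithm itself additionally exploits the hierarchical codebook structure and channel-fluctuation priors, both of which are omitted here.

```python
# Toy bandit-style beam selection (illustrative only; not the HBA
# algorithm, which also uses beam correlation and channel priors).
import math
import random

def snr_sample(beam, optimal_beam, n_beams):
    """Hypothetical measurement: mean SNR decays with distance from the best beam."""
    mean = max(0.0, 1.0 - abs(beam - optimal_beam) / (n_beams / 4))
    return mean + random.gauss(0, 0.1)

def ucb_beam_alignment(n_beams=64, optimal_beam=23, budget=300):
    counts = [0] * n_beams
    means = [0.0] * n_beams
    for t in range(1, budget + 1):
        # Probe each beam once, then pick the highest upper confidence bound.
        beam = max(range(n_beams),
                   key=lambda b: float("inf") if counts[b] == 0
                   else means[b] + math.sqrt(2 * math.log(t) / counts[b]))
        reward = snr_sample(beam, optimal_beam, n_beams)
        counts[beam] += 1
        means[beam] += (reward - means[beam]) / counts[beam]  # running mean
    return max(range(n_beams), key=lambda b: means[b])

print(ucb_beam_alignment())  # typically converges near beam 23
```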

    Cognitive networking for next generation of cellular communication systems

    This thesis presents a comprehensive study of cognitive networking for cellular networks, with contributions that make them more dynamic, agile, and efficient. To achieve this, machine learning (ML) algorithms, a subset of artificial intelligence, are employed to bring such cognition to cellular networks. More specifically, three major branches of ML, namely supervised, unsupervised, and reinforcement learning (RL), are utilised for different purposes: unsupervised learning for data clustering, supervised learning for predicting the future behaviour of networks and users, and RL for optimisation, owing to its inherent adaptability and its minimal need for knowledge of the environment. Energy optimisation, capacity enhancement, and spectrum access are identified as the primary design challenges for cellular networks, given that they are envisioned to play crucial roles in 5G and beyond due to the growing number of connected devices and rising data rates. Each design challenge and its proposed solution are discussed thoroughly in separate chapters.

    Regarding energy optimisation, user-side energy consumption is investigated in the context of Internet of things (IoT) networks. An RL-based intelligent model is proposed that jointly optimises the wireless connection type and the data processing entity. In particular, a Q-learning algorithm is developed through which the energy consumption of an IoT device is minimised while the application requirements--in terms of response time and security--remain satisfied. The proposed methodology achieves a 0% normalised joint cost--a metric combining all the considered criteria--whereas the benchmarks average 54.84%. Next, the energy consumption of radio access networks (RANs) is targeted, and a traffic-aware cell switching algorithm is designed to reduce the energy consumption of a RAN without compromising user quality-of-service (QoS). The proposed technique employs a SARSA algorithm with value function approximation, since conventional RL methods struggle with huge state spaces. The results reveal a gain of up to 52% in total energy consumption, although the gain shrinks as the scenario becomes more realistic.

    Capacity enhancement is studied from two perspectives, namely mobility management and unmanned aerial vehicle (UAV) assistance. To that end, a predictive handover (HO) mechanism is designed for mobility management by identifying two major issues of Markov chain based HO prediction. First, revisits--situations whereby a user visits the same cell more than once within the same day--are diagnosed as producing similar transition probabilities, which in turn increases the likelihood of incorrect predictions. This problem is addressed with a structural change: rather than storing a 2-D transition matrix, a 3-D matrix that also includes HO orders is stored. The results show that the 3-D transition matrix reduces the HO signalling cost by up to 25.37%, though the saving drops as the randomness in the data set increases. Second, making HO predictions from insufficient evidence is identified as another issue with conventional Markov chain based predictors. Thus, a prediction confidence level is derived to act as a lower bound for performing HO predictions, since predictions are not always advantageous given the signalling cost incurred by incorrect ones. Simulations confirm that the derived confidence level mechanism improves the prediction accuracy by up to 8.23%. Still on capacity enhancement, UAV-assisted cellular networking is considered, and an unsupervised learning based UAV positioning algorithm is presented. A comprehensive analysis is conducted of the impact of the overlapping footprints of multiple UAVs, which are controlled by their altitudes. The developed k-means clustering based UAV positioning approach reduces the number of users in outage by up to 80.47% compared with a benchmark symmetric deployment.

    Lastly, a QoS-aware dynamic spectrum access approach is developed to tackle challenges in spectrum access, employing all of the aforementioned ML methods. More specifically, a novel proactive spectrum sensing technique is introduced that leverages future traffic load predictions for the radio access technologies (RATs) together with a Q-learning algorithm. Two sensing strategies are developed: the first focuses solely on reducing sensing latency, while the second jointly optimises sensing latency and user requirements. The proposed Q-learning algorithm takes the predicted loads of the RATs and the requirements of secondary users--in terms of mobility and bandwidth--as inputs, and directs users to the spectrum of the optimum RAT for sensing. The strategy can be selected according to the needs of the application: if latency is the only concern, the first strategy should be chosen, since the second is computationally more demanding; the second, however, reduces sensing latency while also satisfying the other user requirements. Simulation results demonstrate that, compared with random sensing, the first strategy reduces sensing latency by 85.25%, while the second improves the full-satisfaction rate--where both the mobility and bandwidth requirements of the user are simultaneously satisfied--by 95.7%. In summary, three key design challenges of next generation cellular networks are identified and addressed through cognitive networking, providing mobile network operators with a practical tool they can plug into their systems. Owing to the general ML formulations, the proposed solutions can be applied to various network scenarios, rendering them both practical and sustainable.
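    As a concrete illustration of the Q-learning idea described above, the following one-step (bandit-style) sketch learns which wireless connection type and processing entity minimise a joint energy/latency cost. The states, actions, cost model, and hyperparameters are hypothetical placeholders, not the thesis's actual formulation.

```python
# One-step Q-learning sketch for joint connection/processing selection
# (hypothetical states, actions, and costs; illustrative only).
import random
from collections import defaultdict

ACTIONS = [("wifi", "local"), ("wifi", "edge"), ("cellular", "edge")]

def joint_cost(state, action):
    """Toy joint cost combining energy and latency penalties."""
    link, compute = action
    energy = {"wifi": 0.2, "cellular": 0.6}[link]
    latency = {"local": 0.5, "edge": 0.2}[compute]
    if state == "congested" and link == "wifi":
        latency += 0.5  # congestion penalty on the shared link
    return energy + latency

def train(episodes=5000, alpha=0.1, eps=0.1):
    q = defaultdict(float)
    for _ in range(episodes):
        state = random.choice(["idle", "congested"])
        if random.random() < eps:
            action = random.choice(ACTIONS)                     # explore
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])  # exploit
        reward = -joint_cost(state, action)  # minimising cost = maximising reward
        q[(state, action)] += alpha * (reward - q[(state, action)])
    return q

q = train()
for state in ("idle", "congested"):
    # Expected: ("wifi", "edge") when idle, ("cellular", "edge") when congested.
    print(state, max(ACTIONS, key=lambda a: q[(state, a)]))
```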

    6G Enabled Smart Infrastructure for Sustainable Society: Opportunities, Challenges, and Research Roadmap

    The 5G wireless communication network currently faces the challenge of limited data speeds, exacerbated by the proliferation of billions of data-intensive applications. To address this problem, researchers are developing cutting-edge technologies for the envisioned 6G wireless communication standards to satisfy escalating demands for wireless services. Though some of the candidate technologies in the 5G standards will carry over to 6G wireless networks, key disruptive technologies that guarantee the desired quality of physical experience and achieve ubiquitous wireless connectivity are expected in 6G. This article first provides a foundational background on the evolution of wireless communication standards, giving proper insight into the vision and requirements of 6G. Second, we provide a panoramic view of the enabling technologies proposed to facilitate 6G and introduce emerging 6G applications such as multi-sensory extended reality, digital replicas, and more. Next, the technology-driven challenges, the social, psychological, health, and commercialization issues posed by actualizing 6G, and probable solutions to these challenges are discussed extensively. Additionally, we present new use cases of 6G technology in agriculture, education, media and entertainment, logistics and transportation, and tourism. Furthermore, we discuss the multi-faceted communication capabilities of 6G that will contribute significantly to global sustainability, and how 6G will bring about dramatic change in the business arena. Finally, we highlight research trends, open research issues, and key take-away lessons for future research exploration in 6G wireless communication.

    Solutions for large scale, efficient, and secure Internet of Things

    The design of a general architecture for the Internet of Things (IoT) is a complex task, owing to the heterogeneity of the devices, communication technologies, and applications that make up such systems. There are therefore significant opportunities to improve on the state of the art, whether by raising system performance or by solving actual issues in current systems. This thesis focuses on three aspects of the IoT. First, issues of cyber-physical systems are analysed. In these systems, IoT technologies are widely used to monitor, control, and act on physical entities, and one of the most important issues relates to the communication layer, which must provide high reliability, low latency, and high energy efficiency. Several solutions for the channel access scheme of such systems are proposed, each tailored to a specific scenario. These solutions, which exploit the capabilities of state-of-the-art radio transceivers, prove effective in improving the performance of the considered systems. Positioning services for cyber-physical systems are also investigated, with the goal of improving their accuracy. Next, the focus moves to network and service optimization for traffic-intensive applications such as video streaming. This type of traffic is common among non-constrained devices, like smartphones and augmented/virtual reality headsets, which form an integral part of the IoT ecosystem. The proposed solutions increase the video Quality of Experience while wasting less bandwidth than state-of-the-art strategies. Finally, the security of IoT systems is investigated. While often overlooked, this aspect is fundamental to enabling the ubiquitous deployment of the IoT. Security issues of commonly used IoT protocols are therefore presented, together with a proposal for an authentication mechanism based on physical channel features; this strategy proves effective both as a standalone mechanism and as an additional security layer that improves the security of legacy systems.
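    To make the last idea concrete, here is a deliberately simplified sketch of channel-feature-based authentication: a verifier enrols a fingerprint of the legitimate link's channel taps and later accepts or rejects a transmission based on its distance to that fingerprint. The Gaussian tap model, 8-tap profile, and threshold are illustrative assumptions, not the thesis's actual mechanism.

```python
# Toy sketch of authentication from physical channel features
# (hypothetical feature model and threshold; illustrative only).
import numpy as np

def authenticate(enrolled, observed, threshold=1.0):
    """Accept if the observed channel profile is close to the enrolled one."""
    return bool(np.linalg.norm(enrolled - observed) < threshold)

rng = np.random.default_rng(seed=1)
enrolled = rng.standard_normal(8)                          # legitimate channel taps
observed_legit = enrolled + 0.05 * rng.standard_normal(8)  # same channel + noise
observed_attacker = rng.standard_normal(8)                 # spatially distinct channel

print(authenticate(enrolled, observed_legit))      # True: small deviation
print(authenticate(enrolled, observed_attacker))   # False: different channel
```

    The design choice this illustrates is that channel responses decorrelate quickly over space, so an attacker at a different location cannot easily reproduce the legitimate fingerprint even if they replay higher-layer credentials.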