
    Relaying in the Internet of Things (IoT): A Survey

    The deployment of relays between Internet of Things (IoT) end devices and gateways can improve link quality. In cellular-based IoT, relays can also reduce base station overload, and the energy expended in single-hop long-range communication can be reduced if relays listen to transmissions of end devices and forward these observations to gateways. However, incorporating relays into IoT networks faces several challenges. IoT end devices are designed primarily for uplink communication of small observations toward the network; hence, opportunistically using end devices as relays requires a redesign of their medium access control (MAC) layer protocol and possibly the addition of new communication interfaces. Additionally, the wake-up times of IoT end devices need to be synchronized with those of the relays. Cellular-based IoT can use infrastructure relays, while noncellular IoT networks can leverage mobile devices as relays, for example, in remote healthcare; the latter, however, raises the problems of incentivizing relay participation and managing relay mobility. Furthermore, although relays can increase the lifetime of IoT networks, deploying them requires additional batteries to power them, which can erode the energy efficiency gain that relays offer. Therefore, designing relay-assisted IoT networks that provide acceptable trade-offs is key, and this goes beyond adding an extra transmit RF chain to a relay-enabled IoT end device. There has been increasing research interest in IoT relaying, as demonstrated in the available literature. This paper surveys works that consider these issues to provide insight into the state of the art, offer design guidance for network designers, and motivate future research directions.
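    The energy argument above can be made concrete with a minimal sketch, assuming a first-order radio model in which amplifier energy grows with distance as d^n for path-loss exponent n; the function name and all parameter values below are illustrative assumptions, not taken from the survey.

```python
# Single-hop vs. relay-assisted (two-hop) transmit energy under a simplified
# first-order radio model. All numbers are hypothetical; real IoT links also
# pay circuit, listening, and wake-up costs that this sketch ignores.

def tx_energy_nj(distance_m: float, bits: int, path_loss_exp: float = 3.0,
                 e_elec_nj: float = 50.0, e_amp_nj: float = 1e-4) -> float:
    """Energy (nJ) to transmit `bits` over `distance_m`."""
    return bits * (e_elec_nj + e_amp_nj * distance_m ** path_loss_exp)

BITS = 200 * 8          # one small IoT observation (200 bytes)
D = 1000.0              # end device -> gateway distance (m)

single_hop = tx_energy_nj(D, BITS)
# Relay placed midway: the relay receives and re-transmits the packet.
two_hop = 2 * tx_energy_nj(D / 2, BITS)

print(f"single-hop: {single_hop / 1e6:.1f} mJ, two-hop: {two_hop / 1e6:.1f} mJ")
```

    With a path-loss exponent of 3, two hops of d/2 cost about 2*(d/2)^3 = d^3/4 in the amplifier term, i.e., up to a fourfold saving in radiated energy; as the survey notes, the relay's own battery and electronics can erode this gain.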

    Enabling AI in Future Wireless Networks: A Data Life Cycle Perspective

    Recent years have seen rapid deployment of mobile computing and Internet of Things (IoT) networks, which can be mostly attributed to the increasing communication and sensing capabilities of wireless systems. Big data analysis, pervasive computing, and eventually artificial intelligence (AI) are envisaged to be deployed on top of the IoT and create a new world characterized by data-driven AI. In this context, a novel paradigm merging AI and wireless communications, called Wireless AI, which pushes AI frontiers to the network edge, is widely regarded as a key enabler for future intelligent network evolution. To this end, we present a comprehensive survey of the latest studies in wireless AI from the data-driven perspective. Specifically, we first propose a novel Wireless AI architecture that covers five key data-driven AI themes in wireless networks: Sensing AI, Network Device AI, Access AI, User Device AI, and Data-provenance AI. Then, for each theme, we present an overview of the use of AI approaches to solve emerging data-related problems and show how AI can empower wireless network functionalities. In particular, compared to other related survey papers, we provide an in-depth discussion of Wireless AI applications in various data-driven domains wherein AI proves extremely useful for wireless network design and optimization. Finally, research challenges and future visions are discussed to spur further research in this promising area.
    Comment: Accepted at the IEEE Communications Surveys & Tutorials, 42 pages.

    Intelligent and Efficient Ultra-Dense Heterogeneous Networks for 5G and Beyond

    The ultra-dense heterogeneous network (HetNet), in which densified small cells overlay conventional macro-cells, is a promising technique for the fifth-generation (5G) mobile network. The dense, multi-tier network architecture can support extensive data traffic and diverse quality of service (QoS) requirements, but it also raises several challenges, especially in interference coordination and resource management. In this thesis, three novel network schemes are proposed to achieve intelligent and efficient operation based on deep learning-enabled network awareness. Both optimization and deep learning methods are developed to achieve intelligent and efficient resource allocation in these proposed network schemes.
    To improve the cost and energy efficiency of ultra-dense HetNets, a hotspot prediction based virtual small cell (VSC) network is proposed. A VSC is formed only when the traffic volume and user density are extremely high. We leverage the feature extraction capabilities of deep learning and exploit a long short-term memory (LSTM) neural network to predict potential hotspots and form VSCs (see the sketch after this abstract). Large-scale antenna array enabled hybrid beamforming is also adaptively adjusted for highly directional transmission to cover these VSCs. Within each VSC, one user equipment (UE) is selected as a cell head (CH), which collects the intra-cell traffic in the unlicensed band and relays the aggregated traffic to the macro-cell base station (MBS) in the licensed band. The inter-cell interference can thus be reduced, and the spectrum efficiency can be improved. Numerical results show that the proposed VSCs can reduce power consumption by 55% in comparison with traditional small cells.
    In addition to smart VSC deployment, a novel multi-dimensional intelligent multiple access (MD-IMA) scheme is proposed to meet the stringent and diverse QoS requirements of emerging 5G applications under disparate resource constraints. Multiple access (MA) schemes across multi-dimensional resources are adaptively scheduled to accommodate dynamic QoS requirements and network states. The MD-IMA scheme learns the integrated quality of system experience (I-QoSE) by monitoring and predicting QoS through an LSTM neural network. Resource allocation in MD-IMA is formulated as an optimization problem that maximizes the I-QoSE while minimizing non-orthogonality (NO) in view of implementation constraints. To solve this problem, both model-based optimization algorithms and model-free deep reinforcement learning (DRL) approaches are utilized. Simulation results demonstrate that the achievable I-QoSE gain of MD-IMA over traditional MA is 15%-18%.
    In the final part of the thesis, a Software-Defined Networking (SDN) enabled 5G vehicular ad hoc network (VANET) is designed to support the growing volume of vehicle-generated data traffic. In this integrated architecture, to reduce signaling overhead, vehicles are clustered under the coordination of SDN, and one vehicle in each cluster is selected as a gateway to aggregate intra-cluster traffic. To ensure the capacity of the trunk link between the gateway and the macro base station, a Non-orthogonal Multiplexed Modulation (NOMM) scheme is proposed that splits the aggregated data stream into multiple layers and uses sparse spreading codes to partially superpose the modulated symbols on several resource blocks. Simulation results show that the energy efficiency of the proposed NOMM scheme is around 1.5-2 times that of a typical orthogonal transmission scheme.
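    The hotspot-prediction step can be pictured with a minimal sketch: an LSTM takes a window of recent per-cell traffic samples and predicts the next one, and a VSC would be formed where the prediction crosses a load threshold. The window length, layer sizes, synthetic trace, and threshold below are illustrative assumptions, not the thesis's actual configuration.

```python
# Minimal LSTM traffic-hotspot prediction sketch (PyTorch). All dimensions,
# the synthetic traffic trace, and the hotspot threshold are hypothetical.
import torch
import torch.nn as nn

class HotspotLSTM(nn.Module):
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)      # predict next traffic sample

    def forward(self, x):                     # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])       # regress from the last step

WINDOW, THRESHOLD = 24, 0.8                   # e.g., 24 hourly samples

# Synthetic normalized traffic with a daily cycle, standing in for real data.
t = torch.arange(200, dtype=torch.float32)
trace = 0.5 + 0.5 * torch.sin(2 * torch.pi * t / 24) + 0.05 * torch.randn(200)
xs = torch.stack([trace[i:i + WINDOW] for i in range(175)]).unsqueeze(-1)
ys = trace[WINDOW:WINDOW + 175].unsqueeze(-1)

model, loss_fn = HotspotLSTM(), nn.MSELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):                          # short illustrative training loop
    opt.zero_grad()
    loss = loss_fn(model(xs), ys)
    loss.backward()
    opt.step()

pred = model(xs[-1:]).item()
print(f"predicted load {pred:.2f} -> form VSC: {pred > THRESHOLD}")
```

    In a deployment, the trace would be measured per-cell traffic, and crossing the threshold would trigger VSC formation and beam steering toward the predicted hotspot.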

    Edge Intelligence: Empowering Intelligence to the Edge of Network

    Edge intelligence refers to a set of connected systems and devices that perform data collection, caching, processing, and analysis in proximity to where the data are captured, based on artificial intelligence. Edge intelligence aims to enhance data processing and to protect the privacy and security of the data and its users. Although this field of research emerged only recently, around 2011, it has shown explosive growth over the past five years. In this article, we present a thorough and comprehensive survey of the literature surrounding edge intelligence. We first identify four fundamental components of edge intelligence, i.e., edge caching, edge training, edge inference, and edge offloading, based on theoretical and practical results pertaining to proposed and deployed systems. We then aim for a systematic classification of the state of the art by examining research results and observations for each of the four components, and present a taxonomy that includes practical problems, adopted techniques, and application goals. For each category, we elaborate, compare, and analyze the literature from the perspectives of adopted techniques, objectives, performance, advantages and drawbacks, and so on. This article thus provides a comprehensive survey of edge intelligence and its application areas. In addition, we summarize the development of the emerging research fields and the current state of the art, and discuss important open issues and possible theoretical and technical directions.
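    Of the four components, edge offloading admits a compact illustration: a device offloads a task when the estimated remote completion time (upload plus edge execution) beats local execution. The linear latency model and every parameter value below are illustrative assumptions, not drawn from the surveyed systems.

```python
# Minimal latency-driven edge-offloading decision. The simple linear latency
# model and all parameter values are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class Task:
    input_bits: float   # data that must be uploaded to the edge server
    cycles: float       # CPU cycles the task requires

def local_latency_s(task: Task, f_local_hz: float) -> float:
    return task.cycles / f_local_hz

def edge_latency_s(task: Task, uplink_bps: float, f_edge_hz: float) -> float:
    # Upload time plus execution on the (faster) edge server; downloading the
    # result is assumed negligible for small outputs.
    return task.input_bits / uplink_bps + task.cycles / f_edge_hz

def should_offload(task: Task, f_local_hz=1e9, uplink_bps=5e6,
                   f_edge_hz=1e10) -> bool:
    return edge_latency_s(task, uplink_bps, f_edge_hz) < local_latency_s(task, f_local_hz)

task = Task(input_bits=2e6, cycles=4e9)   # 2 Mb of input, 4 Gcycles of work
# local: 4e9 / 1e9 = 4.0 s; edge: 2e6 / 5e6 + 4e9 / 1e10 = 0.8 s -> offload
print("offload" if should_offload(task) else "run locally")
```

    Real offloading decisions also weigh energy, server load, and privacy, which is what the learning-based techniques classified in the survey address.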