
    Applications of Deep Reinforcement Learning in Communications and Networking: A Survey

    This paper presents a comprehensive literature review on applications of deep reinforcement learning in communications and networking. Modern networks, e.g., Internet of Things (IoT) and Unmanned Aerial Vehicle (UAV) networks, are becoming more decentralized and autonomous. In such networks, network entities need to make decisions locally to maximize network performance under an uncertain network environment. Reinforcement learning has been used efficiently to enable network entities to obtain the optimal policy, i.e., the decisions or actions to take given their states, when the state and action spaces are small. In complex and large-scale networks, however, the state and action spaces are usually large, and reinforcement learning may not be able to find the optimal policy in reasonable time. Deep reinforcement learning, a combination of reinforcement learning with deep learning, has therefore been developed to overcome these shortcomings. In this survey, we first give a tutorial on deep reinforcement learning, from fundamental concepts to advanced models. Then, we review deep reinforcement learning approaches proposed to address emerging issues in communications and networking, including dynamic network access, data rate control, wireless caching, data offloading, network security, and connectivity preservation, all of which are important to next-generation networks such as 5G and beyond. Furthermore, we present applications of deep reinforcement learning to traffic routing, resource sharing, and data collection. Finally, we highlight important challenges, open issues, and future research directions for applying deep reinforcement learning. Comment: 37 pages, 13 figures, 6 tables, 174 reference papers
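
    The survey's starting point, the jump from tabular reinforcement learning to deep function approximation, can be made concrete with a small sketch. The following toy deep Q-learning update uses a two-layer NumPy network in place of a Q-table; the state dimension, action count, reward, and hyperparameters are illustrative assumptions, not values from the survey.

```python
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, N_ACTIONS, HIDDEN, GAMMA, LR = 8, 4, 32, 0.99, 1e-2

# Two-layer Q-network standing in for the Q-table that becomes
# infeasible once state/action spaces grow large.
W1 = rng.normal(0.0, 0.1, (STATE_DIM, HIDDEN))
W2 = rng.normal(0.0, 0.1, (HIDDEN, N_ACTIONS))

def q_values(s):
    h = np.maximum(s @ W1, 0.0)   # ReLU hidden layer
    return h, h @ W2              # Q(s, a) for every action a

def dqn_step(s, a, r, s_next, done):
    """One TD update toward the target r + GAMMA * max_a' Q(s_next, a')."""
    global W1, W2
    h, q = q_values(s)
    _, q_next = q_values(s_next)
    target = r + (0.0 if done else GAMMA * float(q_next.max()))
    td_error = target - q[a]
    # Backprop of the squared TD error by hand (target treated as constant).
    grad_q = np.zeros(N_ACTIONS)
    grad_q[a] = -td_error
    grad_W2 = np.outer(h, grad_q)
    grad_h = (grad_q @ W2.T) * (h > 0)
    grad_W1 = np.outer(s, grad_h)
    W2 -= LR * grad_W2
    W1 -= LR * grad_W1
    return td_error

# One hypothetical transition; a real agent would act epsilon-greedily
# and sample minibatches from a replay buffer.
s = rng.normal(size=STATE_DIM)
print(dqn_step(s, a=1, r=1.0, s_next=rng.normal(size=STATE_DIM), done=False))
```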

    EMM: Energy-Aware Mobility Management for Mobile Edge Computing in Ultra Dense Networks

    Merging mobile edge computing (MEC) functionality with the dense deployment of base stations (BSs) provides enormous benefits, such as real proximity and low-latency access to computing resources. However, the envisioned integration creates many new challenges, among which mobility management (MM) is a critical one. Simply applying existing radio-access-oriented MM schemes leads to poor performance, mainly because MEC-enabled BSs co-provision radio access and computing services. In this paper, we develop a novel user-centric energy-aware mobility management (EMM) scheme that optimizes the delay due to both radio access and computation, under a long-term energy consumption constraint at the user. Based on Lyapunov optimization and multi-armed bandit theories, EMM works in an online fashion without future system state information and effectively handles imperfect system state information. The theoretical analysis explicitly takes radio handover and computation migration costs into consideration and proves a bounded deviation in both delay performance and energy consumption from the oracle solution with exact and complete future system information. The proposed algorithm also effectively handles the scenario in which candidate BSs randomly switch on/off during the offloading process of a task. Simulations show that the proposed algorithms achieve close-to-optimal delay performance while satisfying the user's energy consumption constraint. Comment: 14 pages, 6 figures, an extended version of the paper submitted to IEEE JSAC
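
    To give a feel for the drift-plus-penalty idea behind EMM, here is a toy sketch that selects a serving BS online: a UCB-style estimate tracks each BS's delay, while a Lyapunov virtual queue accumulates the user's energy-budget deficit and steers choices back under budget. The index, the per-BS delay/energy statistics, and the budget are all invented for illustration; the paper's actual algorithm and its guarantees are more involved.

```python
import math, random

random.seed(0)
N_BS, V, E_BUDGET, T = 4, 10.0, 1.0, 2000

true_delay = [0.4, 0.6, 0.8, 1.0]    # hypothetical mean delay per BS
true_energy = [1.5, 1.1, 0.8, 0.5]   # hypothetical mean energy per task

counts = [0] * N_BS
delay_est = [0.0] * N_BS
q = 0.0                              # virtual energy-deficit queue

for t in range(1, T + 1):
    def index(i):
        if counts[i] == 0:
            return -math.inf         # force one initial pull per BS
        bonus = math.sqrt(2 * math.log(t) / counts[i])
        # Drift-plus-penalty: V-weighted (optimistic) delay estimate
        # plus queue-weighted energy cost; lower index is better.
        return V * (delay_est[i] - bonus) + q * true_energy[i]
    bs = min(range(N_BS), key=index)
    d = random.gauss(true_delay[bs], 0.05)   # observed delay
    e = random.gauss(true_energy[bs], 0.05)  # observed energy draw
    counts[bs] += 1
    delay_est[bs] += (d - delay_est[bs]) / counts[bs]
    q = max(q + e - E_BUDGET, 0.0)   # queue grows when over budget

print("pulls per BS:", counts, "final deficit queue:", round(q, 2))
```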

    Mobile Edge Computing and Artificial Intelligence: A Mutually-Beneficial Relationship

    This article provides an overview of mobile edge computing (MEC) and artificial intelligence (AI) and discusses the mutually beneficial relationship between them. AI provides revolutionary solutions in nearly every important aspect of the MEC offloading process, such as resource management and scheduling. Conversely, MEC servers are utilized to enable a distributed and parallelized learning framework, namely mobile edge learning. Comment: 6 pages, 2 figures, IEEE ComSoc Technical Committees Newsletter

    Integrating PHY Security Into NDN-IoT Networks By Exploiting MEC: Authentication Efficiency, Robustness, and Accuracy Enhancement

    Recent literature has demonstrated the improved data discovery and delivery efficiency gained by applying named data networking (NDN) to a variety of information-centric Internet of Things (IoT) applications. From a data security perspective, however, the development of NDN-IoT raises several new authentication challenges. In particular, NDN-IoT authentication may require per-packet signatures, imposing intolerably high computational and time costs on resource-poor IoT end devices. This paper proposes an effective solution that seamlessly integrates a lightweight and unforgeable physical-layer identity (PHY-ID) into the existing NDN signature scheme for mobile edge computing (MEC)-enabled NDN-IoT networks. The PHY-ID generation exploits the inherent, device-specific signal-level radio-frequency imperfections of IoT devices, including the in-phase/quadrature-phase imbalance, and thereby avoids adding any implementation complexity to the constrained IoT devices. We derive an offline maximum-entropy-based quantization rule and propose an online two-step authentication scheme to improve the accuracy of the authentication decision making. Consequently, a cooperative MEC device can securely execute the costly signing task on behalf of the authenticated IoT device in an optimal manner. The evaluation results demonstrate 1) elevated authentication time efficiency, 2) robustness to several impersonation attacks, including the replay attack and the computation-based spoofing attack, and 3) an increased differentiation rate and correct authentication probability when applying our integration design in MEC-enabled NDN-IoT networks.
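
    The offline quantization step can be illustrated in a few lines: a maximum-entropy quantizer places bin edges at equal-probability quantiles of the enrolled device's feature distribution, so every PHY-ID level is equally likely. The Gaussian feature model, bit width, and the single-level online check below are simplifying assumptions; the paper's rule and its two-step authentication scheme are more elaborate.

```python
import numpy as np

rng = np.random.default_rng(1)
N_BITS = 3                    # 2^3 = 8 quantization levels

# Hypothetical enrollment measurements of one device's I/Q gain imbalance.
enroll = rng.normal(loc=0.03, scale=0.005, size=500)

# Maximum-entropy rule: bin edges at equal-probability quantiles, so
# every level is equally likely under the enrollment distribution.
levels = 2 ** N_BITS
edges = np.quantile(enroll, np.linspace(0.0, 1.0, levels + 1)[1:-1])

def phy_id(feature):
    """Map a fresh feature measurement to its quantization level."""
    return int(np.searchsorted(edges, feature))

# Toy online check (step 1 of a two-step scheme): accept if a fresh
# measurement lands in the enrolled device's expected level.
enrolled_level = phy_id(float(np.median(enroll)))
fresh = float(rng.normal(0.03, 0.005))
print("match" if phy_id(fresh) == enrolled_level else "re-check")
```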

    Exploring Computation-Communication Tradeoffs in Camera Systems

    Cameras are the de facto sensor. The growing demand for real-time, low-power computer vision, coupled with trends toward high-efficiency heterogeneous systems, has given rise to a wide range of image processing acceleration techniques at the camera node and in the cloud. In this paper, we characterize two novel camera systems that use acceleration techniques to push the extremes of energy and performance scaling, and explore the computation-communication tradeoffs in their design. The first case study targets a camera system designed to detect and authenticate individual faces, running solely on energy harvested from RFID readers. We design a multi-accelerator SoC operating in the sub-mW range and evaluate it with real-world workloads to show performance and energy efficiency improvements over a general-purpose microprocessor. The second camera system supports a 16-camera rig processing over 32 Gb/s of data to produce real-time 3D, 360-degree virtual reality video. We design a multi-FPGA processing pipeline that outperforms CPU and GPU configurations by up to 10x in computation time, producing panoramic stereo video directly from the camera rig at 30 frames per second. We find that an early data reduction step, either before complex processing or before offloading, is the most critical optimization for in-camera systems.
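
    The closing observation, that early data reduction is the most critical in-camera optimization, suggests a simple pattern: gate expensive processing or offloading on a cheap change detector. The sketch below, including the hypothetical offload() uplink and the threshold value, is an illustrative assumption rather than either system's actual pipeline.

```python
import numpy as np

THRESHOLD = 8.0   # mean absolute pixel change; tuning is workload-specific

def should_offload(prev_frame, frame):
    """Cheap in-camera check that runs before any heavy processing."""
    return float(np.abs(frame - prev_frame).mean()) > THRESHOLD

def offload(frame):
    # Placeholder for the real uplink / accelerator handoff.
    print("offloading frame of shape", frame.shape)

rng = np.random.default_rng(2)
prev = rng.integers(0, 256, (120, 160)).astype(np.float32)
for step in range(3):
    cur = prev + rng.normal(0.0, 5.0, prev.shape).astype(np.float32)
    if step == 1:
        cur += 60.0   # simulate a genuine scene change on one frame
    if should_offload(prev, cur):
        offload(cur)  # only this frame leaves the camera
    prev = cur
```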

    Big Data Analytics, Machine Learning and Artificial Intelligence in Next-Generation Wireless Networks

    Next-generation wireless networks are evolving into very complex systems because of highly diversified service requirements and heterogeneity in applications, devices, and networks. Mobile network operators (MNOs) need to make the best use of the available resources, for example power, spectrum, and infrastructure. Traditional networking approaches, i.e., reactive, centrally managed, one-size-fits-all approaches, and conventional data analysis tools of limited capability (in space and time) are no longer adequate and cannot serve such complex future networks in terms of operation and optimization in a cost-effective way. A novel paradigm of proactive, self-aware, self-adaptive, and predictive networking is much needed. The MNOs have access to large amounts of data, especially from the network and the subscribers. Systematic exploitation of this big data greatly helps in making the network smart and intelligent, and facilitates cost-effective operation and optimization. In view of this, we consider a data-driven next-generation wireless network model in which the MNOs employ advanced data analytics for their networks. We discuss the data sources and strong drivers for the adoption of data analytics, and the role of machine learning and artificial intelligence in making the network intelligent in terms of being self-aware, self-adaptive, proactive, and prescriptive. A set of network design and optimization schemes is presented with respect to data analytics. The paper concludes with a discussion of the challenges and benefits of adopting big data analytics and artificial intelligence in next-generation communication systems.

    Application Management in Fog Computing Environments: A Taxonomy, Review and Future Directions

    The Internet of Things (IoT) paradigm is being rapidly adopted for the creation of smart environments in various domains. The IoT-enabled Cyber-Physical Systems (CPSs) associated with smart cities, healthcare, Industry 4.0, and Agtech handle a huge volume of data and require real-time data processing services from different types of applications. Cloud-centric execution of IoT applications barely meets such requirements, as Cloud datacentres reside at a multi-hop distance from the IoT devices. Fog computing, an extension of the Cloud at the edge network, can execute these applications closer to the data sources. Thus, Fog computing can improve application service delivery time and resist network congestion. However, Fog nodes are highly distributed and heterogeneous, and most of them are constrained in resources and spatial sharing. Therefore, efficient management of applications is necessary to fully exploit the capabilities of Fog nodes. In this work, we investigate the existing application management strategies in Fog computing and review them in terms of architecture, placement, and maintenance. Additionally, we propose a comprehensive taxonomy and highlight the research gaps in Fog-based application management. We also discuss a perspective model and provide future research directions for further improvement of application management in Fog computing.

    Edge Intelligence: On-Demand Deep Learning Model Co-Inference with Device-Edge Synergy

    As the backbone technology of machine learning, deep neural networks (DNNs) have quickly ascended to the spotlight. Running DNNs on resource-constrained mobile devices is, however, by no means trivial, since it incurs high performance and energy overhead, while offloading DNNs to the cloud for execution suffers from unpredictable performance due to uncontrolled, long wide-area network latency. To address these challenges, in this paper we propose Edgent, a collaborative and on-demand DNN co-inference framework with device-edge synergy. Edgent pursues two design knobs: (1) DNN partitioning, which adaptively partitions DNN computation between device and edge in order to leverage hybrid computation resources in proximity for real-time DNN inference; and (2) DNN right-sizing, which accelerates DNN inference through early exit at a proper intermediate DNN layer to further reduce the computation latency. The prototype implementation and extensive evaluations based on Raspberry Pi demonstrate Edgent's effectiveness in enabling on-demand low-latency edge intelligence. Comment: ACM SIGCOMM Workshop on Mobile Edge Communications, Budapest, Hungary, August 21-23, 2018. https://dl.acm.org/authorize?N66547
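
    Edgent's two knobs lend themselves to a compact sketch: given per-layer device/edge latencies and intermediate output sizes, pick the partition point with the lowest end-to-end latency, and if even the best partition misses the deadline, right-size by falling back to an earlier exit. All profile numbers, the uplink rate, and the deadline below are made up for illustration; Edgent itself profiles real hardware.

```python
# layer profiles: (device_ms, edge_ms, output_kbytes) for each layer
LAYERS = [(5, 1, 800), (8, 2, 400), (12, 3, 100), (20, 4, 50), (6, 1, 4)]
RAW_INPUT_KB = 1500            # size of the raw input if offloaded directly
UPLINK_KB_PER_MS = 10.0        # assumed ~80 Mbit/s uplink
DEADLINE_MS = 30.0

def latency(split, n_layers):
    """Layers [0, split) run on device, [split, n_layers) on the edge."""
    device = sum(l[0] for l in LAYERS[:split])
    edge = sum(l[1] for l in LAYERS[split:n_layers])
    tx_kb = RAW_INPUT_KB if split == 0 else LAYERS[split - 1][2]
    tx = 0.0 if split == n_layers else tx_kb / UPLINK_KB_PER_MS
    return device + edge + tx

def plan(deadline_ms):
    # Knob 2 (right-sizing): try the full model first, then earlier exits.
    for n_layers in range(len(LAYERS), 0, -1):
        # Knob 1 (partitioning): best split point for this exit.
        best = min(range(n_layers + 1), key=lambda s: latency(s, n_layers))
        if latency(best, n_layers) <= deadline_ms:
            return n_layers, best
    return None   # no configuration meets the deadline

print(plan(DEADLINE_MS))   # -> (3, 3): exit after 3 layers, all on device
```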

    Context-Aware Wireless Connectivity and Processing Unit Optimization for IoT Networks

    A novel approach is presented in this work for context-aware connectivity and processing optimization of Internet of Things (IoT) networks. Unlike state-of-the-art approaches, the proposed approach simultaneously selects the best connectivity and processing unit (e.g., device, fog, or cloud) along with the percentage of data to be offloaded, by jointly optimizing energy consumption, response time, security, and monetary cost. The proposed scheme employs a reinforcement learning algorithm and achieves significant gains compared to deterministic solutions. In particular, the requirements of IoT devices in terms of response time and security are taken as inputs along with the remaining battery level of the devices, and the developed algorithm returns an optimized policy. The results show that only our method is able to meet the holistic multi-objective optimization criteria, whereas the benchmark approaches may achieve better results on a particular metric at the cost of failing to reach the other targets. Thus, the proposed approach is a device-centric and context-aware solution that accounts for the monetary and battery constraints.
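
    A minimal sketch of this kind of joint decision, assuming a tabular Q-learner, a two-level battery state, and invented cost models: the action couples connectivity, processing unit, and offload fraction, and the reward folds energy, response time, security, and monetary cost into one weighted objective, as the abstract describes.

```python
import itertools, random

random.seed(3)
CONNECTIVITY = ["wifi", "lte"]
PROCESSOR = ["device", "fog", "cloud"]
OFFLOAD = [0.0, 0.5, 1.0]
ACTIONS = list(itertools.product(CONNECTIVITY, PROCESSOR, OFFLOAD))
STATES = ["battery_low", "battery_high"]
W = dict(energy=1.0, delay=1.0, security=0.5, cost=0.2)  # assumed weights

def reward(state, action):
    # All cost models here are invented placeholders, not the paper's.
    conn, proc, frac = action
    energy = (2.0 if conn == "lte" else 1.0) * frac + (1.0 - frac) * 1.5
    delay = {"device": 3.0, "fog": 1.0, "cloud": 2.0}[proc] if frac else 3.0
    security = 1.0 if proc == "cloud" else 0.0
    cost = {"device": 0.0, "fog": 0.5, "cloud": 1.0}[proc] * frac
    if state == "battery_low":
        energy *= 3.0               # low battery reshapes the tradeoff
    return -(W["energy"] * energy + W["delay"] * delay
             + W["security"] * security + W["cost"] * cost)

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
ALPHA, EPS = 0.1, 0.1
for _ in range(5000):
    s = random.choice(STATES)
    a = (random.choice(ACTIONS) if random.random() < EPS
         else max(ACTIONS, key=lambda act: Q[(s, act)]))
    # Single-step (contextual-bandit-style) update; no next-state term.
    Q[(s, a)] += ALPHA * (reward(s, a) - Q[(s, a)])

for s in STATES:
    print(s, "->", max(ACTIONS, key=lambda act: Q[(s, act)]))
```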

    When Social Sensing Meets Edge Computing: Vision and Challenges

    This paper overviews the state of the art, research challenges, and future opportunities in an emerging research direction: Social Sensing based Edge Computing (SSEC). Social sensing has emerged as a new sensing application paradigm in which measurements about the physical world are collected from humans or from devices on their behalf. The advent of edge computing pushes the frontier of computation, service, and data along the cloud-to-things continuum. The merging of these two technical trends generates a set of new research challenges that need to be addressed. In this paper, we first define the new SSEC paradigm, motivated by a few underlying technology trends. We then present a few representative real-world case studies of SSEC applications and several key research challenges that exist in those applications. Finally, we envision a few exciting research directions in future SSEC. We hope this paper will stimulate discussions of this emerging research direction in the community. Comment: This manuscript has been accepted to ICCCN 201